Leader election is a crucial pattern in distributed systems where multiple instances or nodes compete to perform certain tasks. In a Kubernetes cluster, leader election can be used to ensure that only one instance is responsible for executing leader-specific tasks at any given time. This blog post will explore how to implement a leader election mechanism in Kubernetes using lease locks.
The leader election mechanism implemented in the Go code relies on Kubernetes coordination features, specifically the Lease object in the coordination.k8s.io API group. Lease locks provide a way to acquire a lease on a shared resource, which can be used to determine the leader among a group of nodes.
The example code used for this blog post is available in the mjasion/golang-k8s-leader-example GitHub repository.
The main function is the entry point of the program. It reads configuration values from environment variables and obtains the Kubernetes clientset, accessing the Kube-API through the ServiceAccount attached to the Pod. The application is written to run inside a Kubernetes Pod, which is why it uses the rest.InClusterConfig() function.
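Getting the clientset with the in-cluster config looks roughly like this (a minimal sketch using the k8s.io/client-go packages rest and kubernetes):

```go
// Build a clientset from the ServiceAccount credentials mounted into the Pod.
config, err := rest.InClusterConfig()
if err != nil {
	log.Fatalf("failed to load in-cluster config: %v", err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
	log.Fatalf("failed to create clientset: %v", err)
}
```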
The leader election configuration is set up using the LeaderElectionConfig struct from the Kubernetes client library. It specifies the lease lock, lease duration, renewal deadline, retry period, and callback functions for leader-specific tasks.
```go
leaderElectionConfig := leaderelection.LeaderElectionConfig{
	Lock: &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      lockName,
			Namespace: leaseNamespace,
		},
		Client: clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("HOSTNAME"),
		},
	},
	LeaseDuration: time.Duration(leaseDuration) * time.Second,
	RenewDeadline: time.Duration(renewalDeadline) * time.Second,
	RetryPeriod:   time.Duration(retryPeriod) * time.Second,
	Callbacks: leaderelection.LeaderCallbacks{
		OnStartedLeading: onStartedLeading,
		OnStoppedLeading: onStoppedLeading,
	},
	ReleaseOnCancel: true,
}
```
The most important settings are the lease duration, renewal deadline, and retry period:

- LeaseDuration specifies how long the lease is valid. Non-leader candidates must wait this long after the last observed renewal before they may take over the lease.
- RenewDeadline specifies the amount of time the current leader has to renew the lease before giving up leadership.
- RetryPeriod specifies how long each client waits between attempts to acquire or renew the lease.

The leader-specific tasks are performed in the onStartedLeading function, which is called when the current node becomes the leader. The updateServiceSelectorToCurrentPod function updates the service selector to point at the current pod's hostname.
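A sketch of what that function could look like - the actual implementation lives in the example repository, and the namespace, service name, and selector key used here are assumptions:

```go
// Sketch only: the real implementation is in the example repository.
// The namespace, service name, and the "leader" selector key are assumptions;
// the pod must carry a matching label for the selector to take effect.
func updateServiceSelectorToCurrentPod(clientset *kubernetes.Clientset) {
	svc, err := clientset.CoreV1().Services("default").Get(context.TODO(), "k8s-leader-example", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("failed to get service: %v", err)
	}
	// Point the Service at the pod that currently holds the lease.
	svc.Spec.Selector["leader"] = os.Getenv("HOSTNAME")
	if _, err := clientset.CoreV1().Services("default").Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		log.Fatalf("failed to update service: %v", err)
	}
}
```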
```go
func onStartedLeading(ctx context.Context) {
	log.Println("Became leader: ", os.Getenv("HOSTNAME"))
	clientset := getKubeClient()
	updateServiceSelectorToCurrentPod(clientset)
	go func() {
		for {
			select {
			case <-ctx.Done():
				log.Println("Stopped leader loop")
				return
			default:
				log.Println("Performing leader tasks...")
				time.Sleep(1 * time.Second)
			}
		}
	}()
}
```
The onStoppedLeading function is called when the current node stops being the leader. It can be used for cleanup tasks.
```go
func onStoppedLeading() {
	log.Println("Stopped being leader")
}
```
A context and a wait group are created to manage goroutines. A goroutine is started to run the leader election using the leaderelection.RunOrDie function.
```go
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

wg := &sync.WaitGroup{}
wg.Add(1)
go func() {
	defer wg.Done()
	leaderelection.RunOrDie(ctx, leaderElectionConfig)
}()

// ... the HTTP server described below runs here and blocks until shutdown ...

cancel() // stop leader election and release the lease (ReleaseOnCancel)
wg.Wait()
```
The program also sets up a Gin router and defines a root endpoint that returns the hostname of the current node, to easily check which Pod is currently the leader.
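The endpoint itself can be as small as this (a sketch - the port and route are assumptions):

```go
router := gin.Default()
router.GET("/", func(c *gin.Context) {
	// Return the pod's hostname so callers can tell which pod answered.
	c.String(http.StatusOK, os.Getenv("HOSTNAME"))
})
router.Run(":8080") // blocks until the server exits
```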
In this demo, we will deploy a single Pod to a Kubernetes cluster and observe how the leader election works.
As you can see here, the pod is elected as a leader and performs leader-specific tasks. The lease object contains the information about the current leader in the HOLDER column.

```
NAME                 HOLDER                               AGE
k8s-leader-example   k8s-leader-example-8dd646bb7-dsfmq   11s
```
In this demo, we will deploy multiple Pods to a Kubernetes cluster and observe how the leader election works. The settings used for this demo are as follows:
| Setting | Value |
|---|---|
| Lease Duration | 10 seconds |
| Renewal Deadline | 5 seconds |
| Retry Period | 1 second |
With these settings, the leader renews the lease every second (the retry period). If the leader cannot renew the lease within the 5-second renewal deadline, it gives up leadership. The other candidates observe the lease and, once it has not been renewed for the 10-second lease duration, attempt to acquire it, retrying every second.
Running the command kubectl get lease --watch allows us to observe the leader election process. The lease object first shows the previous leader and then, when that leader is killed, the new one.
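An illustrative watch output when the leader pod is deleted might look like this (the pod names are examples):

```
NAME                 HOLDER                               AGE
k8s-leader-example   k8s-leader-example-8dd646bb7-dsfmq   2m4s
k8s-leader-example   k8s-leader-example-8dd646bb7-pl9wx   2m17s
```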
Implementing leader election in Kubernetes using lease locks is an effective way to ensure that only one instance or node performs leader-specific tasks at a time. In this blog post, we explored the provided Go code that demonstrates how to implement leader election in a Kubernetes cluster.
By incorporating leader election into your distributed system, you can enhance its reliability and prevent conflicts that may arise from multiple instances attempting to execute the same tasks simultaneously.
This is an example of how to remove GitHub notifications about failed jobs after 7 days:
```javascript
function cleanupByFilter() {
  deleteEmails('from:notifications@github.com older_than:7d subject:"Run failed"')
  // For more filters add more `deleteEmails` function executions here
}

function deleteEmails(filter) {
  var threads = GmailApp.search(filter);
  Logger.log("Deleting " + threads.length + " messages from filter: " + filter)
  for (var i = 0; i < threads.length; i++) {
    var messages = threads[i].getMessages();
    for (var j = 0; j < messages.length; j++) {
      var message = messages[j];
      message.markRead()
      message.moveToTrash()
    }
  }
}
```
To set this up, see my post here.
By default, resources you launch in the cloud (EC2, RDS, and others) cannot communicate with your local networks, like your home or office. To allow this, you can create a Site-to-Site VPN. This VPN connection is established between your router and an AWS VPC.
Creating a VPN between networks is well documented. However, you can run into issues configuring your home router. At home I have a Unifi Dream Machine router, which is designed for small networks but has features matching advanced office routers. One of them is Site-to-Site VPN using the IPsec protocol.
The first step is to create a VPN connection on AWS. For this blog post I will use… the default VPC 🙂. AWS requires you to define three components: a Customer Gateway, a Virtual Private Gateway, and a VPN Connection.
The Customer Gateway is basically just an entity that holds information about your home router - its public IP. The Virtual Private Gateway is the virtual entity on the VPC side that allows you to configure routing to that gateway; the VPG is attached to the VPC. The last entity is the VPN Connection, which brings it all together and establishes the VPN tunnels between your home or office and the VPC.
Start by defining the Customer Gateway. Go to the VPC console, find the Customer Gateways panel and click the Create button. The required field is IP address - enter the public IP address your router is running behind. It is also good to name this gateway; I named mine home.
To quickly check your public IP you can open https://ifconfig.co
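If you prefer the CLI, the equivalent call looks roughly like this (the IP is an example; the API requires a BGP ASN even for static routing, and 65000 is a common private default):

```
$ aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000
```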
The next step is to create the Virtual Private Gateway. Go to the Virtual Private Gateway panel and click Create. It just asks for a name; let's also name it home. The VPG state will be detached - I will come back to it later.
Now it is time to define the VPN. Open the Site-to-Site VPN Connections panel and click Create VPN Connection. The form has two parts: details and tunnel options.
The details start with defining the gateway on the VPC side. Choose Virtual private gateway and select your VPG in the form. Next, select the Customer gateway - here you define which router the VPN will be established with. The last configuration to set is routing: a section where you define which local networks the VPN will be used for. On the screenshot, I marked this as point 1.
AWS allows two routing options: dynamic, based on BGP, or statically defined. In my case, I am using static routing. In the prefixes, I put my local network prefixes (like 192.168.1.0/24). You are allowed to put multiple networks here.
Tunnel options allow you to define the IPsec parameters. For each VPN connection AWS creates two tunnels, each of which you can configure differently. Or the same 🙂. What I always choose is AES-256 encryption and the higher DH groups. Those parameters will be crucial for setting up our Unifi router.
When you finish these changes you can create the VPN and wait a few seconds. The VPN connection should be ready.
Previously I mentioned that the Virtual Private Gateway is not attached to any VPC. We can reassign the VPN connection between VPCs by changing attachments. To attach it, go back to Virtual Private Gateway, select your VPG, and in the Actions menu find Attach to VPC. Select your VPC, and your VPG will be ready for configuring routing on the VPC.
The last thing to configure on the VPC side is routes. As the VPG is attached and the networks from our home are known (configured on the VPN connection as static routes), we can enable automatic propagation of those routes to the VPC route tables. To configure this, open your VPC route table, choose Edit route propagation, and enable propagation for your VPG. After a few seconds, the new route should be added. As you can see, the last route is "Propagated", and its target is my Virtual Private Gateway.
With the AWS VPC configured, what is left is to configure our router. In my home I have a Unifi Dream Machine with the latest software (Network 7.1).
To create a VPN connection:
Start filling out the form. The Pre-Shared Key is the one you could configure in Tunnel Options. If you skipped this, go to the AWS VPN tab and click Download Configuration. In this file you will find the PSK to fill in point 1 and the Remote IP Address (point 4).
In the Remote Gateway/Subnets (point 3) put the AWS VPC network addressing. In my case, it was 172.31.0.0/16.
To align the encryption options with the Tunnel Options on AWS, select Manual in the Advanced configuration and customize it. Configure the parameters exactly as you configured them on AWS. I recommend using AES-256 and higher DH groups; use the above image as an example.
To check that you have a working VPN connection, create an EC2 instance in this VPC. I have created an instance with IP 172.31.34.95. This instance has a Security Group rule allowing All Traffic from my home network.
When the VPN works and the instance is up, a simple ping can prove that everything is configured:
```
$ ping 172.31.34.95 -c 5
PING 172.31.34.95 (172.31.34.95) 56(84) bytes of data.
64 bytes from 172.31.34.95: icmp_seq=1 ttl=63 time=38.1 ms
64 bytes from 172.31.34.95: icmp_seq=2 ttl=63 time=38.2 ms
64 bytes from 172.31.34.95: icmp_seq=3 ttl=63 time=36.4 ms
64 bytes from 172.31.34.95: icmp_seq=4 ttl=63 time=38.3 ms
64 bytes from 172.31.34.95: icmp_seq=5 ttl=63 time=38.1 ms

--- 172.31.34.95 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 36.380/37.810/38.285/0.718 ms
```
If you are not sure whether you are really pinging the EC2 instance, take a look at the ping response times - on local networks they are much lower.
Setting up a VPN makes your infrastructure more secure. You don't have to expose ports on the public internet to have access to your cloud machines.
I have two network VLANs at home, should I configure some firewall rules if I don’t want to allow access from one of them?
Let's assume you have two networks: home and guest. If the guest network should not have access to resources over the VPN, then in the AWS VPN Static IP Prefixes configuration you have to set only the home network subnet.
Another thing is Security Groups, where we define allowed networks. If you don't set too wide a network range, they will also block unwanted access.
In my previous post I showed how to enable debug logs. Today I want to present how to improve terraform plan and terraform apply speed by configuring parallelism.
Terraform by default runs 10 concurrent operations. To reduce the execution time of the plan or apply operation, we can increase this parameter.
By increasing parallelism you can hit your provider's rate limits. Some cloud providers (like Cloudflare) publish the number of API requests allowed in a period of time. Hitting the limit can impact your deployments.
The easiest way to increase parallelism in Terraform Cloud for remote execution is the TFE_PARALLELISM variable. It just requires a number. To set it, add the TFE_PARALLELISM variable to your workspace variables, making sure you have selected the Environment variable option. The change will apply on the next execution.
The Terraform CLI allows configuring parallelism differently for each command (terraform plan, terraform apply or terraform destroy). In Terraform Cloud we can do this too. In these cases, use the TF_CLI_ARGS_plan="-parallelism=<N>" or TF_CLI_ARGS_apply="-parallelism=<N>" environment variables instead of TFE_PARALLELISM.
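For example, to cap plan and apply at 20 concurrent operations (20 is an arbitrary value here):

```
# Local runs: pass the flag directly to the command
$ terraform plan -parallelism=20
$ terraform apply -parallelism=20

# Terraform Cloud remote runs: the equivalent environment variables
TF_CLI_ARGS_plan="-parallelism=20"
TF_CLI_ARGS_apply="-parallelism=20"
```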
I prefer this way because it is more granular. I especially want plan to run fast, because it makes a request for every resource.
To set the TF_CLI_ARGS_plan="-parallelism=<N>" or TF_CLI_ARGS_apply="-parallelism=<N>" parameters, perform the same steps as in the instruction above for TFE_PARALLELISM.
So far I showed how to configure the variable per workspace. Terraform Cloud also allows configuring a Variable Set, which can be attached to multiple workspaces, so we don't need to repeat ourselves for each workspace.
To configure a Variable Set, create it in your organization settings and add the variables there. What is left is to attach the variable set to your workspaces, or you can apply the set to all workspaces in the organization.
Variable sets have lower precedence than workspace variables - if the same variable is defined in the workspace, the workspace value will be used during execution. Here you can read more.
Terraform Cloud is an application that helps teams use Terraform together. I am using it for side projects like my cloud infrastructure. Recently I had to look at trace logs to find an issue with one of the managed resources.
Terraform has detailed logs, which can be enabled by setting the TF_LOG environment variable to any value. This will cause detailed logs to appear on execution.
You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to change the verbosity of the logs. You can set this variable in two ways. The first option is to set the variable for the shell session:
```
$ export TF_LOG=TRACE
$ terraform plan
```
The second option is to set the variable just for a single command execution:
```
$ TF_LOG=TRACE terraform plan
```
If the run happens in Terraform Cloud or Terraform Enterprise with remote execution, add the TF_LOG variable to your workspace variables, making sure you have selected the Environment variable option.
If you are working on your own project, it is much more convenient to disable remote execution and execute the run locally. Go to the workspace Settings and in the Execution Mode panel switch to Local. Then you can run plan from your local machine.
Once the issue is resolved, unset the TF_LOG environment variable to disable the enhanced logging.
Istio is a complex system. From an application's perspective, the main component is the sidecar container istio-proxy, which proxies all traffic from the containers in the Pod. And this can lead to some issues.
This post describes one of the most complicated problems I have encountered in my career.
During an Istio rollout on a huge system with more than 40 different microservices, QA engineers found a bug on a single endpoint. It was a POST endpoint which returned chunked data.
Istio was returning a 502 error, and an additional flag was visible in the logs: upstream_reset_before_response_started. The application logs confirmed that the response was correct.
In older Istio versions the presented problem manifested as a 503 error with the UC flag.
Let's see the curl response and look at the istio-proxy logs:
```
$ kubectl exec -it curl-0 -- curl http://http-chunked:8080/wrong -v
< HTTP/1.1 502 Bad Gateway
< content-length: 87
< content-type: text/plain
< date: Sun, 24 Apr 2022 12:28:28 GMT
< server: istio-envoy
< x-envoy-decorator-operation: http-chunked.default.svc.cluster.local:8080/*

upstream connect error or disconnect/reset before headers. reset reason: protocol error
```

```
$ kubectl logs http-chunked-0 -c istio-proxy
[2022-04-24T12:23:37.047Z] "GET /wrong HTTP/1.1" 502 UPE upstream_reset_before_response_started{protocol_error} - "-" 0 87 1001 - "-" "curl/7.80.0" "3987a4cb-2e0e-4de6-af66-7e3447600c73" "http-chunked:8080" "10.244.0.17:8080" inbound|8080|| 127.0.0.6:39063 10.244.0.17:8080 10.244.0.14:35500 - default
```
To analyze the traffic we can use tcpdump and Wireshark. istio-proxy runs as a sidecar which routes all of the pod's incoming and outgoing traffic through its own proxy.
To sniff the traffic there are three ways:

- run tcpdump inside the istio-proxy container,
- use ksniff, a kubectl plugin to dump packets from a pod (github repo),
- run tcpdump on the node, which requires root permission and tcpdump installed.

The first will not work by default, because istio-proxy runs without root permission. The third is the backup if 1 and 2 do not work. Let's try ksniff.
In short, ksniff is a plugin that uploads a static tcpdump binary to the pod and streams the captured packets back to Wireshark on your machine.
Let's execute it to sniff our application:

```
$ kubectl sniff http-chunked-0 -c istio-proxy -p -f '-i lo' -n default
```
Important parameters:

- -p - supports sniffing even if the pod is non-privileged. See the docs,
- -f '-i lo' - passes a filter to tcpdump; we want to sniff the localhost interface inside the Pod.
If there is no issue and our system has Wireshark in PATH, ksniff should open a new Wireshark window.
Wireshark will continuously follow new packet records, which makes it hard to find our particular call. We can use filters to help with searching. Knowing the request path, method and response code, we can find our packet using a filter:
```
http.request.uri == "/wrong"
```
It shows only a single packet - our request. Wireshark also allows showing the whole TCP conversation: right-click on the packet, go to Conversation Filter, and select TCP. Wireshark will write a filter that shows the whole communication between the istio-proxy container and the application container!
Let's look at the above image. The first three records are the three-way-handshake packets; after that comes our GET request. The most interesting part happens in the last two packets: the application container returns an HTTP 200 OK response, and istio-proxy then closes the connection with an RST packet.
This is what we saw in the logs - the flag was upstream_reset_before_response_started{protocol_error}. But why? This still does not explain the cause.
It is hard to read the HTTP protocol from multiple packet bodies, but Wireshark has a solution for this too. We can view the data at L7, the application layer - in our case, the HTTP protocol.
Right-click on a single packet, go to the Follow tab, and select TCP Stream:
Now we can check what the request from istio-proxy looked like, and what the response from the app was.
Do you see the problem in the above picture?
Look closer at the response: there is a double Transfer-Encoding header. One starts with an uppercase letter, the second one does not.
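Reconstructed for illustration, the relevant part of the response looked roughly like this:

```
HTTP/1.1 200 OK
Transfer-Encoding: Chunked    <- set explicitly by the application
transfer-encoding: chunked    <- added by the HTTP framework
```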
Searching through Istio issues I found this answer. The most important are the first two points:

> - two transfer-encoding: chunked is equivalent to transfer-encoding: chunked, chunked as per RFC,
> - transfer-encoding: chunked, chunked doesn't have the same semantics as transfer-encoding: chunked
Why was the response treated as double-chunked? According to the Transfer Codings section (Section 4 of RFC 7230), transfer-coding names are case-insensitive.
As you can see, Istio stands guard 👮♂️ over the HTTP protocol. If the app declares a double-chunked response, Istio expects the body to actually be encoded that way - otherwise it rejects the request. curl, in contrast, ignores this inconsistency.
This issue was one of the most difficult tasks I have ever had :-)
In a GitHub repository I created example infrastructure to reproduce the problem. Bootstrapping the infrastructure installs ArgoCD, Istio and the app. The sample app exposes two endpoints:

- /correct - an endpoint which returns a streamed response,
- /wrong - does the same as above, but additionally sets the value of the Transfer-Encoding header to Chunked (uppercase).
(uppercase).I would like to thank Przemysław for his help and for showing me how to use Wireshark efficiently during this issue.🤝🏻
By default, Kind uses the system /etc/resolv.conf. This points to the systemd-resolved service, and some queries might fail. You can mount your network DNS configuration instead. Save the config below as kind-cluster.yaml:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /run/systemd/resolve/resolv.conf
        containerPath: /etc/resolv.conf
```
and run kind:

```
$ kind create cluster --config kind-cluster.yaml
```
Control-plane node should use your network DNS now.
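To verify, you can inspect the node container directly (assuming the default cluster name kind, the node container is named kind-control-plane):

```
$ docker exec kind-control-plane cat /etc/resolv.conf
```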
I am learning blockchain and smart contracts. This post is my note on how I am starting my journey into blockchain technology. In this example I will create a contract that stores who owns a pet.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0; // build contract on top of Solidity >=0.8.0 and <0.9.0

contract PetOwner {
    mapping (string => Pet) public petOwners;

    struct Pet {
        string name;
        string petType;
    }

    function addPetOwner(string memory ownerName, string memory _name, string memory _petType) public {
        petOwners[ownerName] = Pet({name: _name, petType: _petType});
    }
}
```
Put it into the Remix IDE: https://remix.ethereum.org/. It should compile, and we can deploy it in the local environment:
When we have the contract deployed, we can create an example pet ownership and query the petOwners field to see which pet is owned by Marcin.
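For illustration, interacting with the deployed contract could look like this (the names and values are examples):

```solidity
// Illustrative only - e.g. called from another contract or a test:
petOwner.addPetOwner("Marcin", "Rex", "dog");

// The public mapping auto-generates a getter returning the struct fields:
(string memory name, string memory petType) = petOwner.petOwners("Marcin");
// name == "Rex", petType == "dog"
```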
Ethereum allows testing our contract on test networks. For this example I will use Rinkeby - a free network for testing smart contracts.
I am not covering how to install Metamask. Always remember not to share your private key and seed. You can always create a new Metamask identity for your tests.
In Remix, switch the environment from Javascript VM to Injected Web3. If you don't have test ETH coins, you can use this Faucet to grab some: https://faucets.chain.link/rinkeby

Now I can test my contract. I fill in the data
And we will be asked again to confirm the transaction.
When everything is done, our transaction should be visible on Etherscan.
And at the end we can check whether the petOwners field contains our definition:
This post contains my notes from the blockchain development tutorial available here.
To merge multiple kubeconfig files from ~/.kube into a single config file, you can use this one-liner:

```
$ KUBECONFIG=$(ls ~/.kube/*.config | tr "\n" ":") kubectl config view --merge --flatten > ~/.kube/config
```
GitLab allows you to stay informed about what's happening in your projects by sending you notifications via email. With notifications enabled, you can receive updates about activity in issues, merge requests or build results. All of those emails are sent from a single address, which without a doubt makes successful filtering and labeling harder.
However, GitLab adds custom headers to every notification it sends, allowing you to better manage received notifications. For example, you could add a label to all emails with pipeline results to mark them as important. Similarly, you could do the same for notifications about issues assigned to you. Some of the headers that you can find in the emails are:
| Header name | Reason for message |
|---|---|
| X-GitLab-Project | Notification from a project |
| X-GitLab-Issue-ID | Notification about a change in an issue |
| X-GitLab-MergeRequest-ID | Notification about a change in a merge request |
| X-GitLab-Pipeline-Id | Notification about the result of a pipeline |
As can be seen above, the headers allow you to create conditions like: if the email contains the header X-GitLab-Issue-ID, then add the label "GitLab Issue".
Of course, there are more headers available. The full list of headers which GitLab can include in emails is available in the "Filtering email" section of the GitLab documentation. Every header also carries a value - some contain an ID, some contain project names. You can check them out in the documentation.
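For example, the raw source of a notification email can contain headers like these (the values are illustrative):

```
X-GitLab-Project: my-project
X-GitLab-Project-Id: 1234
X-GitLab-Issue-ID: 42
X-GitLab-NotificationReason: assigned
```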
To automatically add labels in Gmail, you would normally create a filter. However, Gmail filters do not allow filtering by headers. Still, this is not impossible.
Google provides a special service called Google Apps Script. It allows you to write short scripts in JavaScript, where you can extend the default Gmail filtering.
First, you have to begin with a function which will be scheduled to query for new emails in the inbox and will execute further message processing:
```javascript
function processInbox() {
  // process all recent threads in the Inbox
  var threads = GmailApp.search("newer_than:1h"); // search query is exactly same as in Gmail search box
  for (var i = 0; i < threads.length; i++) {
    // get all messages in a given thread
    var messages = threads[i].getMessages();
    for (var j = 0; j < messages.length; j++) {
      var message = messages[j];
      processMessage(message); // function to process the message
    }
  }
}
```
As you see, the code is pretty simple. It uses the search() function from GmailApp, which allows you to interact with the Gmail service. The result of the function is a list of threads from the last hour. After that we have to get the message content, which we can do by writing a loop over every message in a thread. The getMessages() function returns a list of GmailMessage objects. Having them, we can implement our actions based on the content.
To do that, call the getRawContent() function on the message object and check if the message contains the string you are looking for. For example, to check that this is a message sent by GitLab, find the string "X-GitLab" in the body:
```javascript
var gitlabLabel = GmailApp.getUserLabelByName("GitLab");
var body = message.getRawContent();
if (body.indexOf("X-GitLab") > -1) {
  message.getThread().addLabel(gitlabLabel);
}
```
Now we can implement the processMessage(message) function, adding other conditions, and put it below processInbox(). As a result, we get a full script which looks like this:
```javascript
function processInbox() {
  // process all recent threads in the Inbox (see comment to this answer)
  var threads = GmailApp.search("newer_than:1h");
  Logger.log(threads.length)
  for (var i = 0; i < threads.length; i++) {
    // get all messages in a given thread
    var messages = threads[i].getMessages();
    for (var j = 0; j < messages.length; j++) {
      var message = messages[j];
      processMessage(message);
    }
  }
}

function processMessage(message) {
  // Get label instances
  var gitlabLabel = GmailApp.getUserLabelByName("GitLab");
  var issueLabel = GmailApp.getUserLabelByName("Gitlab/Issue");
  var mrLabel = GmailApp.getUserLabelByName("Gitlab/Merge request");
  var buildLabel = GmailApp.getUserLabelByName("Gitlab/Build");
  var commitLabel = GmailApp.getUserLabelByName("Gitlab/Commit");
  var discussionLabel = GmailApp.getUserLabelByName("Gitlab/Discussion");

  // Start message processing
  var body = message.getRawContent();
  if (body.indexOf("X-GitLab") > -1) {
    message.getThread().addLabel(gitlabLabel);
  }
  if (body.indexOf("X-GitLab-Issue-ID") > -1) {
    message.getThread().addLabel(issueLabel);
  }
  if (body.indexOf("X-GitLab-MergeRequest-ID") > -1) {
    message.getThread().addLabel(mrLabel);
  }
  if (body.indexOf("X-GitLab-Commit-ID") > -1 || body.indexOf("X-GitLab-Author-ID") > -1) {
    message.getThread().addLabel(commitLabel);
  }
  if (body.indexOf("X-GitLab-Author") > -1) {
    message.getThread().addLabel(commitLabel);
  }
  if (body.indexOf("X-GitLab-Pipeline-Id") > -1) {
    message.getThread().addLabel(buildLabel)
  }
}
```
You have to create labels before running the function. Otherwise, your script will throw an error.
Go to Google Apps Scripts.
Create a new project, put your code and save.
From the Web IDE you can run the script to check for errors. Select the function processInbox and click the Play button:
You will be asked to permit the project access to your Gmail data. Choose your account:
After successful authorization, you can re-run the project. It will be immediately executed.
When there are no errors, create a custom trigger: click the "Add trigger" button at the bottom of the page.
Select the function processInbox and configure the time source. The execution frequency is your choice. If you receive a lot of messages and run this script every minute, you can hit the limits. The above script scans emails from the last hour, so it should be executed at least once an hour.
Google should now start executing your script, checking for new emails and performing the actions you just implemented. The result of running this script is that new emails from GitLab are labeled as you want 🤗.
Gmail filters are sufficient for most users' needs. However, if your use case is more advanced, Google Apps Script comes to the rescue. It doesn't require deep programming knowledge, and by searching Google you can solve your problems. Summing up, remember that you can have multiple scripts processing your inbox.
Did you know about Google Apps Script before? Please share how you are using it in the comments below.