Cloud computing brings numerous advantages to DevOps, such as the ability to create both test and production environments automatically and quickly. One of its drawbacks, however, is cost: charges typically accrue even while the environment provisioned in the cloud is not actually being used. Wouldn't it be attractive, then, to provision these environments only when you actually need them, and so avoid unnecessary costs?
At Datadope we have answered that question using Jenkins and Terraform. Going into detail: we run several environments in GCP (Google Cloud Platform) for internal testing and other purposes. These environments use different GCP services, such as GKE (Google Kubernetes Engine), Cloud SQL or Memorystore, and they incur a cost for as long as they are running, so keeping them up only when necessary is a real saving.
Provisioning these environments is fully automated and can even be scheduled (with a cron expression) through Jenkins. In this article we walk through the example of provisioning a GKE cluster, and destroying it, via Jenkins jobs. Beyond that, the article also shows how to deploy the applications and GKE environments of your choice in an automated way.
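As an aside, the scheduling mentioned above can be done with Jenkins' built-in cron-style triggers. A minimal sketch (the times and the idea of pairing a morning creation job with an evening destruction job are illustrative assumptions, not our actual schedule):

```groovy
// Hypothetical trigger in the creation job's pipeline:
triggers { cron('H 8 * * 1-5') }   // bring the environment up on weekday mornings

// And in the destruction job's pipeline:
triggers { cron('H 19 * * 1-5') }  // tear it down in the evening
```

The `H` token lets Jenkins spread the exact start minute to avoid load spikes.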
On-demand GKE provisioning of both the cluster and its applications is made possible with Jenkins, Terraform, ArgoCD and Kustomize:
– Jenkins is an open-source automation server that makes it possible to automate all kinds of tasks related to building, testing and CI/CD. In our case, Jenkins is the executor of each of the different steps.
– Terraform is an IaC (Infrastructure as Code) tool that, among other functionalities, allows us to define in code the infrastructure that needs to be deployed. Terraform makes this possible by interacting with the API of the different cloud providers, called providers in Terraform. In our example, Terraform is used to provision the GKE cluster.
– ArgoCD is a GitOps-based continuous deployment tool for K8s. In our case, ArgoCD is used to deploy the applications to GKE.
– Kustomize is a K8s configuration transformation tool, which allows us to customise YAML files without templates and leave the original files unmodified. In our case, Kustomize is used with ArgoCD to be able to deploy the set of objects that make up the application or environment.
Going into detail about the creation job: the first Jenkins stage uses Terraform to provision the infrastructure, in this case the GKE cluster, by applying the Terraform files that describe the cluster configuration. For Terraform to interact with GCP, it needs a service account with the relevant IAM permissions on the corresponding services. In our example, permissions on GKE are needed, so the service account used has the 'Kubernetes Engine Administrator' role. At the end of the stage, the cluster context is obtained so that the following stages of the job can interact with the GKE cluster:
```groovy
stage('Terraform GKE Cluster') {
    withCredentials([file(credentialsId: 'foo', variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
        // Create cluster 1
        sh 'terraform init -backend-config="prefix=terraform/state-dev-cluster-1" -reconfigure -no-color'
        sh 'terraform apply -var gke_cluster_name="dev-cluster-1" -auto-approve -no-color'
        // Get the GKE cluster context
        tokenC1 = readFile(file: 'token-dev-cluster-1')
        endpointC1 = readFile(file: 'endpoint-dev-cluster-1')
        ctxC1 = "--certificate-authority=ca-dev-cluster-1.pem --server=https://${endpointC1} --token=${tokenC1}"
    }
}
```
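For context, the Terraform configuration applied by this stage could look roughly like the following sketch. Everything in it (project, region, node count, the `local_file` resources) is an illustrative assumption, not our actual configuration; the `local_file` resources simply mirror the endpoint/CA/token files that the pipeline reads afterwards.

```hcl
variable "gke_cluster_name" {
  type = string
}

provider "google" {
  project = "my-gcp-project" # assumed project ID
  region  = "europe-west1"   # assumed region
}

resource "google_container_cluster" "dev" {
  name               = var.gke_cluster_name
  location           = "europe-west1-b" # assumed zone
  initial_node_count = 2                # assumed sizing
}

data "google_client_config" "default" {}

# Write out the endpoint, CA and token so later Jenkins stages can
# build a kubectl context (file names match what the pipeline reads).
resource "local_file" "endpoint" {
  filename = "endpoint-${var.gke_cluster_name}"
  content  = google_container_cluster.dev.endpoint
}

resource "local_file" "ca" {
  filename = "ca-${var.gke_cluster_name}.pem"
  content  = base64decode(google_container_cluster.dev.master_auth[0].cluster_ca_certificate)
}

resource "local_file" "token" {
  filename = "token-${var.gke_cluster_name}"
  content  = data.google_client_config.default.access_token
}
```

The backend `prefix` passed to `terraform init` keeps one remote state per cluster, which is what lets the same configuration manage several clusters independently.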
Once the GKE cluster is up and running, the next step is to provision the necessary tools and utilities, such as CRDs (Custom Resource Definitions). In this case we start with Sealed Secrets, discussed earlier on this blog:
```groovy
stage('Install k8s Sealed-secrets') {
    wrap([$class: "MaskPasswordsBuildWrapper", varPasswordPairs: [[password: tokenC1]]]) {
        // Install pre-defined key and certificate
        withCredentials([file(credentialsId: 'foo', variable: 'K8S_SEALED_SECRETS_KEY')]) {
            sh "kubectl ${ctxC1} apply -f ${K8S_SEALED_SECRETS_KEY}"
        }
        // Install Sealed-secrets
        sh "kubectl ${ctxC1} apply -f sealed-secrets-installer.yml"
    }
}
```
And then ArgoCD itself:
```groovy
stage('Install ArgoCD') {
    wrap([$class: "MaskPasswordsBuildWrapper", varPasswordPairs: [[password: tokenC1]]]) {
        // Apply the ArgoCD namespace
        sh "kubectl ${ctxC1} apply -f argocd-ns.yml"
        // Deploy the ArgoCD manifests
        sh "kubectl ${ctxC1} apply -n argocd -f argocd-installer.yaml"
    }
}
```
Once ArgoCD is installed on the cluster, Kustomize comes into play. We use Kustomize's base-overlays directory structure. The base directories contain the entire stock of available applications: inside each one are all the objects to be deployed, listed in a file called kustomization.yaml. The overlays indicate which base directories should actually be deployed, which makes it possible to pick specific applications depending on the needs of the environment.
As an example of the base-overlays layout, the base/logstash folder contains a deployment, a configmap and a kustomization.yaml.
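A sketch of that layout (only the logstash folders are shown; the real repository also holds the bases and overlays of the other applications):

```
.
├── base
│   └── logstash
│       ├── deployment.yaml
│       ├── configmap.yaml
│       └── kustomization.yaml
└── overlays
    └── logstash
        └── kustomization.yaml
```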
This is the content of the base/logstash/kustomization.yaml file:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - configmap.yaml
```
And the logstash overlay has the following kustomization.yaml:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../../base/logstash
```
ArgoCD deploys K8s objects as indicated in its Application resources ("apps"), which specify details such as the repository, the path within it and the branch to point to. In this example we have three ArgoCD apps, each pointing to an overlay of a repo that contains, among other objects, deployments of Filebeat, Logstash and Elasticsearch:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: filebeat
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: default
  source:
    path: overlays/filebeat
    repoURL: https://gitfoo/repofoo
    targetRevision: master
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logstash
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: default
  source:
    path: overlays/logstash
    repoURL: https://gitfoo/repofoo
    targetRevision: master
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: elasticsearch
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: default
  source:
    path: overlays/elasticsearch
    repoURL: https://gitfoo/repofoo/
    targetRevision: master
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Now Kubernetes objects can be deployed through ArgoCD. The interesting part is to deploy only the Argo applications you actually need at the moment you need GKE, avoiding the cost of resources that would otherwise sit unused. To achieve this, the deployment is parameterised with boolean parameters in the Jenkins job:
```groovy
stage('Filebeat') {
    if (params.filebeat) {
        // Deploy filebeat
        sh "kubectl ${ctxC1} apply -f filebeat-app.yml"
    }
}
stage('Logstash') {
    if (params.logstash) {
        // Deploy logstash
        sh "kubectl ${ctxC1} apply -f logstash-app.yml"
    }
}
stage('Elasticsearch') {
    if (params.elasticsearch) {
        // Deploy elasticsearch
        sh "kubectl ${ctxC1} apply -f elasticsearch-app.yml"
    }
}
```
The Jenkins job exposes one boolean parameter per application: filebeat, logstash and elasticsearch.
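Such parameters can be declared directly in the pipeline. A minimal sketch, assuming the parameter names match the `params.filebeat`, `params.logstash` and `params.elasticsearch` references used in the stages above:

```groovy
// Hypothetical parameter declaration for a scripted pipeline;
// defaults and descriptions are assumptions.
properties([
    parameters([
        booleanParam(name: 'filebeat', defaultValue: false, description: 'Deploy Filebeat'),
        booleanParam(name: 'logstash', defaultValue: false, description: 'Deploy Logstash'),
        booleanParam(name: 'elasticsearch', defaultValue: false, description: 'Deploy Elasticsearch')
    ])
])
```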
When running the job, you simply select the applications you want to deploy.
In this way, the entire GKE deployment is automated, both the cluster itself and the objects on it, while letting you choose which applications will be running.
As soon as the cluster is no longer in use, there is no reason to keep it up, as that would be an unnecessary cost. It is therefore removed by another job, again using Terraform:
```groovy
stage('Destroy Terraform GKE Clusters') {
    withCredentials([file(credentialsId: 'foo', variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
        // Destroy cluster
        sh 'terraform init -backend-config="prefix=terraform/state-dev-cluster-1" -reconfigure -no-color'
        sh 'terraform apply -auto-approve -no-color -destroy'
    }
}
```
The downside is that data persistence is lost. There are alternatives, however, such as using storage external to GKE, for example GCS buckets. In short, this dynamic is ideal for cases in which the environment is only needed at specific times, since it saves costs in a fully automated manner.