Google Kubernetes Engine — for running containerized applications

Teepika R M
10 min read · Jul 9, 2022

Google App Engine vs Google Kubernetes Engine:

Nowadays developers are shifting toward containerized workflows. Containers are packages of software that contain all the elements necessary to run in any environment. Google Kubernetes Engine (GKE) is a cluster manager and orchestration system for running your Docker containers. Google App Engine (GAE) is essentially Google-managed containers. One important highlight of GKE is that it keeps you independent of the cloud provider, making it easy to migrate your application to your own private cloud or to another cloud provider.

Google App Engine is Platform as a Service, while Google Kubernetes Engine is Containers as a Service. GKE comes with environment configuration and management overhead, whereas App Engine frees the user from managing the environment. GKE gives you fine-grained control over every aspect of your cluster; App Engine lets you run apps with as little configuration and management as possible.

GKE => More Control => More Work

Google App Engine => Less Control => Less Work

Let’s walk through a GKE deployment demo: a containerized web application hosted on Google Kubernetes Engine, relying on Google Cloud MySQL for data storage.

To access a Cloud SQL instance from an application running in Google Kubernetes Engine, you can use either the Cloud SQL Auth proxy (with public or private IP), or connect directly using a private IP address. Cloud SQL Auth proxy is the recommended way to connect. It provides strong encryption and authentication using IAM, which can help keep your database secure.

Demo Scenario:

Deploy a simple containerized web server application to a Google Kubernetes Engine (GKE) cluster. The application uses Google Cloud MySQL for data storage, and the connection from the application to the Cloud SQL instance is secured with the Cloud SQL Auth proxy.

Once you have activated your Google Cloud account and created a project (with billing enabled), follow the steps below.

Create GKE cluster

Step 1: Connect to Cloud Shell and set the project ID you created

gcloud config set project medium-354300

Step 2: Create GKE Cluster

While creating the cluster, make sure you give it enough resources to handle the workload of hosting the application. You can create a GKE cluster either from the Google Cloud console or from Cloud Shell with the gcloud command. For this post, I am using the gcloud command in Cloud Shell. I have also enabled Workload Identity at creation time, but you can update an existing cluster to enable Workload Identity after creation.

Before creating the cluster, enable the Kubernetes Engine API at the following URL:

https://console.cloud.google.com/apis/api/container.googleapis.com/overview

Command used to create GKE cluster:


gcloud beta container --project "medium-354300" clusters create "hello-cluster" \
    --zone "us-west2-a" --no-enable-basic-auth --release-channel "regular" \
    --machine-type "e2-standard-4" --image-type "COS_CONTAINERD" \
    --disk-type "pd-standard" --disk-size "100" \
    --metadata disable-legacy-endpoints=true \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --max-pods-per-node "110" --num-nodes 2 \
    --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM \
    --enable-ip-alias \
    --network "projects/medium-354300/global/networks/default" \
    --subnetwork "projects/medium-354300/regions/us-west2/subnetworks/default" \
    --no-enable-intra-node-visibility --default-max-pods-per-node "110" \
    --enable-autoscaling --min-nodes "0" --max-nodes "1" \
    --no-enable-master-authorized-networks \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
    --enable-autoupgrade --enable-autorepair \
    --max-surge-upgrade 1 --max-unavailable-upgrade 0 \
    --enable-autoprovisioning --min-cpu 4 --max-cpu 8 --min-memory 8 --max-memory 32 \
    --enable-autoprovisioning-autorepair --enable-autoprovisioning-autoupgrade \
    --autoprovisioning-max-surge-upgrade 1 --autoprovisioning-max-unavailable-upgrade 0 \
    --autoscaling-profile optimize-utilization --enable-vertical-pod-autoscaling \
    --enable-shielded-nodes --node-locations "us-west2-a" \
    --workload-pool=medium-354300.svc.id.goog

Once done, run the following command to configure kubectl to use the cluster you created,

gcloud container clusters get-credentials hello-cluster --zone us-west2-a

Create a Google Cloud MySQL Instance with a user set for accessing the tables

Step 1: Create a Cloud SQL instance

Enter the specifications you want for your SQL instance and choose “Create Instance”.

Step 2: Get the instance connection name, which the application uses to connect to the SQL instance

In the overview page of the SQL instance, you can find the Instance Connection Name. Copy it.

Note down the instance connection name, database user, and database password that you create.
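The instance connection name has the form PROJECT:REGION:INSTANCE — for this demo, medium-354300:us-west2:myinstance — and it is exactly what feeds the proxy’s -instances flag later in the deployment manifest. A small illustrative helper (function names are my own, not from the post) to assemble both:

```javascript
// Illustrative helper: a Cloud SQL instance connection name has the form
// PROJECT:REGION:INSTANCE, e.g. medium-354300:us-west2:myinstance.
function connectionName(project, region, instance) {
  return `${project}:${region}:${instance}`;
}

// The Cloud SQL Auth proxy flag used in the deployment manifest is built
// from the connection name plus the local port the proxy listens on.
function proxyInstancesFlag(connName, port) {
  return `-instances=${connName}=tcp:${port}`;
}
```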

Create a database in the above created MySQL instance based on your requirements.

Database name: medium_demo

On the overview page of the MySQL instance, click Databases and create one by entering a name for it.

Create a table in the database created

Step 1: On the overview page, click the “Open Cloud Shell” option to connect to the SQL instance using gcloud. From the shell, you can perform SQL operations on the database you created.

Run the following gcloud sql command to establish the connection:

gcloud sql connect myinstance --user=root --project=medium-354300

A table “users” is created with USER_ID, firstName, and lastName columns.
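The post does not show the exact DDL, so here is a hypothetical sketch of the statements behind that table — the column types and keys are assumptions, only the table and column names come from the post:

```javascript
// Hypothetical sketch: the post only names the table ("users") and its
// columns (USER_ID, firstName, lastName); the types below are assumptions.
const CREATE_USERS_TABLE = `CREATE TABLE IF NOT EXISTS users (
  USER_ID INT NOT NULL AUTO_INCREMENT,
  firstName VARCHAR(255),
  lastName VARCHAR(255),
  PRIMARY KEY (USER_ID)
)`;

// Parameterized insert in the form promise-mysql's query(sql, values) expects.
const INSERT_USER = 'INSERT INTO users (firstName, lastName) VALUES (?, ?)';
```

You would run the CREATE TABLE statement in the gcloud sql session opened above; the insert template is what the application would use at runtime.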

Create a service account with “Cloud SQL Client” permission

A service account is a special kind of account used by an application, rather than a person. Applications use service accounts to make authorized API calls. In our case, it helps to authenticate our connection to the google cloud SQL Instance.

Step 1: Choose “Create Service Account” under IAM & Admin section.

Step 2: Make sure you grant the service account one of the following roles under “Select a Role”:

  • Cloud SQL > Cloud SQL Client
  • Cloud SQL > Cloud SQL Editor
  • Cloud SQL > Cloud SQL Admin

Step 3: Download the private key for the service account: click Actions on the service account created above, choose “Create new key”, select “JSON” as the key type, and click “Create”. The private key file will be downloaded to your system.

Establishing the connection using the Cloud SQL Auth proxy

The Cloud SQL Auth proxy is added to your pod using the sidecar container pattern, letting your application connect to the Google Cloud MySQL instance you created in the steps above. It secures the connection between the application and the instance.
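Because the proxy runs as a sidecar in the same pod, the application connects to it over localhost rather than to the instance’s IP, and the credentials come from the DB_USER/DB_PASS/DB_NAME environment variables injected by the deployment manifest. A minimal sketch of building the promise-mysql connection config (the function name is illustrative):

```javascript
// Sketch: build a promise-mysql connection config for the sidecar proxy.
// The proxy listens on 127.0.0.1:3306 inside the pod (matching the
// "-instances=...=tcp:3306" flag in the deployment manifest); credentials
// come from the env vars populated by the cloud-sql-secret Secret.
function buildDbConfig(env) {
  return {
    host: '127.0.0.1', // the sidecar proxy, not the Cloud SQL instance IP
    port: 3306,        // must match tcp:3306 in the proxy command
    user: env.DB_USER,
    password: env.DB_PASS,
    database: env.DB_NAME,
  };
}

// Usage with promise-mysql (not imported here, to keep the sketch
// self-contained): mysql.createPool(buildDbConfig(process.env))
```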

The following needs to be done before starting the Auth proxy setup:

  1. Note down Instance Connection Name of the Cloud MySQL Instance. We can find the details in the overview of the Instance page
  2. Note down the location of the private key file associated with the service account created with proper privileges.
  3. Enable the Cloud SQL Admin API using the following url, https://console.cloud.google.com/flows/enableapi?apiid=sqladmin

Provide the service account to the Cloud SQL Auth proxy

An identity is needed to represent your application when it communicates with Google Cloud services, and a service account provides that identity. We need to configure GKE to provide the service account to the Cloud SQL Auth proxy, since all communication from the application to the MySQL instance goes through the proxy. There are two recommended ways to do this: Workload Identity or a service account key file. For this demo, we follow the Workload Identity method.

Workload Identity method

This method binds a Kubernetes Service Account (KSA) to a Google Service Account (GSA). A Google Service Account (GSA) is an IAM identity that represents your application in Google Cloud; similarly, a Kubernetes Service Account (KSA) is an identity that represents your application in a Google Kubernetes Engine cluster.

Workload Identity binds a KSA to a GSA, causing any deployments with that KSA to authenticate as the GSA in their interactions with Google Cloud.

Step 1: Enable Workload Identity for your cluster, if it was not enabled during creation

Run the following command in Cloud Shell to update the cluster configuration:

gcloud container clusters update hello-cluster \
    --zone=us-west2-a \
    --workload-pool=medium-354300.svc.id.goog

Step 2: Create a new node pool

gcloud container node-pools create nodepool \
    --cluster=hello-cluster \
    --zone=us-west2-a \
    --workload-metadata=GKE_METADATA

Step 3: Create a Kubernetes service account for your application to use

Run this command in Cloud Shell to create the KSA:

kubectl create serviceaccount gkeclusterksa --namespace default

Step 4: Add the IAM binding between your GSA (service-account-cloud-sql) and your KSA (gkeclusterksa):

gcloud iam service-accounts add-iam-policy-binding \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:medium-354300.svc.id.goog[default/gkeclusterksa]" \
    service-account-cloud-sql@medium-354300.iam.gserviceaccount.com \
    --project="medium-354300"

Step 5: Add an annotation to the KSA (gkeclusterksa) to complete the binding

kubectl annotate serviceaccount gkeclusterksa \
    iam.gke.io/gcp-service-account=service-account-cloud-sql@medium-354300.iam.gserviceaccount.com

Build and push the Docker image to Google container registry

Step 1: Enable the Cloud Build and Artifact Registry APIs

Step 2: Create a directory in Cloud Shell with the source code and its dependencies listed in package.json, for building the source image

Step 3: Create a Dockerfile with steps to build

Step 4: Make the source file executable

chmod +x check_deployement.js

Step 5: Build and push the Docker image to Google container registry

gcloud builds submit --tag gcr.io/medium-354300/initial_backend_v1

Please find the files I used for development below, substituting your own values as needed.

package.json

{
  "name": "cloudsql-mysql-mysql",
  "description": "Node.js Cloud SQL MySQL Connectivity Sample",
  "private": true,
  "license": "Apache-2.0",
  "author": "Google Inc.",
  "repository": {
    "type": "git",
    "url": "https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git"
  },
  "engines": {
    "node": ">=10.0.0"
  },
  "scripts": {
    "start": "node check_deployement.js",
    "deploy": "gcloud app deploy",
    "lint": "samples lint",
    "pretest": "npm run lint",
    "test": "samples test app"
  },
  "dependencies": {
    "@google-cloud/logging-winston": "^4.0.0",
    "express": "^4.17.1",
    "promise-mysql": "^5.0.0",
    "prompt": "^1.0.0",
    "pug": "^3.0.0",
    "winston": "^3.1.0"
  },
  "devDependencies": {
    "mocha": "^8.0.0",
    "proxyquire": "^2.1.0",
    "sinon": "^10.0.0",
    "supertest": "^6.0.0"
  }
}

Dockerfile:

FROM node:16

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "node", "check_deployement.js" ]

proxy_with_workload_identity.yaml:

# [START cloud_sql_proxy_k8s_sa]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-cloud-sql
spec:
  selector:
    matchLabels:
      app: initialbackend
  template:
    metadata:
      labels:
        app: initialbackend
    spec:
      serviceAccountName: gkeclusterksa
      # [END cloud_sql_proxy_k8s_sa]
      # [START cloud_sql_proxy_k8s_secrets]
      containers:
      - name: initialbackend
        image: gcr.io/medium-354300/initial_backend_v1
        ports:
        - containerPort: 8080
        # ... other container configuration
        env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloud-sql-secret
              key: username
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: cloud-sql-secret
              key: password
        - name: DB_NAME
          valueFrom:
            secretKeyRef:
              name: cloud-sql-secret
              key: database
      # [END cloud_sql_proxy_k8s_secrets]
      # [START cloud_sql_proxy_k8s_container]
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.29.0
        command:
        - "/cloud_sql_proxy"
        # If connecting from a VPC-native GKE cluster, you can use the
        # following flag to have the proxy connect over private IP
        # - "-ip_address_types=PRIVATE"
        - "-log_debug_stdout"
        # Replace DB_PORT with the port the proxy should listen on
        # Defaults: MySQL: 3306, Postgres: 5432, SQLServer: 1433
        - "-instances=medium-354300:us-west2:myinstance=tcp:3306"
        securityContext:
          # The default Cloud SQL proxy image runs as the
          # "nonroot" user and group (uid: 65532) by default.
          runAsNonRoot: true
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "4Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "2"
      # [END cloud_sql_proxy_k8s_container]

Make the deployment in GKE cluster

kubectl apply -f proxy_with_workload_identity.yaml

Expose the deployment to make it accessible to the outside world

kubectl expose deployment kubernetes-cloud-sql --name my-service --type LoadBalancer --port 8080 --target-port 8080

Note down the external IP and send requests to that address to retrieve the results!

Happy learning! Please leave a comment with any questions or thoughts :)


Teepika R M

AWS Certified Big Data Specialty| Linux Certified Kubernetes Application Developer| Hortonworks Certified Spark Developer|Hortonworks Certified Hadoop Developer