ARM64-based Graviton worker nodes in EKS, running a Postgres cluster with a StatefulSet

I am writing this as a series, since covering everything in one blog is difficult. In this post we will look at some reasons to move to ARM64, along with a proof of concept (PoC) of running a Postgres cluster on ARM-based Graviton EC2 instances in AWS. We will work with multi-architecture Docker images to take advantage of the latest AWS Graviton2 processors. The performance and pricing advantages AWS projects over its latest generation of x86-64 instances are too impressive to ignore. In the next blog I will show performance numbers for Postgres on ARM64 compared with AMD64; AWS suggests up to a 40% performance improvement after migration at roughly 20% lower cost (https://aws.amazon.com/blogs/containers/eks-on-graviton-generally-available/). Another blog will follow on migrating from Ubuntu to FreeBSD, which should bring a further ~30% improvement on ARM64.

https://github.com/flomesh-io/fortio/wiki/HOWTO-:-Build-and-run-fortio-on-freebsd13-arm64

Performance: the AWS Graviton marketing page claims Graviton2 processors “provide up to 40% better price performance over comparable current-generation x86-based instances”.
Price: as that quote indicates, AWS claims 40% better price-performance than its Intel and AMD options, coming from a combination of better performance per core and a lower unit cost. On the pricing side, a tangible example:
t3.large (Intel x86) — $0.0832/hour
t4g.large (Graviton2 arm64) — $0.0672/hour
That’s nearly 20% cheaper ((0.0832 − 0.0672) / 0.0832 ≈ 19%).

Let’s also talk briefly about the environmental angle, because climate change is something we should care about. There isn’t space here to do the topic justice, and the bigger problem is well known, so I won’t dwell on it. Some estimates, based on the information available at the time, show potentially substantial power savings at the chip level alone:
m5n (Intel) — 210W
m5a (AMD) — 180W
m6g (Graviton2) — Estimated 80–110W?

Now let’s begin with the basics: what is ARM, as far as Linux is concerned?

An ARM processor is one of a family of CPUs based on the RISC (reduced instruction set computer) architecture developed by Advanced RISC Machines (ARM).
Source: https://frameboxxindore.com/apple/what-is-linux-arm-64-bit.html

Industry support for ARM64 is very active, so a lot of open-source software has already been ported, including the Postgres packages for Kubernetes supported by Bitnami. In one of my development environments I create a separate nodegroup of Graviton worker nodes for EKS, but for the sake of simplicity here I’ll start by creating the control plane in EKS.
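
Before committing to Graviton, it is worth checking that the container images you plan to use actually publish a linux/arm64 variant. A quick way to check (the tag below is just an example; inspect whatever tag you intend to run, as some older Bitnami tags were amd64-only):

docker manifest inspect bitnami/postgresql:latest | grep architecture

If the output lists arm64 alongside amd64, the image is multi-arch and will run on Graviton nodes without any changes.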

Below is the ClusterConfig YAML, saved as “mk-control-plane.yaml”, used to create the EKS cluster.
Note: please update the YAML with your own region, VPC and subnet IDs.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: poc-cluster
  region: my-region-1
vpc:
  id: "vpc-xxxxxxxxxxxxxxa" # (optional, must match VPC ID used for each subnet below)
  subnets:
    private:
      my-region-1a:
        id: "subnet-xxxxxxxxxxxxxxa"
      my-region-1b:
        id: "subnet-xxxxxxxxxxxxxxab"
    public:
      my-region-1b:
        id: "subnet-xxxxxxxxxxxxxxc"
      my-region-1a:
        id: "subnet-xxxxxxxxxxxxxxd"

Use the command below to create the self-managed EKS cluster with eksctl:

eksctl create cluster -f mk-control-plane.yaml --write-kubeconfig --set-kubeconfig-context
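
Cluster creation takes a while; once eksctl returns, you can confirm the cluster exists and that your kubeconfig context works:

eksctl get cluster --region my-region-1
kubectl cluster-info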

At the time of writing, eksctl creates a Kubernetes v1.21 cluster, so I decided to upgrade the Kubernetes version to v1.22 (the latest supported by AWS).
To do that I’ll use the AWS console, and once the upgrade to 1.22 completes I’ll update kube-proxy, aws-node and CoreDNS using the commands below:

eksctl utils update-aws-node --cluster poc-cluster --approve
eksctl utils update-kube-proxy --cluster poc-cluster --approve
eksctl create addon --name coredns --cluster poc-cluster --force
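
You can then confirm the add-ons are running with their updated images using plain kubectl:

kubectl get daemonset aws-node kube-proxy -n kube-system
kubectl get deployment coredns -n kube-system -o wide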

Now let’s add a nodegroup using the YAML below (mk-app-graviton-worker-node.yaml), where I define an r6g (Graviton2) instance type for the worker node:


apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: poc-cluster
  region: my-region-1
nodeGroups:
  - name: ng-poc-cluster
    availabilityZones:
      - my-region-1b
    minSize: 1
    maxSize: 1
    desiredCapacity: 1
    volumeSize: 35
    kubeletExtraConfig:
      kubeReserved:
        cpu: "50m"
        memory: "100Mi"
        ephemeral-storage: "1Gi"
      systemReserved:
        cpu: "50m"
        memory: "100Mi"
        ephemeral-storage: "1Gi"
      evictionHard:
        memory.available: "200Mi"
        nodefs.available: "10%"
      featureGates:
        RotateKubeletServerCertificate: true
    ssh:
      allow: true
      publicKeyName: 'my-publicKeyName'
    tags:
      'environment:basedomain': 'mydomain.com'
    instancesDistribution:
      instanceTypes: ["r6g.large"]
      onDemandBaseCapacity: 1
      onDemandPercentageAboveBaseCapacity: 100
    iam:
      withAddonPolicies:
        autoScaler: true

Run the command below with the above config:

eksctl create nodegroup -f mk-app-graviton-worker-node.yaml
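
Once the node has joined, you can verify that it really is an arm64 (Graviton) node by showing the standard architecture label that kubelet sets on every node:

kubectl get nodes -L kubernetes.io/arch -o wide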

Once the worker node is ready, we start by creating a namespace for the Postgres cluster.

kubectl create namespace graviton
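
The Postgres deployment itself is covered in the next part of the series, but as a preview, here is a minimal sketch of what it will look like with the Bitnami PostgreSQL HA Helm chart, which deploys Postgres as StatefulSets (the release name pg-poc is just illustrative, and you should verify that the chart version you pick publishes arm64 images):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install pg-poc bitnami/postgresql-ha --namespace graviton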

To be continued…