Kubernetes can be installed on top of various cloud infrastructures, such as AWS EKS or Google GCE. However, each cloud infrastructure has its own way of managing the network, and it is virtually impossible for Kubernetes to support all of them natively. Kubernetes solves this problem through interface modules called cloud providers. Each cloud provider implements the cloud provider interface for its own environment, and Kubernetes uses this interface to configure load balancer rules, nodes, and networking routes.
However, sometimes you may use EKS but need the features of another load balancer instead of those provided by ELB. To cater to this case, Kubernetes provides a loadBalancerClass field in the Service spec, generally available from version 1.24. By setting the loadBalancerClass field, you can use a load balancer other than the default set by the cloud provider.
For services with loadBalancerClass set, the default cloud load balancer ignores the service. To make use of loadBalancerClass, the user needs a component that watches for such services and performs the corresponding tasks, e.g. an application that connects to another load balancer and configures the rules on it.
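For illustration, opting a Service out of the default cloud load balancer only takes one extra field in its spec. This is a minimal sketch; the class name shown is the one kube-loxilb watches for, and the service/selector names are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb   # default implementation (e.g. ELB) will skip this service
  selector:
    app: example             # hypothetical selector
  ports:
  - port: 80
```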
LoxiLB provides the kube-loxilb application to support the LoadBalancerClass. In this post, we will see how to deploy kube-loxilb to EKS and set up LoxiLB.
kube-loxilb
kube-loxilb is an application deployed in Kubernetes as a Deployment. It monitors k8s service creation events and checks whether LoadBalancerClass is specified in the load balancer service specification. If the LoadBalancerClass value is "loxilb.io/loxilb", kube-loxilb allocates an External IP and configures the service in the LoxiLB load balancer.
Topology
This is the topology used for the setup:
We created an EKS cluster consisting of 3 nodes and a LoxiLB node that acts as the load balancer. To enable access to the topology from the outside, AWS grants a publicly accessible IP, which is set on the LoxiLB node's eth0 interface. (LoadBalancerClass is supported in Kubernetes version 1.24 or higher.) Each node runs Ubuntu 20.04 LTS.
Pre-requisites
This post assumes that aws-cli and eksctl are installed on your bastion node. kubectl is used to create services and inspect Kubernetes resources. Also, LoxiLB is deployed as a docker container, so docker must be installed on the LoxiLB node.
Install eksctl
We will install eksctl locally to provision EKS. First, install the AWS CLI.
sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Then add the user's access information to the AWS CLI with the command below. For region, we will use ap-northeast-2 in this post.
aws configure
AWS Access Key ID [None]: ~~~
AWS Secret Access Key [None]: ~~~
Default region name [None]: ap-northeast-2
Default output format [None]: json
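Before provisioning anything, you can confirm the credentials work by querying the caller identity. This is only a sanity-check sketch; it requires valid AWS credentials, so it cannot be run offline:

```shell
# Prints the account ID, user ID, and ARN of the configured credentials.
# If this fails, re-run `aws configure` before continuing.
aws sts get-caller-identity
```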
Now install eksctl:
# for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7`
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
# (Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
sudo mv /tmp/eksctl /usr/local/bin
Create EKS cluster
Let's create the EKS cluster using eksctl with this command:
eksctl create cluster \
--version 1.31 \
--name loxilb-demo \
--vpc-nat-mode Single \
--region ap-northeast-2 \
--node-type t3.medium \
--nodes 3 \
--with-oidc \
--ssh-access \
--ssh-public-key aws-netlox \
--managed
--version : Specifies the Kubernetes version.
--name : Specifies the EKS cluster name.
--vpc-nat-mode : Sets the VPC NAT mode; Single uses one shared NAT gateway for the whole VPC.
--ssh-public-key : Specifies the key for SSH access.
Update kube config
aws eks update-kubeconfig --region ap-northeast-2 --name loxilb-demo
You can get information via kubectl get nodes command.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-29-103.ap-northeast-2.compute.internal Ready <none> 46m v1.24.9-eks-49d8fe8
ip-192-168-54-159.ap-northeast-2.compute.internal Ready <none> 46m v1.24.9-eks-49d8fe8
ip-192-168-91-227.ap-northeast-2.compute.internal Ready <none> 46m v1.24.9-eks-49d8fe8
Create LoxiLB node
Once the EKS cluster is deployed, we can create the LoxiLB node. We will spawn a new EC2 instance of type t2.large running Ubuntu 20.04. Since traffic coming from outside must pass through the LoxiLB node and reach EKS after load balancing, place the instance in the same VPC as the EKS cluster. The subnet will use the public network of the EKS cluster.
Also create a security group for external access. In this post, we will open the SSH port (22) and port 8765 for external connections.
For communication between LoxiLB nodes and EKS nodes, add inbound rules to allow all traffic from loxilb-node-sg to security groups of EKS.
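The same inbound rule can also be added with the aws cli. This is only a sketch and requires AWS access; both security group IDs below are placeholders for your actual EKS node security group and loxilb-node-sg:

```shell
# Allow all traffic from loxilb-node-sg (placeholder sg-LOXILB0000000)
# into the EKS node security group (placeholder sg-EKSNODES000000).
aws ec2 authorize-security-group-ingress \
  --group-id sg-EKSNODES000000 \
  --protocol all \
  --source-group sg-LOXILB0000000
```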
For external access, an Elastic IP has been allocated.
You can now SSH into the LoxiLB node using the AWS key file.
ssh -i aws-netlox.pem ubuntu@15.164.9.61
Install docker on LoxiLB nodes
#!/bin/bash
sudo apt-get update && sudo apt-get install -y snapd
sudo snap install docker
Running LoxiLB on LoxiLB Nodes
The LoxiLB node can be accessed via the public IP provided by AWS.
Run the LoxiLB docker container on the LoxiLB node with the following command:
sudo docker run -u root --cap-add SYS_ADMIN --net=host --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest
Note:
Confirm that the loxilb EC2 instance is running properly in the Amazon AWS console or using the aws cli.
Disable the source/dest check of the loxilb EC2 instance.
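Disabling the source/dest check can also be done from the cli. This is a sketch that requires AWS access; the instance ID is a placeholder:

```shell
# Turn off the source/destination check so the instance may forward
# traffic that is not addressed to itself (required for load balancing).
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```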
Deploy kube-loxilb on k8s
Next, download kube-loxilb from GitHub.
Please make sure that kubectl is available on the node where you are downloading kube-loxilb.
ubuntu@loxilb:~$ git clone https://github.com/loxilb-io/kube-loxilb.git
Cloning into 'kube-loxilb'...
remote: Enumerating objects: 68, done.
remote: Counting objects: 100% (68/68), done.
remote: Compressing objects: 100% (56/56), done.
remote: Total 68 (delta 10), reused 50 (delta 3), pack-reused 0
Unpacking objects: 100% (68/68), 57.74 KiB | 3.40 MiB/s, done.
Go to the kube-loxilb/manifest/ext-cluster directory and open the kube-loxilb.yaml file.
ubuntu@loxilb:~$ cd kube-loxilb/manifest/ext-cluster
ubuntu@loxilb:~/kube-loxilb/manifest/ext-cluster$ vi kube-loxilb.yaml
Locate the kube-loxilb Deployment entry in the kube-loxilb.yaml file. Under spec.template.spec.containers[].args you will find the loxiURL, cidrPools, and setLBMode settings. You must modify these options before deploying kube-loxilb.
      terminationGracePeriodSeconds: 0
      containers:
      - name: kube-loxilb
        image: ghcr.io/loxilb-io/kube-loxilb:latest
        imagePullPolicy: Always
        command:
        - /bin/kube-loxilb
        args:
        - --loxiURL=http://12.12.12.1:11111,http://14.14.14.1:11111
        - --cidrPools=defaultPool=123.123.123.1/24
        #- --setBGP=true
        #- --setLBMode=1
        #- --config=/opt/loxilb/agent/kube-loxilb.conf
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
A little explanation about the three settings:
loxiURL : Specifies the LoxiLB API server address, in the form http://{LoxiLB node IP}:11111. Multiple comma-separated entries can be specified at the same time (used when configuring multiple LoxiLB instances for HA clustering).
--cidrPools=defaultPool : Specifies the name and range of the global IP CIDR pool used to assign externally accessible IP addresses to load balancer services. The IP range set in cidrPools is assigned to the ExternalIP of load balancer services managed by LoxiLB. It can also be set to simply 0.0.0.0/32, which means load balancing will be performed on any of the nodes where LoxiLB runs; in that case, the decision of which LoxiLB node/instance serves as ingress can be made by Route53/DNS.
setLBMode : Specifies the NAT mode of the load balancer. Read more here about the supported modes.
In the topology for this post, the LoxiLB node has the IP 192.168.3.68, so we use this address in loxiURL. We modified the kube-loxilb.yaml file accordingly.
        args:
        - --loxiURL=http://192.168.3.68:11111
        - --cidrPools=defaultPool=0.0.0.0/32
        #- --setBGP=true
        - --setLBMode=5
        #- --config=/opt/loxilb/agent/kube-loxilb.conf
Modify the options and then deploy kube-loxilb using kubectl.
root@loxilb:/home/ubuntu/kube-loxilb/manifest# kubectl apply -f kube-loxilb.yaml
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created
You can verify that the Deployment has been created in the kube-system namespace of k8s with the following command:
root@loxilb:/home/ubuntu/kube-loxilb/manifest# kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 4d2h
kube-loxilb 1/1 1 1 113s
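If the Deployment does not become ready, the kube-loxilb logs usually show whether it can reach the configured loxiURL. A sketch, assuming kubectl access to the cluster:

```shell
# Tail the kube-loxilb logs to check connectivity to the LoxiLB API server.
kubectl -n kube-system logs deployment/kube-loxilb
```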
Create a service using LoadBalancerClass
You can now use LoxiLB as a load balancer by specifying a loadBalancerClass. For testing, let's create an nginx.yaml file as follows:
root@loxilb:/home/ubuntu# vi nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb1
  annotations:
    loxilb.io/usepodnetwork: "yes"
spec:
  externalTrafficPolicy: Local
  loadBalancerClass: loxilb.io/loxilb
  selector:
    what: nginx-test
  ports:
    - port: 55002
      targetPort: 80
  type: LoadBalancer
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
  labels:
    what: nginx-test
spec:
  containers:
  - name: nginx-test
    image: nginx:stable
    ports:
    - containerPort: 80
The nginx.yaml file creates the nginx-lb1 load balancer service and one pod associated with it. The service specifies loxilb.io/loxilb as its loadBalancerClass. With that class set, kube-loxilb detects the creation of the service and configures it on the LoxiLB load balancer.
Create a service with the following command:
root@loxilb:/home/ubuntu# kubectl apply -f nginx.yaml
service/nginx-lb1 created
pod/nginx-test created
Since we set cidrPools to 0.0.0.0/32, the special value llbanyextip is assigned as the ExternalIP, meaning any node running LoxiLB can serve the traffic:
root@loxilb:/home/ubuntu# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 4d2h
nginx-lb1 LoadBalancer 10.100.124.111 llbanyextip 55002:30055/TCP 4m37s
On the LoxiLB node, the following command confirms that the load balancer rule has been created in LoxiLB as well:
root@loxilb:/home/ubuntu# docker exec -ti loxilb loxicmd get lb -o wide
Check service access from outside
Now you can access the k8s service using the LoxiLB node's external IP (15.164.9.61, the public IP given by AWS in this example). When a packet arrives inside AWS, it is NATed to the LoxiLB local IP / service IP (192.168.3.68).
Let's test the external connection using curl as follows:
MacBookAir ~ % curl http://15.164.9.61:8765
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We can see that nginx pods can be accessed successfully from outside.
We hope you liked our blog. For more information, please visit our GitHub page.
[UPDATE] : For the latest information about installation steps, please refer to this guide.