This is the seventh tutorial from the Kubernetes Tutorial Series. In this article we will learn one more way to expose our application and that is Ingress. Other articles from this series:
- Kubernetes Tutorial Series: Kubernetes Architecture and Installation
- Kubernetes Tutorial Series: Kubernetes Objects
- Kubernetes Tutorial Series: Storage in Kubernetes
- Kubernetes Tutorial Series: Resource Allocation for Containers
- Kubernetes Tutorial Series: Autoscaling in Kubernetes
- Kubernetes Tutorial Series: RBAC in Kubernetes
We have already looked at how to expose an application to the outside world using Services of type NodePort or LoadBalancer. But that approach is neither scalable nor cheap: imagine paying for a separate LoadBalancer for every application you want to expose to the outside world; it can quickly empty your pocket. Kubernetes provides another object, called Ingress, that manages external access to the Services in a cluster, typically HTTP. Ingress can provide load balancing, SSL termination and name-based virtual hosting.
What is Ingress?
An Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress can be configured to give Services externally reachable URLs, load balance traffic, terminate SSL, and offer name-based virtual hosting. An Ingress Controller (installation steps below) is responsible for fulfilling the Ingress, usually with a load balancer. For an Ingress resource to have any effect, the cluster must have an Ingress Controller running.
Ingress Controller Installation
We will use helm to install the Ingress Controller. Follow the steps below to install it using helm.
```shell
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account=tiller
$ helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
```
When you run the kubectl get pods command, you should see the Ingress Controller pods shown below:
And you should also see a Load Balancer through which traffic will be routed to your application.
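If you prefer to check this from the command line, something like the following should work. This is a sketch: the label selector and Service name assume the chart's defaults and the my-nginx release name used above, so adjust them if your release differs.

```shell
# List the pods created by the chart (the controller and the default backend).
kubectl get pods -l app=nginx-ingress

# Fetch the external address of the controller's LoadBalancer Service.
# The Service name below assumes the helm release was called "my-nginx".
kubectl get svc my-nginx-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

On cloud providers the load balancer address can take a few minutes to appear, so re-run the second command until it prints a hostname.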
Diving Deeper into Ingress
Below is a sample Ingress Resource:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
```
Let’s decode the above yaml file:
- apiVersion, kind and metadata are similar to any other Kubernetes Object.
- The rules under spec are where the action happens.
- An optional host. If no host is specified, the rule applies to all inbound HTTP traffic through the specified IP address. If a host is provided (for example, foo.bar.com), the rules apply to that host.
- A list of paths (for example, /foo), each of which has an associated backend defined with a serviceName and servicePort. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
- A backend is a combination of Service and port names, as described in the Services documentation. HTTP (and HTTPS) requests to the Ingress that match the host and path of a rule are sent to the listed backend.
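To see this matching in practice, you can send a request with an explicit Host header to the controller's load balancer. This is a sketch against the example above: <LB-ADDRESS> is a placeholder for your load balancer's address, and foo.bar.com, service1 and service2 exist only in the sample manifest.

```shell
# Matches host foo.bar.com and path /foo, so it is routed to service1:4200.
curl -H "Host: foo.bar.com" http://<LB-ADDRESS>/foo

# Matches host foo.bar.com and path /bar, so it is routed to service2:8080.
curl -H "Host: foo.bar.com" http://<LB-ADDRESS>/bar
```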
Default Backend: An Ingress with no rules sends all traffic to a single default backend. The default backend is typically a configuration option of the Ingress Controller and is not specified in your Ingress resources. This is the second pod that was spun up above along with the Ingress Controller.
If none of the hosts or paths in your Ingress objects match the HTTP request, the traffic is routed to the default backend, as in the screenshot below.
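You can trigger this yourself by requesting a path that no rule covers. A sketch, assuming <LB-ADDRESS> is your load balancer's address; the exact response body depends on the default backend image, but it usually answers with an HTTP 404.

```shell
# No Ingress rule matches this path, so the default backend answers,
# typically with "default backend - 404".
curl -i http://<LB-ADDRESS>/does-not-exist
```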
Let’s deploy an application and see this in action. Run the commands below to spin up two applications and two Services, named web and web2, which expose them.
```shell
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
kubectl expose deployment web --target-port=8080 --type=NodePort
kubectl run web2 --image=gcr.io/google-samples/hello-app:2.0 --port=8080
kubectl expose deployment web2 --target-port=8080 --type=NodePort
```
After the applications and their corresponding Services are deployed, we will deploy the Ingress resource, which will route traffic to these applications based on the path.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/*
        backend:
          serviceName: web2
          servicePort: 8080
```
Save the above file as ingress.yaml and deploy it:
kubectl create -f ingress.yaml
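You can then confirm that the Ingress was created and picked up by the controller. A sketch; the ADDRESS column can take a minute or two to populate after creation.

```shell
# Shows the rules and the address the controller assigned to this Ingress.
kubectl get ingress fanout-ingress

# Prints the full rule-to-backend mapping and recent events for debugging.
kubectl describe ingress fanout-ingress
```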
Once everything is set up and deployed, point your browser at the Load Balancer that was created above when you deployed the Ingress Controller, for example a0182286d159911e9b005062ec25139c-377918949.ap-southeast-1.elb.amazonaws.com. You should see version 1 of the application.
Change the URL by appending /v2/, for example a0182286d159911e9b005062ec25139c-377918949.ap-southeast-1.elb.amazonaws.com/v2/. Now you should see version 2 of the application.
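The same check works from the command line. A sketch, with <LB-ADDRESS> standing in for your load balancer's address; the hello-app sample replies with its version and pod hostname, so the two responses should differ.

```shell
# The root path is served by the web Service (hello-app 1.0)...
curl http://<LB-ADDRESS>/

# ...while /v2/ is served by the web2 Service (hello-app 2.0).
curl http://<LB-ADDRESS>/v2/
```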
With this, you have learnt all about Ingress in Kubernetes.
Please let me know if you have any queries in the comments section below.