Services and Networking
Kubernetes Services are essential for ensuring reliable communication between Pods. They abstract the complexities of networking and provide stable endpoints for applications.
Introduction to Kubernetes Services
Why Use Services?
Pods in Kubernetes are ephemeral; they can be created, destroyed, and rescheduled at any time due to various events such as scaling operations, rolling updates, rollbacks, and failures. This makes direct communication with Pods unreliable. Kubernetes Services address this issue by providing a stable endpoint for communication.
How Services Work
Services in Kubernetes provide a front end (DNS name, IP address, and port) that remains constant regardless of the state of the Pods behind it. They use label selectors to dynamically route traffic to healthy Pods that match the specified criteria.
Service Discovery
Kubernetes offers two primary modes of service discovery:
- Environment Variables: When a Pod is created, the kubelet adds environment variables for each active Service. These variables are accessible within the Pod and provide the Service's cluster IP and port (see the example after this list).
- DNS: Kubernetes includes a DNS server that automatically assigns DNS names to Services. Pods can use these DNS names to communicate with Services.
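For example, a Service named redis-master exposing port 6379 (an illustrative name and values) would be visible inside Pods as environment variables like:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
Note that environment variables are only injected into Pods created after the Service exists; DNS-based discovery has no such ordering constraint.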
Endpoint Management
Services use Endpoints and EndpointSlice objects to track the IP addresses of the Pods that match their label selector. The kube-proxy component on each node watches for changes to Service and EndpointSlice objects and updates the node's iptables (or IPVS) rules accordingly, ensuring traffic is correctly routed.
Load Balancing
Kubernetes Services provide built-in load balancing across the Pods they manage. The rules programmed by kube-proxy distribute incoming requests across the available Pods: in the default iptables mode a backend is chosen at random, while IPVS mode supports round-robin and other algorithms.
Types of Kubernetes Services
Kubernetes supports several types of Services, each suited to different use cases:
ClusterIP
Key Points:
- Internal IP and DNS name are automatically created.
- Accessible only from within the cluster (i.e. Pod to Pod).
- Ideal for internal applications that do not need external access.
Label Selector Behavior
When a Service uses label selectors to find Pods (e.g., project: ab), Pods must have ALL the labels specified in the selector to receive traffic. However, Pods can have additional labels beyond those required by the selector and will still receive traffic. This means that selectors work as an "AND" operation, not an "OR" operation.
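As a quick sketch (the labels are illustrative), consider a selector that specifies two labels:
# Service selector: a Pod must carry BOTH labels to receive traffic.
selector:
  app: my-app
  tier: web
# Matches:        a Pod labeled app: my-app, tier: web, release: stable (the extra label is fine)
# Does not match: a Pod labeled only app: my-app (tier: web is missing)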
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
NodePort
The traffic flow looks like this:
- An external client hits a node on the NodePort.
- The node forwards the request to the ClusterIP of the Service.
- The Service picks a Pod from the list of healthy Pods in the EndpointSlice.
- The Pod receives the request.
Key Points:
- Allocates a port from a configurable range (default: 30000-32767).
- Accessible externally via <NodeIP>:<NodePort>.
- Useful for exposing applications for development and testing purposes.
Important NodePort Behavior
When you access a NodePort service, you can connect to any node in the cluster on the NodePort, even if the target Pod is not running on that specific node. Kubernetes will automatically route the traffic to the appropriate Pod, regardless of which node it's running on.
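A minimal NodePort manifest might look like this (a sketch; the explicit nodePort value is illustrative and can be omitted to let Kubernetes allocate one from the range):
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80          # the Service's port inside the cluster
      targetPort: 8080  # the container port on the Pods
      nodePort: 30080   # the port opened on every node (default range 30000-32767)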
LoadBalancer
The traffic flow looks like this:
- An external client hits the LoadBalancer Service on a friendly DNS name.
- The load balancer forwards the request to a NodePort.
- The node forwards the request to the ClusterIP of the Service.
- The Service picks a Pod from the EndpointSlice.
- The request is forwarded to the selected Pod.
Key Points:
- Automatically provisions an external load balancer.
- Provides a single IP address for external access.
- Suitable for production environments where high availability is required.
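A LoadBalancer manifest is nearly identical to the ClusterIP example; a sketch, assuming the cluster runs on a cloud provider that can provision external load balancers:
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer  # the cloud provider provisions the external load balancer
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Until provisioning completes, kubectl get svc shows the EXTERNAL-IP as <pending>.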
ExternalName
Key Points:
- Does not use kube-proxy.
- Maps Service to an external DNS name.
- Useful for integrating external services into a cluster.
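A sketch of an ExternalName Service (the external hostname is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.example.com  # cluster DNS returns a CNAME to this name
Pods can then address the external system as my-database, as if it were an in-cluster Service.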
Comparison of Service Types
| Service Type | Internal Access | External Access | Use Case |
|---|---|---|---|
| ClusterIP | Yes | No | Internal applications |
| NodePort | Yes | Yes (via NodeIP) | Development and testing |
| LoadBalancer | Yes | Yes | Production environments with high availability |
| ExternalName | No | Yes (via DNS) | Integrating external services |
Service Discovery
Here is how an application discovers another application behind a Service:
- The new Service is registered with the cluster DNS (the service registry).
- Your application wants to know the IP address of the Service, so it provides the name to the cluster DNS for lookup.
- The cluster DNS returns the IP address of the Service.
- Your application now knows where to direct its request.
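Concretely, each Service is registered as <service-name>.<namespace>.svc.<cluster-domain> (cluster.local by default), so a lookup from inside a Pod might look like this (names are illustrative):
# The short name resolves from the same namespace; the FQDN resolves from anywhere.
nslookup eggs-svc
nslookup eggs-svc.default.svc.cluster.local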
Practical Example of Service Discovery
Assume we have two applications on the same cluster - ham and eggs. Each application has its Pods fronted by a Service, and each Service has its own ClusterIP.
Example output:
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
ham-svc    ClusterIP   192.168.1.200   <none>        443/TCP   5d19h
eggs-svc   ClusterIP   192.168.1.208   <none>        443/TCP   5d19h
For ham to communicate with eggs, it needs to know two things:
- The name of the eggs application's Service (eggs-svc).
- How to convert that name to an IP address.
Steps for Service Discovery:
- The application sends its request to the Service's ClusterIP; the container's default gateway routes the traffic to the node it is running on.
- The node itself does not have a route to the Service network either, so it sends the traffic up to its own kernel.
- The node's kernel recognizes traffic intended for the Service network and, using the rules maintained by kube-proxy, redirects it to a healthy Pod that matches the label selector of the Service.
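To see which Pod IPs a Service can currently route to, inspect its EndpointSlices (the Service name is illustrative):
# EndpointSlices carry a standard label pointing back to their Service.
kubectl get endpointslices -l kubernetes.io/service-name=eggs-svc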
Networking in Kubernetes
Networking is a fundamental aspect of Kubernetes, enabling communication between various components within a cluster and with the outside world. This section covers the Container Network Interface (CNI) and popular CNI plugins, as well as network policies for controlling pod communication.
Container Network Interface (CNI)
What is CNI?
The Container Network Interface (CNI) is a specification and a set of libraries for configuring network interfaces in Linux containers. It ensures that when a container is created or deleted, its network resources are allocated and cleaned up properly.
Role of CNI in Kubernetes
Kubernetes uses CNI to manage networking for Pods. When a Pod is created, the CNI plugin is responsible for assigning the Pod an IP address, setting up the network interface, and ensuring connectivity both within the cluster and externally.
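CNI plugins are configured through files that each node's container runtime reads, conventionally under /etc/cni/net.d/. As a sketch, a minimal configuration for the reference bridge plugin with host-local IPAM might look like this (all values are illustrative):
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}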
Popular CNI Plugins
Several CNI plugins are widely used in Kubernetes environments. Each offers different features and capabilities.
Calico
Calico provides secure network connectivity for containers, virtual machines, and native host-based workloads. It supports a range of features, including:
- Network Policy Enforcement: Allows you to define and enforce network policies.
- BGP for Routing: Uses Border Gateway Protocol (BGP) for high-performance routing.
- IP-in-IP and VXLAN Encapsulation: Supports various encapsulation methods for different networking needs.
Flannel
Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes. It creates an overlay network that allows Pods on different nodes to communicate with each other.
Weave Net
Weave Net provides a simple and secure network for Kubernetes clusters. It supports automatic encryption of Pod traffic and can be used to create a flat network topology.
Understanding Overlay Networking in Kubernetes
Overlay networking is a fundamental concept in Kubernetes that allows for the seamless communication of pods across different nodes within a cluster. This approach abstracts the underlying network infrastructure, providing a virtual network that connects all pods regardless of their physical location.
Key Components of the Overlay Network
- Node Network: The physical network where the Kubernetes nodes are deployed.
- Pod Network: A logically separate, private CIDR block distinct from the node network.
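For example (illustrative addressing), the nodes might sit on 192.168.1.0/24 while Pods draw addresses from a separate 10.244.0.0/16 block. The Pod CIDR is typically fixed when the cluster is created:
# kubeadm example: reserve 10.244.0.0/16 for the Pod network
kubeadm init --pod-network-cidr=10.244.0.0/16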
Network Policies
Network Policies allow you to control the communication between Pods. They define rules that specify what traffic is allowed to and from Pods.
Creating Network Policies
Network Policies are created using YAML configuration files that specify the allowed traffic.
Example YAML for Network Policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      role: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: backend
This configuration allows ingress traffic to Pods with the label "role: frontend" from Pods with the label "role: backend." Note that Network Policies are only enforced if the cluster's CNI plugin supports them; Calico does, for example, while Flannel on its own does not.
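Also be aware that once a Pod is selected by any policy that includes the Ingress policy type, all ingress traffic not explicitly allowed is denied. A common pattern builds on this with a namespace-wide default-deny policy (a standard sketch; the empty podSelector selects every Pod in the namespace):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}  # selects all Pods in the namespace
  policyTypes:
    - Ingress      # no ingress rules are listed, so all ingress is denied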
Ingress
Understanding Ingress
Ingress allows external HTTP and HTTPS traffic to access services within the cluster. It provides a single entry point for multiple services and can manage SSL termination, load balancing, and name-based virtual hosting.
Configuring Ingress
- Create an Ingress Resource: Define rules for routing traffic to services.
- Use an Ingress Controller: Deploy an Ingress controller to manage traffic according to the rules.
- TLS Configuration: Secure traffic using TLS by specifying certificates in the Ingress resource.
Example Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
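To add TLS termination to the same resource, reference a TLS Secret (a sketch; example-tls is a placeholder Secret that must contain tls.crt and tls.key):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls  # Secret of type kubernetes.io/tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80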
Advanced Ingress Configuration
Ingress can be configured for advanced use cases, such as:
- Path-Based Routing: Direct traffic based on URL paths to different services.
- Name-Based Virtual Hosting: Host multiple domains on the same IP address.
- Load Balancing: Distribute traffic across multiple backend services.
Example: Path-Based Routing
Here's how to configure path-based routing with Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-example
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /service1
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
          - path: /service2
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80
This configuration routes traffic to service1 and service2 based on the URL path.
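Name-based virtual hosting follows the same pattern but distinguishes requests by host rather than by path (the hostnames are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-example
spec:
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80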