Kubernetes Service type: LoadBalancer with MetalLB

What's in this guide?

When you use Kubernetes, you’ll probably want to use a load balancer service when you’re exposing services to the internet. In the public cloud, you would simply spin up a network load balancer resource to give you a single IP address that will forward all traffic to your service.

When you use Equinix Metal, it’s up to you to set up your load balancer, but the good news is that MetalLB works really well to do what you need. It’s an open-source implementation that hooks into your Kubernetes cluster and provides a network load balancer.

NOTE: Equinix Metal also has a Kubernetes Cloud Controller Manager (CCM) integration which makes this process quicker. You may like to check out the CCM docs for more information.

You will need

  • An Equinix Metal account
  • Equinix Metal servers

Understanding MetalLB

MetalLB has two features that work together to provide this service: Address Allocation and External Announcement.

Address Allocation

In a cloud-enabled Kubernetes cluster, you request a load balancer and your cloud platform assigns an IP address to you. In a bare metal cluster, MetalLB is responsible for that allocation.

MetalLB cannot create IP addresses out of thin air, so you do have to give it pools of IP addresses that it can use. MetalLB will take care of assigning and unassigning individual addresses as services come and go, but it will only ever hand out IPs that are part of its configured pools.

MetalLB lets you define as many address pools as you want, and doesn’t care what “kind” of addresses you give it, whether they be externally facing or part of an internal network.

External Announcement

Once MetalLB has assigned an IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster. MetalLB uses standard routing protocols to achieve this: ARP, NDP, or BGP.

ARP/NDP (Layer 2 mode)

If you use ARP/NDP (Layer 2 mode), one machine in the cluster takes ownership of the service and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make those IPs reachable on the local network. From the LAN’s point of view, the announcing machine simply has multiple IP addresses. In Layer 2 mode, a leader node is elected and that leader fields all load-balancing traffic. This is a potential pitfall of Layer 2 mode, as it makes a single node a bottleneck.
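
If you want to check which node currently owns a given service IP in Layer 2 mode, you can probe it from another machine on the same LAN. A minimal sketch, assuming a Linux host with the arping utility, eth0 as its LAN interface, and 192.0.2.10 standing in for your service IP:

# The MAC address that answers identifies the current leader node
arping -I eth0 -c 3 192.0.2.10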

BGP mode (Layer 3 mode)

In BGP mode, all machines in the cluster establish BGP peering sessions with nearby BGP-enabled routers that you control, and they tell those routers how to forward traffic to the service IPs, which makes this mode more scalable. Using BGP allows for true load balancing across multiple nodes and fine-grained traffic control thanks to BGP’s policy mechanisms. Since all of the worker nodes respond to the load balancer IP address, traffic keeps flowing even if one worker node becomes unavailable: the other worker nodes take up the traffic. You will need a BGP router on your network for this mode.
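
Once MetalLB is installed (covered below), one quick way to check whether the BGP sessions have come up is to read the speaker logs. A sketch, assuming the default labels from the MetalLB manifests:

# Each speaker pod logs its BGP session state
kubectl logs -n metallb-system -l component=speaker | grep -i bgp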

Install and configure MetalLB

As described above, MetalLB can be used in two different ways: with ARP/NDP (Layer 2) or with BGP (Layer 3).

NOTE: We recommend using BGP mode (Option B).

Option A: Using ARP/NDP (Layer 2 mode)

An Equinix Metal server is in a flat Layer 3 network by default. You will need to change this to enable MetalLB to work in Layer 2 mode.

Put your nodes (masters and workers) into Layer 2 or hybrid (Layer 2 + Layer 3) mode by referring to this documentation.

Option B: Using BGP mode (Layer 3 mode)

  1. Enable BGP in your Project in the Equinix Metal portal. Activate BGP

  2. Enable BGP on each worker node. Enable BGP on Workers

  3. Provision an Elastic IP block from the facility where the worker nodes reside. Elastic IPs

Install MetalLB

The following will deploy MetalLB to your cluster, under the metallb-system namespace:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/manifests/metallb.yaml

# On the first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
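
Before continuing, you can confirm that the controller deployment and the speaker daemonset pods are running:

kubectl get pods -n metallb-system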

The components in the manifest are:

  • The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments. Below is a diagram of how the MetalLB controller operates when a client exposes a service as type: LoadBalancer:

    Cluster Overview

  • The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable. Below is a diagram of how the MetalLB speakers advertise the external/elastic IP that a service of type: LoadBalancer is configured with, which makes the service reachable from the Internet (BGP mode):

    MetalLB Speakers Overview

  • Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.

Configure MetalLB

The installation manifest does not include a configuration file. MetalLB’s components will still start but will remain idle until you define and deploy a ConfigMap.

  1. Get Worker Node Details

    Since each worker peers with its own ESR, we need that router’s information to plug into the MetalLB ConfigMap to make it function properly. On each worker node, execute this command:

    curl https://metadata.packet.net/metadata | jq '.bgp_neighbors[0] | { customer_ip: .customer_ip, customer_as: .customer_as, peer_ips: .peer_ips, peer_as: .peer_as }'
    

    This returns the Peer IP, Peer AS, Customer IP, and Customer AS, which you will insert into the ConfigMap.
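
    The output will look something like the following. The values here are illustrative (chosen to match the examples later in this guide); yours will be specific to your deployment:

    {
      "customer_ip": "10.64.15.3",
      "customer_as": 65000,
      "peer_ips": ["169.254.255.1", "169.254.255.2"],
      "peer_as": 65530
    }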

  2. Create and deploy

    MetalLB remains idle until configured. Enabling MetalLB is accomplished by creating and deploying a ConfigMap into the same namespace (metallb-system) as the deployment.
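
    Once you have written a configuration (examples follow below), save it to a file and deploy it with kubectl. The filename here is arbitrary:

    kubectl apply -f metallb-config.yaml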

Layer 2 Mode Configuration

  • For Layer 2 mode, ensure that the addresses you put in the MetalLB ConfigMap fall within the subnet of the private IPs you use for the Layer 2/VLAN configuration.
  • You will need to set up the ConfigMap differently for Equinix IBX and Equinix Metal Legacy facilities because they use different switches.

Example: MetalLB ConfigMap for Layer 2 mode.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.xxx.xxx-192.168.xxx.xxx
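
Once the ConfigMap is applied, any Service of type: LoadBalancer will be assigned an address from the pool, in either mode. Below is a minimal test Service; the name and selector are hypothetical and assume an existing nginx deployment:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

After applying it, kubectl get service nginx-test shows the assigned address in the EXTERNAL-IP column.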

BGP Mode Configuration

Setting the ConfigMap for Equinix Metal Legacy facilities (Juniper switches)

Example: MetalLB ConfigMap for Equinix Metal Legacy facilities (Juniper switches).

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.2.3.4
      peer-asn: 65530
      my-asn: 65000
      node-selectors:
      - match-expressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ['worker-0']
    - peer-address: 10.2.3.5
      peer-asn: 65530
      my-asn: 65000
      node-selectors:
      - match-expressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ['worker-1']
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 147.2.3.4/31

Setting the ConfigMap for Equinix IBX Facilities (Arista switches)

  1. Configure the Workers

    Using the two ESR IP addresses returned by the curl command at the start of this section, perform the step below on each worker so that it can route to these particular ESR endpoints.

    ip route add 169.254.255.1 via <Worker Private IP Gateway> dev bond0
    ip route add 169.254.255.2 via <Worker Private IP Gateway> dev bond0
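
    You can confirm the routes are in place with ip route; note that routes added this way do not persist across reboots:

    ip route show | grep 169.254.255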
    
  2. Apply ConfigMap

    Example: MetalLB ConfigMap for Equinix IBX facilities (Arista switches).

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 169.254.255.1
          peer-asn: 65530
          my-asn: 65000
        - peer-address: 169.254.255.2
          peer-asn: 65530
          my-asn: 65000
        address-pools:
        - name: default
          protocol: bgp
          addresses:
          - 136.144.59.92/31
    
  3. (Optional) Adding more Kubernetes workers

    When adding multiple workers to the ConfigMap, give each one its own peer stanza with the router it is peering with, since the workers will most likely be peering with different routers. This enables all the workers to serve external IPs in the cluster and provides high availability: if one worker goes down, the remaining configured workers take over the advertisement of the IPs and provide the appropriate routing. A sketch of an additional stanza follows below.
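
    For example, a third worker would get its own entry under peers. The hostname worker-2 here is hypothetical, and the peer address and ASNs should come from the metadata curl command run on that worker:

    - peer-address: 169.254.255.1
      peer-asn: 65530
      my-asn: 65000
      node-selectors:
      - match-expressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ['worker-2']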

Watch out for...

Authenticated BGP Sessions

When setting up BGP on the Equinix Metal side, be aware of any MD5 password that was set on your BGP resource: MetalLB needs to know it in order to function properly, or else the workers will fail to connect to their peers. You can enter the BGP password into the MetalLB ConfigMap, as in the sketch below.
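
A minimal sketch of the password placement, reusing the peer values from the examples above (the password string is a placeholder, not a real credential):

peers:
- peer-address: 169.254.255.1
  peer-asn: 65530
  my-asn: 65000
  # MD5 password configured on the Equinix Metal BGP resource
  password: "your-md5-password"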

So, what's next?