I’ve been using MetalLB for a while now in my Kubernetes clusters, and it’s a great, easy way to achieve highly available networking for service endpoints. For an overview of MetalLB, head to this earlier post. When I first deploy MetalLB in my labs, I usually configure a fairly small address pool for the Layer 2 mode configuration. In one of my environments, however, I have been deploying a number of new applications that all require a load-balanced IP, and because I initially configured a pool of only four or five IPs, I inevitably ran out!

While testing out the Terraform Helm provider, I came across an error while deploying Drupal.

Looking at the status of the pod, it was Running but 0/1 Ready, and going into more detail, the describe output suggested a timeout of some sort.
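For anyone following along, these are roughly the commands involved; the pod name below is a placeholder rather than the actual name from my cluster:

```shell
# Check the pod status – in my case it showed Running but 0/1 READY
kubectl get pods

# Describe the pod to see its events, which is where the timeout showed up
kubectl describe pod <drupal-pod-name>
```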

It was at that point that I checked the Service status and found the Drupal LoadBalancer service in a <pending> state.
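Something along these lines, with the service name assumed from the Helm release:

```shell
# EXTERNAL-IP was stuck at <pending>
kubectl get svc drupal
```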

So that service was waiting for an IP address from the MetalLB pool, and I knew I already had a number of IPs allocated to existing Service endpoints. I checked to make sure everything was running OK across the Kubernetes nodes, and then looked to edit the MetalLB ConfigMap to increase the pool size.
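In the ConfigMap-based MetalLB releases I run, the Layer 2 pool lives in a ConfigMap called config in the metallb-system namespace, so the edit is a one-liner (adjust the names if your install differs):

```shell
kubectl edit configmap config -n metallb-system
```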

As can be seen below, the existing pool of addresses was small.
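For illustration, the original pool looked roughly like this; the 10.0.0.x addresses are placeholders rather than the actual range from my lab:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # original pool – only five usable IPs
      - 10.0.0.60-10.0.0.64
```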

To expand the pool, I added a new range, as shown below. The good thing about MetalLB is that the IP ranges don’t need to be in order.
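The change is just an extra entry under addresses (again with placeholder values):

```yaml
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.60-10.0.0.64
      # new, non-contiguous range added to expand the pool
      - 10.0.0.130-10.0.0.150
```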

As soon as that was added, the Service took the next available IP from the pool and assigned it to the LoadBalancer.
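Re-checking the service confirmed it (service name assumed as before):

```shell
# EXTERNAL-IP now shows an address from the new range instead of <pending>
kubectl get svc drupal
```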

And the Drupal Pods completed initialization and went into a ready state.

Wrap Up

So that was a very quick way to deal with a <pending> LoadBalancer state when using MetalLB and you have run out of IPs in the pool. A quick and easy patch, and we are back in business.