VMware NSX Advanced Load Balancer — Overview

Load Balancing is Important

Load balancing is an important aspect of network mobility.

How is a network useful if you can’t move around within it?

Network movements also facilitate migrations between services. As a consumer of a network service, you’re subject to frequent cutovers without ever knowing it:

As computer networks grow more complex, SDN becomes important for orchestrating these changes, or “movements”. A distributed, off-box, dedicated management and control plane is essential for tracking “customers” in a scalable fashion, but load balancing is special here.

Most of the services we consume today leverage load balancers to “symmetrify” network traffic, accommodating nodes that can’t handle asymmetric flows. This can solve a lot of problems large enterprises have:

These problems are all solvable by the right load balancer platform, but they’re infrastructure-specific. Load balancers often solve application-specific problems as well, including:

Stateless apps work perfectly well without some form of load balancer/ingress controller, but they still benefit greatly from a discrete point of ingest.

NSX Advanced Load Balancer Differentiating Points

N.B. I will probably revise this in a later post as I get more familiar with Avi Vantage.

Avi Networks was founded in 2012 with the goal of providing a software load balancer designed from the ground up to leverage Software-Defined Networking (SDN) capabilities. Every aspect of the platform’s design appears to reflect this: the company clearly set out to build a totally new platform, unencumbered by the need to maintain legacy products. In 2019, VMware acquired Avi Networks and is rebranding the platform as “NSX Advanced Load Balancer”.

Here are some clear differentiating points I have found with the platform so far:

Design Elements

Controller

This is Avi’s brain and the primary reason for using a platform like Vantage: the control and management planes are, by default, handled by an offboard controller. The following functions live here, with no performance penalty to the data plane:

N.B. Avi release 20.1.4 ships just under 900 Debian packages (based on bullseye/sid), so they are running a little lean but could do more cleanup. 20.1.5 is down to 820, so they’re working on this.
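Since the management plane is centralized on the controller, everything is automatable through its REST API. Here’s a minimal sketch of what a session looks like, using plain `requests` rather than the official `avisdk` package; the controller address, credentials, and pinned version are placeholders:

```python
import requests

CONTROLLER = "https://avi-controller.lab.local"  # placeholder address

session = requests.Session()
session.verify = False  # lab only: self-signed certificate

# Authenticate; a successful login sets session cookies.
session.post(
    f"{CONTROLLER}/login",
    json={"username": "admin", "password": "changeme"},  # placeholders
).raise_for_status()

# API calls should pin a release with the X-Avi-Version header.
session.headers.update({"X-Avi-Version": "20.1.5", "X-Avi-Tenant": "admin"})

# Writes also want the CSRF token that login handed back.
session.headers["X-CSRFToken"] = session.cookies.get("csrftoken", "")
session.headers["Referer"] = CONTROLLER

# Quick smoke test: list configured virtual services.
for vs in session.get(f"{CONTROLLER}/api/virtualservice").json()["results"]:
    print(vs["name"], vs["enabled"])
```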

Service Engine

Generally speaking, these components do the work: they are Avi’s data plane. Structurally, the appliances run Debian bullseye/sid with the load balancer processes packaged as Docker containers. They run the same version of FRRouting as NSX-T, on roughly the same OS release.

N.B. Avi release 20.1.5 is much leaner than prior releases, and SEs typically have a much more compressed install base: 515 Debian packages here, almost in line with NSX-T 3.1.2!
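These claims are easy to spot-check from an SE’s shell. A hedged sketch, assuming root access to the appliance and that `dpkg`, `docker`, and `vtysh` are on the path (the standard tools for the Debian/Docker/FRR stack described above):

```python
import subprocess

def run(cmd: str) -> str:
    """Run a shell command and return its stdout, stripped."""
    return subprocess.run(cmd, shell=True, capture_output=True,
                          text=True).stdout.strip()

# Installed Debian package count (the "515 packages" figure above).
print("packages:", run("dpkg-query -f '${binary:Package}\n' -W | wc -l"))

# The load balancer processes ship as containers, so list them.
print(run("docker ps --format '{{.Names}}: {{.Image}}'"))

# FRRouting version, for comparison against an NSX-T edge node.
print(run("vtysh -c 'show version' | head -n 1"))
```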

IPv6

Deployment Methodology

Management/Control Plane

No orchestrator presets will be used here, per the Avi NSX-T Integration Guide. The primary reason for doing this is to test the platform more thoroughly; I’ll be deploying 3 “Clouds”:

Avi Vantage designates any grouping of infrastructure presets as a “Cloud”, which can have its own tenancy and routing table. This construct lets us allocate multiple infrastructures to each administrative tenant or customer. Cloud access is decoupled from the “Tenant”, which is the parent object for RBAC.
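To make that concrete, a Cloud is just another REST object on the controller. A sketch of creating a no-orchestrator Cloud (vtype `CLOUD_NONE`), reusing the `session` and `CONTROLLER` from the controller sketch above; the name is a placeholder:

```python
# CLOUD_NONE = "No Orchestrator": Avi makes no infrastructure API calls,
# so SE placement and network plumbing are left to the operator.
cloud = {
    "name": "lab-cloud-01",  # placeholder name
    "vtype": "CLOUD_NONE",
    "dhcp_enabled": False,
}
resp = session.post(f"{CONTROLLER}/api/cloud", json=cloud)
resp.raise_for_status()
print("created cloud:", resp.json()["uuid"])
```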

Data Plane Topologies

The Avi Vantage VCF Design Guide 4.1 indicates that service engines should be attached to a tier-1 router via an overlay segment. The primary reason for this has to do with the NSX-T/Avi integration: in short, the Avi controller invokes the NSX-T API to add static routes pointing at each service engine and have the tier-1 advertise them, which handles dynamic advertisement.
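For illustration only, the underlying NSX-T Policy API call looks roughly like the sketch below; this is not Avi’s actual code, and the manager address, tier-1 ID, VIP, and SE address are all placeholders:

```python
import requests

NSX = "https://nsx-manager.lab.local"  # placeholder NSX-T manager
TIER1 = "avi-tier-1"                   # placeholder tier-1 gateway ID

# Pin a /32 route for the VIP at the service engine's overlay address;
# the tier-1 can then advertise it northbound.
route = {
    "network": "192.0.2.10/32",                 # virtual service VIP
    "next_hops": [{"ip_address": "10.10.10.4",  # SE data interface
                   "admin_distance": 1}],
}
requests.put(
    f"{NSX}/policy/api/v1/infra/tier-1s/{TIER1}/static-routes/vip-192-0-2-10",
    json=route,
    auth=("admin", "changeme"),  # placeholder credentials
    verify=False,                # lab only: self-signed certificate
).raise_for_status()
```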

Originally published at https://blog.engyak.net.

I am a network engineer based out of Alaska, pursuing various methods of achieving SRE/NRE.