Why Automate? Using Pipelines to Develop and Manage Network Configurations

Now that we have:

  • A method to generate desired-state configurations, by declaratively defining what the device configuration should be and combining it with the data describing what each device's configuration should have
  • A method to apply configurations automatically, without copy-pasting into PuTTY

With these in place, we can achieve Infrastructure as Code: we can take a few artifacts from source control and turn them into a live, viable network device.

This is handy, but what about maintaining it?

CI/CD Pipelines

Pipelines aren’t the only thing a CI tool can do, and there are some pretty big differences between a traditional pipeline and managing a network; for example, there’s no code to compile. Instead, it’s best to map out the steps that we want a CI tool to perform. Jenkins has a project type, Freestyle, that lends itself well to applications like this, but it can also get fairly messy and disorganized.

A more comprehensive definition of a pipeline (from Red Hat) is here: https://redhat.com/en/topics/devops/what-cicd-pipeline

Installing Tools

Network infrastructure tends to have inbound access restrictions that most container platforms cannot meet in an auditable, secure manner. This capability can be provided with VMware NSX-T or with Project Calico, but those are pretty advanced tools. I’d consider containerization an option for those willing to take it on, and I’m keeping this guide as agnostic as possible.

Perhaps later I’ll build on this and provide a dockerfile. Starring the repository will probably be the best way to keep track!

Executing Continuous Integration / Continuous Delivery

  • The CI Tool should simply execute code, and as little else as possible. If we resort to a ton of shell scripting inside the CI tool, that logic isn’t managed by source control and can’t easily be updated.
  • The CI Tool is responsible for:
    • Execution of written code
    • Logging
    • Notification
    • Testing of written code
    • Scoring of results to assess code viability / production readiness

Steps (a top-level playbook sketch tying these together follows the list):

  • Fetch code from GitHub, polling every five minutes and executing only when a new commit is available.
  • Lint (syntax validate) all code.
  • Compile network configurations and apply them to the network infrastructure
  • Test
  • Notify of build success
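
To make these steps concrete, the CI tool can call one top-level playbook that chains them together. This is only a sketch; the imported playbook names are illustrative, not the repository's actual layout.

```yaml
# main.yml - hypothetical top-level playbook for the CI job to call.
# The imported playbook names are illustrative, not the repository's layout.
- import_playbook: lint.yml      # syntax-validate playbooks, templates, and variables
- import_playbook: build.yml     # render Jinja2 templates into full device configurations
- import_playbook: deploy.yml    # apply the rendered configurations to the network devices
- import_playbook: notify.yml    # report build results (testing/scoring would slot in here later)
```

The Jenkins job then needs little more than a single ansible-playbook invocation, which keeps the control logic in Git rather than in the CI tool.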

I have added an example CI project file to this repository: https://github.com/ngschmidt/vyos-vclos. It does not contain testing or validation steps yet, as those are considerably more complex; writing a parsable logger will take quite a bit more time than I feel an individual post is worth.

The CI Project

  • Setting a Git repository to clone from (under Source Code Management)
  • Setting the build trigger to Poll SCM (H/5 * * * *)
  • Executing the playbooks (provided in the GitHub repository). Instead of executing each playbook as its own pipeline step, I elected to make a main.yml playbook that contains all steps, so that the control aspects remain centralized in the Git repository.
  • Automated evaluation: I provided a yamllint example; eventually this should tally the results of each automated test and score them. A minimal lint-policy sketch follows this list.
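
Since the lint policy is just another YAML file in the repository, it can be versioned alongside the playbooks. Here is a minimal sketch of what it could look like; the rule selections are my assumptions, not the repository's actual settings.

```yaml
# .yamllint - hypothetical lint policy; these rule choices are illustrative.
extends: default
rules:
  line-length:
    max: 120          # network variable files tend to run long
    level: warning    # warn rather than fail the build on long lines
  truthy: disable     # avoid false positives on values like 'on'/'off'
```

Running yamllint against the repository as the first build step gives a fast, unambiguous pass/fail before any configuration is rendered.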

People want new stuff, and some of it might be new networking Features

A Feature in this case doesn’t need to be a large, earth-shaking new capability, as it would be in more traditional software development parlance. Instead, let’s consider a Feature something smaller:

  • A Feature should be something a consumer wants (DevOps term would be to delight users)
  • A Feature should be a notable change to an information system
  • A Feature should be maintainable, or improve maintainability. A system’s infrastructure administrator/engineer/architect is a consumer as well, and that person’s needs have value, too!

Some Examples of Network Features:

  • Wireless AP-to-AP Roaming: Users like having connectivity stay up as they move about. This can vary from 802.11i in Personal mode, to 802.11i in Enterprise mode with 802.11k/r/v implemented to be truly seamless.
    • The Minimum Viable Product would be defined first. If the security team is okay with WPA2-PSK, then that would be it; if not, roaming hand-offs would sit at roughly 6 seconds, with lots of room for improvement.
    • Roll out 802.11k reports for better AP association decisions
    • Roll out 802.11v for better notifications around power saving
    • Roll out 802.11r or OKC for secure hand-off
    • No rest for the wicked: do it all again with WPA3!
  • VPN Capability
    • MVP: IPsec-based VPN with RADIUS authentication
    • TLS fallback for low-MTU networks or PMTUD issues
    • Improved authentication mechanisms, like PKI or SAML
    • Client posture assessment

In the world of continuous delivery, these can be done out of order, or to a roadmap. When you’re done with a capability, deliver it instead of waiting for the next major code drop.

I’m a network guy, what’s a code drop?

There’s no true hand-off from development to operations here, just the people who run the network and those who don’t. We are afflicted by an industry of either change fear or CAB purgatory, where once something is built, it can no longer be improved. This builds up a lot of technical debt that is rarely fixed by anything short of a forklift upgrade. Ideally, we can leverage CI tools in this way:

  • Clean Slate: Delete all workspace files
  • Write Feature Code
  • Build configurations
  • Apply configurations to test nodes (a minimal sketch follows this list)
  • Validate (manually or automatically, or both) that the change did what it was supposed to, and that it worked
  • If it fails, go back to the first step
  • Stage Feature release, do paperwork, etc.
  • Release Feature to all applicable managed nodes
  • Work on the next Feature
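
To make the “apply configurations to test nodes” step concrete, here is a minimal sketch of a play limited to a lab-only inventory group. The group name, file paths, and use of the vyos_config module are assumptions for illustration, not the attached project's contents.

```yaml
# Hypothetical play: push candidate configurations to lab devices only.
- name: Apply candidate configuration to the lab fabric
  hosts: lab_routers              # a lab-only inventory group; production stays untouched
  gather_facts: false
  tasks:
    - name: Render the desired configuration from its Jinja2 template
      ansible.builtin.template:
        src: "templates/{{ inventory_hostname }}.j2"
        dest: "build/{{ inventory_hostname }}.cfg"
      delegate_to: localhost

    - name: Load the rendered configuration onto the device
      vyos.vyos.vyos_config:
        src: "build/{{ inventory_hostname }}.cfg"
        save: true
```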

I have attached a Jenkins Project that performs most of these tasks here. There are some caveats to this method that I’ll cover below.

This should result in much higher quality work being released, and in the networking world, reliability is king. This is the key to becoming free of CAB Purgatory in large organizations.

A Day in the life of a Feature

  • Create a new git branch. This can be achieved with git checkout or via your SCM GUI.
  • Write code for the git branch. Ideally, you’d create a new project for this step against that specific branch, but there is no “production environment” to speak of in my home lab.
  • Commit code. Again, small steps are still the best approach. The biggest change here is to periodically check in on your pipeline to see if anything breaks. This gives you the “fix or backpedal” opportunity at all times, and makes it easy to spot any breakage.
  • Submit a git pull request: This is an opportunity for the team to review your results, so be sure to include some form of linkage to your CI testing/execution data to better make your case.
  • Merge code. This will automatically roll to production at the next available window, and is your release lever.

Example 1: Fix an issue where BGP NLRIs are not being imported due to a missing policy

The FRR documentation describes the new default behavior:

Require policy on EBGP

[no] bgp ebgp-requires-policy

This command requires incoming and outgoing filters to be applied for eBGP sessions. Without the incoming filter, no routes will be accepted. Without the outgoing filter, no routes will be announced. This is enabled by default. When the incoming or outgoing filter is missing you will see "(Policy)" sign under show bgp summary:

```
exit1# show bgp summary

IPv4 Unicast Summary:
BGP router identifier 10.10.10.1, local AS number 65001 vrf-id 0
BGP table version 4
RIB entries 7, using 1344 bytes of memory
Peers 2, using 43 KiB of memory

Neighbor      V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down  State/PfxRcd  PfxSnt
192.168.0.2   4 65002        8       10       0    0     0 00:03:09             5  (Policy)
fe80:1::2222  4 65002        9       11       0    0     0 00:03:09      (Policy)  (Policy)
```

This was preventing BGP route propagation, and was a result of an upstream change. In Software Development, this is called a “breaking change” because it implements major functional changes that will have potentially negative effects unless action is taken.

To mitigate this, we can develop a solution iteratively, using our lab environment, and test, re-test, and then test again until we get the desired result. Twenty-four commits later, I’m satisfied with the result. Once a solution is sound (passes automated testing), it is best practice to submit it for peer review; Git calls this action a pull request, and the one for this change is in the project’s GitHub history.
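
As an illustration of the shape of the fix (and not the actual commit), one option is to attach explicit import and export policies to the eBGP neighbors so that the new FRR default is satisfied. The route-map name, ASN, neighbor address, and exact command paths below are assumptions and vary by VyOS release.

```yaml
# Hypothetical remediation play; command syntax differs between VyOS releases,
# and these names/addresses are illustrative, not the repository's change.
- name: Attach explicit policies to eBGP neighbors
  hosts: lab_routers
  gather_facts: false
  tasks:
    - name: Satisfy ebgp-requires-policy with an import/export route-map
      vyos.vyos.vyos_config:
        lines:
          - set policy route-map ALLOW-ALL rule 10 action permit
          - set protocols bgp 65001 neighbor 192.168.0.2 address-family ipv4-unicast route-map import ALLOW-ALL
          - set protocols bgp 65001 neighbor 192.168.0.2 address-family ipv4-unicast route-map export ALLOW-ALL
```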

Example 2: Roll out IPv6 Dynamic Routing

By code volume, this was about 200 lines, but the real difference here is in the multiplier. Of those lines (a hypothetical fragment of the variable data follows this list):

  • 100 lines are template: DRY (Don’t Repeat Yourself) replacements for highly repetitive configuration
  • 57 lines are documentation
  • 38 lines are variables: again, DRY replacements for highly repetitive configuration
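
For a sense of what those DRY variables can look like, here is a hypothetical fragment of per-device data that a shared template would loop over; the variable names and addressing are invented for illustration and are not taken from the pull request.

```yaml
# host_vars/spine01.yml (hypothetical) - the data a shared template expands
bgp_asn: 65001
bgp_neighbors:
  - peer: "2001:db8:0:12::2"
    remote_asn: 65002
    description: "leaf01 uplink"
  - peer: "2001:db8:0:13::2"
    remote_asn: 65003
    description: "leaf02 uplink"
```

Each additional device or peering is a handful of lines like these; the template does the repetitive expansion.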

The history of this pull request is publicly available. I made a few mistakes, and then caught them with automated testing, as everyone can see.

About two-thirds of the way through this, I realized I was rolling out IPv6 with a pull request. Neat.

Conclusions

Value in Volume

  • With 2 devices, the value may feel dubious
  • With 3 devices, you will see real value in saved time
  • With 4 or more devices, the benefits become enormous

Value in Consistency

Value in Documentation

Downsides

  • We call it Continuous Delivery for a reason.
  • Automate when it helps you.

We can use this guidance to come up with a plan, for example:

  • Milestone 1: Jinja2-fy your golden configurations, and stop manually generating them
  • Milestone 2: You already have the config a device should have; use gather_facts to collect the current configuration, and generate a report showing whether it is compliant (a sketch follows this list)
  • Milestone 3: Topical automation replaces manual remediation
  • Milestone 4: Fully mature NETCONF springs forth and saves the day!
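
As a sketch of what Milestone 2 could look like with Ansible (the module choice, group name, and file layout are assumptions, not a prescribed implementation):

```yaml
# Hypothetical compliance-report play for Milestone 2.
- name: Report configuration drift
  hosts: vyos_routers
  gather_facts: false
  tasks:
    - name: Collect the running configuration
      vyos.vyos.vyos_config:
        backup: true              # writes a copy of the running config to the controller
      register: running

    - name: Compare it against the rendered desired-state configuration
      ansible.builtin.command:
        cmd: "diff {{ running.backup_path }} build/{{ inventory_hostname }}.cfg"
      delegate_to: localhost
      register: drift
      changed_when: false
      failed_when: false          # a non-zero exit just means the files differ

    - name: Flag devices that are out of compliance
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} does not match its desired state"
      when: drift.rc != 0
```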

People who are at Milestone 4 aren’t better than people who have finished Milestone 1. Use what’s useful.

Originally published at https://blog.engyak.net.