vRouter Management Automation with Ansible & NETCONF
In a recent blog post, we discussed provisioning bare metal and VM platforms using PXE and cloud-init. Once the vRouter is booted, it can be managed through its NETCONF API using automation tools such as Ansible, Python scripting or higher-level orchestration tools.
In this blog post, I will present a practical example of using Ansible with the vRouter NETCONF API. Ansible is an open-source software provisioning, configuration management, and application deployment tool written in Python. It has supported the NETCONF protocol since version 2.4.0.
Ansible is not a provisioning tool: it requires the machines it will configure to be already booted and reachable on the network (NETCONF uses TCP port 830). You will find detailed instructions in the 6WIND vRouter Getting Started Guide. I booted two vRouter instances in 6WIND's development network and gave them DNS hostnames for clarity.
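Before going further, it is worth checking from the control node that the NETCONF endpoint actually answers. NETCONF runs as an SSH subsystem, so a plain `ssh` client can be used for a quick sanity check (the hostname and user below are placeholders; substitute your own):

```shell
# Open a raw NETCONF session on TCP port 830 using the SSH "netconf" subsystem.
# If the vRouter is reachable, it replies with a <hello> XML message.
# "admin" and "vrouter1.example.com" are illustrative values.
ssh -s -p 830 admin@vrouter1.example.com netconf
```

If you get the `<hello>` capabilities exchange back, the device is ready to be driven by Ansible.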
Both machines have two physical network interfaces: one is used for management and the other for production traffic. Note that this is not a real-world use case; I have oversimplified it to make the example easier to grasp. Here is an overview of the setup:
Both management interfaces have already been configured automatically on boot by cloud-init and DHCP. The “production” interfaces have the same physical port identifier (pci-b0s4) and are named int0 and ext0 for vrouter1 and vrouter2 respectively. I want to use Ansible to configure the IP addresses of these “production” interfaces and the hostnames of both machines.
To avoid messing up my system packages, I chose to install Ansible into a Python virtualenv. In order to support executing arbitrary NETCONF RPCs, Ansible 2.7.10 or later is required, along with the additional ncclient and jxmlease Python libraries.
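The installation boils down to a few commands (the virtualenv path is arbitrary; pick whatever suits your workstation):

```shell
# Create an isolated virtualenv so system packages are left untouched
python3 -m venv ~/ansible-venv
. ~/ansible-venv/bin/activate

# Ansible >= 2.7.10, plus the NETCONF client libraries it relies on
pip install 'ansible>=2.7.10' ncclient jxmlease
```

Remember to activate the virtualenv in every shell from which you run `ansible-playbook`.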
We need an inventory file that will reference all machines that we want to control with Ansible. Here we are using the YAML inventory format which is more readable than the default INI format.
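A minimal inventory for this setup could look like the sketch below. The DNS hostnames and the user name are illustrative values from my lab, not requirements; `ansible_connection: netconf` and `ansible_network_os: default` select Ansible's generic NETCONF connection plugin:

```yaml
# inventory.yml -- YAML inventory listing both vRouters (sketch)
all:
  hosts:
    vrouter1:
      ansible_host: vrouter1.example.com   # illustrative hostname
    vrouter2:
      ansible_host: vrouter2.example.com   # illustrative hostname
  vars:
    ansible_connection: netconf   # use the NETCONF connection plugin
    ansible_network_os: default   # generic NETCONF, no vendor-specific plugin
    ansible_port: 830             # NETCONF over SSH
    ansible_user: admin           # illustrative user
```

Credentials can also be supplied at run time or through Ansible Vault rather than stored in the inventory.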
We also need to write a playbook. Here is a basic example that configures the hostname depending on the Ansible inventory name, and that configures a physical interface on both machines. Then, it runs the ping NETCONF RPC to check that the IP addresses have been properly configured on both machines.
In the playbook, I use Ansible built-in netconf_get, netconf_config and netconf_rpc modules.
Two additional XML files are referenced by the playbook via the template lookup function. They should be placed next to the playbook file itself.
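Since the original listing is not reproduced here, the following is a hedged sketch of what such a playbook might look like, built from the three modules named above. The `peer_address` variable and the exact `ping` RPC payload are assumptions; the real payload depends on the vRouter YANG model:

```yaml
# playbook.yml -- illustrative sketch, not the exact playbook from this post
- hosts: all
  gather_facts: false
  tasks:
    - name: Push hostname and interface configuration
      netconf_config:
        content: "{{ lookup('template', 'config.xml') }}"

    - name: Read back the relevant part of the configuration
      netconf_get:
        filter: "{{ lookup('template', 'filter.xml') }}"
        display: json
      register: state

    - name: Check connectivity with the ping NETCONF RPC
      netconf_rpc:
        rpc: ping
        content: "<destination><address>{{ peer_address }}</address></destination>"
        display: json
```

Because the XML files go through the `template` lookup, any `{{ ... }}` placeholder inside them is rendered per host before being sent to the device.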
The structure of config.xml may be generated by running the following CLI commands directly on one of the vRouters:
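I cannot reproduce the exact output here, but the resulting file follows the general shape below. The element names are assumptions standing in for the actual 6WIND YANG schema, and the `{{ ... }}` placeholders are rendered by Ansible's template lookup:

```xml
<!-- config.xml (sketch): element names are illustrative, not the exact
     6WIND YANG schema; placeholders are filled in from host variables -->
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
  <system>
    <hostname>{{ inventory_hostname }}</hostname>
  </system>
  <interface>
    <physical>
      <name>{{ iface_name }}</name>
      <ipv4>
        <address><ip>{{ iface_address }}</ip></address>
      </ipv4>
    </physical>
  </interface>
</config>
```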
By default, the contents of the XML payload are merged with the current configuration. This is explained extensively in RFC 6241, Section 7.2.
In order to replace or delete some parts of the configuration, the operation XML attribute must be specified on the related XML nodes. The example playbook makes use of this attribute to unset a previously set hostname and replace an IPv4 address.
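RFC 6241 defines the `operation` attribute in the NETCONF base namespace; it accepts the values `merge` (the default), `replace`, `create`, `delete` and `remove`. A sketch, with the same assumed element names as above:

```xml
<!-- Sketch: unset a previously set hostname and replace an IPv4 address.
     The xc prefix maps to the NETCONF base namespace (RFC 6241). -->
<config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
  <system>
    <hostname xc:operation="delete"/>
  </system>
  <interface>
    <physical>
      <name>{{ iface_name }}</name>
      <ipv4 xc:operation="replace">
        <address><ip>{{ iface_address }}</ip></address>
      </ipv4>
    </physical>
  </interface>
</config>
```

With `replace`, the whole `<ipv4>` subtree on the device is overwritten by the one in the payload, so any previously configured address disappears.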
The structure of filter.xml may be generated from combining the output of the following CLI commands:
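Again as a hedged illustration (the element names mirror the assumed config.xml structure above, not the exact vRouter schema), a subtree filter selecting the hostname and one interface might look like:

```xml
<!-- filter.xml (sketch): subtree filter for netconf_get; empty elements
     select whole subtrees, content nodes act as match criteria -->
<system>
  <hostname/>
</system>
<interface>
  <physical>
    <name>{{ iface_name }}</name>
  </physical>
</interface>
```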
The playbook.yml and config.xml files contain templating placeholders that will be replaced by respective host variables when the playbook is executed. See Ansible official documentation for more details.
Once all these files are created, let’s run ansible-playbook as follows:
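Assuming the inventory was saved as `inventory.yml` (the file name is my choice, not mandated by Ansible), the invocation is:

```shell
# Run the playbook against every host listed in the inventory,
# with verbose output to see the NETCONF exchanges
ansible-playbook -i inventory.yml playbook.yml -v
```

Each task is executed on both vRouters, and the final `ping` RPC confirms that the production interfaces can reach each other.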
I hope this example gave you some perspective on what can be done with Ansible and the vRouter NETCONF API. Of course, I have only scratched the surface, and real-world use cases will require more complexity. A lot more information is available in the Ansible official documentation and the 6WIND vRouter NETCONF API documentation.
Feel free to contact us if you have any further questions or to request an evaluation. We will be happy to hear from you.