6WIND vRouter Provisioning Guide for Bare Metal and VM Deployments

In a past blog, I explained that Deployment is a major feature of Next-Gen Management Frameworks. It covers provisioning, that is booting (Day-0) and initial configuration for remote access (Day-1), followed by configuration and monitoring (Day-2). A vRouter has to provide the right tools and APIs so that the customer’s management framework can fully automate deployment.


Booting (Day-0)

Regarding Day-0, the vRouter supports Bare Metal and Virtual Machine environments, with packaging adapted to each boot method.

In Bare Metal, the vRouter can be installed from a USB stick, a CD-ROM (or virtual CD-ROM) or through the network using PXE.

USB stick: a raw disk image is provided (.img.gz file), to be copied onto a USB stick and booted. The image boots into the vRouter CLI, where the user can try the software and then install it permanently on the partition of their choice, wiping that partition and replacing the existing bootloader with GRUB.
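
For illustration, here is a minimal Python sketch of writing the compressed raw image to a USB device from a Linux host. The image file name and the device path are placeholders, not actual 6WIND names, and the usual dd-style tools work just as well.

# Hypothetical sketch: stream a .img.gz raw image onto a USB device on Linux.
# IMAGE and DEVICE are placeholders; check the device path (e.g. with lsblk)
# before running, since every byte on the target device is overwritten.
import gzip
import shutil

IMAGE = "vrouter.img.gz"   # placeholder file name
DEVICE = "/dev/sdX"        # replace with the real USB device node

with gzip.open(IMAGE, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4 MiB chunks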

CD-ROM: a CD-ROM image is provided (.iso file), to be burnt to a CD-ROM or mounted through the server management interface (IPMI, Dell iDRAC, HP iLO, etc.) to boot. As with the USB stick, the image boots into the vRouter CLI, and the user can then try the software and install it permanently.
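
When the ISO is mounted as virtual media, the install can also be triggered out of band. A rough sketch, assuming a standard BMC reachable with ipmitool; the BMC address and credentials are placeholders, and attaching the ISO as virtual media remains vendor specific (iDRAC, iLO, etc.):

# Rough sketch: force the next boot from (virtual) CD-ROM over IPMI, then power
# cycle the server. The virtual media attach step is vendor specific and assumed
# to be done already; the BMC address and credentials below are placeholders.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "bmc.example.net", "-U", "admin", "-P", "secret"]

subprocess.run(BMC + ["chassis", "bootdev", "cdrom"], check=True)
subprocess.run(BMC + ["chassis", "power", "cycle"], check=True)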

PXE: this procedure can be used for fully automatic pre-provisioning of the vRouter. An installer is provided along with the vRouter ISO image, and a PXE server is required on the network to provide an IP address, the installer and the vRouter ISO to the target system. Driven by cloud-init, the target system automatically boots and installs the vRouter ISO on a specified partition of the local hard drive, wiping that partition and replacing the existing bootloader with GRUB.
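
The PXE entry itself is ordinary syslinux/GRUB configuration that the management framework can generate. A sketch of what that generation could look like; the TFTP path, kernel, initrd and boot arguments are placeholders, and the real values come with the installer and its documentation:

# Hypothetical sketch: write a pxelinux.cfg/default entry pointing at the
# vRouter installer. All file names, paths and boot arguments are placeholders;
# use the values shipped with the installer.
ENTRY = """\
DEFAULT vrouter-install
LABEL vrouter-install
  KERNEL installer/vmlinuz
  INITRD installer/initrd.img
  APPEND ip=dhcp ds=nocloud-net;s=http://pxe.example.net/cloud-init/
"""

with open("/var/lib/tftpboot/pxelinux.cfg/default", "w") as f:
    f.write(ENTRY)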

In VM, the vRouter is provided as a bootable image matching the virtualization environment. This is simpler than the bare metal case, since the VM image embeds its virtual disk, including the partitioning and the file system. KVM, VMware, OpenStack, Proxmox and AWS are supported (Azure is coming soon).

KVM: a bootable qcow2 image is provided, to be booted directly using virsh (the libvirt management tool), specifying the NICs, the networking mode (passthrough, SR-IOV), the number of cores, etc. on the command line.
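
As an example, the same can be scripted with virt-install instead of hand-written domain XML. A rough sketch; the VM name, sizing, disk path and bridge are placeholders, and passthrough or SR-IOV NICs would be added with extra --hostdev or --network options:

# Rough sketch: create and start a KVM guest from the qcow2 image with
# virt-install (libvirt). Names, sizing, disk path and bridge are placeholders.
import subprocess

subprocess.run([
    "virt-install",
    "--name", "vrouter1",
    "--memory", "8192",
    "--vcpus", "4",
    "--import",                        # boot the existing disk image, no installer
    "--disk", "path=/var/lib/libvirt/images/vrouter.qcow2,format=qcow2",
    "--network", "bridge=br0,model=virtio",
    "--os-variant", "generic",
    "--noautoconsole",
], check=True)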

VMware: a bootable OVA image is provided, to be booted from vSphere, ESXi or vCenter Server.
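
Deployment of the OVA can also be scripted, for instance with VMware's ovftool. A rough sketch, assuming a standalone ESXi host; the datastore, port group and locator URL are placeholders, and ovftool prompts for the password:

# Rough sketch: deploy the OVA to a standalone ESXi host with VMware's ovftool.
# Datastore, network and locator are placeholders; ovftool prompts for the password.
import subprocess

subprocess.run([
    "ovftool",
    "--acceptAllEulas",
    "--name=vrouter1",
    "--datastore=datastore1",
    "--network=VM Network",
    "vrouter.ova",
    "vi://root@esxi.example.net/",
], check=True)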

OpenStack: the qcow2 image is registered as a VM image using Glance. The VM parameters (RAM, CPUs) are specified in a flavor.
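
For illustration, a minimal sketch using the OpenStack CLI; image and flavor names and sizing are placeholders, and credentials come from the usual OS_* environment variables:

# Rough sketch: register the qcow2 image in Glance and define a matching flavor
# with the OpenStack CLI. Names and sizing are placeholders; credentials are
# read from the standard OS_* environment variables.
import subprocess

subprocess.run([
    "openstack", "image", "create",
    "--disk-format", "qcow2", "--container-format", "bare",
    "--file", "vrouter.qcow2", "vrouter",
], check=True)

subprocess.run([
    "openstack", "flavor", "create",
    "--ram", "8192", "--vcpus", "4", "--disk", "20", "vrouter.medium",
], check=True)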

Proxmox: the ISO image is attached by Proxmox as a virtual CD-ROM, and the vRouter is installed as described above in the bare metal section.

AWS: a private AMI is shared with the user’s AWS account.
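
Launching an instance from that AMI is then standard EC2 automation. A minimal sketch with boto3; the AMI ID, instance type, key pair and subnet are placeholders for illustration:

# Rough sketch: launch a vRouter instance from the shared AMI with boto3.
# All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # the AMI shared with the account
    InstanceType="c5n.xlarge",
    KeyName="mgmt-key",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)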


Connecting (Day-1)

After the vRouter has booted, it must be configured for remote access. Again, this depends on the deployment type and environment. On bare metal, a video or serial console can be used natively; in a VM, the hypervisor generally emulates a video console. Consoles are not convenient for automation, however, and the user or the management framework usually expects a basic network configuration with SSH access. In most cases, this involves cloud-init.

USB stick & CD-ROM: the vRouter relies on a DHCP server to get an IP address, and the default user accounts are preconfigured with NETCONF and SSH enabled. The user or the management framework can then connect to the vRouter on the leased IP address to update the user accounts and write a custom startup configuration, using NETCONF or the CLI over SSH.
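
A minimal sketch of that first SSH contact from the management framework, using Paramiko; the address, credentials and the command sent to the CLI are placeholders (the actual commands are in the vRouter CLI documentation):

# Minimal sketch: open an SSH session to the DHCP-leased address and run a CLI
# command. Address, credentials and the command itself are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; pin host keys in production
client.connect("192.0.2.10", username="admin", password="admin")

stdin, stdout, stderr = client.exec_command("show summary")   # placeholder CLI command
print(stdout.read().decode())
client.close()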

PXE: as explained in the Day-0 section, the PXE server also acts as a cloud-init server. The cloud-init configuration can be customized to provide a simple Day-1 configuration, including an SSH public key and a startup configuration file, as described in the vRouter documentation.
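
A minimal sketch of generating such a Day-1 user-data file with PyYAML; only standard cloud-init keys are shown, and the destination path of the startup configuration file is a placeholder to be replaced with the one given in the vRouter documentation:

# Hypothetical sketch: render a cloud-init user-data file with an SSH public key
# and a startup configuration file. The key and the destination path are placeholders.
import yaml

user_data = {
    "ssh_authorized_keys": ["ssh-ed25519 AAAA... ops@mgmt"],   # placeholder key
    "write_files": [{
        "path": "/etc/vrouter/startup.conf",                   # placeholder path
        "permissions": "0644",
        "content": "! startup configuration applied at first boot\n",
    }],
}

with open("user-data", "w") as f:
    f.write("#cloud-config\n")
    yaml.safe_dump(user_data, f, default_flow_style=False)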

KVM / OpenStack / VMware / Proxmox / AWS: these natively support cloud-init. The vRouter documentation gives examples of using cloud-init for Day-1 configuration with QEMU and OpenStack.
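
With QEMU/KVM, for instance, the user-data is typically packaged into a NoCloud seed image and attached as an extra disk. A rough sketch using cloud-localds (from the cloud-image-utils package); file and instance names are placeholders:

# Rough sketch: build a NoCloud seed image from user-data/meta-data with
# cloud-localds, then attach it to the guest as a second disk (e.g. an extra
# --disk option for virt-install). File and instance names are placeholders.
import subprocess

with open("meta-data", "w") as f:
    f.write("instance-id: vrouter1\nlocal-hostname: vrouter1\n")

subprocess.run(["cloud-localds", "seed.img", "user-data", "meta-data"], check=True)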


Configuration and Monitoring (Day-2)

Once remote access is enabled, the vRouter is ready for day-to-day operations. The recommended API is NETCONF; it is also possible to automate interaction with the CLI.
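
For illustration, a minimal ncclient sketch retrieving the running configuration; pushing configuration works the same way with edit-config and XML matching the vRouter's YANG models. Host and credentials are placeholders:

# Minimal sketch: fetch the running configuration over NETCONF with ncclient.
# Host and credentials are placeholders; enable host key checking in production.
from ncclient import manager

with manager.connect(
    host="192.0.2.10", port=830,
    username="admin", password="admin",
    hostkey_verify=False,
) as m:
    running = m.get_config(source="running")
    print(running.data_xml)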

Automation tools such as Ansible can be used to configure the vRouter. They usually provide NETCONF support, so integration is straightforward. Such tools are very versatile, but they require some knowledge of development and Python scripting. Higher-level orchestration tools can also be used through their NETCONF southbound APIs.

Regarding monitoring, the vRouter provides “legacy” monitoring features, such as statistics retrieval using NETCONF or SNMP, and sFlow data plane sampling. We also provide modern analytics based on KPIs that are streamed from the vRouter into a time-series database and visualized on an analytics dashboard. We have pre-integrated the vRouter with the TIG stack: Telegraf (an agent included in the vRouter to stream KPIs), InfluxDB (the TSDB) and Grafana (the visualization layer). More details are available on our GitHub.
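
As an example of what the analytics side enables, the KPIs stored by Telegraf can be queried back from InfluxDB by any script or dashboard. A rough sketch against the InfluxDB 1.x HTTP API; the host, database and measurement names are placeholders, and the actual measurements are listed in our TIG integration on GitHub:

# Rough sketch: query vRouter KPIs stored by Telegraf from InfluxDB (1.x HTTP API).
# Host, database and measurement names are placeholders.
import requests

resp = requests.get(
    "http://tsdb.example.net:8086/query",
    params={
        "db": "telegraf",                                    # placeholder database
        "q": "SELECT * FROM cpu WHERE time > now() - 5m",    # placeholder measurement
    },
    timeout=10,
)
print(resp.json())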

Our customers also use other automation and monitoring tools of their choice on top of these interfaces.

I hope this blog helped clarify automated deployment with next-generation management frameworks. I would be glad to hear your feedback.

A full-featured evaluation version of our vRouter is available if you want to test drive it and try an integration with your own management framework. Feel free to contact us!


Yann Rapaport is Vice President of Product Management for 6WIND.