Installing Avi Vantage in OpenShift/Kubernetes

Introduction

This guide describes how to integrate Avi Vantage into an OpenShift v3 or Kubernetes cloud. The instructions in this guide apply to Avi Vantage 16.3 and later releases.

Avi Vantage is a software-based solution that provides real-time analytics as well as elastic application delivery services. Vantage optimizes core web-site functions, including SSL termination and load balancing. Vantage also provides access to network analytics, including end-to-end latency information for traffic between end-users and the load-balanced applications.

When deployed into an OpenShift/Kubernetes cloud, Avi Vantage performs as a fully distributed, virtualized system consisting of the Avi Controller and Avi Service Engines (SEs), each running as a separate container on the OpenShift/Kubernetes nodes.

DEPLOYMENT PREREQUISITES

PHYSICAL NODE REQUIREMENTS

The main components of the Avi Vantage solution, Avi Controllers and Service Engines (SEs), run as containers on OpenShift/Kubernetes nodes. For production deployments, a 3-instance Avi Controller cluster is recommended, with each of the Avi Controller instances running in a container on a separate physical node. After the Avi Controller cluster is configured for the OpenShift/Kubernetes cloud, it deploys one Avi SE container on each OpenShift/Kubernetes node. The nodes on which an Avi Controller runs must meet at least the minimum system requirements defined in this article.

Note: Only a single OpenShift/Kubernetes cloud is supported per Avi Controller cluster.

SYSTEM TIME (NTP) REQUIREMENT

The system time on all nodes must be synchronized. Use of a Network Time Protocol (NTP) server is recommended.
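
As a quick check, you can confirm clock synchronization on each node. The commands below are a minimal sketch and assume a systemd-based host, with chrony as the NTP client if installed:

# Confirm the host clock is synchronized
timedatectl status | grep -i synchronized

# If chrony is the NTP client, list the configured time sources
chronyc sources -v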

SOFTWARE INFRASTRUCTURE REQUIREMENTS

For deployment of SEs, the following system-level software is required:

  • Each node host OS must be a Linux distribution running systemd.
  • The Avi Controller uses password-less sudo over SSH to access all the OpenShift nodes in the cluster and create SEs on those nodes. The SSH user must have password-less sudo access on all three OpenShift nodes hosting the Avi Vantage cluster. This SSH method requires a public-private key pair. You can import an existing private key onto the Avi Controller or generate a new key pair; in either case, the public key must be present in the /home/ssh_user/.ssh/authorized_keys file, where ssh_user is the SSH username on all OpenShift nodes. The Avi Controller setup wizard stores the private key on the Avi Controller node when you import or generate the key. A minimal key-setup sketch is shown below.
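
The following is a minimal sketch of preparing such a key pair, assuming a hypothetical SSH user avi and placeholder node hostnames; in many deployments the key pair used to install OpenShift can simply be reused.

# Generate a new key pair (skip this if reusing an existing key)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""

# Append the public key to authorized_keys on each OpenShift node
for node in node1.example.com node2.example.com node3.example.com; do
    ssh-copy-id avi@${node}
done

# Verify password-less sudo works without a prompt
ssh avi@node1.example.com "sudo -n true && echo sudo OK"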

INSTALLING THE AVI CONTROLLER

To install the Avi Controller:

  • Copy the .tgz package onto a node that will host the Avi Controller leader (for a Controller cluster, two followers run on separate nodes).
    scp controller_docker.tgz username@remotehost.com:~/
    Note: Replace username@remotehost.com with a username that has write access and the IP address or hostname of the host node.
  • Log onto the OpenShift node:
    ssh username@remotehost.com
  • Load the Avi Controller image into the host's local Docker repository:
    sudo docker load < controller_docker.tgz
  • As a best practice, clean up any data that may be lingering from a previous run:
    sudo rm -rf /var/lib/controller/*
  • Use the vi editor to create a new systemd unit file for the Avi Controller service:
    sudo vi /etc/systemd/system/avicontroller.service
  • Copy the following lines into the file:
[Unit]
Description=AviController
After=docker.service
Requires=docker.service

[Service]
Restart=always
RestartSec=0
TimeoutStartSec=0
TimeoutStopSec=120
StartLimitInterval=0
ExecStartPre=-/usr/bin/docker kill avicontroller
ExecStartPre=-/usr/bin/docker rm avicontroller
ExecStartPre=/usr/bin/bash -c "/usr/bin/docker run --name=avicontroller --privileged=true -p 5098:5098 -p 9080:9080 -p 9443:9443 -p 7443:7443 -p 5054:5054 -p 161:161 -d -t -e NUM_CPU=8 -e NUM_MEMG=24 -e DISK_GB=64 -e HTTP_PORT=9080 -e HTTPS_PORT=9443 -e SYSINT_PORT=7443 -e MANAGEMENT_IP=$$HOST_MANAGEMENT_IP -v /:/hostroot -v /var/lib/controller:/vol -v /var/run/fleet.sock:/var/run/fleet.sock -v /var/run/docker.sock:/var/run/docker.sock avinetworks/controller:$$TAG"
ExecStart=/usr/bin/docker logs -f avicontroller
ExecStop=/usr/bin/docker stop avicontroller

[Install]
WantedBy=multi-user.target
Note: If any of the port numbers for HTTP (9080), HTTPS (9443), or system-internal (7443) are already in use by other services on the host, use alternate port numbers in the Docker port mappings and update the corresponding environment variables.
  • Edit the following values in the file:

    • NUM_CPU: Sets the number of CPU cores/threads used by the Controller (8 in this example).
    • NUM_MEMG: Sets the memory allocation (24 GB in this example).
    • DISK_GB: Sets the disk allocation (64 GB in this example).
    • MANAGEMENT_IP: Replace $$HOST_MANAGEMENT_IP with the management IP of the current OpenShift node.
    • $$TAG: Replace with the tag value of the Avi Vantage image in the Docker repository, for example “16.3-5079-20160814.122257”. (A command for finding the tag is shown after this list.)
  • Save and close the file.
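
If you are unsure of the tag value, you can list it from the local Docker repository after the image has been loaded; the repository name avinetworks/controller matches the one referenced in the unit file above.

# Show the loaded controller image and its tag
sudo docker images avinetworks/controller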

STARTING THE AVI CONTROLLER SERVICE

To start the Avi Controller, enter the following command on the node on which you created the Avi Controller:

sudo systemctl enable avicontroller && sudo systemctl start avicontroller

Initial startup and full system initialization takes around 10 minutes.
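
To confirm that the service and container came up, the checks below can be used; the container name avicontroller matches the unit file created earlier.

# Check that the systemd service is active
sudo systemctl status avicontroller

# Check that the controller container is running, and follow its logs
sudo docker ps --filter name=avicontroller
sudo docker logs -f avicontroller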

ACCESSING THE AVI CONTROLLER WEB INTERFACE

To access the Avi Controller web interface, navigate to the following URL:

https://avicontroller-node-ip:9443

Note: avicontroller-node-ip is the management IP of the node on which the Controller is installed.
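
If the interface does not load, a simple connectivity check from the command line can help distinguish a controller that is still initializing from a network or firewall issue; -k skips certificate validation for the default self-signed certificate.

# Expect an HTTP response header once the controller UI is up
curl -k -I https://avicontroller-node-ip:9443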

SETTING UP THE AVI CONTROLLER

This section shows how to perform initial configuration of the Avi Controller using its deployment wizard. You will configure the following settings.

Access the Avi Controller UI from a browser and follow the six steps below:

Step 1. Set a password for the admin user.
Step 2. Set DNS and NTP server information.
Step 3. Set email and SMTP information.
Step 4. Select No Orchestrator as the infrastructure type.
Step 5. Click Next.
Step 6. Respond ‘No’ to the multiple-tenants question.

CONFIGURE NETWORKS

Configure a subnet and IP address pool for intra-cluster/east-west traffic and a subnet and IP address pool for external/north-south traffic. These IP addresses will be used as service virtual IPs (VIPs) or cluster IPs. The east-west subnet is an overlay or virtual subnet. The north-south subnet is the underlay subnet to which all the nodes/minions are connected. Use unused or spare IP addresses from the underlay subnet for the north-south VIP address pool.

Configure the east-west network and subnet for virtual services handling east-west traffic, and the north-south (NorthSouth) subnet for virtual services handling client/north-south traffic, as follows:

Step 7. Navigate to Infrastructure > Networks and click Create.
Step 8. Create the east-west network and add a subnet with a static IP range for the IPs to be used by east-west virtual services.

Avi provides a drop-in replacement for kube-proxy for east-west services. There are two options for the subnet from which virtual IPs for east-west services are allocated.

OpenShift/Kubernetes allocates cluster IPs for east-west services from a virtual subnet. Avi can use the same cluster IPs allocated by OpenShift/Kubernetes and provide east-west proxy services. Standard tools such as oc and kubectl display cluster IPs for services, so display and troubleshooting become easier. However, this requires that kube-proxy be disabled on all nodes in the cluster.

Alternatively, Avi can be configured to provide east-west services on a non-overlapping virtual subnet different from the cluster IP subnet.

Kube-proxy is enabled: You must use a subnet different from kube-proxy’s cluster IP subnet. Choose a /16 CIDR from the IPv4 private address space (172.16.0.0/16 through 172.31.0.0/16, 10.0.0.0/16, or 192.168.0.0/16) that does not overlap with any address space already in use on your OpenShift nodes.

Kube-proxy is disabled: This KB explains how to disable kube-proxy. With kube-proxy disabled, you can either use a separate subnet for east-west VIPs or use the same VIPs as the cluster IPs allocated by OpenShift/Kubernetes.

  • To use the same VIPs as cluster IPs: Enter the same subnet as the cluster IP subnet (for example, 172.30.0.0/16) with no static IP address pool. East-west services simply use the allocated cluster IPs as VIPs.
  • To use a different subnet for east-west VIPs: Enter the subnet information and create an IP address pool from the subnet. East-west services will be allocated VIPs from this IPAM pool.
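
To confirm the cluster IP subnet in use, the checks below are one option on OpenShift 3; the master configuration path shown is the default one and may differ in your environment.

# Print the service (cluster IP) network CIDR from the master configuration
grep serviceNetworkCIDR /etc/origin/master/master-config.yaml

# Or inspect the cluster IPs already assigned to services
oc get svc --all-namespaces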

Step 9. Create the NorthSouth network and add a subnet with a static IP range for the IPs to be used by north-south virtual services.

CONFIGURE IPAM/DNS PROFILE

The Avi Controller provides internal IPAM and DNS services for VIP allocation and service discovery. Configure the IPAM/DNS profile as follows:

Step 10. Navigate to Templates > Profile > IPAM/DNS Profile and click Create.

Create the EastWest profile:

Step 11. Give the profile the name EastWest. Select Type: Avi Vantage DNS. Fill in the required Domain Name field. Change the default TTL for all domains, or just for this domain, if desired. Click Save.

Create the NorthSouth profile:

Step 12. Give the profile the name NorthSouth. Select Type: Avi Vantage DNS. Fill in the required Domain Name field. Change the default TTL for all domains, or just for this domain, if desired. Click Save.

CONFIGURE SSH USER

The Avi Controller must be configured with an SSH key pair that provides password-less sudo access to all the nodes. On OpenShift, this can be the same private key that was used to install OpenShift. The Avi Controller uses these keys to SSH to the OpenShift nodes and deploy Avi Service Engines. The private key is usually located at ~/.ssh/id_rsa. For example, to copy out the default key from the OpenShift master:

  • SSH to Master node.
ssh username@os_master_ip
  • Run the command below and copy the contents of the key file (id_rsa).
cat ~/.ssh/id_rsa
  • On the Avi Controller, navigate to Administration > Settings > SSH Key Settings and click Create.
  • Enter the SSH username.
  • Select Import Private Key.
  • Paste the key copied in the step above.
  • Click Save.

AUTHENTICATION

Avi Vantage supports two means of authentication: certificates and service account tokens.

CONFIGURE CERTIFICATES

  • Use SCP to copy the OpenShift SSL client certificate files from the master node. On OpenShift master nodes, the certificates are installed at /etc/origin/master. On Kubernetes master nodes, the certificates are installed at /etc/kubernetes/pki. If the Kubernetes API server is unauthenticated, this step can be skipped.
scp username@os_master_ip:/etc/origin/master/admin.crt .
scp username@os_master_ip:/etc/origin/master/admin.key .
scp username@os_master_ip:/etc/origin/master/ca.crt .
  • On the Avi Controller, navigate to Templates > Security > SSL/TLS Certificates.
  • Click Create and select Root/Intermediate CA.
    • Name the cert and upload ca.crt file.
    • Click Validate.
    • Click Import to save.
  • Click Create and select Application Certificate.
    • Name the certificate.
    • Select Import under Type.
    • Under ‘Key (PEM) or PKCS12’, upload admin.key.
    • Under ‘Certificate’, upload admin.crt.
    • Click Validate.
    • Click Import to save.

SERVICE ACCOUNT TOKENS

Refer to the guide corresponding to your orchestrator.

CONFIGURE OPENSHIFT/KUBERNETES CLOUD

This section describes the configuration of the OpenShift/Kubernetes cloud. It assumes that kube-proxy is disabled on the OpenShift/Kubernetes nodes and that Avi’s internal DNS/IPAM is used.

  • Navigate to Infrastructure > Clouds.
  • Edit Default-Cloud.
  • Select OpenShift as the infrastructure type and click Next.
  • Make sure ‘Enable Event Subscription’ is selected.
  • If authentication is enabled, select ‘Client TLS Key and Certificate’ and ‘CA TLS Key and Certificate’ from the dropdowns.
  • Enter the OpenShift/Kubernetes API URL, including the scheme and port (for example, https://master-node-ip:8443 for OpenShift).
  • Click Next.
  • Select the SSH user configured previously.
  • Check “Cluster uses overlay SDN” for overlay-based networking for the cluster, such as OpenShift (ovs), Nuage, Flannel, or Weave. Uncheck it for routed containers.
  • Click Next.
  • Set ‘Proxy Service Placement Subnet’ to the EastWest subnet configured earlier.
  • If kube-proxy is disabled, check “Use Cluster IP of service as VIP for East/West”. If kube-proxy is enabled, uncheck it. This KB describes how kube-proxy can be disabled.
  • Check “Always use Layer4 Health Monitoring” to use TCP health monitoring even for HTTP/HTTPS applications. Enable this setting if there are HTTP/HTTPS applications that do not respond to the default health monitor of “GET /” and it is inconvenient to override with custom health monitors using annotations.
  • Select the IPAM profile from the dropdown menu.
  • Click Save.

The cloud status will show green (placement ready).

It takes around five minutes for the Avi Controller to download the SE Docker image and start the containers.
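
To verify the SE containers, you can check the Docker containers on any node; this assumes the SE image is published under an avinetworks repository name.

# List Avi containers running on this node
sudo docker ps | grep avinetworks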

NEXT STEPS

Refer to “Replace kube-proxy in an OpenShift Environment with Avi Vantage” to learn how to disable kube-proxy in an OpenShift environment.

Refer to “OpenShift/Kubernetes Service Configuration on Vantage” to learn how to create services and test traffic.

Refer to “OpenShift Routes Virtual Service configuration” to learn how to create and test traffic with OpenShift routes.

Updated: 2017-09-22 01:37:51 +0000