ISO-based Configuration

The ISO-based configuration can be used to set up either a single-node or a multi-node CVP instance. Before configuring and starting CVP, complete the following tasks.

Quick Start Steps:

Create a YAML Document

Create a YAML document describing the node(s) (one or three) in your CVP deployment. When creating the YAML document, consider the following:
  • The version field is required and must be 2.
  • The dns and ntp entries are lists of values.
  • The mode parameter is required. Options are: mode: singlenode or mode: multinode
  • The dns and ntp parameters are optional, but recommended.
    Note: Parameters that are the same for all nodes can be specified only once, in the common section of the YAML. For example, default_route can be specified once in the common section rather than three times, once for each node.
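
    A minimal sketch (not part of the CVP tooling) of the rules above: version must be 2, mode is required and must be singlenode or multinode, and dns/ntp must be lists of values. The config is shown as an already-parsed dict; in practice it would be loaded from the YAML file with a YAML parser. The function name is illustrative.

    ```python
    def check_cvp_config(doc):
        """Return a list of rule violations for a parsed CVP config."""
        errors = []
        if doc.get("version") != 2:
            errors.append("version must be 2")
        common = doc.get("common", {})
        if common.get("mode") not in ("singlenode", "multinode"):
            errors.append("mode is required: singlenode or multinode")
        for key in ("dns", "ntp"):
            # dns and ntp are optional, but when present they must be lists
            if key in common and not isinstance(common[key], list):
                errors.append(key + " must be a list of values")
        return errors

    good = {"version": 2,
            "common": {"mode": "multinode",
                       "dns": ["172.22.22.40"],
                       "ntp": ["ntp.aristanetworks.com"]}}
    bad = {"version": 1, "common": {"dns": "172.22.22.40"}}

    print(check_cvp_config(good))       # -> []
    print(len(check_cvp_config(bad)))   # -> 3
    ```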

    Example:

    The following example of a YAML document shows the use of separate (different) interfaces for cluster and device-facing networks. These parameters are explained in the previous section. For a single-node deployment, remove the sections for node2 and node3 (assuming all nodes are on the same subnet and have the same default route).

    >cat multinode.yaml
    version: 2
    common:
      aeris_ingest_key: magickey
      cluster_interface: eth0
      default_route: 172.31.0.1
      mode: multinode
      device_interface: eth0
      dns:
      - 172.22.22.40
      ntp:
      - ntp.aristanetworks.com
    node1:
      hostname: cvp6.sjc.aristanetworks.com
      interfaces:
        eth0:
          ip_address: 172.31.3.236
          netmask: 255.255.0.0
      vmname: cvp6
    node2:
      vmname: cvp9
      hostname: cvp9.sjc.aristanetworks.com
      interfaces:
        eth0:
          ip_address: 172.31.3.239
          netmask: 255.255.0.0
        eth1:
          ip_address: 10.0.0.2
          netmask: 255.255.255.0
    node3:
      vmname: cvp10
      hostname: cvp10.sjc.aristanetworks.com
      interfaces:
        eth0:
          ip_address: 172.31.3.240
          netmask: 255.255.0.0
        eth1:
          ip_address: 10.0.0.3
          netmask: 255.255.255.0
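
    A minimal sketch of the "remove the sections for node2 and node3" step for a single-node deployment: drop the extra node sections and switch mode to singlenode. The config is shown as a trimmed-down Python dict; in practice you would edit the YAML file itself.

    ```python
    import copy

    def to_single_node(cfg):
        """Derive a single-node config from a multi-node one."""
        single = copy.deepcopy(cfg)       # leave the original untouched
        single.pop("node2", None)
        single.pop("node3", None)
        single["common"]["mode"] = "singlenode"
        return single

    multinode = {
        "version": 2,
        "common": {"mode": "multinode", "cluster_interface": "eth0"},
        "node1": {"vmname": "cvp6"},
        "node2": {"vmname": "cvp9"},
        "node3": {"vmname": "cvp10"},
    }
    single = to_single_node(multinode)
    print(sorted(single))  # -> ['common', 'node1', 'version']
    ```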

Feed the YAML File into the geniso.py Tool

Once you have created the YAML file, feed it into the tool to generate the ISO files for the CVP nodes. The root password can be provided on the command line or entered at a prompt. If the password is empty, no password is set for root.

Note: The geniso.py tool is provided by cvp-tools-1.0.1.tgz which can be found at https://www.arista.com/en/support/software-download. The package also contains a README file with more details and requirements for geniso.py.

Complete the following steps:

  1. Run the yum install mkisofs command.
  2. Feed the YAML document into the geniso.py tool.

    The system generates the ISO files for the nodes using the input of the YAML document.

    Example:
    • In this example, you are prompted for the root password.
      > mkdir tools
      > tar zxf cvp-tools-1.0.1.tgz -C tools
      > cd tools
      
      ...<edit multinode.yaml>...
      
      > ./geniso.py -y multinode.yaml
      Please enter a password for root user on cvp
      Password:
      Please re-enter the password:
      Building ISO for node1 cvp1: cvp.iso.2015-11-04_00:16:23/node1-cvp1.iso
      Building ISO for node2 cvp2: cvp.iso.2015-11-04_00:16:23/node2-cvp2.iso
      Building ISO for node3 cvp3: cvp.iso.2015-11-04_00:16:23/node3-cvp3.iso
  3. When using KVM as the hypervisor in a multi-node setup, copy the ISO files to the corresponding nodes:
    • SCP node2's ISO to node 2
      [root@localhost cvp]# scp node2-cvp-appliance-2.iso root@<node2>:/data/cvp/
      root@<node2>'s password:
      node2-cvp-appliance-2.iso
      100%  360KB  57.5MB/s  00:00
    • SCP node3's ISO to node 3
      [root@localhost cvp]# scp node3-cvp-appliance-3.iso root@<node3>:/data/cvp/
      root@<node3>'s password:
      node3-cvp-appliance-3.iso
      100%  360KB  54.7MB/s  00:00
    Note: The script must be run on one machine only. This generates three ISO images that contain the same ssh keys, allowing the nodes to exchange files without a password. If the script is run individually on each node, the images will contain different ssh keys and the deployment process will fail until the user manually adds the ssh keys to ~/.ssh/authorized_keys.
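
    A minimal sketch of the manual recovery the note describes: merge a node's public key into the text of ~/.ssh/authorized_keys only if it is not already present, so repeated runs stay harmless. The key strings are illustrative, not real CVP keys.

    ```python
    def with_authorized_key(authorized_keys_text, pubkey):
        """Return authorized_keys content with pubkey appended if missing."""
        pubkey = pubkey.strip()
        lines = [line.strip() for line in authorized_keys_text.splitlines()]
        if pubkey in lines:
            return authorized_keys_text  # already authorized; unchanged
        return authorized_keys_text + pubkey + "\n"

    current = "ssh-rsa AAAAexample1 root@node1\n"
    updated = with_authorized_key(current, "ssh-rsa AAAAexample2 root@node2")
    print(updated.count("\n"))  # -> 2
    # A second merge of the same key is a no-op:
    print(with_authorized_key(updated, "ssh-rsa AAAAexample2 root@node2") == updated)  # -> True
    ```

    On a real system you would read and rewrite ~/.ssh/authorized_keys on each node with this merge, once per peer node's key.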

Map ISO to the VM's CD-ROM Drive

You can map the ISO to the VM's CD-ROM drive through either ESXi or KVM.

Note: The following script was created with Python 2.7.
On all hosts:
  1. Create the folder where the ISO will be stored.
    mkdir -p /data/ISO
  2. Create the folder where the VM will be stored. (If this procedure is used to re-install a CVP cluster on CVA appliances, make sure to remove old files from the /data/cvp folder.)
    mkdir -p /data/cvp
    cd /data/cvp
  3. Download the CVP image you want to deploy.
    wget http://dist/release/cvp/2018.2.5/final/cvp-2018.2.5-kvm.tgz
  4. Unarchive it.
    tar -xvf cvp-2018.2.5-kvm.tgz
  5. Download the CVP tools.
    wget http://dist/release/cvp/2018.2.5/final/cvp-tools-2018.2.5.tgz
  6. Unarchive it.
    tar -xvf cvp-tools-2018.2.5.tgz
On the primary:
  1. Modify the multinode.yaml file extracted from cvp-tools. It should look something like:
    common:
      cluster_interface: eth0
      device_interface: eth0
      dns:
      - 172.22.22.10
      ntp:
      - 172.22.22.50
    node1:
      default_route: 172.28.160.1
      hostname: cvp-applicance-1.sjc.aristanetworks.com
      interfaces:
        eth0:
          ip_address: 172.28.161.168
          netmask: 255.255.252.0
      vmname: cvp-appliance-1
    node2:
      default_route: 172.28.160.1
      hostname: cvp-applicance-2.sjc.aristanetworks.com
      interfaces:
        eth0:
          ip_address: 172.28.161.169
          netmask: 255.255.252.0
      vmname: cvp-appliance-2
    node3:
      default_route: 172.28.160.1
      hostname: cvp-applicance-3.sjc.aristanetworks.com
      interfaces:
        eth0:
          ip_address: 172.28.161.170
          netmask: 255.255.252.0
      vmname: cvp-appliance-3
    version: 2
    Note: The example above is from CVP 2018.2.5; more recent versions might have different key-value pairs, so it is always best to log into an existing VM and check /cvpi/cvp-config.yaml.
  2. Use the geniso.py script extracted from cvp-tools to generate the images for the ISO-based installation, feeding the YAML file into it:
    [root@localhost cvp]# ./geniso.py -y multinode.yaml
    Please enter a password for root user on cvp
    Password:
    Please re-enter the password:
    Building ISO for node1 cvp-appliance-1: cvp.iso.2019-07-26_17:01:14/node1-cvp-appliance-1.iso
    Building ISO for node2 cvp-appliance-2: cvp.iso.2019-07-26_17:01:14/node2-cvp-appliance-2.iso
    Building ISO for node3 cvp-appliance-3: cvp.iso.2019-07-26_17:01:14/node3-cvp-appliance-3.iso
  3. SCP the generated ISOs to the corresponding nodes.
    mv node1-cvp-appliance-1.iso /data/ISO
    scp node2-cvp-appliance-2.iso root@<node2>:/data/ISO/
    scp node3-cvp-appliance-3.iso root@<node3>:/data/ISO/
    
  4. On each node generate the xml file for KVM.
    ./generateXmlForKvm.py -n cvp --device-bridge devicebr -k 1 \
      -i cvpTemplate.xml -o qemuout.xml -x "/data/cvp/disk1.qcow2" \
      -y "/data/cvp/disk2.qcow2" -b 22528 -p 8 -e "/usr/libexec/qemu-kvm"
    
    Note: The above will generate the VM specs with 8 CPUs and 22GB of RAM; for production sizing, please refer to our Release Notes.
    Note: The above command will only work on RHEL-based systems; on Ubuntu, for instance, the binary might be in /usr/bin/kvm.
    To use both bridges (devicebr and clusterbr), the command would look like this:
    ./generateXmlForKvm.py -n cvp --device-bridge devicebr \
      --cluster-bridge clusterbr -k 1 -i cvpTemplate.xml -o qemuout.xml \
      -x "/data/cvp/disk1.qcow2" -y "/data/cvp/disk2.qcow2" \
      -b 54525 -p 28 -e "/usr/libexec/qemu-kvm"
    Or, using the raw disk format:
    ./generateXmlForKvm.py -n cvp --device-bridge devicebr \
      --cluster-bridge clusterbr -k 1 -i cvpTemplate.xml -o qemuout.xml \
      -x "/data/cvp/disk1.img" -y "/data/cvp/disk2.img" \
      -b 54525 -p 28 -e "/usr/libexec/qemu-kvm"
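
    A small sanity check of the flag values used above, on the assumption (consistent with the note) that -b takes memory in MB and -p the CPU count, so -b 22528 -p 8 corresponds to the quoted "8 CPU and 22GB of RAM":

    ```python
    # Convert the -b value (MB, assumed) to GB and pair it with the -p CPU count.
    mem_mb, cpus = 22528, 8
    print("%d GB RAM, %d CPUs" % (mem_mb // 1024, cpus))  # -> 22 GB RAM, 8 CPUs
    ```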
  5. Define the VM.
    virsh define qemuout.xml
    
  6. Start the VM.
    virsh start cvp
  7. Add the ISO image to the VM.
    1. Node1
      ./addIsoToVM.py -n cvp -c /data/ISO/node1-cvp-appliance-1.iso
      
    2. Node2
      ./addIsoToVM.py -n cvp -c /data/ISO/node2-cvp-appliance-2.iso
    3. Node3
      ./addIsoToVM.py -n cvp -c /data/ISO/node3-cvp-appliance-3.iso

    The VM will be rebooted and configured automatically, so you just have to log in and wait until the components come up.

    virsh console cvp

    [root@localhost cvp]# virsh console cvp
    Connected to domain cvp
    Escape character is ^]
    [ 30.729182] Ebtables v2.0 unregistered
    [ 31.253141] ip_tables: (C) 2000-2006 Netfilter Core Team
    [ 31.290314] ip6_tables: (C) 2000-2006 Netfilter Core Team
    [ 31.338226] Ebtables v2.0 registered
    [ 31.401887] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
    [124.829593] FS-Cache: Loaded
    [124.881829] FS-Cache: Netfs 'nfs' registered for caching
    
    CentOS Linux 7 (Core)
    Kernel 3.10.0-957.1.3.el7.x86_64 on an x86_64
    
    cvp-applicance-1 login: root
    Password:
    [root@cvp-applicance-1 ~]# su cvp
    [cvp@cvp-applicance-1 root]$ cvpi status all
    
    Current Running Command: [/cvpi/bin/cvpi -v=3 start all]
    Current Command Running Node: primary
    Executing command. This may take a few seconds...
    primary 	18/75 components running, 57 failed
    secondary 	16/86 components running, 70 failed
    tertiary 	13/42 components running, 29 failed
    A few minutes later:
    [cvp@cvp-applicance-1 root]$ cvpi status all
    
    Current Running Command: None
    Executing command. This may take a few seconds...
    primary 	75/75 components running
    secondary 	86/86 components running
    tertiary 	42/42 components running
CVP 2020.3.1 sample config (single node)
common:
  aeris_ingest_key: arista
  cluster_interface: eth0
  cv_wifi_enabled: 'no'
  device_interface: eth0
  dns:
  - 172.22.22.40
  dns_domains:
  - ire.aristanetworks.com
  - sjc.aristanetworks.com
  ntp:
  - ntp.aristanetworks.com
node1:
  default_route: 10.83.12.1
  hostname: cvp-2019-test.ire.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 10.83.12.79
      netmask: 255.255.255.0
  num_static_route: '0'
  tacacs:
    ip_address: 10.83.12.22
    key: arista
    port: '49'
version: 2
CVP 2021.1.0 sample config (multi node)
common:
  aeris_ingest_key: arista123
  cluster_interface: eth1
  cv_wifi_enabled: 'no'
  device_interface: eth0
  dns:
  - 172.22.22.10
  kube_cluster_network: 10.42.0.0/16
  ntp:
  - ntp1.aristanetworks.com
node1:
  default_route: 10.81.45.1
  hostname: cvp11.nh.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 10.81.45.243
      netmask: 255.255.255.0
    eth1:
      ip_address: 192.168.1.11
      netmask: 255.255.255.0
node2:
  default_route: 10.81.45.1
  hostname: cvp12.nh.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 10.81.45.247
      netmask: 255.255.255.0
    eth1:
      ip_address: 192.168.1.12
      netmask: 255.255.255.0
node3:
  default_route: 10.81.45.1
  hostname: cvp13.nh.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 10.81.45.251
      netmask: 255.255.255.0
    eth1:
      ip_address: 192.168.1.13
      netmask: 255.255.255.0
version: 2
CVP 2023.1.0 sample config (single node)
common:
  aeris_ingest_key: arista
  cluster_interface: eth0
  cv_wifi_enabled: 'no'
  deployment_model: DEFAULT
  device_interface: eth0
  dns:
  - 172.22.22.10
  dns_domains:
  - ire.aristanetworks.com
  kube_cluster_network: 10.42.0.0/16
  ntp_servers:
  - auth: 'no'
    server: ntp1.aristanetworks.com
  num_ntp_servers: '1'
node1:
  default_route: 10.83.12.1
  hostname: cvp-2019-test.ire.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 10.83.12.79
      netmask: 255.255.255.0
  num_static_route: '1'
  primary_ip: 10.83.12.79
  static_routes:
  - interface: eth0
    nexthop: 10.83.13.139
    route: 192.168.10.0/24
version: 2
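
A quick comparison of the common-section keys in the sample configs above, to highlight what changed between releases (for example, 2023.1.0 replaces the ntp list with ntp_servers/num_ntp_servers and adds deployment_model). The key sets below are transcribed from the samples; they are not an exhaustive schema.

```python
# common-section keys from the 2020.3.1 single-node sample
common_2020_3_1 = {"aeris_ingest_key", "cluster_interface", "cv_wifi_enabled",
                   "device_interface", "dns", "dns_domains", "ntp"}
# common-section keys from the 2023.1.0 single-node sample
common_2023_1_0 = {"aeris_ingest_key", "cluster_interface", "cv_wifi_enabled",
                   "deployment_model", "device_interface", "dns", "dns_domains",
                   "kube_cluster_network", "ntp_servers", "num_ntp_servers"}

print(sorted(common_2023_1_0 - common_2020_3_1))
# -> ['deployment_model', 'kube_cluster_network', 'ntp_servers', 'num_ntp_servers']
print(sorted(common_2020_3_1 - common_2023_1_0))
# -> ['ntp']
```

This is why the earlier note recommends checking /cvpi/cvp-config.yaml on an existing VM of the target version before hand-writing a config.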