ISO-based Configuration
The ISO-based configuration can be used to set up either a single-node or a multi-node CVP instance. Before configuring and starting CVP, complete the following tasks.
- Launch the VM (see Deploying CVP OVA on ESX or Deploying CVP on KVM).
- Create a YAML Document
- Feed the YAML File into the geniso.py Tool
- Map ISO to the VM's CD-ROM Drive
- Verify the host name, reachability of the name server, and VM connectivity (a sample check is sketched after this list).
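For the last item, a minimal sketch of such a check, run from the CVP VM's console, might look like the following; the addresses in angle brackets are placeholders, not values from this guide:
hostname -f                                           # confirm the configured host name
nslookup ntp.aristanetworks.com <dns-server-ip>       # confirm the name server answers queries
ping -c 3 <default-gateway-ip>                        # confirm basic VM connectivity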
Create a YAML Document
- The version field is required and must be 2.
- The dns and ntp entries are lists of values.
- The mode parameter is required. Options are mode: singlenode or mode: multinode.
- The dns and ntp parameters are optional, but their use is recommended.
Note: Parameters that are the same for all nodes can be specified only once, in the common section of the YAML. For example, default_route can be specified once in the common section rather than three times, once for each node.
Example:
The following example of a YAML document shows the use of separate (different) interfaces for cluster and device-facing networks. These parameters are explained in the previous section. For a single-node deployment, remove the sections for node2 and node3 (assuming all nodes are on the same subnet and have the same default route).
> cat multinode.yaml
version: 2
common:
  aeris_ingest_key: magickey
  cluster_interface: eth0
  default_route: 172.31.0.1
  mode: multinode
  device_interface: eth0
  dns:
    - 172.22.22.40
  ntp:
    - ntp.aristanetworks.com
node1:
  hostname: cvp6.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.31.3.236
      netmask: 255.255.0.0
  vmname: cvp6
node2:
  vmname: cvp9
  hostname: cvp9.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.31.3.239
      netmask: 255.255.0.0
    eth1:
      ip_address: 10.0.0.2
      netmask: 255.255.255.0
node3:
  vmname: cvp10
  hostname: cvp10.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.31.3.240
      netmask: 255.255.0.0
    eth1:
      ip_address: 10.0.0.3
      netmask: 255.255.255.0
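For a single-node deployment, a minimal sketch of the same document (the file name is arbitrary, and the addresses simply reuse the values from the example above for illustration) might look like:
> cat singlenode.yaml
version: 2
common:
  aeris_ingest_key: magickey
  cluster_interface: eth0
  default_route: 172.31.0.1
  mode: singlenode
  device_interface: eth0
  dns:
    - 172.22.22.40
  ntp:
    - ntp.aristanetworks.com
node1:
  hostname: cvp6.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.31.3.236
      netmask: 255.255.0.0
  vmname: cvp6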
Feed the YAML File into the geniso.py Tool
Once you have created the YAML file, you are ready to feed it into the geniso.py tool to generate the ISO files for the CVP nodes. The root password can be provided on the command line or entered at a prompt. If the password is empty, no password is set for root.
Complete the steps described in the following section.
Map ISO to the VM's CD-ROM Drive
You can map the ISO to the VM's CD-ROM drive through either ESXi or KVM. The steps below illustrate the KVM workflow.
- Create the folder where the ISO will be stored.
mkdir -p /data/ISO
- Create the folder where the VM will be stored. (If this procedure is used to re-install a CVP cluster on CVA appliances, make sure to remove old files from the /data/cvp folder.)
mkdir -p /data/cvp
cd /data/cvp
- Download the CVP image you want to deploy.
wget http://dist/release/cvp/2018.2.5/final/cvp-2018.2.5-kvm.tgz
- Unarchive it.
tar -xvf cvp-2018.2.5-kvm.tgz
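- Optionally, verify that the virtual disk images were extracted. (This check is not part of the original procedure; disk1.qcow2 and disk2.qcow2 are the file names referenced in the KVM commands later in this document, and exact names may vary by release.)
ls -lh /data/cvp
# expect to see disk1.qcow2 and disk2.qcow2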
- Download the CVP tools.
wget http://dist/release/cvp/2018.2.5/final/cvp-tools-2018.2.5.tgz
- Unarchive it.
tar -xvf cvp-tools-2018.2.5.tgz
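- Optionally, confirm the tools used in the remaining steps were extracted. (This is only a sanity check; the file names listed are those referenced later in this document, and the archive may contain additional files.)
ls
# expect geniso.py, generateXmlForKvm.py, addIsoToVM.py, cvpTemplate.xml and multinode.yaml, among others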
- Modify the multinode.yaml file extracted from cvp-tools. It should look something like this:
common:
  cluster_interface: eth0
  device_interface: eth0
  dns:
    - 172.22.22.10
  ntp:
    - 172.22.22.50
node1:
  default_route: 172.28.160.1
  hostname: cvp-applicance-1.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.28.161.168
      netmask: 255.255.252.0
  vmname: cvp-appliance-1
node2:
  default_route: 172.28.160.1
  hostname: cvp-applicance-2.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.28.161.169
      netmask: 255.255.252.0
  vmname: cvp-appliance-2
node3:
  default_route: 172.28.160.1
  hostname: cvp-applicance-3.sjc.aristanetworks.com
  interfaces:
    eth0:
      ip_address: 172.28.161.170
      netmask: 255.255.252.0
  vmname: cvp-appliance-3
version: 2
Note: The example above is from CVP 2018.2.5; more recent versions might have different key-value pairs, so it is always best to log in to an existing VM and check /cvpi/cvp-config.yaml.
- Use the geniso.py script extracted from cvp-tools to generate the images for the ISO-based installation and feed the YAML file into it:
[root@localhost cvp]# ./geniso.py -y multinode.yaml
Please enter a password for root user on cvp
Password:
Please re-enter the password:
Building ISO for node1 cvp-appliance-1: cvp.iso.2019-07-26_17:01:14/node1-cvp-appliance-1.iso
Building ISO for node2 cvp-appliance-2: cvp.iso.2019-07-26_17:01:14/node2-cvp-appliance-2.iso
Building ISO for node3 cvp-appliance-3: cvp.iso.2019-07-26_17:01:14/node3-cvp-appliance-3.iso
- SCP the generated ISOs to the corresponding nodes.
mv node1-cvp-appliance-1.iso /data/ISO
scp node2-cvp-appliance-2.iso <user>@<node2-address>:/data/ISO/
scp node3-cvp-appliance-3.iso <user>@<node3-address>:/data/ISO/
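- Optionally, confirm the ISO is present in /data/ISO on each node before continuing. (This listing is only a sanity check and is not part of the original procedure.)
ls -lh /data/ISO/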
- On each node generate the xml file for KVM.
./generateXmlForKvm.py -n cvp --device-bridge devicebr -k 1 -i cvpTemplate.xml -o qemuout.xml -x "/data/cvp/disk1.qcow2" -y "/data/cvp/disk2.qcow2" -b 22528 -p 8 -e "/usr/libexec/qemu-kvm"
Note: The above will generate the VM specs with 8 CPUs and 22 GB of RAM; for production use, please refer to our Release Notes.
Note: The above command will only work on RHEL-based systems; on Ubuntu, for instance, the binary might be in /usr/bin/kvm.
To use both bridges (devicebr and clusterbr), the command would look like this:
./generateXmlForKvm.py -n cvp --device-bridge devicebr --cluster-bridge clusterbr -k 1 -i cvpTemplate.xml -o qemuout.xml -x "/data/cvp/disk1.qcow2" -y "/data/cvp/disk2.qcow2" -b 54525 -p 28 -e "/usr/libexec/qemu-kvm"
Or, using the raw disk format:
./generateXmlForKvm.py -n cvp --device-bridge devicebr --cluster-bridge clusterbr -k 1 -i cvpTemplate.xml -o qemuout.xml -x "/data/cvp/disk1.img" -y "/data/cvp/disk2.img" -b 54525 -p 28 -e "/usr/libexec/qemu-kvm"
- Define the VM.
virsh define qemuout.xml
- Start the VM.
virsh start cvp
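- Optionally, confirm the domain was defined and is running. (virsh list is a standard libvirt command and is not specific to CVP.)
virsh list --all
# the cvp domain should be listed as running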
- Add the ISO image to the VM.
- Node1
./addIsoToVM.py -n cvp -c /data/ISO/node1-cvp-appliance-1.iso
- Node2
./addIsoToVM.py -n cvp -c /data/ISO/node2-cvp-appliance-2.iso
- Node3
./addIsoToVM.py -n cvp -c /data/ISO/node3-cvp-appliance-3.iso
The VM will be rebooted and configured automatically, so you just have to log in and wait until the components come up:
virsh console cvp
[root@localhost cvp]# virsh console cvp
Connected to domain cvp
Escape character is ^]
[   30.729182] Ebtables v2.0 unregistered
[   31.253141] ip_tables: (C) 2000-2006 Netfilter Core Team
[   31.290314] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   31.338226] Ebtables v2.0 registered
[   31.401887] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
[  124.829593] FS-Cache: Loaded
[  124.881829] FS-Cache: Netfs 'nfs' registered for caching

CentOS Linux 7 (Core)
Kernel 3.10.0-957.1.3.el7.x86_64 on an x86_64

cvp-applicance-1 login: root
Password:
[root@cvp-applicance-1 ~]# su cvp
[cvp@cvp-applicance-1 root]$ cvpi status all
Current Running Command: [/cvpi/bin/cvpi -v=3 start all]
Current Command Running Node: primary
Executing command. This may take a few seconds...
primary    18/75 components running, 57 failed
secondary  16/86 components running, 70 failed
tertiary   13/42 components running, 29 failed
A few minutes later:
[cvp@cvp-applicance-1 root]$ cvpi status all
Current Running Command: None
Executing command. This may take a few seconds...
primary    75/75 components running
secondary  86/86 components running
tertiary   42/42 components running
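Optionally, once all components are running, you can also confirm the CVP web interface responds from a workstation. (This check is not part of the original procedure; replace <cvp-node-ip> with the address of any node, and -k skips TLS certificate verification.)
curl -k -I https://<cvp-node-ip>/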
Additional example YAML configuration files are shown below.
Example: single-node configuration with TACACS parameters
common:
aeris_ingest_key: arista
cluster_interface: eth0
cv_wifi_enabled: 'no'
device_interface: eth0
dns:
- 172.22.22.40
dns_domains:
- ire.aristanetworks.com
- sjc.aristanetworks.com
ntp:
- ntp.aristanetworks.com
node1:
default_route: 10.83.12.1
hostname: cvp-2019-test.ire.aristanetworks.com
interfaces:
eth0:
ip_address: 10.83.12.79
netmask: 255.255.255.0
num_static_route: '0'
tacacs:
ip_address: 10.83.12.22
key: arista
port: '49'
version: 2
Example: multi-node configuration using eth1 as the cluster interface
common:
aeris_ingest_key: arista123
cluster_interface: eth1
cv_wifi_enabled: 'no'
device_interface: eth0
dns:
- 172.22.22.10
kube_cluster_network: 10.42.0.0/16
ntp:
- ntp1.aristanetworks.com
node1:
default_route: 10.81.45.1
hostname: cvp11.nh.aristanetworks.com
interfaces:
eth0:
ip_address: 10.81.45.243
netmask: 255.255.255.0
eth1:
ip_address: 192.168.1.11
netmask: 255.255.255.0
node2:
default_route: 10.81.45.1
hostname: cvp12.nh.aristanetworks.com
interfaces:
eth0:
ip_address: 10.81.45.247
netmask: 255.255.255.0
eth1:
ip_address: 192.168.1.12
netmask: 255.255.255.0
node3:
default_route: 10.81.45.1
hostname: cvp13.nh.aristanetworks.com
interfaces:
eth0:
ip_address: 10.81.45.251
netmask: 255.255.255.0
eth1:
ip_address: 192.168.1.13
netmask: 255.255.255.0
version: 2
Example: single-node configuration with a static route
common:
aeris_ingest_key: arista
cluster_interface: eth0
cv_wifi_enabled: 'no'
deployment_model: DEFAULT
device_interface: eth0
dns:
- 172.22.22.10
dns_domains:
- ire.aristanetworks.com
kube_cluster_network: 10.42.0.0/16
ntp_servers:
- auth: 'no'
server: ntp1.aristanetworks.com
num_ntp_servers: '1'
node1:
default_route: 10.83.12.1
hostname: cvp-2019-test.ire.aristanetworks.com
interfaces:
eth0:
ip_address: 10.83.12.79
netmask: 255.255.255.0
num_static_route: '1'
primary_ip: 10.83.12.79
static_routes:
- interface: eth0
nexthop: 10.83.13.139
route: 192.168.10.0/24
version: 2