Shell-based Configuration

The shell-based configuration can be used to set up either a single-node CVP instance or multi-node CVP instances. The steps vary depending on which type of deployment you are setting up.

Cluster and device interfaces

A cluster interface is the interface that can reach the other two nodes in a multi-node installation. A device interface is the interface managed devices use to connect to CVP; the ZTP configuration file is served over this interface. Both parameters are optional and default to eth0. Configuring separate interfaces is useful in deployments where a private network connects the managed devices to CVP and a public-facing network is used to reach the other cluster nodes and the GUI.
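
For example, in a deployment where managed devices reach CVP over a dedicated private NIC while the other cluster nodes and the GUI are reached over eth0, you would answer the two interface prompts as follows. This is a sketch only; using eth1 as the device-facing interface is an assumption that depends on your VM's network layout:

    Cluster Interface Name: eth0
    Device Interface Name: eth1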

Configuring a Single-Node CVP Instance using CVP Shell

After initial bootup, CVP can be configured at the VM's console using the CVP config shell. At points during the configuration, you must start the network, NTPD, and CVP services. Starting these services may take some time to complete before moving on to the next step in the process.
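
The shell prints each service as it stops and starts it. To confirm that a service has settled before moving on, the checks the configuration tool runs can be repeated by hand. The following is a minimal sketch, assuming a root shell on the node; these are the same systemctl calls that appear in the transcripts below:

    # Each command prints 'active' once the corresponding service is up
    systemctl is-active network
    systemctl is-active ntpd
    systemctl is-active cvpi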

Prerequisites:

Before you begin the configuration process, make sure the prerequisites for your deployment have been met.

To configure CVP using the CVP config shell:

  1. Log in at the VM console as cvpadmin.
  2. Enter your configuration and apply it (see the following example).
    In the following example of a CVP shell session, the root password is not set (it is not set by default), and the bold text is entered by the cvpadmin user.

    Accept the default, or choose a custom internal cluster network for the internal Kubernetes clustering.

    Note: To skip NAT and static routes, simply press Enter when prompted.
    localhost login: cvpadmin
    Changing password for user root.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
    CVP Installation Menu
    ────────────────────────────────────────────
    [q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
    >s
    Enter the configuration for CloudVision Portal and apply it when done.
    Entries marked with '*' are required.
    
    
    Common Configuration:
    ────────────────────────────────────────────
     CloudVision Deployment Model [d]efault [w]ifi_analytics: d
    DNS Server Addresses (IPv4 Only): 172.22.22.40
    
    
    DNS Domain Search List: sjc.aristanetworks.com, ire.aristanetworks.com
    Number of NTP Servers: 1
    NTP Server Address (IPv4 or FQDN) #1: ntp.aristanetworks.com
    Is Auth enabled for NTP Server #1: no
    Cluster Interface Name: eth0
    Device Interface Name: eth0
    CloudVision WiFi Enabled: no
    
    
     *Enter a private IP range for the internal cluster network (overlay): 10.42.0.0/16
     *FIPS mode: no
    Node Configuration:
    ─────────────────────────────────────────────
     *Hostname (FQDN): cvp80.sjc.aristanetworks.com
     *IP Address of eth0: 172.31.0.168
     *Netmask of eth0: 255.255.0.0
    NAT IP Address of eth0:
     *Default Gateway: 172.31.0.1
    Number of Static Routes: 1
     Route for Static Route #1: 1.1.1.0
     TACACS Server IP Address:
    
     Singlenode Configuration Menu
    ────────────────────────────────────────────
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
    >v
    Valid config format.
    Applying proposed config for network verification.
    saved config to /cvpi/cvp-config.yaml
    Running : cvpConfig.py tool...
    [189.568543] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [189.576571] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [203.860624] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [203.863878] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [204.865253] Ebtables v2.0 unregistered
    [205.312888] ip_tables: (C) 2000-2006 Netfilter Core Team
    [205.331703] ip6_tables: (C) 2000-2006 Netfilter Core Team
    [205.355522] Ebtables v2.0 registered
    [205.398575] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
    Stopping: network
    Running : /bin/sudo /sbin/service network stop
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network.service
    [206.856170] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [206.858797] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [206.860627] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [207.096883] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [211.086390] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [211.089157] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [211.091084] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [211.092424] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    [211.245437] warning: `/bin/ping' has both setuid-root and effective capabilities. Therefore not raising all capabilities.
    Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
    These interfaces are not managed by CVP.
    Please ensure that the configurations for these interfaces are correct.
    Otherwise, actions from the CVP shell may fail.
    
    Valid config.
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
    
    
    

    Validating the Configuration

    
    >v
    Valid config format.
    Applying proposed config for network verification.
    saved config to /cvpi/cvp-config.yaml
    Running : cvpConfig.py tool...
    Stopping: network
    Running : /bin/sudo /bin/systemctl stop network
    Running : /bin/sudo /bin/systemctl is-active network
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network
    Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
    These interfaces are not managed by CVP.
    Please ensure that the configurations for these interfaces are correct.
    Otherwise, actions from the CVP shell may fail.
    
    Valid config.
    

    Applying the Configuration

    
    >a
    Valid config format.
    saved config to /cvpi/cvp-config.yaml
    Applying proposed config for network verification.
    saved config to /cvpi/cvp-config.yaml
    Running : cvpConfig.py tool...
    Stopping: network
    Running : /bin/sudo /bin/systemctl stop network
    Running : /bin/sudo /bin/systemctl is-active network
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network
    Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
    These interfaces are not managed by CVP.
    Please ensure that the configurations for these interfaces are correct.
    Otherwise, actions from the CVP shell may fail.
    
    Valid config.
    Running : cvpConfig.py tool...
    Stopping: network
    Running : /bin/sudo /bin/systemctl stop network
    Running : /bin/sudo /bin/systemctl is-active network
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network
    Running : /bin/sudo /bin/systemctl is-active etcd
    Internal error, unknown service 'etcd'
    Running :/bin/sudo /bin/systemctl stop kube-cluster.path on 172.30.41.190
    Running :/bin/sudo /bin/systemctl stop kube-cluster.service on 172.30.41.190
    Checking if interface flannelbr0 is present
    Run cmd: sudo -u cvp -- ssh 172.30.41.190 /usr/sbin/ip link show flannelbr0 0.18
    Checking if interface flannel.1 is present
    Run cmd: sudo -u cvp -- ssh 172.30.41.190 /usr/sbin/ip link show flannel.1 0.17
    Running : /bin/sudo /bin/systemctl is-active zookeeper
    Starting: systemd services
    Running : cvpConfig.py tool...
    Stopping: cvpi
    Running : /bin/sudo /bin/systemctl stop cvpi
    Running : /bin/sudo /bin/systemctl is-active cvpi
    Running : /bin/sudo /bin/systemctl is-active cvpi
    Stopping: cvpi-config
    Running : /bin/sudo /bin/systemctl stop cvpi-config
    Running : /bin/sudo /bin/systemctl is-active cvpi-config
    Running : /bin/sudo /bin/systemctl is-active cvpi-config
    Stopping: zookeeper
    Running : /bin/sudo /bin/systemctl stop zookeeper
    Running : /bin/sudo /bin/systemctl is-active zookeeper
    Running : /bin/sudo /bin/systemctl is-active zookeeper
    Stopping: cvpi-check
    Running : /bin/sudo /bin/systemctl stop cvpi-check
    Running : /bin/sudo /bin/systemctl is-active cvpi-check
    Running : /bin/sudo /bin/systemctl is-active cvpi-check
    Stopping: ntpd
    Running : /bin/sudo /bin/systemctl stop ntpd
    Running : /bin/sudo /bin/systemctl is-active ntpd
    Running : /bin/sudo /bin/systemctl is-active ntpd
    Starting: ntpd
    Running : /bin/sudo /bin/systemctl start ntpd
    Starting: cvpi-check
    Running : /bin/sudo /bin/systemctl start cvpi-check
    Starting: zookeeper
    Running : /bin/sudo /bin/systemctl start zookeeper
    Starting: cvpi-config
    Running : /bin/sudo /bin/systemctl start cvpi-config
    Starting: cvpi
    Running : /bin/sudo /bin/systemctl start cvpi
    Running : /bin/sudo /bin/systemctl start cvpi-watchdog.timer
    Running : /bin/sudo /bin/systemctl enable cert-rotate.timer
    Running : /bin/sudo /bin/systemctl start cert-rotate.timer
    Running : /bin/sudo /bin/systemctl enable ambassador-cert-rotate.timer
    Running : /bin/sudo /bin/systemctl start ambassador-cert-rotate.timer
    Running : /bin/sudo /bin/systemctl enable ssl-cert-expiry.timer
    Running : /bin/sudo /bin/systemctl start ssl-cert-expiry.timer
    Running : /bin/sudo /bin/systemctl enable docker containerd
    Running : /bin/sudo /bin/systemctl start docker containerd
    Running :/bin/sudo /bin/systemctl enable kube-cluster.path on 172.30.41.190
    Running :/bin/sudo /bin/systemctl start kube-cluster.path on 172.30.41.190
    Waiting for all components to start. This may take few minutes.
    Still waiting for flannel coredns descheduler fluent-bit mutating-webhook-server mutating-webhook clickhouse namenode datanode nfs3 ... (total 217)
    Still waiting for clickhouse hbasemaster regionserver hbase kafka dispatcher apiserver nginx-init-V1 nginx-app apiserver-www ... (total 203)
    Still waiting for dispatcher apiserver nginx-init-V1 nginx-app apiserver-www local-provider radius-provider tacacs-provider aaa disk-usage-monitor ... (total 198)
    Still waiting for nginx-app apiserver-www local-provider radius-provider tacacs-provider aaa disk-usage-monitor ingest elasticsearch-server elasticsearch-exporter ... (total 195)
    Still waiting for nginx-app apiserver-www local-provider radius-provider tacacs-provider aaa ingest elasticsearch-server elasticsearch-exporter elasticsearch-dispatcher ... (total 194)
    Still waiting for apiserver-www aaa ingest elasticsearch-server elasticsearch-exporter elasticsearch-dispatcher elasticsearch-recorder service-accesscontrol aerisdiskmonitor ambassador ... (total 190)
    Still waiting for ingest enroll-www turbine-accumulator-seg-sec-1m turbine-aggregate-connectivity-monitor-15m turbine-aggregate-counter-rate-15m turbine-aggregate-counter-rate-1m turbine-aggregate-dom-metrics-15m turbine-aggregate-dom-metrics-sfp-1m turbine-aggregate-hardware-table-usage-15m turbine-aggregate-hardware-table-usage-1m ... (total 92)
    Still waiting for ingest enroll-www turbine-count-dot1x-auth-status-per-intf turbine-device-aggregate-seg-sec-count-1m turbine-entities-dot1x-wired turbine-eos-links turbine-event-cusum-stats-connectivity-monitor turbine-event-ipsec-connectivity-down turbine-event-lin-predictor-stats-hardware turbine-event-threshold-analytics-errors ... (total 82)
    Still waiting for ingest enroll-www turbine-network-node-event-mapper turbine-network-topology-tagger turbine-network-vxlan-neighbors turbine-rate-bandwidth turbine-rate-intf-counters turbine-rate-openconfig-intf-counters turbine-rate-port-channel-counters turbine-rate-seg-sec-counters ... (total 53)
    Still waiting for ingest enroll-www turbine-windfarm-count-bgp-peer turbine-windfarm-count-intf-roles turbine-windfarm-device-resource-aggregate turbine-windfarm-dom-metrics-qsfp turbine-windfarm-dom-metrics-sfp turbine-windfarm-eos-version turbine-windfarm-event-change-control turbine-windfarm-event-intf-status ... (total 24)
    Still waiting for kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet ingest docker enroll-www etcd turbine-windfarm-lanz-data ... (total 19)
    Running : cvpConfig.py tool...
    Stopping wifimanager
    Running : su - cvp -c "cvpi stop wifimanager 2>&1"
    Stopping aware
    Running : su - cvp -c "cvpi stop aware 2>&1"
    Disabling wifimanager
    Running : su - cvp -c "cvpi disable wifimanager 2>&1"
    Disabling aware
    Running : su - cvp -c "cvpi disable aware 2>&1"
    CVP installation successful
    
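    After the shell reports CVP installation successful, component health can be confirmed the same way as in the multi-node procedure: log in as root, switch to the cvp user, and run cvpi status. This is a sketch; the exact component list varies by release:

    su cvp
    cvpi status all    # reports the status of every CVP component on the node
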

Configuring Multi-node CVP Instances Using the CVP Shell

Use this procedure to configure multi-node CVP instances using the CVP shell. This procedure includes the steps to set up a primary, secondary, and tertiary node, which is the number of nodes required for redundancy. It also includes the steps to verify and apply the configuration of each node.

The sequence of steps in this procedure follows the process described in The Basic Steps in the Process (below).

Prerequisites:

Before you begin the configuration process, make sure the prerequisites for your deployment have been met.

Complete the following steps to configure multi-node CVP instances:

  1. Log in at the VM console for the primary node as cvpadmin.
  2. At the CVP installation mode prompt, type m to select a multi-node configuration.
  3. At the prompt to select a role for the node, type p to select primary node.
    Note: You must select primary first. You cannot configure one of the other nodes before you configure the primary node.
  4. Follow the CloudVision Portal prompts to specify the configuration options for the primary node. All options with an asterisk (*) are required. The options include:
    • Root password (*)
    • Default route (*)
    • DNS (*)
    • NTP (*)
    • Telemetry Ingest Key
    • Cluster interface name (*)
    • Device interface name (*)
    • Hostname (*)
    • IP address (*)
    • Netmask (*)
    • Number of static routes
    • Route for each static route
    • Interface for static route
    • TACACS server IP address
    • TACACS server key/port
    • IP address of primary (*) for secondary/tertiary only
    Note: If there are separate cluster and device interfaces (the interfaces have different IP addresses), make sure that you enter the hostname of the cluster interface. If the cluster and device interface are the same (for example, they are eth0), make sure you enter the IP address of eth0 for the hostname.
    Note: The following is an example of the configuration information that requires verification. A CVP cluster MUST be able to resolve A and PTR records in DNS for each cluster node. This forward and reverse DNS lookup MUST be verified; perform an nslookup for each node to verify it (see the verification sketch following this procedure). This is an important step in CVP forming the cluster during initial setup. For more information on how to use nslookup, refer to Connectivity Requirements.
    Note: NTP synchronization is important for CVP cluster nodes, and for EOS streaming telemetry to CVP. Verify the NTP service using a tool such as ntpdate (see the verification sketch following this procedure). For more information on how to use ntpdate, refer to Connectivity Requirements.
  5. At the following prompt, type v to verify the configuration.
    [q]uit, [p]rint, [e]dit, [v]erify, [s]ave, [a]pply, [h]elp ve[r]bose.

    If the configuration is valid, the system shows a Valid config status message.

  6. Type a to apply the configuration for the primary node and wait for the line Waiting for other nodes to send their hostname and ip, which is displayed with a spinning wheel.

    The system automatically saves the configuration as a YAML document and shows the configuration settings in pane 1 of the shell.

  7. When the primary node prints the Waiting for other nodes to send their hostname and ip line, go to the shell for the secondary node and specify the configuration settings for the secondary node. (All options with an asterisk (*) are required, including the primary node IP address.)
  8. At the following prompt, type v to verify the configuration.
    [q]uit, [p]rint, [e]dit, [v]erify, [s]ave, [a]pply, [h]elp ve[r]bose. 

    If the configuration is valid, the system shows a Valid config status message.

  9. Type a to apply the configuration for the secondary node.
    The system automatically saves the configuration as a YAML document and displays the configuration settings in pane 1 of the shell.
  10. At the Primary's root password prompt, enter the password for the primary node, and then press Enter.
  11. Go to the shell for the tertiary node, and specify the configuration settings for the node. (All options with an asterisk (*) are required.)
  12. At the following prompt, type v to verify the configuration.
    [q]uit, [p]rint, [e]dit, [v]erify, [s]ave, [a]pply, [h]elp ve[r]bose.

    If the configuration is valid, the system shows a Valid config status message.

  13. At the Primary IP prompt, type the IP address of the primary node.
  14. At the Primary's root password prompt, type the password for the primary node, and then press Enter.

    The system automatically completes the CVP installation for all nodes (this is done by the primary node). A message appears indicating that the other nodes are waiting for the primary node to complete the CVP installation.

    When the CVP installation is successfully completed for a particular node, a message appears in the appropriate pane to indicate the installation was successful. (This message is repeated in each pane.)

  15. Go to the shell for the primary node, and type q to quit the installation.
  16. At the cvp login prompt, log in as root.
  17. At the [root@cvplogin]# prompt, switch to the cvp user account by typing su cvp, and then press Enter.
  18. Run the cvpi status all command, and press Enter.

    The system automatically checks the status of the installation for each node and provides status information in each pane for CVP. The information shown includes some of the configuration settings for each node.
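
The DNS, NTP, and status checks called out in the notes above can be run from any node shell before and after the installation. The following is a minimal verification sketch; the hostname and addresses are taken from the examples in this document and must be replaced with your own values:

    # Forward lookup: each cluster node's FQDN must resolve to its IP (A record)
    nslookup cvp57.sjc.aristanetworks.com
    # Reverse lookup: each node's IP must resolve back to its FQDN (PTR record)
    nslookup 172.31.0.186
    # Query the NTP server without setting the clock (query-only)
    ntpdate -q ntp.aristanetworks.com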

Rules for the Number and Type of Nodes

Three nodes are required for multi-node CVP instances, where a node is identified as either the primary, secondary, or tertiary. You define the node type (primary, secondary, or tertiary) for each node during the configuration.

The Basic Steps in the Process

All multi-node configurations follow the same basic process. The basic steps are:

  1. Specify the settings for the nodes in the following sequence (you apply the configuration later in the process):
    • Primary node
    • Secondary node
    • Tertiary node
  2. Verify and then apply the configuration for the primary node. (During this step, the system automatically saves the configuration for the primary node as a YAML document. In addition, the system shows the configuration settings.)

    Once the system applies the configuration for the primary node, the other nodes need to send their hostname and IP address to the primary node.

  3. Verify and then apply the configuration for the secondary node.

    As part of this step, the system automatically pushes the hostname, IP address, and public key of the secondary node to the primary node. The primary node also sends a consolidated YAML to the secondary node, which is required to complete the configuration of the secondary node.

    Note: To ensure the environment variables are generated, only apply the configuration when the following messages are displayed.

    Only apply the configuration on the secondary and tertiary nodes after the primary has finished its configuration and displays: "Waiting for other nodes to send their hostname and ip."

    The secondary and tertiary nodes will display the following message: "Please wait for primary to show "Waiting for other nodes to send their hostname and ip" before applying."

    If the configuration is applied before the message is displayed, the environment variables will not be generated.

  4. The previous step (verifying and applying the configuration) is repeated for the tertiary node. (The automated processing of data described for the secondary node is also repeated for the tertiary node.)

    Once the configuration for all nodes has been applied (steps 1 through 4 above), the system automatically attempts to complete the CVP installation for all nodes (this is done by the primary node). A message appears indicating that the other nodes are waiting for the primary node to complete the CVP installation.

  5. You quit the installation, then log in as root and check the status of CVP.

    The system automatically checks the status and provides status information in each pane for the CVP service.

The CVP Shell

For multi-node configurations, you need to open three CVP consoles (one for each node). Each console is shown in its own pane. You use each console to configure one of the nodes (primary, secondary, or tertiary).

The system also provides status messages and all of the options required to complete the multi-node configuration. The status messages and options are presented in the panes of the shell that correspond to the node type.

Figure 1 shows three CVP Console shells for multi-node configurations. Each shell corresponds to a CVP Console for each node being configured.


Figure 1. CVP Console Shells for Multi-node Configurations

Examples


The following examples show the commands used to configure (set up) the primary, secondary, and tertiary nodes, and apply the configurations to the nodes. Examples are also included of the system output shown as CVP completes the installation for each of the nodes.

Primary Node Configuration

This example shows the commands used to configure (set up) the primary node.

localhost login: cvpadmin
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Enter a command
[q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
>m
Choose a role for the node, roles should be mutually exclusive
[p]rimary [s]econdary [t]ertiary
>p

Enter the configuration for CloudVision Portal and apply it when done.
Entries marked with '*' are required.

common configuration:
dns: 172.22.22.40, 172.22.22.10
DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
ntp: ntp.aristanetworks.com
Telemetry Ingest Key: arista
CV-CUE Enabled: no
CV-CUE HA cluster IP:
Cluster Interface name: eth0
Device Interface name: eth0
node configuration:
 *hostname (fqdn): cvp57.sjc.aristanetworks.com
 *default route: 172.31.0.1
Number of Static Routes:
TACACS server ip address:
 *IP address of eth0: 172.31.0.186
 *Netmask of eth0: 255.255.0.0
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Secondary Node Configuration

This example shows the commands used to configure (set up) the secondary node.

Note: To ensure the environment variables are generated, only apply the configuration when the following messages are displayed.

Only apply the configuration on the secondary and tertiary nodes after the primary has finished its configuration and displays: "Waiting for other nodes to send their hostname and ip."

The secondary and tertiary nodes will display the following message: "Please wait for primary to show "Waiting for other nodes to send their hostname and ip" before applying."

If the configuration is applied before the message is displayed, the environment variables will not be generated.

localhost login: cvpadmin
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Enter a command
[q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
>m
Choose a role for the node, roles should be mutually exclusive
[p]rimary [s]econdary [t]ertiary
>s

Enter the configuration for CloudVision Portal and apply it when done.
Entries marked with '*' are required.

common configuration:
dns: 172.22.22.40, 172.22.22.10
DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
ntp: ntp.aristanetworks.com
Telemetry Ingest Key: arista
CV-CUE Enabled: no
CV-CUE HA cluster IP:
Cluster Interface name: eth0
Device Interface name: eth0
*IP address of primary: 172.31.0.186
node configuration:
 *hostname (fqdn): cvp65.sjc.aristanetworks.com
 *default route: 172.31.0.1
Number of Static Routes:
TACACS server ip address:
 *IP address of eth0: 172.31.0.153
 *Netmask of eth0: 255.255.0.0
>

Tertiary Node Configuration

This example shows the commands used to configure (set up) the tertiary node.

Note: To ensure the environment variables are generated, only apply the configuration when the following messages are displayed.

Only apply the configuration on the secondary and tertiary nodes after the primary has finished its configuration and displays: "Waiting for other nodes to send their hostname and ip."

The secondary and tertiary nodes will display the following message: "Please wait for primary to show "Waiting for other nodes to send their hostname and ip" before applying."

If the configuration is applied before the message is displayed, the environment variables will not be generated.

Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Enter a command
[q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
>m
Choose a role for the node, roles should be mutually exclusive
[p]rimary [s]econdary [t]ertiary
>t

Enter the configuration for CloudVision Portal and apply it when done.
Entries marked with '*' are required.

common configuration:
dns: 172.22.22.40, 172.22.22.10
DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
ntp: ntp.aristanetworks.com
Telemetry Ingest Key: arista
Cluster Interface name: eth0
Device Interface name: eth0
 *IP address of primary: 172.31.0.186
node configuration:
 *hostname (fqdn): cvp84.sjc.aristanetworks.com
 *default route: 172.31.0.1
Number of Static Routes:
TACACS server ip address:
 *IP address of eth0: 172.31.0.213
 *Netmask of eth0: 255.255.0.0
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Verifying the Primary Node Configuration and Applying it to the Node

This example shows the commands used to verify the configuration of the primary node and apply the configuration to the node.

[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>v
Valid config format.
Applying proposed config for network verification.
saved config to /cvpi/cvp-config.yaml
Running : cvpConfig.py tool...
[ 8608.509056] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 8608.520693] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 8622.807169] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 8622.810214] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[ 8624.027029] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 8624.030254] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 8624.032643] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 8624.238995] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 8638.294690] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 8638.297973] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[ 8638.300454] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 8638.302186] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[ 8638.489266] warning: `/bin/ping' has both setuid-root and effective capabilities. Therefore not raising all capabilities.
Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
These interfaces are not managed by CVP.
Please ensure that the configurations for these interfaces are correct.
Otherwise, actions from the CVP shell may fail.

Valid config.
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Verifying the Tertiary Node Configuration and Applying it to the Node

This example shows the commands used to verify the configuration of the tertiary node and apply the configuration to the node.

[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>v
Valid config format.
Applying proposed config for network verification.
saved config to /cvpi/cvp-config.yaml
Running : cvpConfig.py tool...
[ 9195.362192] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 9195.365069] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 9195.367043] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 9195.652382] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 9209.588173] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 9209.590896] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[ 9209.592887] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 9209.594222] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[ 9210.561940] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 9210.564602] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 9224.805267] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 9224.808891] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[ 9224.811150] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 9224.812899] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
These interfaces are not managed by CVP.
Please ensure that the configurations for these interfaces are correct.
Otherwise, actions from the CVP shell may fail.

Valid config.
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Waiting for the Primary Node Installation to Finish

These examples show the system output displayed as CVP completes the installation for the primary node.
  • Waiting for primary node installation to pause until other nodes send files
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
    >a
    Valid config format.
    saved config to /cvpi/cvp-config.yaml
    Applying proposed config for network verification.
    saved config to /cvpi/cvp-config.yaml
    Running : cvpConfig.py tool...
    [15266.575899] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15266.588500] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15266.591751] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15266.672644] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15280.937599] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15280.941764] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15280.944883] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15280.947038] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Stopping: network
    Running : /bin/sudo /sbin/service network stop
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network.service
    [15282.581713] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15282.585367] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15282.588072] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15282.948613] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15296.871658] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15296.875871] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15296.879003] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15296.881456] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
    These interfaces are not managed by CVP.
    Please ensure that the configurations for these interfaces are correct.
    Otherwise, actions from the CVP shell may fail.
    
    Valid config.
    Running : cvpConfig.py tool...
    [15324.884887] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15324.889169] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15324.893217] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15324.981682] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15339.240237] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15339.243999] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15339.247119] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15339.249370] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Stopping: network
    Running : /bin/sudo /sbin/service network stop
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network.service
    [15340.946583] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15340.950891] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15340.953786] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15341.251648] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15355.225649] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15355.229400] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15355.232674] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15355.234725] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Waiting for other nodes to send their hostname and ip
    \
  • Waiting for the primary node installation to finish
    Waiting for other nodes to send their hostname and ip
    -
    Running : cvpConfig.py tool...
    [15707.665618] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15707.669167] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15707.672109] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15708.643628] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15722.985876] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15722.990116] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15722.993221] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15722.995325] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    [15724.245523] Ebtables v2.0 unregistered
    [15724.940390] ip_tables: (C) 2000-2006 Netfilter Core Team
    [15724.971820] ip6_tables: (C) 2000-2006 Netfilter Core Team
    [15725.011963] Ebtables v2.0 registered
    [15725.077660] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
    Stopping: ntpd
    Running : /bin/sudo /sbin/service ntpd stop
    Running : /bin/sudo /bin/systemctl is-active ntpd
    Starting: ntpd
    Running : /bin/sudo /bin/systemctl start ntpd.service
    --
    Verifying configuration on the secondary node
    Verifying configuration on the tertiary node
    Starting: systemd services
    Starting: cvpi-check
    Running : /bin/sudo /bin/systemctl start cvpi-check.service
    Starting: zookeeper
    Running : /bin/sudo /bin/systemctl start zookeeper.service
    Starting: cvpi-config
    Running : /bin/sudo /bin/systemctl start cvpi-config.service
    Starting: cvpi
    Running : /bin/sudo /bin/systemctl start cvpi.service
    Running : /bin/sudo /bin/systemctl enable zookeeper
    Running : /bin/sudo /bin/systemctl start cvpi-watchdog.timer
    Running : /bin/sudo /bin/systemctl enable docker
    Running : /bin/sudo /bin/systemctl start docker
    Running : /bin/sudo /bin/systemctl enable kube-cluster.path
    Running : /bin/sudo /bin/systemctl start kube-cluster.path
    Waiting for all components to start. This may take few minutes.
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit aware ... (total 271)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 236)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 236)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 229)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 228)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 213)
    Still waiting for aaa alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode bugalerts-query-tagger ... (total 199)
    Still waiting for aaa alertmanager-multinode-service ambassador apiserver apiserver apiserver apiserver-www apiserver-www apiserver-www audit ... (total 181)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www audit bgpmaintmode bugalerts-update ccapi ccmgr ... (total 121)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www audit bgpmaintmode ccapi ccmgr certs ... (total 78)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www audit certs cloudmanager compliance cvp-backend ... (total 44)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www certs cloudmanager cloudmanager cloudmanager compliance ... (total 35)
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for cvp-frontend cvp-frontend cvp-frontend
    CVP installation successful
    Running : cvpConfig.py tool...
    Stopping wifimanager
    Running : su - cvp -c "cvpi stop wifimanager"
    Stopping aware
    Running : su - cvp -c "cvpi stop aware"
    Disabling wifimanager
    Running : su - cvp -c "cvpi disable wifimanager"
    Disabling aware
    Running : su - cvp -c "cvpi disable aware"
    
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose

Waiting for the Secondary and Tertiary Node Installation to Finish

This example shows the system output displayed as CVP completes the installation for the secondary and tertiary nodes.

[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>a
Valid config format.
saved config to /cvpi/cvp-config.yaml
Applying proposed config for network verification.
saved config to /cvpi/cvp-config.yaml
Running : cvpConfig.py tool...
[15492.903419] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15492.908473] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15492.910297] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15493.289569] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15507.118778] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15507.121579] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15507.123648] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15507.125051] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[15508.105909] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15508.108752] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15522.301114] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15522.303766] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15522.305580] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15522.306866] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
These interfaces are not managed by CVP.
Please ensure that the configurations for these interfaces are correct.
Otherwise, actions from the CVP shell may fail.

Valid config.
Running : cvpConfig.py tool...
[15549.664989] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15549.667899] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15549.669783] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15550.046552] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15563.933328] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15563.937507] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15563.940501] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15563.942113] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[15565.218666] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15565.222324] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15565.225193] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15565.945531] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15579.419911] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15579.422707] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15579.424636] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15579.425962] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Running : cvpConfig.py tool...
[15600.608075] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15600.610946] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15600.613687] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15600.986529] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15615.840426] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15615.843207] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15615.845197] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15615.846633] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[15616.732733] Ebtables v2.0 unregistered
[15617.213057] ip_tables: (C) 2000-2006 Netfilter Core Team
[15617.233688] ip6_tables: (C) 2000-2006 Netfilter Core Team
[15617.261149] Ebtables v2.0 registered
[15617.309743] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Stopping: ntpd
Running : /bin/sudo /sbin/service ntpd stop
Running : /bin/sudo /bin/systemctl is-active ntpd
Starting: ntpd
Running : /bin/sudo /bin/systemctl start ntpd.service
Pushing hostname, ip address and public key to the primary node
Primary's root password:
Transferred files
Receiving public key of the primary node
-
Waiting for primary to send consolidated yaml
-
Received authorized keys and consolidated yaml files
Running : /bin/sudo /bin/systemctl start cvpi-watchdog.timer
Running : cvpConfig.py tool...
[15748.205170] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15748.208393] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15748.210206] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15748.591559] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15752.406867] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15752.409789] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15752.412015] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15752.413603] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: zookeeper
Running : /bin/sudo /sbin/service zookeeper stop
Running : /bin/sudo /bin/systemctl is-active zookeeper
Stopping: cvpi-check
Running : /bin/sudo /sbin/service cvpi-check stop
Running : /bin/sudo /bin/systemctl is-active cvpi-check
Stopping: ntpd
Running : /bin/sudo /sbin/service ntpd stop
Running : /bin/sudo /bin/systemctl is-active ntpd
Starting: ntpd
Running : /bin/sudo /bin/systemctl start ntpd.service
Starting: cvpi-check
Running : /bin/sudo /bin/systemctl start cvpi-check.service
Starting: zookeeper
Running : /bin/sudo /bin/systemctl start zookeeper.service
Running : /bin/sudo /bin/systemctl enable docker
Running : /bin/sudo /bin/systemctl start docker
Running : /bin/sudo /bin/systemctl enable kube-cluster.path
Running : /bin/sudo /bin/systemctl start kube-cluster.path
Running : /bin/sudo /bin/systemctl enable zookeeper
Running : /bin/sudo /bin/systemctl enable cvpi
Waiting for primary to finish configuring cvp.
-
Please wait for primary to complete cvp installation.
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>