Using the DMF Service Node Appliance

This chapter describes how to configure the managed services provided by the DANZ Monitoring Fabric (DMF) Service Node Appliance.

Overview

The DANZ Monitoring Fabric (DMF) Service Node has multiple interfaces that receive traffic for processing and analysis. Each interface can be programmed independently to provide any supported managed-service action.

To create a managed service, identify a switch interface connected to the service node, specify the service action, and configure the service action options.

Configure a DMF policy to use the managed service by name. This causes the Controller to forward the traffic selected by the policy to the service node. The processed traffic is returned to the monitoring fabric using the same interface and sent to the tools (delivery interfaces) defined in the DMF policy.

If the traffic selected by the policy exceeds the capacity of a single service node interface, define a LAG on the switch connected to the service node, then use the LAG interface when defining the managed service. All service node interfaces connected to the LAG are configured to perform the same action. The selected traffic is automatically load-balanced among the LAG member interfaces, and the return traffic is distributed in the same way.
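The following sketch, in running-config form, illustrates this flow; the switch, interface, policy, and service names are placeholders, and the deduplication action stands in for any supported service action:
! managed-service
managed-service MS-EXAMPLE-1
1 dedup full-packet
service-interface switch DMF-CORE-SWITCH-1 lag1
! policy
policy EXAMPLE-POLICY
action forward
filter-interface TAP-1
delivery-interface TOOL-1
use-managed-service MS-EXAMPLE-1 sequence 1
1 match any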

Changing the Service Node Default Configuration

Configuration settings are automatically downloaded to the service node from the DANZ Monitoring Fabric (DMF) Controller to eliminate the need for box-by-box configuration. However, you can override the default configuration for any service node from the config-service-node submode.
Note: These options are available only from the CLI and are not included in the DMF GUI.
To change the CLI mode to config-service-node, enter the following command from config mode on the Active DMF controller:
controller-1(config)# service-node <service_node_alias>
controller-1(config-service-node)#

Replace service_node_alias with the alias to use for the service node. This alias is associated with the hardware MAC address of the service node using the mac command. Configuring the hardware MAC address is mandatory for the service node to interact with the DMF Controller.

Use any of the following commands from the config-service-node submode to override the default configuration for the associated service node:
  • admin password: set the password to log in to the service node as an admin user.
  • banner: set the service node pre-login banner message.
  • description: set a brief description.
  • logging: enable service node logging to the Controller.
  • mac: configure a MAC address for the service node.
  • ntp: configure the service node to override default parameters.
  • snmp-server: configure an SNMP trap host to receive SNMP traps from the service node.
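For example, the following sketch sets the mandatory MAC address and a description for a service node; the alias, MAC address, and description are placeholder values:
controller-1(config)# service-node SN-EXAMPLE-1
controller-1(config-service-node)# mac 00:11:22:33:44:55
controller-1(config-service-node)# description "example service node"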

Using SNMP to Monitor DPDK Service Node Interfaces

Fetch the counters and status of the service node interfaces that handle traffic (DPDK interfaces) directly via SNMP. The following OIDs are supported.
interfaces MIB: .1.3.6.1.2.1.2
ifMIBObjects MIB: .1.3.6.1.2.1.31.1
Note: A three-digit number between 101 and 116 identifies SNI DPDK (traffic) interfaces.
In the following example, interface sni5 (105) handles data traffic. To fetch the ingress octet count, use the following command:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.6.105
          IF-MIB::ifHCInOctets.105 = Counter64: 10008
To fetch the octet count for traffic exiting the service node interface, enter the following command:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.10.105
          IF-MIB::ifHCOutOctets.105 = Counter64: 42721
To fetch Link Up and Down status, enter the following command:
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.109
IF-MIB::ifOperStatus.109 = INTEGER: down(2)
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.105
IF-MIB::ifOperStatus.105 = INTEGER: up(1)
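To map an SNI name such as sni5 to its SNMP index, one option is to walk the IF-MIB ifName column (.1.3.6.1.2.1.31.1.1.1.1); the address and the output line below are illustrative only:
snmpwalk -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.1
IF-MIB::ifName.105 = STRING: sni5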

Configuring Managed Services

To view, edit, or create DANZ Monitoring Fabric (DMF) managed services, select the Monitoring > Managed Services option.
Figure 1. Managed Services

This page displays the service node appliance devices connected to the DMF Controller and the services configured on the Controller.

Using the GUI to Define a Managed Service

To create a new managed service, perform the following steps:

  1. Click the Provision control (+) in the Managed Services table. The system displays the Create Managed Service dialog, shown in the following figure.
    Figure 2. Create Managed Service: Info
  2. Assign a name to the managed service.
  3. (Optional) Provide a text description of the managed service.
  4. Select the switch and interface providing the service.
    The Show Managed Device Switches Only checkbox, enabled by default, limits the switch selection list to service node appliances. Enable the Show Connected Switches Only checkbox to limit the display to connected switches.
  5. Select the action from the Action selection list, which provides the following options.
    • Application ID
    • Deduplication: Deduplicate selected traffic, including NATed traffic.
    • GTP Correlation
    • Header Strip: Remove bytes from the packet, starting at byte zero, up to the selected anchor plus the offset.
    • Header Strip Cisco Fabric Path Header: Remove the Cisco FabricPath encapsulation header.
    • Header Strip ERSPAN Header: Remove the Encapsulated Remote Switch Port Analyzer (ERSPAN) encapsulation header.
    • Header Strip Geneve Header: Remove the Generic Network Virtualization Encapsulation (Geneve) header.
    • Header Strip L3 MPLS Header: Remove the Layer-3 MPLS encapsulation header.
    • Header Strip LISP Header: Remove the Locator/ID Separation Protocol (LISP) encapsulation header.
    • Header Strip VXLAN Header: Remove the Virtual Extensible LAN (VXLAN) encapsulation header.
    • IPFIX: Generate IPFIX by selecting matching traffic and forwarding it to specified collectors.
    • Mask: Mask sensitive information as specified by the user in packet fields.
    • NetFlow: Generate a NetFlow by selecting matching traffic and forwarding it to specified collectors.
    • Pattern-Drop: Drop matching traffic.
    • Pattern Match: Forward matching traffic.
    • Session Slice: Slice TCP sessions.
    • Slice: Slice the given number of bytes based on the specified starting point in the packet.
    • TCP Analysis
    • Timestamp: Identify the time that the service node receives the packet.
    • UDP Replication: Copy UDP messages to multiple IP destinations, such as Syslog or NetFlow messages.
  6. (Optional) Identify the starting point for service actions.
    Identify the start point for the deduplication, mask, pattern-match, pattern-drop, or slice services using one of the keywords listed below.
    • packet-start: add the number of bytes specified by the integer value to the first byte in the packet.
    • l3-header-start: add the number of bytes specified by the integer value to the first byte in the Layer 3 header.
    • l4-header-start: add the number of bytes specified by the integer value to the first byte in the layer-4 header.
    • l4-payload-start: add the number of bytes specified by the integer value to the first byte in the layer-4 user data.
    • integer: specify the number of bytes to offset for determining the start location for the service action relative to the specified start keyword.
  7. To assign a managed service to a policy, enable the checkbox on the Managed Services page of the Create Policy or Edit Policy dialog.
  8. To configure a backup service, select it from the Backup Service selection list. The backup service is used when the primary service is not available.

Using the CLI to Define a Managed Service

Note: When connecting a LAG interface to the DANZ Monitoring Fabric (DMF) service node appliance, member links should be of the same speed and can span multiple service nodes. The maximum number of member links supported per LAG interface is 32, although this limit varies by switch platform; refer to the hardware guide for the exact details of the supported configuration.

To configure a service to direct traffic to a DMF service node, complete the following steps:

  1. Define an identifier for the managed service by entering the following command:
    controller-1(config)# managed-service DEDUPLICATE-1
    controller-1(config-managed-srv)#

    This step enters the config-managed-srv submode to configure a DMF-managed service.

  2. (Optional) Configure a description for the current managed service by entering the following command:
    controller-1(config-managed-srv)# description "managed service for policy DEDUPLICATE-1"
    The following are the commands available from this submode:
    • description: provide a service description
    • post-service-match: select traffic after applying the header strip service
    • Action sequence number in the range [1 - 20000]: identifier of service action
    • service-interface: associate an interface with the service
  3. Use a number in the range [1 - 20000] to identify a service action for a managed service.
    The following summarizes the available service actions. See the subsequent sections for details and examples for specific service actions.
    • dedup {anchor-offset | full-packet | routed-packet}
    • header-strip {l4-header-start | l4-payload-start | packet-start }[offset]
    • decap-cisco-fp {drop}
    • decap-erspan {drop}
    • decap-geneve {drop}
    • decap-l3-mpls {drop}
    • decap-lisp {drop}
    • decap-vxlan {drop}
    • mask {mask/pattern} [{packet-start | l3-header-start | l4-header-start | l4-payload-start} mask/offset] [mask/mask-start mask/mask-end]
    • netflow Delivery_interface Name
    • ipfix Delivery_interface Name
    • udp-replicate Delivery_interface Name
    • tcp-analysis Delivery_interface Name
    Note: The IPFIX, NetFlow, and udp-replicate service actions enable a separate submode for defining one or more specific configurations. One of these services must be the last service applied to the traffic selected by the policy.
    • pattern-drop pattern [{l3-header-start | l4-header-start | packet-start }]
    • pattern-match pattern [{l3-header-start | l4-header-start | packet-start }]
    • slice {packet-start | l3-header-start | l4-header-start | l4-payload-start} integer
    • timestamp
    For example, the following command enables packet deduplication on the routed packet:
    controller-1(config-managed-srv)# 1 dedup routed-packet
  4. Optionally, identify the start point for the mask, pattern-match, pattern-drop, or slice services (see the sketch after this procedure).
  5. Identify the service interface for the managed service by entering the following command:
    controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 ethernet40
    Use a port channel instead of an interface to increase the bandwidth available to the managed service. The following example uses LAG interface lag1 as the service interface:
    controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 lag1
  6. Apply the managed service within a policy like any other service, as shown in the following examples for deduplication, NetFlow, pattern matching (forwarding), and packet slicing services.
Note: Multiple DMF policies can use the same managed service, for example, a packet slicing managed service.
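Putting the steps together, the following sketch (the names and the anchor, offset, and window values are placeholders) defines a NATed-style deduplication service with an explicit anchor and offset, and assigns its service interface:
controller-1(config)# managed-service MS-DEDUP-EXAMPLE
controller-1(config-managed-srv)# description "NATed dedup starting at the layer-4 payload"
controller-1(config-managed-srv)# 1 dedup anchor-offset l4-payload-start 0 window 8
controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 ethernet40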

Monitoring Managed Services

To identify managed services bound to a service node interface and the health status of the respective interface, use the following commands:
controller-1# show managed-service-device <SN-Name> interfaces
controller-1# show managed-service-device <SN-Name> stats

For example, the following command shows the managed services handled by the Service Node Interface (SNI):
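The service node name below is taken from the deduplication example later in this chapter; the output, which lists each SNI and the managed services bound to it, is omitted here:
controller-1# show managed-service-device DMF-SN-R740-1 interfaces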


Note: The show managed-service-device <SN-Name> stats <Managed-service-name> command filters the statistics of a specific managed service.
The Load column shows no, low, moderate, high, and critical health indicators. These health indicators are represented by green, yellow, and red under DANZ Monitoring Fabric > Managed Services > Devices > Service Stats. They reflect the processor load on the service node interface at that instant but do not show the bandwidth of the respective data port (SNI) handling traffic, as shown in the following sample snapshot of the Service Stats output.
Figure 3. Service Node Interface Load Indicator

Deduplication Action

The DANZ Monitoring Fabric (DMF) Service Node enhances the efficiency of network monitoring tools by eliminating duplicate packets. Duplicate packets can be introduced into the out-of-band monitoring data stream by receiving the same flow from multiple TAP or SPAN ports spread across the production network. Deduplication eliminates these duplicate packets and enables more efficient use of passive monitoring tools.

The DMF Service Node provides three modes of deduplication for different types of duplicate packets.
  • Full packet deduplication: deduplicates incoming packets that are identical at the L2/L3/L4 layers.
  • Routed packet deduplication: as packets traverse an IP network, the MAC address changes from hop to hop. Routed packet deduplication enables users to match packet contents starting from the L3 header.
  • NATed packet deduplication: to perform NATed deduplication, the service node compares packets in the configured window that are identical starting from the L4 payload. To use NATed packet deduplication, configure the following fields as required:
    • Anchor: Packet Start, L2 Header Start, L3 Header Start, or L3 Payload Start fields.
    • Offset: the number of bytes from the anchor where the deduplication check begins.

The time window in which the service looks for duplicate packets is configurable. Select one of the following values: 2 ms (the default), 4 ms, 6 ms, or 8 ms.

GUI Configuration

Figure 4. Create Managed Service > Action: Deduplication Action

CLI Configuration

Controller-1(config)# show running-config managed-service MS-DEDUP-FULL-PACKET
! managed-service
managed-service MS-DEDUP-FULL-PACKET
description 'This is a service that does Full Packet Deduplication'
1 dedup full-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/1
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-ROUTED-PACKET
! managed-service
managed-service MS-DEDUP-ROUTED-PACKET
description 'This is a service that does Routed Packet Deduplication'
1 dedup routed-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/2
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-NATTED-PACKET
! managed-service
managed-service MS-DEDUP-NATTED-PACKET
description 'This is a service that does Natted Packet Deduplication'
1 dedup anchor-offset l4-payload-start 0 window 8
service-interface switch CORE-SWITCH-1 ethernet13/3
Controller-1(config)#
Note: The existing command is augmented to show the deduplication percentage. The command syntax is show managed-service-device <Service-Node-Name> stats <dedup-service-name> dedup
Controller-1(config)# show managed-service-device DMF-SN-R740-1 stats MS-DEDUP dedup
~~~~~~~~~~~~~~~~ Stats ~~~~~~~~~~~~~~~~
Interface Name : sni16
Function : dedup
Service Name : MS-DEDUP
Rx packets : 9924950
Rx bytes : 4216466684
Rx Bit Rate : 1.40Gbps
Applied packets : 9923032
Applied bytes : 4216337540
Applied Bit Rate : 1.40Gbps
Tx packets : 9796381
Tx bytes : 4207106113
Tx Bit Rate : 1.39Gbps
Deduped frame count : 126651
Deduped percent : 1.2763336851075358
Load : low
Controller-1(config)#

Header Strip Action

This action removes specific headers from the traffic selected by the associated DANZ Monitoring Fabric (DMF) policy. Alternatively, define custom header stripping based on the starting position of the Layer-3 header, the Layer-4 header, the Layer-4 payload, or the first byte in the packet.

Use the following decap actions, which are configured independently of the header-strip configuration stanza:
  • decap-erspan: remove the Encapsulated Remote Switch Port Analyzer (ERSPAN) header.
  • decap-cisco-fabric-path: remove the Cisco FabricPath protocol header.
  • decap-l3-mpls: remove the Layer-3 Multi-protocol Label Switching (MPLS) header.
  • decap-lisp: remove the LISP header.
  • decap-vxlan [udp-port vxlan port]: remove the Virtual Extensible LAN (VXLAN) header.
  • decap-geneve: remove the Geneve header.
Note:For the Header Strip and Decap actions, apply post-service rules to select traffic after stripping the original headers.
To customize the header-strip action, use one of the following keywords to strip up to the specified location in each packet:
  • l3-header-start
  • l4-header-start
  • l4-payload-start
  • packet-start

Specify a positive integer to set the offset at which the strip action begins. If no offset is specified, header stripping starts from the first byte in the packet.

GUI Configuration

Figure 5. Create Managed Service: Header Strip Action

After assigning the required actions to the header stripping service, click Next or Post-Service Match.

The system displays the Post Service Match page, used in conjunction with the header strip service action.
Figure 6. Create Managed Service: Post Service Match for Header Strip Action

CLI Configuration

The header-strip service action strips the header and replaces it in one of the following ways:

  • Add the original L2 src-mac and dst-mac.
  • Add the original L2 src-mac, dst-mac, and ether-type.
  • Specify and add a custom src-mac, dst-mac, and ether-type.

The following are examples of custom header stripping:

This example strips the header and replaces it with the original L2 src-mac and dst-mac.
! managed-service
managed-service MS-HEADER-STRIP-1
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac
service-interface switch CORE-SWITCH-1 ethernet13/1
This example adds the original L2 src-mac, dst-mac, and ether-type.
! managed-service
managed-service MS-HEADER-STRIP-2
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac-ethertype
service-interface switch CORE-SWITCH-1 ethernet13/2
This example specifies the addition of a customized src-mac, dst-mac, and ether-type.
! managed-service
managed-service MS-HEADER-STRIP-3
1 header-strip packet-start 20 add-custom-l2-header 00:11:01:02:03:04 00:12:01:02:03:04 0x800
service-interface switch CORE-SWITCH-1 ethernet13/3

Configuring the Post-service Match

The post-service match configuration option enables matching on inner packet fields after the DANZ Monitoring Fabric (DMF) Service Node performs header stripping. This option is applied on the post-service interface after the service node completes the strip service action. Feature benefits include the following:
  • The fabric can remain in L3/L4 mode. It is not necessary to change to offset match mode.
  • Easier configuration.
  • All match conditions are available for the inner packet.
  • The policy requires only one managed service to perform the strip service action.
With this feature enabled, DMF knows exactly where to apply the post-service match. The following example illustrates this configuration.
! managed-service
managed-service MS-HEADER-STRIP-4
service-interface switch CORE-SWITCH-1 interface ethernet1
1 decap-l3-mpls
!
post-service-match
1 match ip src-ip 1.1.1.1
2 match tcp dst-ip 2.2.2.0 255.255.255.0
! policy
policy POLICY-1
filter-interface TAP-1
delivery-interface TOOL-1
use-managed-service MS-HEADER-STRIP-4 sequence 1

IPFIX and Netflow Actions

IP Flow Information Export (IPFIX), also known as NetFlow v10, is an IETF standard defined in RFC 7011. The IPFIX generator (agent) gathers and transmits information about flows, which are sets of packets that contain all the keys specified by the IPFIX template. The generator observes the packets received in each flow and forwards the information to the IPFIX collector (server) in the form of a flowset.

Starting with the DANZ Monitoring Fabric (DMF)-7.1.0 release, NetFlow v9 (Cisco proprietary) and IPFIX/NetFlow v10 are both supported. Configuration of the IPFIX managed service is similar to configuration for earlier versions of NetFlow except for the UDP port definition. NetFlow v5 collectors typically listen on UDP port 2055, while IPFIX collectors listen on UDP port 4739.

NetFlow records are typically exported using User Datagram Protocol (UDP) and collected using a flow collector. For a NetFlow service, the service node takes incoming traffic and generates NetFlow records. The service node drops the original packets, and the generated flow records, containing metadata about each flow, are forwarded out of the service node interface.

IPFIX Template

The IPFIX template consists of the key element IDs representing an IP flow, the field element IDs representing the actions the exporter performs on IP flows matching the key element IDs, a template ID number for uniqueness, collector information, and eviction timers.

To define a template, configure keys of interest representing the IP flow and fields that identify the values measured by the exporter, the exporter information, and the eviction timers. To define the template, select the Monitoring > Managed Service > IPFIX Template option from the DANZ Monitoring Fabric (DMF) GUI or enter the ipfix-template template-name command in config mode, replacing template-name with a unique identifier for the template instance.

IPFIX Keys

Use an IPFIX key to specify the characteristics of the traffic to monitor, such as source and destination MAC or IP address, VLAN ID, Layer-4 port number, and QoS marking. The generator includes in a flowset only those flows that have all the attributes specified by the keys in the applied template. The flowset is updated only for packets that have all the specified attributes. If a single key is missing, the packet is ignored. To see a listing of the keys supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, select the Monitoring > Managed Service > IPFIX Template option from the DMF GUI or type help key in config-ipfix-template submode. The following are the keys supported in the current release:
  • destination-ipv4-address
  • destination-ipv6-address
  • destination-mac-address
  • destination-transport-port
  • dot1q-priority
  • dot1q-vlan-id
  • ethernet-type
  • icmp-type-code-ipv4
  • icmp-type-code-ipv6
  • ip-class-of-service
  • ip-diff-serv-code-point
  • ip-protocol-identifier
  • ip-ttl
  • ip-version
  • policy-vlan-id
  • records-per-dmf-interface
  • source-ipv4-address
  • source-ipv6-address
  • source-mac-address
  • source-transport-port
  • vlan id
Note: The policy-vlan-id and records-per-dmf-interface keys are Arista Proprietary Flow elements. The policy-vlan-id key helps to query per-policy flow information at Arista Analytics-node (Collector) in push-per-policy deployment mode. The records-per-dmf-interface key helps to identify filter interfaces tapping the traffic. The following limitations apply at the time of IPFIX template creation:
  • The Controller will not allow the key combination of source-mac-address and records-per-dmf-interface in push-per-policy mode.
  • The Controller will not allow the key combinations of policy-vlan-id and records-per-dmf-interface in push-per-filter mode.

IPFIX Fields

A field defines each value updated for the packets the generator receives that match the specified keys. For example, include fields in the template to record the number of packets, the largest and smallest packet sizes, or the start and end times of the flows. To see a listing of the fields supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, select the Monitoring > Managed Service > IPFIX Template option from the DMF GUI, or type help in config-ipfix-template submode. The following are the fields supported:

  • flow-end-milliseconds
  • flow-end-reason
  • flow-end-seconds
  • flow-start-milliseconds
  • flow-start-seconds
  • maximum-ip-total-length
  • maximum-layer2-total-length
  • maximum-ttl
  • minimum-ip-total-length
  • minimum-layer2-total-length
  • minimum-ttl
  • octet-delta-count
  • packet-delta-count
  • tcp-control-bits

Active and Inactive Timers

After the number of minutes specified by the active timer, the flow set is closed and forwarded to the IPFIX collector. The default active timer is one minute. During the number of seconds set by the inactive timer, if no packets that match the flow definition are received, the flow set is closed and forwarded without waiting for the active timer to expire. The default value for the inactive time is 15 seconds.

Example Flowset

The following is a Wireshark view of an IPFIX flowset.
Figure 7. Example IPFIX Flowset in Wireshark

The following is a running-config that shows the IPFIX template used to generate this flowset.

Example IPFIX Template

! ipfix-template
ipfix-template Perf-temp
template-id 22222
key destination-ipv4-address
key destination-transport-port
key dot1q-vlan-id
key source-ipv4-address
key source-transport-port
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field maximum-ttl
field minimum-ttl
field packet-delta-count

Using the GUI to Define an IPFIX Template

To define an IPFIX template, perform the following steps:
  1. Select the Monitoring > Managed Services option.
  2. On the DMF Managed Services page, select IPFIX Templates.
    The system displays the IPFIX Templates section.
    Figure 8. IPFIX Templates
  3. To create a new template, click the provision (+) icon in the IPFIX Templates section.
    Figure 9. Create IPFIX Template
  4. To add an IPFIX key to the template, click the Settings control in the Keys section. The system displays the following dialog.
    Figure 10. Select IPFIX Keys
  5. Enable each checkbox for the keys to add to the template and click Select.
  6. To add an IPFIX field to the template, click the Settings control in the Fields section. The system displays the following dialog:
    Figure 11. Select IPFIX Fields
  7. Enable the checkbox for each field to add to the template and click Select.
  8. On the Create IPFIX Template page, click Save.
The new template is added to the IPFIX Templates table, with each key and field listed in the appropriate column. Apply this customized template when defining an IPFIX managed service.

Using the CLI to Define an IPFIX Template

  1. Create an IPFIX template.
    controller-1(config)# ipfix-template IPFIX-IP
    controller-1(config-ipfix-template)#

    This changes the CLI prompt to the config-ipfix-template submode.

  2. Define the keys to use for the current template, using the following command:

    [ no ] key { ethernet-type | source-mac-address | destination-mac-address | dot1q-vlan-id | dot1q-priority | ip-version | ip-protocol-identifier | ip-class-of-service | ip-diff-serv-code-point | ip-ttl | source-ipv4-address | destination-ipv4-address | icmp-type-code-ipv4 | source-ipv6-address | destination-ipv6-address | icmp-type-code-ipv6 | source-transport-port | destination-transport-port }

    The keys specify the attributes of the flows to be included in the flowset measurements.

  3. Define the fields to use for the current template, using the following command:
    [ no ] field { packet-delta-count | octet-delta-count | minimum-ip-total-length | maximum-ip-total-length | flow-start-seconds | flow-end-seconds | flow-end-reason | flow-start-milliseconds | flow-end-milliseconds | minimum-layer2-total-length | maximum-layer2-total-length | minimum-ttl | maximum-ttl }

    The fields specify the measurements to be included in the flowset.

Use the template when defining the IPFIX action.

Using the GUI to Define an IPFIX Service Action

Select IPFIX from the Action selection list on the Create Managed Service > Action page.

Figure 12. Selecting IPFIX Action in Create Managed Service
Enter the following required configuration details:
  • Assign a delivery interface.
  • Configure the collector IP address.
  • Identify the IPFIX template.
The following configuration is optional:
  • Inactive timeout: the interval of inactivity after which a flowset is closed and forwarded to the collector.
  • Active timeout: the maximum length of time a flowset remains open before it is closed and forwarded to the collector.
  • Source IP: source address to use for the IPFIX flowsets.
  • UDP port: UDP port to use for sending IPFIX flowsets.
  • MTU: MTU to use for sending IPFIX flowsets.

After completing the configuration, click Next, and then click Save.

Using the CLI to Define an IPFIX Service Action

Define a managed service and define the IPFIX action.
controller(config)# managed-service MS-IPFIX-SERVICE
controller(config-managed-srv)# 1 ipfix TO-DELIVERY-INTERFACE
controller(config-managed-srv-ipfix)# collector 10.106.1.60
controller(config-managed-srv-ipfix)# template IPFIX-TEMPLATE

The active-timeout and inactive-timeout commands are optional.
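For example, the timers can be adjusted in the same submode; the values below are illustrative, and the units are assumed to follow the defaults described earlier (minutes for the active timer, seconds for the inactive timer):
controller(config-managed-srv-ipfix)# active-timeout 5
controller(config-managed-srv-ipfix)# inactive-timeout 30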

To view the running-config for a managed service using the IPFIX action, enter the following command:
controller1# show running-config managed-service MS-IPFIX-ACTIVE
! managed-service
managed-service MS-IPFIX-ACTIVE
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 ipfix TO-DELIVERY-INTERFACE
collector 10.106.1.60
template IPFIX-TEMPLATE
To view the IPFIX templates, enter the following command:
config# show running-config ipfix-template
! ipfix-template
ipfix-template IPFIX-IP
template-id 1974
key destination-ipv4-address
key destination-ipv6-address
key ethernet-type
key source-ipv4-address
key source-ipv6-address
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field minimum-ttl
field tcp-control-bits
------------------------output truncated------------------------

Records Per Interface Netflow using DST-MAC Rewrite

Destination MAC rewrite is the default setting for the records-per-interface NetFlow and IPFIX feature. It applies to switches running Extensible Operating System (EOS) and SWL and is supported on all platforms.

A configuration option exists to use the src-mac instead when overwriting the dst-mac is not preferred.

Configurations using the CLI

Global Configuration

The global configuration is a central place to choose which rewrite option to use for the records-per-interface. The following example illustrates using rewrite-src-mac or rewrite-dst-mac in conjunction with the filter-mac-rewrite command.
c1(config)# filter-mac-rewrite rewrite-src-mac
c1(config)# filter-mac-rewrite rewrite-dst-mac

Netflow Configuration

The following example illustrates a NetFlow configuration.
c1(config)# managed-service ms1
c1(config-managed-srv)# 1 netflow
c1(config-managed-srv-netflow)# collector 213.1.1.20 udp-port 2055 mtu 1024 records-per-interface

IPFIX Configuration

The following example illustrates an IPFIX configuration.
c1(config)# ipfix-template i1
c1(config-ipfix-template)# field maximum-ttl 
c1(config-ipfix-template)# key records-per-dmf-interface
c1(config-ipfix-template)# template-id 300

c1(config)# managed-service ms1
c1(config-managed-srv)# 1 ipfix
c1(config-managed-srv-ipfix)# template i1

Show Commands

NetFlow Show Commands

Use the show running-config managed-service command to view the NetFlow settings.
c1(config)# show running-config managed-service 
! managed-service
managed-service ms1
!
1 netflow
collector 213.1.1.20 udp-port 2055 mtu 1024 records-per-interface

IPFIX Show Commands

Use the show ipfix-template i1 command to view the IPFIX settings.
c1(config)# show ipfix-template i1
~~~~~~~~~~~~~~~~~~ Ipfix-templates ~~~~~~~~~~~~~~~~~~
# Template Name Keys                      Fields
-|-------------|-------------------------|-----------|
1 i1            records-per-dmf-interface maximum-ttl

c1(config)# show running-config managed-service 
! managed-service
managed-service ms1
!
1 ipfix
template i1

Limitations

  • The filter-mac-rewrite rewrite-src-mac command cannot be used on the filter interface that is part of the policy using timestamping replace-src-mac. However, the command works when using a timestamping add-header-after-l2 configuration.

Packet-masking Action

The packet-masking action can hide specific characters in a packet, such as a password or credit card number, based on offsets from different anchors and by matching characters using regular expressions (regex).

The mask service action applies the specified mask to the matched packet region.

GUI Configuration

Figure 13. Create Managed Service: Packet Masking

CLI Configuration

Controller-1(config)# show running-config managed-service MS-PACKET-MASK
! managed-service
managed-service MS-PACKET-MASK
description "This service masks pattern matching an email address in payload with X"
1 mask ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1

Arista Analytics Node Capability

Arista Analytics Node capabilities are enhanced to handle NetFlow v5/v9 and IPFIX packets. All of this flow data is represented under the NetFlow index.

Note: NetFlow flow record generation is enhanced for selecting VXLAN traffic. For VXLAN traffic, flow processing is based on inner headers, with the VNI as part of the key for flow lookup because IP addresses can overlap between VNIs.
Figure 14. NetFlow Managed Service

NetFlow records are exported using User Datagram Protocol (UDP) to one or more specified NetFlow collectors. Use the DMF Service Node to configure the NetFlow collector IP address and the destination UDP port. The default UDP port is 2055.

Note: No other service action, except the UDP replication service, can be applied after a NetFlow service action because part of the NetFlow action is to drop the packets.

Configuring the Arista Analytics Node Using the GUI

From the Arista Analytics Node dashboard, apply filter rules to display specific flow information.

The following are the options available on this page:
  • Delivery interface: interface to use for delivering NetFlow records to collectors.
    Note: The next-hop address must be resolved for the service to be active.
  • Collector IP: identify the NetFlow collector IP address.
  • Inactive timeout: use the inactive-timeout command to configure the interval of inactivity before NetFlow times out. The default is 15 seconds.
  • Source IP: specify a source IP address to use as the source of the NetFlow packets.
  • Active timeout: use the active-timeout command to configure the period that a NetFlow flow can be generated continuously before it is automatically terminated. The default is one minute.
  • UDP port: change the UDP port number used for the NetFlow packets. The default is 2055.
  • Flows: specify the maximum number of NetFlow packets allowed. The allowed range is 32768 to 1048576. The default is 262144.
  • Per-interface records: identify the filter interface where the NetFlow packets were originally received. This information can be used to identify the hop-by-hop path from the filter interface to the NetFlow collector.
  • MTU: change the Maximum Transmission Unit (MTU) used for NetFlow packets.
Figure 15. Create Managed Service: NetFlow Action

Configuring the Arista Analytics Node Using the CLI

Use the show managed-services command to display the ARP resolution status.
Note: The DANZ Monitoring Fabric (DMF) Controller resolves ARP messages for each NetFlow collector IP address on the delivery interface that matches the defined subnet. The subnets defined on the delivery interfaces cannot overlap and must be unique for each delivery interface.

Enter the 1 netflow command followed by the configuration name; the submode changes to config-managed-srv-netflow for viewing and configuring a specific NetFlow configuration.

The DMF Service Node replicates NetFlow packets received without changing the source IP address. Packets that do not match the specified destination IP address and packets that are not IPv4 or UDP are passed through. To configure a NetFlow-managed service, perform the following steps:

  1. Configure the IP address on the delivery interface.
    This IP address is the next-hop IP address from the DANZ Monitoring Fabric towards the NetFlow collector.
    CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
    CONTROLLER-1(config-switch)# interface ethernet1
    CONTROLLER-1(config-switch-if)# role delivery interface-name NETFLOW-DELIVERY-PORT ip-address 172.43.75.1 nexthop-ip 172.43.75.2 255.255.255.252
  2. Configure the rate-limit for the NetFlow delivery interface.
    CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
    CONTROLLER-1(config-switch)# interface ethernet1
    CONTROLLER-1(config-switch-if)# role delivery interface-name NETFLOW-DELIVERY-PORT ip-address 172.43.75.1 nexthop-ip 172.43.75.2 255.255.255.252
    CONTROLLER-1(config-switch-if)# rate-limit 256000
    Note: The rate limit must be configured when enabling Netflow. When upgrading from a version of DMF before release 6.3.1, the Netflow configuration is not applied until a rate limit is applied to the delivery interface.
  3. Configure the NetFlow managed service using the 1 netflow command followed by an identifier for the specific NetFlow configuration.
    
    CONTROLLER-1(config)# managed-service MS-NETFLOW-SERVICE
    CONTROLLER-1(config-managed-srv)# 1 netflow NETFLOW-DELIVERY-PORT
    CONTROLLER-1(config-managed-srv-netflow)#
    The following commands are available in this submode:
    • active-timeout: configure the maximum length of time the NetFlow is transmitted before it is ended (in minutes).
    • collector: configure the collector IP address, and change the UDP port number or the MTU.
    • inactive-timeout: configure the length of time that the NetFlow is inactive before it is ended (in seconds).
    • max-flows: configure the maximum number of flows managed.

    An option exists to limit the number of flows or change the timeouts using the max-flows, active-timeout, or inactive-timeout commands.

  4. Configure the NetFlow collector IP address using the following command:
    collector <ipv4-address> [udp-port <integer>] [mtu <integer>] [records-per-interface]

    The IP address, in IPV4 dotted-decimal notation, is required. The MTU and UDP port are required when changing these parameters from the defaults. Enable the records-per-interface option to allow identification of the filter interfaces from which the Netflow originated. Configure the Arista Analytics Node to display this information, as described in the DMF User Guide.

    The following example illustrates changing the NetFlow UDP port to 9991.
    collector 10.181.19.31 udp-port 9991
    Note: The IP address must be in the same subnet as the configured next hop and unique. It cannot be the same as the Controller, service node, or any monitoring fabric switch IP address.
  5. Configure the DMF policy with the forward action and add the managed service to the policy.
    Note: A DMF policy does not require any configuration related to a delivery interface for NetFlow policies because the DMF Controller automatically assigns the delivery interface.
    The example below shows the configuration required to implement two NetFlow service instances (MS-NETFLOW-1 and MS-NETFLOW-2).
    ! switch
    switch DMF-DELIVERY-SWITCH-1
    !
    interface ethernet1
    role delivery interface-name NETFLOW-DELIVERY-PORT-1 ip-address 10.3.1.1 nexthop-ip 10.3.1.2 255.255.255.0
    interface ethernet2
    role delivery interface-name NETFLOW-DELIVERY-PORT-2 ip-address 10.3.2.1 nexthop-ip 10.3.2.2 255.255.255.0
    ! managed-service
    managed-service MS-NETFLOW-1
    service-interface switch DMF-CORE-SWITCH-1 interface ethernet11/1
    !
    1 netflow NETFLOW-DELIVERY-PORT-1
    collector-ip 10.106.1.60 udp-port 2055 mtu 1024
    managed-service MS-NETFLOW-2
    service-interface switch DMF-CORE-SWITCH-2 interface ethernet12/1
    !
    1 netflow NETFLOW-DELIVERY-PORT-1
    collector-ip 10.106.2.60 udp-port 2055 mtu 1024
    ! policy
    policy GENERATE-NETFLOW-1
    action forward
    filter-interface TAP-INTF-DC1-1
    filter-interface TAP-INTF-DC1-2
    use-managed-service MS-NETFLOW-1 sequence 1
    1 match any
    policy GENERATE-NETFLOW-2
    action forward
    filter-interface TAP-INTF-DC2-1
    filter-interface TAP-INTF-DC2-2
    use-managed-service MS-NETFLOW-2 sequence 1
    1 match any

Pattern-drop Action

The pattern-drop service action drops matching traffic.

Pattern matching allows content-based filtering beyond Layer-2, Layer-3, or Layer-4 Headers. This functionality allows filtering on the following packet fields and values:
  • URLs and user agents in the HTTP header
  • patterns in BitTorrent packets
  • encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
  • subscriber device IP (user-endpoint IP)

Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.

Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.
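As a sketch of that use case, the following pattern-drop service (the service name is a placeholder; the regular expression mirrors the pattern-match example later in this chapter) drops packets whose user-agent field identifies Windows NT 5.x:
! managed-service
managed-service MS-DROP-LEGACY-OS
description 'drop traffic from Windows NT 5.x user agents'
1 pattern-drop 'Windows\\sNT\\s5\\.[0-1]'
service-interface switch CORE-SWITCH-1 ethernet13/1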

GUI Configuration

Figure 16. Create Managed Service: Pattern Drop Action

CLI Configuration

Controller-1(config)# show running-config managed-service MS-PATTERN-DROP
! managed-service
managed-service MS-PATTERN-DROP
description "This service drops traffic that has an email address in its payload"
1 pattern-drop ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1

Pattern-match Action

The pattern-match service action forwards matching traffic and is otherwise similar to the pattern-drop service action.

Pattern matching allows content-based filtering beyond Layer-2, Layer-3, or Layer-4 Headers. This functionality allows filtering on the following packet fields and values:
  • URLs and user agents in the HTTP header
  • patterns in BitTorrent packets
  • encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
  • subscriber device IP (user-endpoint IP)

Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.

Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.

GUI Configuration

Figure 17. Create Managed Service: Pattern Match Action

CLI Configuration

Use the pattern-match pattern keyword to enable the pattern-matching service action. Specify the pattern to match; only packets that match the pattern are forwarded.

The following example matches traffic with the string Windows NT 5.(0-1) anywhere in the packet and delivers the packets to the delivery interface TOOL-PORT-TO-WIRESHARK-1. This service is optional and is applied to TCP traffic to destination port 80.
! managed-service
managed-service MS-PATTERN-MATCH
description 'regular expression filtering'
1 pattern-match 'Windows\\sNT\\s5\\.[0-1]'
service-interface switch CORE-SWITCH-1 ethernet13/1
! policy
policy PATTERN-MATCH
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'match regular expression pattern'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-PATTERN-MATCH sequence 1 optional
1 match tcp dst-port 80

Slice Action

The slice service action slices the given number of bytes based on the specified starting point in the packet. Packet slicing reduces packet size to increase processing and monitoring throughput. Passive monitoring tools process fewer bits while maintaining each packet's vital, relevant portions. Packet slicing can significantly increase the capacity of forensic recording tools. Apply packet slicing by specifying the number of bytes to forward based on an offset from the following locations in the packet:
  • Packet start
  • L3 header start
  • L4 header start
  • L4 payload start

GUI Configuration

Figure 18. Create Managed Service: Slice Action

This page allows inserting an additional header containing the original packet length.

CLI Configuration

Use the slice keyword to enable the packet slicing service action and insert an additional header containing the original packet length, as shown in the following example:
! managed-service
managed-service my-service-name
1 slice l3-header-start 20 insert-original-packet-length
service-interface switch DMF-CORE-SWITCH-1 ethernet20/1
The following example truncates the packet from the first byte of the Layer-4 payload, preserving the original headers while removing the payload. The service is optional and is applied to all TCP traffic from port 80 with the destination IP address 10.2.19.119.
! managed-service
managed-service MS-SLICE-1
description 'slicing service'
1 slice l4-payload-start 1
service-interface switch DMF-CORE-SWITCH-1 ethernet40/1
! policy
policy slicing-policy
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'remove payload'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-SLICE-1 sequence 1 optional
1 match tcp dst-ip 10.2.19.119 255.255.255.255 src-port 80

Packet Slicing on the 7280 Switch

This feature removes unwanted or unneeded bytes from a packet at a configurable byte position (offset). This approach is beneficial when the data of interest is situated within the headers or early in the packet payload. This action reduces the volume of the monitoring stream, particularly in cases where payload data is not necessary.

Another use case for packet slicing (slice action) is removing payload data so that the captured traffic meets compliance requirements.

Within the DANZ Monitoring Fabric (DMF), two types of slice-managed services (packet slicing services) now exist, distinguished by whether the service is installed on a service node or on an interface of a supported switch. The scope of this section is limited to the slice-managed service configured on a switch. The managed service interface is the switch interface used to configure this service.

All 7280 switches compatible with DMF 8.4 and above support this feature. Use the show switch all property command to check which switches in the DMF fabric support this feature. The feature is supported if the Min Truncate Offset and Max Truncate Offset properties have a non-zero value.

# show switch all property
# Switch Min Truncate Offset ... Max Truncate Offset
-|------|-------------------|---|-------------------|
1 7280   100                 ... 9236
2 core1                      ...
Note: The CLI output example above is truncated for illustrative purposes. The actual output will differ.

Using the CLI to Configure Packet Slicing - 7280 Switch

Configure a slice-managed service on a switch using the following steps.
  1. Create a managed service using the managed-service service name command.
  2. Add a slice action with the packet-start anchor and an offset value within the supported range reported by the show switch all property command.
  3. Configure the service interface under the config-managed-srv submode using the service-interface switch switch-name interface-name command as shown in the following example.
    > enable
    # config
    (config)# managed-service slice-action-7280-J2-J2C
    (config-managed-srv)# 1 slice packet-start 101
    (config-managed-srv)# service-interface switch 7280-J2-J2C Ethernet10/1
This feature requires the service interface to be in MAC loopback mode.
  4. To set the service interface in MAC loopback mode, navigate to the config-switch-if submode and configure using the loopback-mode mac command, as shown in the following example.
    (config)# switch 7280-J2-J2C
    (config-switch)# interface Ethernet10/1
    (config-switch-if)# loopback-mode mac
Once a managed service for slice action exists, any policy can use it.
  5. Enter the config-policy submode and chain the managed service using the use-managed-service service name sequence sequence command.
    (config)# policy packet-slicing-policy
    (config-policy)# use-managed-service slice-action-7280-J2-J2C sequence 1

Key points to consider while configuring the slice action on a supported switch:

  1. Only the packet-start anchor is supported.
  2. Ensure the offset is within the Min/Max truncate size bounds reported by the show switch all property command. If the configured value is beyond the bound, then DMF chooses the closest value of the range.

    For example, if a user configures the offset as 64, and the min truncate offset reported by switch properties is 100, then the offset used is 100. If the configured offset is 10,000 and the max truncate offset reported by the switch properties is 9236, then the offset used is 9236.

  3. A configured offset for slice-managed service includes FCS when programmed on a switch interface, which means an offset of 100 will result in a packet size of 96 bytes (accounting for 4-byte FCS).
  4. Configuring an offset below 17 is not allowed.
  5. The same service interface cannot chain multiple managed services.
  6. The insert-original-packet-length option is not applicable for switch-based slice-managed service.

CLI Show Commands

Use the show policy policy name command to see the runtime state of a policy using the slice-managed service. The command shows the service interface information and stats.

Controller# show policy packet-slicing-policy
Policy Name: packet-slicing-policy
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: packet-slicing-7280
Installed Time : 2023-08-09 19:00:40 UTC
Installed Duration : 1 hour, 17 minutes
~ Match Rules ~
# Rule
-|-----------|
1 1 match any

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name     State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1     7280   Ethernet2/1 up    rx  0       0     0        -        2023-08-09 19:00:40.305000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name     State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1     7280   Ethernet3/1 up    tx  0       0     0        -        2023-08-09 19:00:40.306000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name        Role Switch IF Name       State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|-------------------|----|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 packet-slicing-7280 pre  7280   Ethernet10/1 up    tx  0       0     0        -        2023-08-09 19:00:40.305000 UTC
2 packet-slicing-7280 post 7280   Ethernet10/1 up    rx  0       0     0        -        2023-08-09 19:00:40.306000 UTC

~ Core Interface(s) ~
None.

~ Failed Path(s) ~
None.

Use the show managed-services command to view the status of all the managed services, including the packet-slicing managed service on a switch.

Controller# show managed-services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed-services ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Switch Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-------------------|------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 packet-slicing-7280 7280   Ethernet10/1     True      400Gbps             400Gbps            80bps                 80bps

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Actions of Service Names ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Sequence Service Action Slice Anchor Insert original packet length Slice Offset
-|-------------------|--------|--------------|------------|-----------------------------|------------|
1 packet-slicing-7280 1        slice          packet-start False                         101

Using the GUI to Configure Packet Slicing - 7280 Switch

Perform the following steps to configure or edit a managed service.

Managed Service Configuration

  1. To configure or edit a managed service, navigate to the DMF Managed Services page from the Monitoring menu and click Managed Services.
    Figure 19. DANZ Monitoring Fabric (DMF) Managed Services
    Figure 20. DMF Managed Services Add Managed Service
  2. Configure a managed service interface on a switch that supports packet slicing. Make sure to deselect the Show Managed Device Switches Only checkbox.
    Figure 21. Create Managed Service
  3. Configure a new managed service action using Add Managed service action. The action chain supports only one action when configuring packet slicing on a switch.
    Figure 22. Add Managed service action
  4. Use Action > Slice with Anchor > Packet Start to configure the packet slicing managed service on a switch.
    Figure 23. Configure Managed Service Action
  5. Click Append to continue. The slice action appears on the Managed Services page.
    Figure 24. Slice Action Added

Interface Loopback Configuration

The managed service interface used for slice action must be in MAC loopback mode.

  1. Configure the loopback mode in the Fabric > Interfaces page by clicking on the configuration icon of the interface.
    Figure 25. Interfaces
    Note: The image above has been edited for documentation purposes. The actual output will differ.
  2. Enable the toggle for MAC Loopback Mode (set the toggle to Yes).
    Figure 26. Edit Interface
  3. After making all configuration changes, click Save.

Policy Configuration

  1. Create a new policy from the DMF Policies page.
    Figure 27. DMF Policies Page
  2. Add the previously configured packet slicing managed service.
    Figure 28. Create Policy
  3. Select Add Service under the + Add Service(s) option shown above.
    Figure 29. Add Service
    Figure 30. Service Type - Service - slice action
  4. Click Add 1 Service and the slice-managed service (packet-slicing-policy) appears in the Create Policy page.
    Figure 31. Managed Service Added
  5. Click Create Policy and the new policy appears in DMF Policies.
    Figure 32. DMF Policy Configured
    Note: The images above have been edited for documentation purposes. The actual outputs may differ.

Troubleshooting Packet Slicing

The show switch all property command provides the upper and lower bounds of the packet slicing action's offset. If bounds are present, the feature is supported; otherwise, the switch does not support the packet slicing feature.

The show fabric errors managed-service-error command provides information when DANZ Monitoring Fabric (DMF) fails to install a configured packet slicing managed service on a switch.

The following are some of the failure cases:
  1. The managed service interface is down.
  2. More than one action is configured on a managed service interface of the switch.
  3. The managed service interface on a switch is neither a physical interface nor a LAG port.
  4. A non-slice managed service is configured on a managed service interface of a switch.
  5. The switch does not support packet slicing managed service, and its interface is configured with slice action.
  6. Slice action configured on a switch interface is not using a packet-start anchor.
  7. The managed service interface is not in MAC loopback mode.

Use the following commands to troubleshoot packet-slicing issues.

Controller# show fabric errors managed-service-error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed Service related error~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Error Service Name
-|---------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
1 Pre-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down packet-slicing-7280
2 Post-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down packet-slicing-7280

The show switch switch name interface interface name dmf-stats command provides Rx and Tx rate information for the managed service interface.

Controller# show switch 7280 interface Ethernet10/1 dmf-stats
# Switch DPID Name State Rx Rate Pkt Rate Peak Rate Peak Pkt Rate TX Rate Pkt Rate Peak Rate Peak Pkt Rate Pkt Drop Rate
-|-----------|------------|-----|-------|--------|---------|-------------|-------|--------|---------|-------------|-------------|
1 7280 Ethernet10/1 down - 0 128bps 0 - 0 128bps 0 0

The show switch switch name interface interface name stats command provides Rx and Tx counter information for the managed service interface.

Controller# show switch 7280 interface Ethernet10/1 stats
# Name Rx Pkts Rx Bytes Rx Drop Tx Pkts Tx Bytes Tx Drop
-|------------|-------|--------|-------|-------|--------|-------|
1 Ethernet10/1 22843477 0 5140845937 0

Considerations

  1. Managed service action chaining is not supported when using a switch interface as a managed service interface.
  2. When configured for a supported switch, the managed service interface for slice action can only be a physical interface or a LAG.
  3. When using packet slicing managed service, packets ingressing on the managed service interface are not counted in the ingress interface counters, affecting the output of the show switch switch name interface interface name stats and show switch switch name interface interface name dmf-stats commands. This issue does not impact byte counters; all byte counters will show the original packet size, not the truncated size.

VXLAN Stripping on the 7280R3 Switch

Virtual Extensible LAN Header Stripping

Virtual Extensible LAN (VXLAN) Header Stripping supports the delivery of decapsulated packets to tools and devices in a DANZ Monitoring Fabric (DMF) fabric. This feature removes the VXLAN header, previously established in a tunnel for reaching the TAP Aggregation switch or inherent to the tapped traffic within the DMF. Within the fabric, DMF supports the installation of the strip VXLAN service on a filter interface or a filter-and-delivery interface of a supported switch.

Platform Compatibility

For DMF deployments, the target platform is DCS-7280R3.

Use the show switch all property command to verify which switch in the DMF fabric supports this feature.

The feature is supported if the Strip Header Supported property has the value BSN_STRIP_HEADER_CAPS_VXLAN.
Note: The following example is displayed differently for documentation purposes than what appears when using the CLI.
# show switch all property
#: 1
Switch : lyd599
Max Phys Port: 1000000
Min Lag Port : 1000001
Max Lag Port : 1000256
Min Tunnel Port: 15000001
Max Tunnel Port: 15001024
Max Lag Comps: 64
Tunnel Supported : BSN_TUNNEL_L2GRE
UDF Supported: BSN_UDF_6X2_BYTES
Enhanced Hash Supported: BSN_ENHANCED_HASH_L2GRE,BSN_ENHANCED_HASH_L3,BSN_ENHANCED_HASH_L2,
 BSN_ENHANCED_HASH_MPLS,BSN_ENHANCED_HASH_SYMMETRIC
Strip Header Supported : BSN_STRIP_HEADER_CAPS_VXLAN
Min Rate Limit : 1Mbps
Max Multicast Replication Groups : 0
Max Multicast Replication Entries: 0
PTP Timestamp Supported Capabilities : ptp-timestamp-cap-replace-smac, ptp-timestamp-cap-header-64bit,
 ptp-timestamp-cap-header-48bit, ptp-timestamp-cap-flow-based,
 ptp-timestamp-cap-add-header-after-l2
Min Truncate Offset: 100
Max Truncate Offset: 9236

Using the CLI to Configure VXLAN Header Stripping

Configuration

Use the following steps to configure strip-vxlan on a switch:

  1. Optionally, set the strip-vxlan-udp-port field in the switch configuration; the default UDP port for strip-vxlan is 4789.
  2. Enable or disable strip-vxlan on a filter or both-filter-and-delivery interface using the role both-filter-and-delivery interface-name filter-interface strip-vxlan command.
    > enable
    # config
    (config)# switch switch-name
    (config-switch)# strip-vxlan-udp-port udp-port-number
    (config-switch)# interface interface-name
    (config-switch-if)# role both-filter-and-delivery interface-name filter-interface strip-vxlan
    (config-switch-if)# role both-filter-and-delivery interface-name filter-interface no-strip-vxlan
    (config)# show running-config
  3. After enabling a filter interface with strip-vxlan, any policy can use it. From the config-policy submode, add the filter-interface to the policy:
    (config)# policy p1
    (config-policy)# filter-interface filter-interface
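
The following consolidated sketch ties the steps above together in running-config form; the switch, interface, and policy names are hypothetical:
switch switch1
strip-vxlan-udp-port 4789
interface ethernet1
role both-filter-and-delivery interface-name vxlan-filter-1 strip-vxlan
!
policy p1
action forward
filter-interface vxlan-filter-1
delivery-interface d1
1 match any
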

Show Commands

Use the show policy policy name command to see the runtime state of a policy using a filter interface with strip-vxlan configured. It will also show the service interface information and stats.
# show policy strip-vxlan 
Policy Name: strip-vxlan
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Installed Time : 2024-05-02 19:54:27 UTC
Installed Duration : 1 minute, 18 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time 
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1 lyd598 Ethernet1/1 up rx 0 0 0 - 2024-05-02 19:54:27.141000 UTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time 
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1 lyd598 Ethernet2/1 up tx 0 0 0 - 2024-05-02 19:54:27.141000 UTC
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.

Using the GUI to Configure VXLAN Header Stripping

Filter Interface Configuration

To configure or edit a filter interface, proceed to Interfaces from the Monitoring menu and select Interfaces > Filter Interfaces.

Figure 33. Filter Interfaces
Figure 34. DMF Interfaces

Configure a filter interface on a switch that supports strip-vxlan.

Figure 35. Configure Filter Interface

Enable or Disable Strip VXLAN.

Figure 36. Enable Strip VXLAN
Figure 37. DMF Interfaces Updated

Policy Configuration

Create a new policy using DMF Policies and add the filter interface with strip VXLAN enabled.

Figure 38. Create Policy
Figure 39. Strip VXLAN Header

Select Add port(s) under the Traffic Sources option and add the Filter Interface.

Figure 40. Selected Traffic Sources

Add another delivery interface and create the policy.

Figure 41. Policy Created

Syslog Messages

There are no syslog messages associated with this feature.

Troubleshooting

The show switch all property command provides the Strip Header Supported property of the switch. If the value BSN_STRIP_HEADER_CAPS_VXLAN is present, the feature is supported; otherwise, the switch does not support this feature.

The show fabric warnings feature-unsupported-on-device command provides information when DMF fails to enable strip-vxlan on an unsupported switch.

The show switch switch-name table strip-vxlan-header command provides the gentable details.

The following are examples of several failure cases:

  1. The filter interface is down.
  2. The interface with strip-vxlan is neither a filter interface nor a filter-and-delivery interface.
  3. The switch does not support strip-vxlan.
  4. Tunneling / UDF is enabled simultaneously with strip-vxlan.
  5. Unsupported pipeline mode with strip-vxlan enabled (strip-vxlan requires a specific pipeline mode strip-vxlan-match-push-vlan).

Limitations

  • When configured for a supported switch, the filter interface for decap-vxlan action can only be a physical interface or a LAG.
  • It is not possible to enable strip-vxlan simultaneously with tunneling / UDF.
  • When enabling strip-vxlan on one or more switch interfaces on the same switch, other filter interfaces on the same switch cannot be matched on the VXLAN header.

Session Slicing for TCP and UDP Sessions

Session-slice keeps track of TCP and UDP sessions (distinguished by source and destination IP address and port) and counts the number of packets sent in each direction (client-to-server and vice versa). After recognizing the session, the action transmits a user-configured number of packets to the tool node.

For TCP packets, session-slice tracks the number of packets sent in each direction after the TCP handshake is established. Slicing begins once the packet counts in both directions have reached the configured threshold.

For UDP packets, slicing begins after reaching the configured threshold in either direction.

By default, session-slice will operate on both TCP and UDP sessions but is configurable to operate on only one or the other.

Note: The count of packets in one direction may exceed the user-configured threshold while fewer packets have arrived in the other direction. Counts in both directions must be greater than or equal to the threshold before packets are dropped.

Refer to the DANZ Monitoring Fabric (DMF) Verified Scale Guide for session-slicing performance numbers.

Configure session-slice in managed services through the Controller as a Service Node action.

Using the CLI to Configure Session Slicing

Configure session-slice in managed services through the Controller as a Service Node action.

Configuration Steps

  1. Create a managed service and enter the service interface.
  2. Choose the session-slice service action with the command: <seq num> session-slice
    Note: The <seq num> session-slice command opens the session-slice submode, which supports two configuration parameters: slice-after and idle-timeout.
  3. Use slice-after to configure the packet threshold, after which the Service Node will stop forwarding packets to tool nodes.
  4. Use idle-timeout to configure the timeout in milliseconds before an idle connection is removed from the cache. idle-timeout is an optional command with a default value of 60000 ms.
    dmf-controller-1(config)# managed-service managed_service_1
    dmf-controller-1(config-managed-srv)# 1 session-slice
    dmf-controller-1(config-managed-srv-ssn-slice)# slice-after 1000
    dmf-controller-1(config-managed-srv-ssn-slice)# idle-timeout 60000
Show Commands

The following show commands provide helpful information.

The show running-config managed-service managed service command helps verify whether the session-slice configuration is complete.
dmf-controller-1(config)# show running-config managed-service managed_service_1 

! managed-service
managed-service managed_service_1
!
1 session-slice
slice-after 1000
idle-timeout 60000
The show managed-services managed service command provides status information about the service.
dmf-controller-1(config)# show managed-services managed_service_1
# Service Name Switch Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-----------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 managed_service_1 DCS-7050CX3-32S ethernet2/4 True 25Gbps 25Gbps 624Kbps 432Mbps

Using the GUI to Configure Session Slicing

Perform the following steps to configure session slicing.
  1. Navigate to Monitoring > Managed Services > Managed Services.
    Figure 42. Managed Services
  2. Select the + icon to create a new managed service.
    Figure 43. Create Managed Service
  3. Enter a Name for the managed service.
    Figure 44. Managed Service Name
  4. Select a Switch from the drop-down list.
    Figure 45. Manage Service Switch
    Figure 46. Managed Service Switch Added
  5. Select an Interface from the drop-down list.
    Figure 47. Managed Service Interface Added
  6. Select Actions or Next.
     
    Figure 48. Actions Menu
  7. Click the + icon to select a managed service action.
    Figure 49. Configure Managed Service Action List
  8. Choose Session Slice from the drop-down list. Adjust the Slice After and Idle Timeout parameters, as required.
     
    Figure 50. Configure Managed Service Action Session Slice
  9. Select Append and then Save to add the session slice managed service.
    Figure 51. Managed Service Session Slice

Timestamp Action

The timestamp service action timestamps every packet of matching traffic with the time at which the service node receives it.

GUI Configuration

Figure 52. Create Managed Service: Timestamp Action

CLI Configuration

! managed-service
managed-service MS-TIMESTAMP-1
1 timestamp
service-interface switch CORE-SWITCH-1 ethernet15/3

UDP-replication Action

The UDP-replication service action copies UDP messages, such as Syslog or NetFlow messages, and sends the copied packets to a new destination IP address.

Configure a rate limit when enabling UDP replication. When upgrading from a version of DANZ Monitoring Fabric (DMF) before release 6.3.1, the UDP-replication configuration is not applied until a rate limit is applied to the delivery interface.

The following example illustrates applying a rate limit to a delivery interface used for UDP replication:
CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
CONTROLLER-1(config-switch)# interface ethernet1
CONTROLLER-1(config-switch-if)# role delivery interface-name udp-delivery-1
CONTROLLER-1(config-switch-if)# rate-limit 256000
Note: No other service action can be applied after a UDP-replication service action.

GUI Configuration

Use the UDP-replication service to copy UDP traffic, such as Syslog messages or NetFlow packets, and send the copied packets to a new destination IP address. This function sends traffic to more destination syslog servers or NetFlow collectors than would otherwise be allowed.

Enable the checkbox for the destination for the copied output, or click the provision control (+) and add the IP address in the dialog that appears.
Figure 53. Configure Output Packet Destination IP

For the header-strip service action only, configure the policy rules for matching traffic after applying the header-strip service action. After completing pages 1-4, click Append and enable the checkbox to apply the policy.

Click Save to save the managed service.

CLI Configuration

To view and configure a specific UDP-replication configuration, enter the 1 udp-replicate command followed by the configuration name; the submode changes to the config-managed-srv-udp-replicate submode.
controller-1(config)# managed-service MS-UDP-REPLICATE-1
controller-1(config-managed-srv)# 1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
controller-1(config-managed-srv-udp-replicate)#
From this submode, define the destination address of the packets to copy and the destination address for sending the copied packets.
controller-1(config-managed-srv-udp-replicate)# in-dst-ip 10.1.1.1
controller-1(config-managed-srv-udp-replicate)# out-dst-ip 10.1.2.1
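
Putting the pieces together, a complete UDP-replication managed service resembles the following sketch. The switch, interface, and address values are hypothetical, and the delivery interface used for replication still requires the rate limit shown above:
! managed-service
managed-service MS-UDP-REPLICATE-1
service-interface switch CORE-SWITCH-1 ethernet13/1
1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
in-dst-ip 10.1.1.1
out-dst-ip 10.1.2.1
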

Redundancy of Managed Services in Same DMF Policy

In this method, users can use a second managed service as a backup service in the same DANZ Monitoring Fabric (DMF) policy. The backup service is activated only when the primary service becomes unavailable. The backup service can be on the same service node or core switch or a different service node and core switch.
Note: Transitioning from the active to the backup managed service requires reprogramming switches and associated managed appliances. Although this reprogramming happens automatically, it results in slight traffic loss.

Using the GUI to Configure a Backup Managed Service

To assign a managed service as a backup service in a DANZ Monitoring Fabric (DMF) policy, perform the following steps:
  1. Select Monitoring > Policies and click the Provision control (+) to create a new policy.
  2. Configure the policy as required. From the Services section, click the Provision control (+) in the Managed Services table.
    Figure 54. Policy with Backup Managed Service
  3. Select the primary managed service from the Managed Service selection list.
  4. Select the backup service from the Backup Service selection list and click Append.

Using the CLI to Configure a Backup Managed Service

To implement backup-managed services, perform the following steps:
  1. Identify the first managed service.
    managed-service MS-SLICE-1
    1 slice l3-header-start 20
    service-interface switch CORE-SWITCH-1 lag1
  2. Identify the second managed service.
    managed-service MS-SLICE-2
    1 slice l3-header-start 20
    service-interface switch CORE-SWITCH-1 lag2
  3. Configure the policy referring to the backup managed service.
    policy SLICE-PACKETS
    action forward
    delivery-interface TOOL-PORT-1
    filter-interface TAP-PORT-1
    use-managed-service MS-SLICE-1 sequence 1 backup-managed-service MS-SLICE-2
    1 match ip

Application Identification

The DANZ Monitoring Fabric (DMF) Application Identification feature uses Deep Packet Inspection (DPI) to identify applications in packet flows received via filter interfaces and generates IPFIX flow records. These IPFIX flow records are transmitted to a configured collector device via the L3 delivery interface. The feature also provides a filtering function that forwards or drops packets from specific applications before sending the traffic to the analysis tools.
Note: Application identification is supported on R640 Service Nodes (DCA-DM-SC and DCA-DM-SC2) and R740 Service Nodes (DCA-DM-SDL and DCA-DM-SEL).

Using the CLI to Configure app-id

Perform the following steps to configure app-id.
  1. Create a managed service and enter the service interface.
  2. Choose the app-id managed service using the <seq num> app-id command.
    Note: The above command should enter the app-id submode, which supports two configuration parameters: collector and l3-delivery-interface. Both are required.
  3. To configure the IPFIX collector IP address, enter the following command: collector ip-address.
    The UDP port and MTU parameters are optional; the default values are 4739 and 1500, respectively.
  4. Enter the command: l3-delivery-interface delivery interface name to configure the delivery interface.
  5. Add this managed service to a policy. The policy will not have a physical delivery interface.
The following shows an example of an app-id configuration that sends IPFIX application records to the Collector (analytics node) at IP address 192.168.1.1 over the configured delivery interface named app-to-analytics:
managed-service ms
service-interface switch core1 ethernet2
!
1 app-id
collector 192.168.1.1
l3-delivery-interface app-to-analytics

After configuring the app-id, refer to the analytics node for application reports and visualizations. For instance, a flow is classified internally with the following tuple: ip, tcp, http, google, and google_maps. Consequently, the analytics node displays the most specific app ID for this flow as google_maps under appName.

On the Analytics Node, AppIDs 0-4 represent applications according to their numerical IDs: 0 is the most specific application identified in that flow, while 4 is the least specific. In the example above, ID 0 would be the numerical ID for google_maps, ID 1 for google, ID 2 for http, ID 3 for tcp, and ID 4 for ip. Use appName instead of these numerical IDs, since they require an ID-to-name mapping to interpret.

Using the CLI to Configure app-id-filter

Perform the following steps to configure app-id-filter:
  1. Create a managed service and enter the service interface.
  2. Choose the app-id-filter managed service using the <seq num> app-id-filter command.
    Note: The above command should enter the app-id-filter submode, which supports three configuration parameters: app, app-category, and filter-mode. The app parameter is required, while app-category and filter-mode are optional. The filter-mode option has a default value of forward.
  3. Enter the command: app application name to configure the application name.
    Tip: Press the Tab key after entering the app keyword to see all possible application names. Type in a partial name and press the Tab to see all possible choices to auto-complete the name. The application name provided must match a name in this list of app names. A service node must be connected to the Controller for this list to appear. Any number of apps can be entered one at a time using the app application-name command. An example of a (partial) list of names:
    dmf-controller-1 (config-managed-srv-app-id-filter)# app ibm
    ibm          ibm_as_central   ibm_as_dtaq  ibm_as_netprt  ibm_as_srvmap  ibm_iseries  ibm_tsm
    ibm_app      ibm_as_database  ibm_as_file  ibm_as_rmtcmd  ibm_db2        ibm_tealeaf
  4. Filter applications by category using the app-category category name command. Currently, the applications contained in these categories are not displayed.
    dmf-controller-1(config-managed-srv-app-id-filter)# app-category
    <Category> <String> :<String>
    aaa Category selection
    adult_content Category selection
    advertising Category selection
    aetls Category selection
    analytics Category selection
    anonymizer Category selection
    audio_chat Category selection
    basic Category selection
    blog Category selection
    cdn Category selection
    certif_auth Category selection
    chat Category selection
    classified_ads Category selection
    cloud_services Category selection
    crowdfunding Category selection
    cryptocurrency Category selection
    db Category selection
    dea_mail Category selection
    ebook_reader Category selection
    education Category selection
    email Category selection
    enterprise Category selection
    file_mngt Category selection
    file_transfer Category selection
    forum Category selection
    gaming Category selection
    healthcare Category selection
    im_mc Category selection
    iot Category selection
    map_service Category selection
    mm_streaming Category selection
    mobile Category selection
    networking Category selection
    news_portal Category selection
    p2p Category selection
    payment_service Category selection
    remote_access Category selection
    scada Category selection
    social_network Category selection
    speedtest Category selection
    standardized Category selection
    transportation Category selection
    update Category selection
    video_chat Category selection
    voip Category selection
    vpn_tun Category selection
    web Category selection
    web_ecom Category selection
    web_search Category selection
    web_sites Category selection
    webmail Category selection
  5. The filter-mode parameter supports two modes: forward and drop. Enter filter-mode forward to allow the packets to be forwarded based on the configured applications. Enter filter-mode drop to drop these packets.
    An example of an app-id-filter configuration that drops all Facebook and IBM Tealeaf packets:
    managed-service MS
    	service-interface switch CORE-SWITCH-1 ethernet2
    	!
    	1 app-id-filter
    		app facebook
    		app ibm_tealeaf
    filter-mode drop
CAUTION: The app-id-filter configuration filters based on flows. For example, if a session is internally classified with the tuple ip, tcp, http, google, and google_maps, adding any of these values to the filter list permits or drops all packets of that flow once it is classified (for example, adding tcp to the filter list permits or blocks packets of the aforementioned 5-tuple flow as well as all other tcp flows). Use caution when filtering on lower-layer protocols and apps. Also, when forwarding an application, packets at the beginning of the session are dropped until the application is identified; when dropping an application, packets at the beginning of the session are passed until the application is identified.

Using the CLI to Configure app-id and app-id-filter Combined

Follow the configuration steps described earlier for each service to configure app-id-filter and app-id together. In this case, app-id should use a higher sequence number than app-id-filter so that traffic is processed by the app-id-filter action first and then by app-id.

This behavior can be helpful to monitor certain types of traffic. The following example illustrates a combined app-id-filter and app-id configuration.
! managed-service
managed-service MS1
service-interface switch CORE-SWITCH-1 ethernet2
!
!
1 app-id-filter
app facebook
filter-mode forward
!
2 app-id
collector 1.1.1.1
l3-delivery-interface L3-INTF-1
Note: This configuration has two drawbacks: app-id only sees Facebook traffic because the filter forwards only that application, and this type of service chaining can cause a performance hit and high memory utilization.

Using the GUI to Configure app-id and app-id-filter

App ID and App ID Filter are in the Managed Service workflow. Perform the following steps to complete the configuration.
  1. Navigate to the Monitoring > Managed Services page. Select the table action + icon button to add a new managed service.
    Figure 55. DANZ Monitoring Fabric (DMF) Managed Services
  2. Configure the Name, Switch, and Interface inputs in the Info step.
    Figure 56. Info Step
  3. In the Actions step, select the + icon to add a new managed service action.
    Figure 57. Add App ID Action
  4. To Add the App ID Action, select App ID from the action selection input:
    Figure 58. Select App ID
  5. Fill in the Delivery Interface, Collector IP, UDP Port, and MTU inputs and select Append to include the action in the managed service:
    Figure 59. Delivery Interface
  6. To Add the App ID Filter Action, select App ID Filter from the action selection input:
    Figure 60. Select App ID Filter
  7. Select the Filter input as Forward or Drop action:
    Figure 61. Select Filter Input
  8. Use the App Names section to add app names.
    1. Select the + button to open a modal pane to add an app name.

    2. The table lists all app names. Use the text search to filter out app names. Select the checkbox for app names to include and click Append Selected.

    3. Repeat the above step to add more app names as necessary.

    Figure 62. Associate App Names
  9. The selected app names are now listed. Use the - icon button to remove any app names, if necessary:
    Figure 63. Application Names
  10. Select the Append button to add the action to the managed service and Save to save the managed service.
For existing managed services, add App ID or App ID Filter using the Edit workflow of a managed service.

Dynamic Signature Updates (Beta Version)

This beta feature allows the app-id and app-id-filter services to classify newly supported applications at runtime rather than waiting for an update in the next DANZ Monitoring Fabric (DMF) release. Perform such runtime service updates during a maintenance cycle. There can be issues with backward compatibility if attempting to revert to an older bundle. Adopt only supported versions. In the Controller’s CLI, perform the following recommended steps:
  1. Remove all policies containing app-id or app-id-filter. Remove the app-id and app-id-filter managed services from the policies using the command: no use-managed-service in policy config.
    Arista Networks recommends this step to avoid errors and service node reboots during the update process. A warning message is printed right before confirming a push. Proceeding without this step may work but is not recommended as there is a risk of service node reboots.
    Note: Arista Networks provides the specific update file in the command example below.
  2. To pull the signature file onto the Controller node, use the command:
    dmf-controller-1(config)# app-id pull-signature-file user@host:path to file.tar.gz
    Password:
    file.tar.gz							5.47MB 1.63MBps 00:03
  3. Fetch and validate the file using the command:
    dmf-controller-1(config)# app-id fetch-signature-file file://file.tar.gz
    Fetch successful.
    Checksum : abcdefgh12345
    Fetch time : 2023-08-02 22:20:49.422000 UTC
    Filename : file.tar.gz
  4. To view files currently saved on the Controller node after the fetch operation is successful, use the following command:
    dmf-controller-1(config)# app-id list-signature-files
    # Signature-file	Checksum 		Fetch time
    -|-----------------|-----------------|------------------------------|
    1 file.tar.gz	abcdefgh12345	2023-08-02 22:20:49.422000 UTC
    Note: Only the files listed by this command can be pushed to service nodes.
  5. Push the file from the Controller to the service nodes using the following command:
    dmf-controller-1(config)# app-id push-signature-file file.tar.gz
    App ID update: WARNING: This push will affect all service nodes
    App ID update: Remove policies configured with app-id or app-id-filter before continuing to avoid errors
    App ID update: Signature file: file.tar.gz
    App ID update: Push app ID signatures to all Service Nodes? Update ("y" or "yes" to continue): yes
    Push successful.
    
    Checksum : abcdefgh12345
    Fetch time : 2023-08-02 22:20:49.422000 UTC
    Filename : file.tar.gz
    Sn push time : 2023-08-02 22:21:49.422000 UTC
  6. Add the app-id and app-id-filter managed services back to the policies.
    As a result of adding app-id, service nodes can now identify and report new applications to the analytics node.
    After adding back app-id-filter, new application names should appear in the app-id-filter Controller app list. To test this, enter app-id-filter submode and press the Tab to see the full list of applications. New identified applications should appear in this list.
  7. To delete a signature file from the Controller, use the command below.
    Note: DMF only allows deleting a signature file that is not actively in use by any service node, since each service node must keep a working file in case of issues. Attempting to delete an active file causes the command to fail.
    dmf-controller-1(config)# app-id delete-signature-file file.tar.gz
    Delete successful for file: file.tar.gz
Useful Information
The fetch and delete operations are synced with standby controllers as follows:
  • fetch: after a successful fetch on the active Controller, it invokes the fetch RPC on the standby Controller by providing a signed HTTP URL as the source. This URL points to an internal REST API that provides the recently fetched signature file.
  • delete: the active Controller invokes the delete RPC call on the standby controllers.

The Controller stores the signature files in this location: /var/lib/capture/appidsignatureupdate.

On a service node, files are overwritten and always contain the complete set of applications.
Note: An analytics node cannot display these applications in the current version.
This step is only for informational purposes:
  • Verify the bundle version on the service node by entering the show service-node app-id-bundle-version command in the service node CLI, as shown below.
    Figure 64. Before Update
    Figure 65. After Update

CLI Show Commands

Service Node

In the service node CLI, use the following show command:
show service-node app-id-bundle-version
This command shows the version of the bundle in use. An app-id or app-id-filter instance must be configured, or an error message is displayed.
dmf-servicenode-1# show app-id bundle-version
Name : bundle_version
Data : 1.680.0-22 (build date Sep 26 2023)
dmf-servicenode-1#

Controller

To obtain more information about the running version on a Service Node, or when the last push attempt was made and the outcome, use the following Controller CLI commands:

  • show app-id push-results optional SN name
  • show service-node SN name app-id
dmf-controller-1# show app-id push-results
# Name              IP Address     Current Version Current Push Time              Previous Version Previous Push Time             Last Attempt Version Last Attempt Time              Last Attempt Result Last Attempt Failure Reason
-|-----------------|--------------|---------------|------------------------------|----------------|------------------------------|--------------------|------------------------------|-------------------|---------------------------|
1 dmf-servicenode-1 10.240.180.124 1.660.2-33      2023-12-06 11:13:36.662000 PST 1.680.0-22       2023-09-29 16:21:11.034000 PDT 1.660.2-33           2023-12-06 11:13:34.085000 PST success
dmf-controller-1# show service-node dmf-servicenode-1 app-id
# Name              IP Address     Current Version Current Push Time              Previous Version Previous Push Time Last Attempt Version Last Attempt Time Last Attempt Result Last Attempt Failure Reason
-|-----------------|--------------|---------------|------------------------------|----------------|------------------|--------------------|-----------------|-------------------|---------------------------|
1 dmf-servicenode-1 10.240.180.124 1.680.0-22      2023-09-29 16:21:11.034000 PDT
The show app-id signature-files command displays the validated files that are available to push to Service Nodes.
dmf-controller-1# show app-id signature-files
# Signature-file Checksum      Fetch time
-|--------------|-------------|------------------------------|
1 file1.tar.gz   abcdefgh12345 2023-08-02 22:20:49.422000 UTC
2 file2.tar.gz   ijklmnop67890 2023-08-03 07:10:22.123000 UTC
The show analytics app-info filter-interface-name command displays aggregated information over the last 5 minutes about the applications seen on a given filter interface, sorted by unique flow count. This command also has an optional size option to limit the number of results; by default, all results are returned.
Note: This command only works in push-per-filter mode.
dmf-controller-1# show analytics app-info filter-interface f1 size 3
# App name Flow count
-|--------|----------|
1 app1 1000
2 app2 900
3 app3 800

Syslog Messages

Syslog messages for configuring the app-id and app-id-filter services appear in a service node’s syslog through journalctl.

A Service Node syslog registers events for the app-id add, modify, and delete actions.

These events contain the keywords dpi and dpi-filter, which correspond to app-id and app-id-filter.

For example:

Adding dpi for port, 
Modifying dpi for port, 
Deleting dpi for port,
Adding dpi filter for port, 
Modifying dpi filter for port, 
Deleting dpi filter for port, 
App appname does not exist - An invalid app name was entered.

The addition, modification, or deletion of app names in an app-id-filter managed-service in the Controller node’s CLI influences the policy refresh activity, and these events register in floodlight.log.

Scale

  • The maximum number of concurrent sessions is currently set to permit fewer than 200,000 active flows per core, a cap that prevents the service from overloading. Performance may drop as the number of concurrent flows increases. Surpassing this threshold may cause some flows not to be processed, and new flows will not be identified or filtered. Entries for inactive flows time out after a few minutes for ongoing sessions and a few seconds after a session ends.
  • If there are many inactive sessions, DMF holds the flow contexts, reducing the number of available flows used for DPI. The timeouts are approximately 7 minutes for TCP sessions and 1 minute for UDP.
  • Heavy application traffic load degrades performance.

Troubleshooting

  • If IPFIX reports do not appear on an Analytics Node (AN) or Collector, ensure the UDP port is configured correctly and verify the AN receives traffic.
  • If the app-id-filter app list does not appear, ensure a Service Node (SN) is connected using the show service-node command on the Controller.
  • For app-id-filter, enter at least one valid application from the list that appears using <Tab>. If not, the policy will fail to install with an error message app-id-filter specified without at least one name TLV identifying application.
  • A flow may contain other IDs and protocols when using app-id-filter. For example, the specific application for a flow may be google_maps, but there may be protocols or broader applications under it, such as ssh, http, or google. Adding google_maps will filter this flow. However, adding ssh will also filter this flow. Therefore, adding any of these to the filter list will cause packets of this flow to be forwarded or dropped.
  • An IPFIX element, BSN type 14, that existed in DMF version 8.4 was removed in 8.6.
  • During a dynamic signature update, if a SN reboot occurs, it will likely boot up with the correct version. To avoid issues of traffic loss, perform the update during a maintenance window. Also, during an update, the SN will temporarily not send LLDP packets to the Controller and disconnect for a short while.
  • After a dynamic signature update, do not change configurations or push another signature file for several minutes. The update will take some time to process. If there are any VFT changes, it may lead to warning messages in floodlight, such as:
    Sync job 2853: still waiting after 50002 ms 
    Stuck switch update: R740-25G[00:00:e4:43:4b:bb:38:ca], duration=50002ms, stage=COMMIT

    These messages may also appear when configuring DPI on a large number of ports.

Limitations

  • When using a drop filter, a few packets may slip through the filter before determining an application ID for a flow, and when using a forward filter, a few packets may not be forwarded. Such a small amount is estimated to be between 1 and 6 packets at the beginning of a flow.
  • When using a drop filter, add the unknown app ID to the filter list to drop any unidentified traffic if these packets are unwanted.
  • The Controller must be connected to a Service Node for the app-id-filter app list to appear. If the list does not appear and the application names are unknown, use the app-id to send reports to the analytics node. Use the application names seen there to configure an app-id-filter. The name must match exactly.
  • Since app-category does not currently show the applications included in that category, do not use it when targeting specific apps. Categories like basic, which include all basic networking protocols like TCP and UDP, may affect all flows.
  • For app-id, a report is generated for a flow only after that flow has been fully classified. Therefore, the number of reported applications may not match the total number of flows. Reports are sent once enough applications have been identified on the Service Node: if many applications are identified, DMF sends the reports quickly, whereas when only a few applications are identified, DMF sends the reports every 10 seconds.
  • DMF treats a bidirectional flow as part of the same n-tuple. As such, generated reports contain the client's source IP address and the server's destination IP address.
  • While configuring many ports with the app-id, there may occasionally be a few Rx drops on the 16 port machines at a high traffic rate in the first couple of seconds.
  • The feature uses a cache that maps dest ip and port to the application. Caching may vary the performance depending on the traffic profile.
  • The app-id and app-id-filter services are more resource-intensive than other services. Combining them in a service chain or configuring many instances of them may lead to degradation in performance.
  • At scale, such as configuring 16 ports on the R740 DCA-DM-SEL, app-id may take a few minutes to set up on all these ports, and this is also true when doing a dynamic signature update.
  • The show analytics app-info command only works in push-per-filter VLAN mode.

Redundancy of Managed Services Using Two DMF Policies

In this method, users can employ a second policy with a second managed service to provide redundancy. The idea here is to duplicate the policies but assign a lower policy priority to the second DANZ Monitoring Fabric (DMF) policy. In this case, the backup policy (and, by extension, the backup service) will always be active but only receive relevant traffic once the primary policy goes down. This method provides true redundancy at the policy, service-node, and core switch levels but uses additional network and node resources.

Example
! managed-service
managed-service MS-SLICE-1
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag1
!
managed-service MS-SLICE-2
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag2
! policy
policy ACTIVE-POLICY
priority 101
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-1 sequence 1
1 match ip
!
policy BACKUP-POLICY
priority 100
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-2 sequence 1
1 match ip

Cloud Services Filtering

The DANZ Monitoring Fabric (DMF) supports traffic filtering to specific services hosted in the public cloud and redirecting filtered traffic to customer tools. DMF achieves this functionality by reading the source and destination IP addresses of specific flows, identifying the Autonomous System number they belong to, tagging the flows with their respective AS numbers, and redirecting them to customer tools for consumption.

The following is the list of services supported:

  • amazon: traffic with src/dst IP belonging to Amazon
  • ebay: traffic with src/dst IP belonging to eBay
  • facebook: traffic with src/dst IP belonging to FaceBook
  • google: traffic with src/dst IP belonging to Google
  • microsoft: traffic with src/dst IP belonging to Microsoft
  • netflix: traffic with src/dst IP belonging to Netflix
  • office365: traffic for Microsoft Office365
  • sharepoint: traffic for Microsoft Sharepoint
  • skype: traffic for Microsoft Skype
  • twitter: traffic with src/dst IP belonging to Twitter
  • default: traffic not matching other rules in this service. Supported types are match or drop.

The option drop instructs the DMF Service Node to drop packets matching the configured application.

The option match instructs the DMF Service Node to deliver packets to the delivery interfaces connected to the customer tool.

A default drop action is auto-applied as the last rule, except when configuring the last rule as match default. It instructs the DMF Service Node to drop packets when either of the following conditions occurs:
  • The stream's source IP address or destination IP address doesn't belong to any AS number.
  • The stream's source IP address or destination IP address is affiliated with an AS number but has no specific action set.
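
To forward unmatched traffic to the tools instead of dropping it, configure match default as the last rule. The following is a minimal sketch using the rule syntax shown in the next section; the rule numbers and application choices are illustrative:
Controller(config-managed-srv-appfilter)# 1 match google
Controller(config-managed-srv-appfilter)# 2 drop facebook
Controller(config-managed-srv-appfilter)# 3 match default
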

Cloud Services Filtering Configuration

Managed Service Configuration
Controller(config)# managed-service <name>
Controller(config-managed-srv)#
Service Action Configuration
Controller(config-managed-srv)# 1 app-filter
Controller(config-managed-srv-appfilter)#
Filter Rules Configuration
Controller(config-managed-srv-appfilter)# 1 drop sharepoint
Controller(config-managed-srv-appfilter)# 2 match google
Controller(config-managed-srv-appfilter)# show this
! managed-service
managed-service sf3
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 app-filter
1 drop sharepoint
2 match google
A policy that uses an app-filter managed service with no match or drop rules specified will fail to install. The example below shows a policy, incomplete-policy, that fails because the managed service incomplete-managed-service has no match/drop rule.
Controller(config)# show running-config managed-service incomplete-managed-service
! managed-service
managed-service incomplete-managed-service
1 app-filter
Controller(config)# show running-config policy incomplete-policy
! policy
policy incomplete-policy
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service incomplete-managed-service sequence 1
1 match any
Controller(config-managed-srv-appfilter)# show policy incomplete-policy
Policy Name : incomplete-policy
Config Status : active - forward
Runtime Status : one or more required service down
Detailed Status : one or more required service down - installed to
forward
Priority : 100
Overlap Priority : 0

Multiple Services Per Service Node Interface

The service-node capability is augmented to support more than one service action per service-node interface. Though this feature is economical regarding per-interface cost, it could cause packet drops in high-volume traffic environments. Arista Networks recommends using this feature judiciously.

Example
controller-1# show running-config managed-service Test
! managed-service
managed-service Test
service-interface switch CORE-SWITCH-1 ethernet13/1
1 dedup full-packet window 2
2 mask BIGSWITCH
3 slice l4-payload-start 0
!
4 netflow an-collector
collector 10.106.6.15 udp-port 2055 mtu 1500
This feature replaces the service-action command with sequence numbers. The allowed range of sequence numbers is 1 to 20000. In the above example, the sequence numbering determines the order in which the managed services act on the traffic.
Note: After upgrading to DANZ Monitoring Fabric (DMF) release 8.1.0 and later, the service-action CLI is automatically replaced with sequence number(s).
View the statistics for a specific managed service using the CLI.
When using the DMF GUI, view this information in Monitoring > Managed Services > Devices > Service Stats.
Note: The following limitations apply to this mode of configuration:
  • The NetFlow/IPFIX-action configuration should not be followed by the timestamp service action.
  • Ensure the UDP-replication action configuration is the last service in the sequence.
  • The header-stripping service with post-service-match rule configured should not be followed by the NetFlow, IPFIX, udp-replication, timestamp and TCP-analysis services.
  • When configuring a header strip and slice action, the header strip action must precede the slice action.
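
The following running-config sketch illustrates a chain that respects these ordering rules, with the UDP-replication action last in the sequence. The service, switch, interface, and address values are hypothetical:
! managed-service
managed-service MS-CHAIN-EXAMPLE
service-interface switch CORE-SWITCH-1 ethernet13/2
1 mask BIGSWITCH
2 slice l4-payload-start 0
!
3 udp-replicate TO-COLLECTOR-1
in-dst-ip 10.1.1.1
out-dst-ip 10.1.2.1
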

Sample Service

Using the DANZ Monitoring Fabric (DMF) Sample Service feature, the Service Node forwards packets based on the max-tokens and tokens-per-refresh parameters. The sample service uses one token to forward one packet.

After consuming all the initial tokens from the max-tokens bucket, the system drops subsequent packets until the max-tokens bucket refills using the tokens-per-refresh counter at a recurring predefined time interval of 10ms. Packet sizes do not affect this service.

Arista Networks recommends keeping the tokens-per-refresh value at or below max-tokens. For example, max-tokens = 1000 and tokens-per-refresh = 500.

Setting the max-tokens value to 1000 means that the initial number of tokens is 1000, and the maximum number of tokens stored at any time is 1000.

The max-tokens bucket drops to zero when the Service Node forwards 1000 packets before the first 10 ms period ends, at which point the Service Node stops forwarding packets. After every 10 ms interval, the bucket is refilled with the configured tokens-per-refresh value, 500 tokens in this case, which the service immediately uses to pass packets.

Suppose the traffic rate is higher than the refresh amount added. In that case, available tokens will eventually drop back to 0, and every 10ms, only 500 packets will be forwarded, with subsequent packets being dropped.

If the traffic rate is lower than the refresh amount added, a surplus of tokens will result in all packets passing. Since the system only consumes some of the tokens before the next refresh interval, available tokens will accumulate until they reach the max-tokens value of 1000. After 1000, the system does not store any surplus tokens above the max-tokens value.

To estimate the maximum possible packets passed per second (pps), use the calculation (1000ms/10ms) * tokens-per-refresh and assume the max-tokens value is larger than tokens-per-refresh. For example, if the tokens-per-refresh value is 5000, then 500000 pps are passed.

The Sample Service feature can be used as a standalone Managed Service or chained with other Managed Services.

Use Cases and Compatibility

  • Applies to Service Nodes
  • Limit traffic to tools that cannot handle a large amount of traffic.
  • Use the Sample Service before another managed service to decrease the load on that service.
  • The Sample Service is applicable when needing only a portion of the total packets without specifically choosing which packets to forward.

Sample Service CLI Configuration

  1. Create a managed service and enter the service interface.
  2. Choose the sample managed service with the seq num sample command.
    1. There are two required configuration values: max-tokens and tokens-per-refresh. There are no default values, and the service requires both values.
    2. The max-tokens value is the maximum number of tokens the token bucket can hold. The service starts with this number of tokens when first configured. Each forwarded packet consumes one token; if no tokens remain, packet forwarding stops. Configure the max-tokens value in the range of 1 to the maximum supported value of 9,223,372,036,854,775,807.
    3. DMF refreshes the token bucket every 10 ms. The tokens-per-refresh value is the number of tokens added to the token bucket on each refresh. Each forwarded packet consumes one token, and when the number of tokens drops to zero, the system drops all subsequent packets until the next refresh. The number of tokens in the bucket cannot exceed the max-tokens value. Configure the tokens-per-refresh value in the range of 1 to the maximum supported value of 9,223,372,036,854,775,807.
    The following example illustrates a typical Sample Service configuration
    dmf-controller-1(config-managed-srv-sample)# show this
    ! managed-service
    managed-service MS
    !
    3 sample
    max-tokens 50000
    tokens-per-refresh 20000
  3. Add the managed service to the policy.

Show Commands

Use the show running-config managed-service sample_service_name command to view pertinent details. In this example, the sample_service_name is techpubs.

DMF-SCALE-R450> show running-config managed-service techpubs

! managed-service
managed-service techpubs
!
1 sample
max-tokens 1000
tokens-per-refresh 500
DMF-SCALE-R450>

Sample Service GUI Configuration

Use the following steps to add a Sample Service.
  1. Navigate to the Monitoring > Managed Services page.
    Figure 66. DMF Managed Services
  2. Under the Managed Services section, click the + icon to create a new managed service. Go to the Actions, and select the Sample option in the Action drop-down. Enter values for Max tokens and Tokens per refresh.
    Figure 67. Configure Managed Service Action
  3. Click Append and then Save.

Troubleshooting Sample Service

Troubleshooting

  • If the Service Node interfaces forward only a few packets, the max-tokens and tokens-per-refresh values likely need to be higher.
  • If fewer packets are forwarded than the tokens-per-refresh value, ensure the max-tokens value is larger than the tokens-per-refresh value. The system discards any surplus refresh tokens above the max-tokens value.
  • If all traffic is forwarded, the max-tokens value is too large, or the tokens added by tokens-per-refresh exceed the packet rate.
  • Packet drops after the first 10 ms of traffic may be caused by a low tokens-per-refresh value. The following example calculates the minimum max-tokens and tokens-per-refresh values that forward all packets.

Calculation Example

Traffic Rate : 400 Mbps
Packet Size : 64 bytes
400 Mbps = 400000000 bps
400000000 bps = 50000000 Bps
50000000 Bps = 595238 pps (includes 20 bytes of inter-packet gap in addition to the 64 bytes)
1000 ms = 595238 pps
1 ms = 595.238 pps
10 ms = 5952 pps
max-tokens : 5952 (the minimum value)
tokens-per-refresh : 5952 (the minimum value)

Limitations

  • In the current implementation, the Service Sample action is bursty. The token consumption rate is not configured to withhold tokens over time, so a large burst of incoming packets can immediately consume all the tokens in the bucket. There is currently no way to select what traffic is forwarded or dropped; it only depends on when the packets arrive concerning the refresh interval.
  • Setting the max-tokens and tokens-per-refresh values too high will forward all packets. The maximum value is 9,223,372,036,854,775,807, but Arista Networks recommends staying within the maximum values stated under the description section.

Flow Diff Latency and Drop Analysis

Latency and drop information help determine if there is a loss in a particular flow and where the loss occurred. A Service Node action configured as a DANZ Monitoring Fabric (DMF) managed service has two separate taps or spans in the production network and can measure the latency of a flow traversing through these two points. It can also detect packet drops between two points in the network if the packet only appears on one point within a specified time frame, currently set to 100ms.

Latency and drop analysis require PTP time-stamped packets. The DMF PTP timestamping feature can do this as the packets enter the monitoring fabric, or the production network switches can also timestamp the packet.

The Service Node accumulates latency values by flow and sends IPFIX data records with each flow's 5-tuple and ingress and egress identifiers. It sends IPFIX data records to the Analytics Node after collecting a specified number of values for a flow or when a timeout occurs for the flow entry. The threshold count is 10,000, and the flow timeout is 4 seconds.

Note: Only basic statistics are available: min, max, and mean. Use the Analytics Node to build custom dashboards to view and check the data.


Attention: The flow diff latency and drop analysis feature is switch dependent and requires PTP timestamping. It is supported on 7280R3 switches.

Configure Flow Diff Latency and Drop Analysis Using the CLI

Configure this feature through the Controller as a Service Node action in managed services using the managed service action flow-diff.

The latency configuration defines multiple tap point pairs between which latency or drop analysis occurs. A tap point pair comprises a source and a destination tap point, identified by filter interface, policy name, or filter interface group. Based on the latency configuration, the Service Node is configured with traffic metadata that tells it where to look for tap point information and timestamps, and which IPFIX collector to use.

Configure appropriate DMF Policies such that traffic tapped from tap point pairs in the network is delivered to the configured Service Node interface for analysis.

Configuration Steps for flow-diff.

  1. Create a managed service and enter the service interface.
  2. Choose the flow-diff service action with the command: seq num flow-diff
    Note: The command should enter the flow-diff submode, which supports three configuration parameters: collector, l3-delivery-interface, and tap-point-pair. These all are required parameters.
  3. Configure the IPFIX collector IP address by entering the following command: collector ip-address (the UDP port and MTU parameters are optional; the default values are 4739 and 1500, respectively).
  4. Configure the delivery interface by entering the command l3-delivery-interface delivery interface name.
  5. Configure the points for flow-diff and drop analysis using tap-point-pair parameters as specified in the following section. Multiple options to identify the tap-point include filter-interface, filter-interface-group, and policy-name. This command will require a source and a destination tap point.
  6. Optional parameters are latency-table-size, sample-count-threshold, packet-timeout, and flow-timeout. The default values are large, 10000, 100 ms, and 4000 ms, respectively.
    • latency-table-size determines the memory footprint of the flow-diff action on the Service Node.
    • sample-count-threshold specifies the number of samples needed to generate a latency report. Every time a packet times out, it generates a sample for that flow. DMF generates a report when the flow reaches the sample threshold and then resets the flow stats.
    • packet-timeout is the time interval in which timestamps are collected for a packet. It must be larger than the time it takes the same packet to appear at all tap points. Every timeout generates a sample for the flow associated with the packet.
    • flow-timeout is the time after which a flow that no longer receives packets is evicted and a report for the flow is generated. The timeout refreshes each time a new packet is received for the flow; if packets keep arriving within the flow timeout value, the flow is never evicted.
The following example illustrates configuring flow-diff using the steps mentioned earlier:
dmf-controller-1(config)# managed-service managed_service_1
dmf-controller-1(config-managed-srv)# service-interface switch delivery1 ethernet1
dmf-controller-1(config-managed-srv)# 1 flow-diff
dmf-controller-1(config-managed-srv-flow-diff)# collector 192.168.1.1
dmf-controller-1(config-managed-srv-flow-diff)# l3-delivery-interface l3-iface-1
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface f1 destination filter-interface f2
dmf-controller-1(config-managed-srv-flow-diff)# latency-table-size small|medium|large
dmf-controller-1(config-managed-srv-flow-diff)# packet-timeout 100
dmf-controller-1(config-managed-srv-flow-diff)# flow-timeout 4000
dmf-controller-1(config-managed-srv-flow-diff)# sample-count-threshold 10000

Configuring Tap Points

Configure tap points using tap-point-pair parameters in the flow-diff submode, identifying each tap point by one of three identifiers: filter interface name, policy name, or filter interface group.
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source <Tab>
filter-interface    policy-name    filter-interface-group

Both source and destination tap points must use compatible identifiers; policy-name cannot be combined with filter-interface or filter-interface-group.

The filter-interface-group option accepts any configured filter interface group, which represents a collection of tap points in push-per-filter mode. Use this option, for ease of configuration, when a group of tap points all expect traffic from the same source tap point or group of source tap points. For example:

  1. Instead of having two separate tap-point-pairs to represent A → B, A → C, use a filter-interface-group G = [B, C], and only one tap-point-pair A → G.
    dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface A destination filter-interface-group G
  2. With a topology like A → C and B → C, configure a filter-interface-group G = [A, B], and tap-point-pair G → C.
    dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface-group G destination filter-interface C

There are some restrictions to keep in mind while configuring tap-point-pairs:

  • source and destination must exist and cannot refer to the same tap point
  • A maximum of 1024 tap points, and therefore a maximum of 512 tap-point-pairs, can be configured. These limits apply not to a single flow-diff managed service but across all managed services with the flow-diff action.
  • filter-interface-group must not overlap with other groups within the same managed service and cannot have more than 128 members.
  • When configuring multiple tap-point-pairs using filter-interface and filter-interface-group, an interface that is part of a filter interface group cannot simultaneously be used as an individual source or destination within the same managed service.

There are several caveats on what is accepted based on the VLAN mode:

Push per Filter

  • Use the filter-interface option to provide the filter interface name.
  • Use the filter-interface-group option to provide the filter interface group name.
  • The policy-name identifier is invalid in this mode.

Push per Policy

  • Accepts a policy-name identifier as a tap point identifier.
  • The filter-interface and filter-interface-group identifiers are invalid in this mode.
  • Policies configured as source and destination tap points within a tap-point-pair must not overlap.
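
For example, in push-per-policy mode a tap-point-pair might reference two policies as source and destination (the policy names below are hypothetical):
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source policy-name flow-diff-1 destination policy-name flow-diff-2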

Configuring Policy

Irrespective of the VLAN mode, configure a policy or policies so that the same packet can be tapped from two independent points in the network and then sent to the Service Node.

After creating a policy, add the managed service with flow-diff action as shown below:

dmf-controller-1 (config-policy)# use-managed-service service name sequence 1

There are several things to consider while configuring policies depending on the VLAN mode:

Push per Filter

  • Only one policy can contain the flow-diff service action.
  • A policy should include all filter-interfaces and filter-interface-groups configured as tap points in the flow-diff configuration. If filter interfaces or groups are missing from the policy, drops may be reported when in reality the packets from one end of the tap-point-pair are simply not forwarded to the Service Node. (A minimal policy sketch follows this list.)
  • Avoid adding filter interfaces or groups that are not part of any tap-point-pair; no latency or drop analysis is performed for them, so they only add unnecessary packets forwarded to the Service Node, which are reported as unexpected.
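
The following is a minimal sketch of such a push-per-filter policy. It assumes the managed service managed_service_1 from the earlier example, whose tap-point-pair uses filter interfaces f1 and f2; the policy name and match rule are illustrative, and additional policy options (such as delivery interfaces) may be required in a real deployment.
dmf-controller-1(config)# policy flow-diff-ppf
dmf-controller-1(config-policy)# filter-interface f1
dmf-controller-1(config-policy)# filter-interface f2
dmf-controller-1(config-policy)# 1 match any
dmf-controller-1(config-policy)# use-managed-service managed_service_1 sequence 1
dmf-controller-1(config-policy)# use-timestamping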

Push per Policy

  • Add the flow-diff service to two policies when using policy-name as source and destination identifiers. In this case, there are no restrictions on how many filter interfaces a policy can have.
  • A policy configured as one of the tap points will fail if it overlaps with the other policy in a tap-point-pair, or if the other policy does not exist.
In both VLAN modes, policies must have PTP timestamping enabled. To do so, use the following command:
dmf-controller-1 (config-policy)# use-timestamping
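
As an illustration, a push-per-policy setup might use two policies, one per tap point, each tapping a different filter interface and using the same flow-diff managed service; the managed service's tap-point-pair would then reference the two policy names, as shown earlier. The names below (flow-diff-1, flow-diff-2, f1, f2, managed_service_1) are hypothetical, and additional policy options may be required in a real deployment.
dmf-controller-1(config)# policy flow-diff-1
dmf-controller-1(config-policy)# filter-interface f1
dmf-controller-1(config-policy)# 1 match any
dmf-controller-1(config-policy)# use-managed-service managed_service_1 sequence 1
dmf-controller-1(config-policy)# use-timestamping
dmf-controller-1(config)# policy flow-diff-2
dmf-controller-1(config-policy)# filter-interface f2
dmf-controller-1(config-policy)# 1 match any
dmf-controller-1(config-policy)# use-managed-service managed_service_1 sequence 1
dmf-controller-1(config-policy)# use-timestamping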

Configuring PTP Timestamping

This feature depends on configuring PTP timestamping for the packet stream going through the tap points. Refer to the Resources section for more information on setting up PTP timestamping functionality.

Show Commands

The following show commands provide helpful information.

The show running-config managed-service managed service command helps check whether the flow-diff configuration is complete.
dmf-controller-1(config)# show running-config managed-service flow-diff 
! managed-service
managed-service flow-diff
service-interface switch DCS-7050CX3-32S ethernet2/4
!
1 flow-diff
collector 192.168.1.1
l3-delivery-interface AN-Data
tap-point-pair source filter-interface ingress destination filter-interface egress
The show managed-services managed service command provides status information about the service.
dmf-controller-1(config)# show managed-services flow-diff 
# Service Name Switch          Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 flow-diff    DCS-7050CX3-32S ethernet2/4      True      25Gbps              25Gbps             624Kbps               432Mbps

Use the show running-config policy policy command to verify that the policy includes the flow-diff service, that use-timestamping is enabled, and that the correct filter interfaces are used.

The show policy policy command provides detailed information about a policy, including any errors related to the flow-diff service. The Service Interface(s) section of the output shows the packets transmitted to the Service Node and the IPFIX packets received from the Service Node.

Note: When two policies form a tap-point-pair (one source and one destination), only one policy shows statistics for packets received from the Service Node. This is by design: the IPFIX packets transmitted from the Service Node use a single VLAN, so they can match only one policy.
dmf-controller-1 (config)# show policy flow-diff-1
Policy Name: flow-diff-1
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 4
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 2
Post Match Filter Traffic: 215Mbps
Total Delivery Rate: -
Total Pre Service Rate : 217Mbps
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: flow-diff
Installed Time : 2023-11-16 18:15:27 PST
Installed Duration : 19 minutes, 45 secs
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch   IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|------|--------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 BP1    7280SR3E Ethernet25 up    rx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF  Switch    IF Name    State Dir Packets Bytes  Pkt Rate Bit Rate Counter Reset Time
-|-------|---------|----------|-----|---|-------|------|--------|--------|------------------------------|
1 AN-Data 7050SX3-1 ethernet41 up    tx  81      117222 0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name Role         Switch          IF Name     State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|------------|------------|---------------|-----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 flow-diff    pre-service  DCS-7050CX3-32S ethernet2/4 up    tx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
2 flow-diff    post-service DCS-7050CX3-32S ethernet2/4 up    rx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Core Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Switch          IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|---------------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 7050SX3-1       ethernet7  up    rx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
2 7050SX3-1       ethernet56 up    rx  81       117222      0        -        2023-11-16 18:18:18.837000 PST
3 7050SX3-1       ethernet56 up    tx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
4 7280SR3E        Ethernet7  up    tx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
5 DCS-7050CX3-32S ethernet28 up    tx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
6 DCS-7050CX3-32S ethernet28 up    rx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
~ Failed Path(s) ~
None.

Syslog Messages

The Flow Diff Latency and Drop Analysis feature does not create Syslog messages.

Troubleshooting

Controller

Policies dictate how and what packets are directed to the Service Node. Policies must be able to stream packets from two distinct tap points so that the same packet gets delivered to the Service Node for flow-diff and drop analysis.

Possible reasons for flow-diff and drop analysis not working are:

  • flow-diff action is not added to both policies in a tap-point-pair in push-per-policy mode.
  • flow-diff action exists in a policy that does not have all filter interfaces or groups configured as tap-point-pairs in push-per-filter mode.

A policy programmed to use flow-diff service action can fail for multiple reasons:

  • The flow-diff configuration is incomplete: source or destination tap points are missing, a policy-name identifier is used in push-per-filter mode, or a filter-interface or filter-interface-group identifier is used in push-per-policy mode.
  • In the push-per-policy mode, the policy doesn’t match the source or destination tap point for any configured tap-point-pair.
  • In the push-per-policy mode, policy names configured as tap points within a tap-point-pair overlap or do not exist.
  • There are more than 512 tap-point-pairs configured or more than 1024 distinct tap points. These limits are global across all flow-diff managed services.
  • filter-interface-groups overlap with each other within the same managed service or have more than 128 group members.
  • A filter-interface is used individually as a tap point and as part of a filter-interface-group within the same managed service.

Reasons for failure are available in the policy's runtime state and can be viewed using the show policy policy name command.

After verifying the correct configuration of the policy and flow-diff action, enabling the log debug mode and reviewing the floodlight logs (use the fl-log command in bash mode) should show the ipfix-collector, traffic-metadata, and flow-diff gentable entries sent to the Service Node.

If no computed latency reports are generated, this can mean one of two things. First, the sample-count-threshold value is not being reached; either lower the sample-count-threshold until reports are generated or increase the number of unique packets per flow. Second, no flows are being evicted; the flow-timeout refreshes every time a new packet is received on the flow before the timeout occurs, so a flow that continuously receives packets never times out. Lower the flow-timeout value if the default of 4 seconds causes flows not to expire. After a flow expires, a report is generated for that flow.

The packet-timeout value must be larger than the time it takes for the same packet to be received at every tap point.

For A->B, if it takes 20ms for the packet to appear at B, packet-timeout must be larger than this time frame to collect timestamps for both these tap points and compute a latency in one sample.
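
For instance, in that case a value comfortably above the expected 20 ms transit time could be configured; the value shown is illustrative only:
dmf-controller-1(config-managed-srv-flow-diff)# packet-timeout 50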

If the Controller generates many unexpected reports, ensure taps are in the correct order in the Controller configuration. If there are many drop reports, ensure the same packet is received at all relevant taps within the packet-timeout window.

Limitations

  • Only 512 tap-point-pairs and 1024 distinct tap points are allowed.
  • filter-interface-group used as a tap point must not overlap with any other group within the same managed service and must not have more than 128 members.
  • A filter interface cannot be used individually as a tap point and as a part of a filter-interface-group simultaneously within the same managed service.
  • There is no chaining if a packet flows through three or more tap points in an A->B->C->...->Z topology. The only computed latency reports will be for tap-point-pairs A->B, A->C, …, and A->Z if these links are specified, but B->C, C->D, etc., will not be computed.
  • Hardware RSS firmware in the Service Node currently cannot parse L2 header timestamps, so all packets carrying L2 header timestamps are sent to the same lcore; however, RSS does distribute packets correctly to multiple lcores when using src-mac timestamping.
  • PTP timestamping doesn’t allow rewrite-dst-mac, so filter interfaces cannot be used as tap points in push-per-policy mode.
  • In push-per-policy, a policy can only have one filter interface.
  • Each packet is hashed from the L3 header onward to a 64-bit value; if two packets hash to the same value, the system assumes the underlying packets are identical.
  • Currently, on the flow-diff action in the Service Node, if packets are duplicated so that N copies of the same packet are received:
    • N-1 latencies are computed.
    • The ingress identifier is the earliest timestamp.
  • The system reports timestamps as unsigned 32-bit values, with the maximum timestamp being 2^32-1, corresponding to approximately 4.29 seconds.
  • Only min/mean/max latencies are currently reported.
  • Packets are hashed from the L3 header onwards, meaning if there is any corrupted data past the L3 header, it will lead to drop reports. The same packet must appear at two tap points to generate a computed latency report.
  • For A->B, if a packet appears at B before A, a report of type unexpected is generated.
  • The tap point at which the packet appears first (that is, with the earliest timestamp) is considered the source.
  • When the latency configuration is changed, the system may generate a few unexpected or drop reports initially.
  • Users must have good knowledge of network topology when setting up tap points and configuring timeout or sample threshold values. Improper configuration may lead to drop or unexpected reports.

Service Node Management Migration L3ZTN

After the first boot (initial configuration) is complete, an administrator can use the CLI to move a Service Node (SN) from an old DANZ Monitoring Fabric (DMF) Controller to a new one when the fabric is in Layer-3 topology mode.

Note: For appliances to connect to the Controller in Layer-3 Zero Touch Network (L3ZTN) mode, you must configure the Controller's deployment mode as pre-configure.

To migrate a Service Node's management to a new Controller, follow the steps outlined below:

  1. Remove the Service Node from the old Controller using the following command:
    controller-1(config)# no service-node service-node-1
  2. Connect the data NICs' sni interfaces to the new core fabric switch ports.
  3. SSH to the Service Node and configure the new Controller's IP address using the zerotouch l3ztn controller-ip command:
    service-node-1(config)# zerotouch l3ztn controller-ip 10.2.0.151
  4. Get the management MAC address (of interface bond0) of the Service Node using the following command:
    service-node-1(config)# show local-node interfaces 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Interfaces~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Interface Master Hardware address         Permanent hardware address Operstate Carrier Bond mode     Bond role
    ---------|------|------------------------|--------------------------|---------|-------|-------------|---------|
    bond0            78:ac:44:94:2b:b6 (Dell)                            up        up      active-backup
  5. Add the Service Node and its bond0 interface's MAC address (obtained in the step above) to the new Controller:
    controller-2(config)# service-node service-node-1
    controller-2(config-service-node)# mac 78:ac:44:94:2b:b6
  6. After associating the Service Node with the new Controller, reboot the Service Node.
  7. Once the Service Node is back online, the Controller should receive a ZTN request. If the Service Node's image differs from the Service Node image file on the new Controller, the mismatch triggers the Service Node to perform an auto-upgrade of the image and to reboot twice.
    controller-2# show zerotouch request
    41 78:ac:44:94:2b:b6 (Dell) 10.240.156.10 get-manifest 2024-06-12 23:14:42.284000 UTC ok The request has succeeded
    56 24:6e:96:78:58:b4 (Dell) 10.240.156.91 get-manifest 2024-06-12 23:13:38.633000 UTC ok The request has succeeded
  8. Then the Service Node should appear as a member of the new DMF fabric, which you can verify by using the following command:
    controller-2# show service-node service-node-1 details