VMware NSX-T: Evaluation Guide

Overview


The goal of this document is to offer a step-by-step NSX Evaluation Guide to test (some) NSX services:

  • Security Services
    • Micro-Segmentation (DFW)
  • Logical Networking Services
    • Logical Switching
    • Logical Routing (with distributed routing)
  • Operation Tools
    • Network Topology
    • Traceflow

NSX offers many more services, such as Load Balancing, VPN, IDS, NSX Intelligence, Federation, etc. Those are currently out of scope of this document. Also, to limit the ESXi/Storage requirements, this evaluation does not cover high availability, and only one element of each NSX component will be installed.

1. REQUIREMENTS

Here are the requirements for NSX-T Evaluation.


Compute & Storage

Compute and Storage for NSX-T Evaluation

 

 

Compute            Number   Version   Download
vCenter            1        7.0       download link
vCenter-Cluster    1+       n/a       n/a
ESXi per Cluster   2+       7.0       download link
CPU per ESXi       8+       n/a       n/a
RAM per ESXi       48GB+    n/a       n/a
NIC per ESXi       2+       n/a       n/a

Storage   Shared storage - Recommended for live vMotion tests
Size      500 GB

 

Networking

Networking for NSX-T Evaluation

 

VLAN              Number   Description
Management VLAN   11       VLAN where Management is running (vCenter / ESXi-Mgt / future NSX-Mgr / future EdgeNode-Mgt)
Overlay VLAN      12       VLAN where the future NSX Logical Switches Overlay will run

Physical Router interface   VLAN   IP                  MTU       Note
Management VLAN             11     192.168.50.1/24     1500
Overlay VLAN                12     192.168.51.1/24 *   1700+ *
Web VLAN                    16     10.16.1.1/24        1500      Needed for NSX Evaluation - Security only (no Logical Network)
External VLAN               3103   20.20.20.1/24       1500      Needed for NSX Evaluation - Logical Network + Security

* Since in this lab all Transport Nodes (ESXi / Edge Nodes) run the Overlay traffic in the same VLAN 12, there is actually no requirement to have an IP and MTU 1700+ on the physical router.

2. INSTALLATION OF NSX-T

Disclaimer: The install below is a minimal installation intended for a lab environment only. We do not recommend this install in a live production environment.

Flowchart of the minimal NSX-T installation

Element         Management (VLAN 11)   Overlay - TEP (VLAN 12)
vCenter         192.168.50.4           -
ESXi1           192.168.50.21          192.168.51.21
ESXi2           192.168.50.22          192.168.51.22
NSX-T Manager   192.168.50.5           -
Edge Node       192.168.50.31          192.168.51.31

Follow the steps below.

2.1. Download of NSX Manager OVA

 

Download VMware NSX-T Data Center 3.0

2.2. Deployment of NSX-T Manager

  • From vCenter, deploy NSX-T Unified Appliance OVA.

  • Select OVF file.

  • Enter NSX-T Manager VM name + vCenter folder for VM.

  • Select ESXi to host NSX-T Manager.

  • Review NSX-T Manager VM details.

  • Select NSX-T Manager VM size (Small).

  • Select storage for NSX-T Manager VM.

  • Select VDS Port Group for NSX-T Manager management vNIC (vCenter Management Port Group).

  • Enter NSX-T Manager information (passwords, hostname, IP, DNS, NTP). Important: Role name is "NSX Manager".

  • Review NSX-T Manager VM settings.

  • Once NSX-T Manager deployment is finished, start the VM.

 

2.3. Register NSX-T to vCenter

Note: NSX-T Manager requires a few minutes to fully start and get all its services running.

Log on to the NSX-T Manager UI.
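Before moving on, you can optionally confirm that the Manager services are up from the NSX-T CLI or the REST API. This is a minimal sketch using the lab Manager IP and admin account; adjust to your environment:

nsx-manager> get cluster status                  <-- from the NSX-T Manager console/SSH (admin user); the overall cluster status should be STABLE

root@lab3-jumphost:~# curl -k -u admin https://192.168.50.5/api/v1/cluster/status        <-- same check via the NSX-T REST API (curl prompts for the admin password)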

  • Configure the NSX-T License.
    Under "System - Settings - Licenses", click "Add".

 

  • Register NSX-T in vCenter (to allow the deployment of NSX elements into vCenter/ESXi from NSX).
    Under "System - Configuration - Fabric - Compute Managers", click "Add".


Warning: Thumbprint is missing

  • Validate NSX-T registration in vCenter.
    Under "System - Configuration - Fabric - Compute Managers", click "Refresh" (bottom-left).

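The registration can also be verified through the NSX-T REST API; a minimal sketch, assuming the lab Manager IP and admin credentials:

root@lab3-jumphost:~# curl -k -u admin https://192.168.50.5/api/v1/fabric/compute-managers        <-- the vCenter should be listed; its connection status is available under /api/v1/fabric/compute-managers/<id>/status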

2.4. ESXi Host Preparation

2.4.1. New VDS-NSX Creation
  • Create New VDS-NSX (for future NSX-T Logical Switches).
    From vCenter, under "Networking", select the Data Center, and right-click to create a "New Distributed Switch".
    For this lab, see the top of page for "Number of uplinks (1)",
    and "Default Port Group (none)".


New Distributed Switch Name and Location

New Distributed Switch Configure settings

New Distributed Switch Ready to complete

  • Add that VDS-NSX to ESXi.
    From vCenter, under "Networking", select the VDS-NSX, and right-click to "Add and Manage Hosts...".


VDS-NSX Add and Manage Hosts

VDS-NSX Select new hosts

VDS-NSX Manage physical adapters

VDS-NSX Manage VMKernel adapter

VDS-NSX Migrate VM networking

VDS-NSX Ready to complete

  • Configure that VDS-NSX with a large MTU (at least 1700).
    From vCenter, under "Networking", select the VDS-NSX, and right-click to "Settings - Edit Settings..." (an MTU check from the ESXi side is sketched below).

VDS-NSX Edit Settings
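To double-check the MTU from the hypervisor side, the setting can be read on each ESXi shell; a minimal sketch (the hostname is a lab assumption):

[root@ESXi1:~] esxcli network vswitch dvs vmware list        <-- the "MTU:" line of VDS-NSX should report 1700 or more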

2.4.2. Uplink Profile Creation
  • Create Uplink Profile for Transport Nodes ("VLAN-Overlay + NIC" information for ESXis + Edge Node).
    From NSX-T, under "System - Configuration - Fabric - Profiles - Uplink Profiles", click "Add".
    For this lab, see the top of page for VLAN for Overlay traffic information (12),
    and number of uplinks for "VDS - NSX-T" information (1 NIC).


2.4.3. Installation of NSX in ESXi
  • Configure NSX-T for ESXi.
    • Select each ESXi of vCenter-Cluster
      Under "System - Configuration - Fabric - Node - Host Transport Nodes - Managed by", select "Lab-vCenter".
      Select Type = VDS (to enable NSX into the existing "VDS-NSX" vCenter Distributed Switch),
      Mode = Standard,
      Transport Zone = "nsx-overlay-transportzone" (Default TZ for overlay traffic) + "nsx-vlan-transportzone" (Default TZ for VLAN traffic),
      Uplink Profile = "Lab-HostProfile" (with VLAN-Overlay information),
      IP (TEP) = Information on top of the page,
      Uplink = ESX VDS Uplink1.


    • For each ESXi, configure its new "VDS - NSX-T".
      Click "Configure NSX".

NSX Installation Configure NSX

    • For each ESXi, validate "VDS - NSX-T" creation (a TEP and MTU check is sketched below).
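Once the hosts are prepared, TEP connectivity and the overlay MTU can be verified from an ESXi shell. A minimal sketch, assuming the lab TEP IPs and that the TEP vmkernel interfaces are in the "vxlan" netstack (the default for NSX-T):

[root@ESXi1:~] esxcfg-vmknic -l                                          <-- lists the vmkernel interfaces; the TEP vmk (vxlan netstack) should show 192.168.51.21 on ESXi1
[root@ESXi1:~] vmkping ++netstack=vxlan -d -s 1672 192.168.51.22         <-- don't-fragment ping from the ESXi1 TEP to the ESXi2 TEP (1672 bytes payload + 28 bytes of headers = 1700)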

 

 

 

2.5. Deployment of Edge Node

Note: If you limit your evaluation to Security only (no Logical Network), and do not evaluate Logical Network + Security or the Operation Tools, you don't need to deploy Edge Nodes.

2.5.1. Creation of VDS Port Group "All VLAN"
  • Create a Port Group "All VLAN" (= VLAN trunk 0-4094) on the VDS.
    From vCenter, under "Networking", select the VDS-NSX, and right-click to "New Distributed Port Group...". For this lab, see the top of the page for this Port Group on the VDS.

    New Distributed Port Group

    New Distributed Port Group configure settings

    New Distributed Port Group ready to complete

2.5.2. Installation of NSX Edge Node

  • Deploy 1 Edge Node on ESXi.
    Under "System - Configuration - Fabric - Nodes - Edge Transport Nodes", click "Add Edge VM".
    Select Form Factor = Medium (useful if you want to test Load Balancing later),
    enable SSH for admin and root if you want to try deeper troubleshooting later,
    use the Management and Switch (TEP) IP addresses from the top of the page, and
    select Transport Zones = "nsx-overlay-transportzone" (default TZ for Overlay traffic) and "nsx-vlan-transportzone" (default TZ for VLAN traffic).

Add Edge VM name and description

Add Edge VM Credentials

Add Edge VM Configure Deployment

Add Edge VM Configure Node Settings

Add Edge VM Configure NSX

  • Validate Edge Node deployment.
    Under "System - Configuration - Fabric - Nodes - Edge Transport Nodes", click "Refresh" (bottom of the UI). An additional check from the Edge Node CLI and the API is sketched below.
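In addition to the UI, the Edge Node registration can be checked from its own CLI and from the Manager API; a minimal sketch with the lab values:

EdgeNode1> get managers                                                          <-- from the Edge Node console/SSH (admin user); the NSX-T Manager (192.168.50.5) should show as Connected

root@lab3-jumphost:~# curl -k -u admin https://192.168.50.5/api/v1/transport-nodes        <-- both ESXi hosts and the Edge Node should be listed as transport nodes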

 

 

2.5.3. Creation of Edge Cluster

  • Create 1 Edge Cluster with EdgeNode1 as member.
    Under "System - Configuration - Fabric - Nodes - Edge Clusters", click "Add".
    Select EdgeNode1 as member of that Edge Cluster.

  • Validate Edge Cluster creation.
    Under "System - Configuration - Fabric - Nodes - Edge Clusters", click "Refresh".


 

 

 

 

 

3. NSX EVALUATION


Overview

 

NSX-T Services evaluated in this Evaluation Guide:

  • Security Services
    • Micro-Segmentation (DFW)
  • Logical Networking Services
    • Logical Switching
    • Logical Routing (with distributed routing)
  • Operation tools
    • Network Topology
    • Traceflow

NSX offers many more services, such as Load Balancing, VPN, IDS, NSX Intelligence, Federation, etc. Those are currently out of scope of this document. To limit the ESXi/Storage requirements, this evaluation does not cover high availability, and only one element of each NSX component will be installed.

This NSX Evaluation is organized in three parts:
    3.1. Security only (no Logical Network)
    3.2. Logical Network + Security
    3.3. Operation Tools

3.1. Security only (no Logical Network)

In this section, you'll configure 2 Web VMs on a new VLAN and provide micro-segmentation (DFW) on those 2 VMs.
Important Note: In this section, the routing is still fully done by the physical fabric.
So your physical router needs an interface for that new VLAN (10.16.1.1/24 in this lab).

 

 

Logical View



Physical View


 

The Security evaluation done in this chapter is focusing on NSX L4 Stateful North/South and East/West firewalling capabilities. NSX-T offers more than L4 Stateful firewall capabilities, such as Layer 7 Firewalling, Intrusion Detection System (IDS), and an eco-system with Security Vendors like Checkpoint, Fortinet, or Palo Alto Networks. More information on https://www.vmware.com/products/nsx.html and https://nsx.techzone.vmware.com/.

 

3.1.1. Create VLAN in NSX-T

 

Log on to the NSX-T Manager UI.

  • Create new VLAN "Web" + interface on physical router for this lab, see on top of the page for the physical router interface + VLAN information.
    There is no steps described in this document, as it varies per physical router.
  • Create new VLAN Segment "VLAN-Web".
    Under "Networking - Segments", click "Add Segment".
    For this lab, see on top of the page for the VLAN number (16).
    Select Transport Zone = "nsx-vlan-transportzone" (Default TZ for VLAN traffic),
    VLAN = "16", and no extra configuration for that Segment.


  • Validate the new VLAN Segment "VLAN-Web" is available on vCenter.
    From vCenter, under "Networking", validate "VLAN-Web" is under the VDS-NSX.
    For this lab, see the top of the page for the VM IP addresses.

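For reference, the same VLAN Segment can be created through the NSX-T Policy API instead of the UI. This is a minimal sketch using the lab values; the transport-zone ID is a placeholder to replace with the ID of "nsx-vlan-transportzone" in your environment:

root@lab3-jumphost:~# curl -k -u admin -X PATCH https://192.168.50.5/policy/api/v1/infra/segments/VLAN-Web \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "VLAN-Web",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<nsx-vlan-transportzone-id>",
        "vlan_ids": ["16"]
      }'

A GET on the same URL then returns the created Segment.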

 

3.1.2. Create 2 Web VMs in VLAN "VLAN-Web"

 

  • Create 2 Web VMs in VLAN "VLAN-Web".
    From vCenter, under "Host and Clusters", validate the 2 Web VMs are created and connected to "VLAN-Web".

  • Validate connectivity from external to those VMs.
    From the external client, validate ping communication to the VMs is allowed,
    and validate SSH communication to the VMs is also allowed.
    Note: I'm using ping + SSH, but you can use any protocol of your choice.

root@lab3-jumphost:~# ping 10.16.1.11

PING 10.16.1.11 (10.16.1.11) 56(84) bytes of data.

64 bytes from 10.16.1.11: icmp_seq=1 ttl=63 time=0.565 ms

64 bytes from 10.16.1.11: icmp_seq=2 ttl=63 time=0.593 ms

^C

--- 10.16.1.11 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1022ms

rtt min/avg/max/mdev = 0.565/0.579/0.593/0.014 ms

 

root@lab3-jumphost:~# ssh root@10.16.1.11

The authenticity of host '10.16.1.11 (10.16.1.11)' can't be established.

ECDSA key fingerprint is SHA256:uncl2WyCuNSTwllyvR2He8JEKqZn0K2qdhYB06L+bKE.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.16.1.11' (ECDSA) to the list of known hosts.

root@10.16.1.11's password:

Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-116-generic x86_64)

 

 * Documentation:  https://help.ubuntu.com

 * Management:     https://landscape.canonical.com

 * Support:        https://ubuntu.com/advantage

 

217 packages can be updated.

136 updates are security updates.

 

Last login: Mon Apr  6 16:58:28 2020

root@VLANWebeb-VM1:~#

root@lab3-jumphost:~# ping 10.16.1.12

PING 10.16.1.12 (10.16.1.12) 56(84) bytes of data.

64 bytes from 10.16.1.12: icmp_seq=1 ttl=63 time=1.21 ms

64 bytes from 10.16.1.12: icmp_seq=2 ttl=63 time=0.441 ms

^C

--- 10.16.1.12 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 0.441/0.828/1.216/0.388 ms

 

root@lab3-jumphost:~# ssh root@10.16.1.12

The authenticity of host '10.16.1.12 (10.16.1.12)' can't be established.

ECDSA key fingerprint is SHA256:uncl2WyCuNSTwllyvR2He8JEKqZn0K2qdhYB06L+bKE.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.16.1.12' (ECDSA) to the list of known hosts.

root@10.16.1.12's password:

Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-116-generic x86_64)

 

 * Documentation:  https://help.ubuntu.com

 * Management:     https://landscape.canonical.com

 * Support:        https://ubuntu.com/advantage

 

217 packages can be updated.

136 updates are security updates.

 

 

Last login: Mon Apr  6 16:59:23 2020

root@VLANWeb-VM2:~#

 

 

 

3.1.3. Configure Micro-Segmentation

 

3.1.3.1. Create NSX Group "VLAN Web VMs"

To simplify the configuration of micro-segmentation, NSX offers the ability to group workloads with static or dynamic membership criteria, such as VM name, tags, segment, etc.

  • Create NSX Group "Group VLAN Web VMs" (an equivalent Policy API call is sketched below).
    From NSX-T, under "Inventory - Groups", click "Add Group". For this lab, we create a dynamic Membership Criteria based on VM Name "starts with VLAN Web".

Group VLAN Web VMs

  • Validate membership of NSX Group "Group VLAN Web VMs".
    From NSX-T, under "Inventory - Groups", click "View Members" of "Group VLAN Web VMs".

View Members Group VLAN Web VMs
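The same Group can be defined through the Policy API; a minimal sketch, assuming a Group ID of "Group-VLAN-Web-VMs" and the lab Manager IP (the criterion mirrors the UI: VM Name starts with "VLAN Web"):

root@lab3-jumphost:~# curl -k -u admin -X PATCH https://192.168.50.5/policy/api/v1/infra/domains/default/groups/Group-VLAN-Web-VMs \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "Group VLAN Web VMs",
        "expression": [ {
          "resource_type": "Condition",
          "member_type": "VirtualMachine",
          "key": "Name",
          "operator": "STARTSWITH",
          "value": "VLAN Web"
        } ]
      }'

A GET on the same URL then returns the Group definition.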

3.1.3.2. Create Micro-Segmentation (DFW)

 

Micro-segmentation is defined in "Categories" (Emergency, Infrastructure, Environment, Application), with security "Sections" + "Rules" in each. The security rules in the different sections will be pushed to the relevant VMs' vNICs based on the "Applied To" defined in the Section and/or Rule.

  • Create a new DFW Section (= Policy).
    From NSX-T, under "Security - Distributed Firewall - Category Specific Rules", click "Add Policy".
    For this lab, let's create a Section named "Section-VLANWeb",
    with an Applied To = "Group VLAN Web VMs".


  • Create new DFW Rules (the same rules can be pushed via the Policy API, sketched below).
    From NSX-T, under "Security - Distributed Firewall - Category Specific Rules", select the section "Section-VLANWeb" and click "Add Rule".
    For this lab, let's create the following rules:

Name       Sources              Destinations         Services      Profiles   Applied To   Action
Internal   Group VLAN Web VMs   Group VLAN Web VMs   HTTP + ICMP   None       DFW          Allow
External   Any                  Group VLAN Web VMs   HTTP          None       DFW          Allow
Default    Any                  Group VLAN Web VMs   Any           None       DFW          Reject


 

 

  • Publish DFW.
    From NSX-T, under "Security - Distributed Firewall - Category Specific Rules", click "Publish" (top-right).
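The same Section and rules can also be pushed in one call through the Policy API; a minimal sketch with the lab names (the group path assumes the Group ID used earlier, and the service paths are the predefined NSX-T HTTP/ICMP services; adjust the IDs to the ones present in your deployment):

root@lab3-jumphost:~# curl -k -u admin -X PATCH https://192.168.50.5/policy/api/v1/infra/domains/default/security-policies/Section-VLANWeb \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "Section-VLANWeb",
        "category": "Application",
        "scope": ["/infra/domains/default/groups/Group-VLAN-Web-VMs"],
        "rules": [
          { "display_name": "Internal", "sequence_number": 10,
            "source_groups": ["/infra/domains/default/groups/Group-VLAN-Web-VMs"],
            "destination_groups": ["/infra/domains/default/groups/Group-VLAN-Web-VMs"],
            "services": ["/infra/services/HTTP", "/infra/services/ICMP-ALL"],
            "action": "ALLOW" },
          { "display_name": "External", "sequence_number": 20,
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/Group-VLAN-Web-VMs"],
            "services": ["/infra/services/HTTP"],
            "action": "ALLOW" },
          { "display_name": "Default", "sequence_number": 30,
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/Group-VLAN-Web-VMs"],
            "services": ["ANY"],
            "action": "REJECT" }
        ]
      }'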

 

 

 

3.1.4. Validate Micro-Segmentation

 

  • Validate connectivity from external to those VMs.
    From the external client, validate HTTP communication to the VMs is allowed,
    and validate ICMP communication to the VMs is NOT allowed.
    Note: I'm using the web client "curl" to access the web page "/test.php", but you can use any web client.

root@lab3-jumphost:~# curl http://10.16.1.11/test.php

The Client IP@ is: 10.114.218.216<br>

The Server IP@ is: 10.16.1.11

 

root@lab3-jumphost:~# ping 10.16.1.11

PING 10.16.1.11 (10.16.1.11) 56(84) bytes of data.

From 10.16.1.11 icmp_seq=1 Destination Host Prohibited

From 10.16.1.11 icmp_seq=2 Destination Host Prohibited

^C

--- 10.16.1.11 ping statistics ---

2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1013ms

root@lab3-jumphost:~# curl http://10.16.1.12/test.php

The Client IP@ is: 10.114.218.216<br>

The Server IP@ is: 10.16.1.12

 

root@lab3-jumphost:~# ping 10.16.1.12

PING 10.16.1.12 (10.16.1.12) 56(84) bytes of data.

From 10.16.1.12 icmp_seq=1 Destination Host Prohibited

From 10.16.1.12 icmp_seq=2 Destination Host Prohibited

^C

--- 10.16.1.12 ping statistics ---

2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1001ms

  • Validate L2 connectivity between those VMs.
    From VLANWeb-VM1, validate HTTP + ICMP communication to VLANWeb-VM2 is allowed,
    and validate SSH communication to VLANWeb-VM2 is NOT allowed.

root@VLANWebeb-VM1:~# ping 10.16.1.12

PING 10.16.1.12 (10.16.1.12) 56(84) bytes of data.

64 bytes from 10.16.1.12: icmp_seq=1 ttl=64 time=1.80 ms

64 bytes from 10.16.1.12: icmp_seq=2 ttl=64 time=1.23 ms

^C

--- 10.16.1.12 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 1.231/1.518/1.805/0.287 ms


root@VLANWebeb-VM1:~# curl http://10.16.1.12/test.php

The Client IP@ is: 10.16.1.11<br>

The Server IP@ is: 10.16.1.12

 

root@VLANWebeb-VM1:~# ssh 10.16.1.12

ssh: connect to host 10.16.1.12 port 22: Connection refused

3.2. Logical Network and Security

In this section, you'll configure Logical Networks for Tenants Green and Blue (Logical Routers = "Tier-1" and Logical Switches = "Segments").
Those Tenant Logical Networks will have access to the physical fabric via a Logical Router ("Tier-0").
Routing between the Tier-0 and the physical router will be done via "static routing" or "BGP".

Important Note: In this section, the internal Tenant routing (East/West) is done in "Logical Space" by NSX. The physical router provides the routing between the "logical space" and the "physical world" (North/South).

Logical View


Physical View

The Network evaluation done in this chapter is focusing on NSX Switching and Routing capabilities. NSX-T offers more than Switching and Routing capabilities, such as NAT, Load Balancing, VPN.


The Security evaluation done in this chapter is focusing on NSX L4 Stateful North/South and East/West firewalling capabilities. NSX-T offers more than L4 Stateful firewall capabilities, such as Layer 7 Firewalling, Intrusion Detection System (IDS), and an eco-system with Security Vendors like Checkpoint, Fortinet, or Palo Alto Networks. More information on https://www.vmware.com/products/nsx.html and https://nsx.techzone.vmware.com/.

 

 

3.2.1. Create Tenants Logical Networks

 

Log on to the NSX-T Manager UI.

  • Create new Logical Routers "T1-xxx".
    Under "Networking - Connectivity - Tier-1 Gateways", click "Add Tier-1 Gateway".
    For this lab, see on top of the page for the T1 name (T1-Tenant1, and T1-Tenant2).
    Configure the T1 Name.

Create new Logical Routers "T1-xxx".

  • Create the new Overlay Segments "LSxxx" (an equivalent Policy API call is sketched after this list).
    Under "Networking - Segments", click "Add Segment".
    For this lab, see the top of the page for the Segment names (LS1.1, LS1.2, and LS2.1).
    Select Connectivity = "T1-xxx" ("LS1.1 + LS1.2 on T1-Tenant1" and "LS2.1 on T1-Tenant2"),
    Transport Zone = "nsx-overlay-transportzone" (Default TZ for Overlay traffic),
    Subnets = "10.x.x.1/24".

  • Validate the new Overlay Segments "LSxxx" are available on vCenter.
    From vCenter, under "Networking", validate each "LSxxx" is under the VDS-NSX.

  • Create 2 Web VMs in each Overlay Segment "LSxxx".
    From vCenter, under "Host and Clusters", validate the VMs are created and connected to "LSxxx". For this lab, see the top of the page for the VM IP addresses.
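For reference, a Tier-1 Gateway and an Overlay Segment attached to it can also be created through the Policy API; a minimal sketch with the lab names (the Segment ID "LS1-1" and the overlay transport-zone ID are lab assumptions/placeholders):

root@lab3-jumphost:~# curl -k -u admin -X PATCH https://192.168.50.5/policy/api/v1/infra/tier-1s/T1-Tenant1 \
  -H "Content-Type: application/json" \
  -d '{ "display_name": "T1-Tenant1" }'

root@lab3-jumphost:~# curl -k -u admin -X PATCH https://192.168.50.5/policy/api/v1/infra/segments/LS1-1 \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "LS1.1",
        "connectivity_path": "/infra/tier-1s/T1-Tenant1",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<nsx-overlay-transportzone-id>",
        "subnets": [ { "gateway_address": "10.1.1.1/24" } ]
      }'

The first call creates the Tier-1 Gateway (not yet linked to a Tier-0); the second creates the overlay Segment LS1.1 attached to it with gateway 10.1.1.1/24.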

 

 

 

3.2.2. Configure North/South Communication (T0 / Physical Router)

 

3.2.2.1. Configure physical router + Create T0-Provider + Connect T1s to T0-Provider

 

  • Create the new VLAN External + interface on the physical router.
    For this lab, see the top of the page for the physical router interface + VLAN information.
    There are no steps described in this document, as it varies per physical router.
  • Create the VLAN Segment "External".
    Under "Networking - Segments", click "Add Segment".
    For this lab, see the top of the page for the VLAN number (3103).
    Select Transport Zone = "nsx-vlan-transportzone" (Default TZ for VLAN traffic),
    VLAN = "3103".

  • Create new Logical Routers "T0-Provider".
    Under "Networking - Connectivity - Tier-0 Gateways", click "Add Gateway Tier-0".
    For this lab, see on top of the page for the T0 settings.
    Select Edge Cluster = ""EdgeCluster1",
    and the following settings:
    Interface "20.20.20.2/24" on Segment "External" on Edge Node "EdgeNode1".

Create new Logical Routers "T0-Provider".

Set Interfaces

  • Connect the different T1s to the T0-Provider.
    For each T1, under "Networking - Connectivity - Tier-1 Gateways", edit the T1 and link it to "T0-Provider".

Then configure "3.2.2.2. Static Routing." OR "3.2.2.3. Dynamic Routing."

3.2.2.2. Configure North/South Routing Static

 


  • Configure static routes on the physical router.
    Subnets "10.1.1.0/24" + "10.1.2.0/24" + "10.2.1.0/24" have a static route via "20.20.20.2". There are no steps described in this document, as it varies per physical router.
    The routing table of the physical router is shown below:

physical-router@lab3:~$ show ip route

Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,

       I - ISIS, B - BGP, > - selected route, * - FIB route

 

S>* 10.1.1.0/24 [1/0] via 20.20.20.2, eth3

S>* 10.1.2.0/24 [1/0] via 20.20.20.2, eth3

S>* 10.2.1.0/24 [1/0] via 20.20.20.2, eth3

  • Configure a static route on T0-Provider (an equivalent Policy API call is sketched below).
    Default gateway via "20.20.20.1".
    Under "Networking - Connectivity - Tier-0 Gateways", edit the "T0-Provider" and under "Routing - Static Routes", set a "Static Route".
    And configure "Set Next Hops" = "20.20.20.1".

3.2.2.3. Configure North/South Routing Dynamic with BGP

  • Configure BGP on the physical router.
    There are no steps described in this document, as it varies per physical router.
    The BGP configuration of the physical router is shown below:

physical-router@lab3:~$ show configuration commands | grep bgp

set protocols bgp 2 neighbor 20.20.20.2 'default-originate'     <-- Advertise itself for default gateway

set protocols bgp 2 neighbor 20.20.20.2 remote-as '1'

  • Configure BGP on T0-Provider.
    Under "Networking - Connectivity - Tier-0 Gateways", edit the "T0-Provider" and under "BGP", configure the "Local AS" = "1".

    And configure the "BGP Neighbors" = "20.20.20.1", with "Remote AS number" = "2", and with "Source Addresses" = "20.20.20.2".

Set BGP Neighbors

  • Configure T0-Provider "Route Distribution".
    Under "Networking - Connectivity - Tier-0 Gateways", edit the "T0-Provider" and under "Route Redistribution", add redistribution of T1 Subnets.

Configure T0-Provider "Route Distribution".

And configure the "T1 Connected Interfaces & Segments".

configure the "T1 Connected Interfaces & Segments".

  • Configure each T1-xxx "Route Advertisement".
    Under "Networking - Connectivity - Tier-1 Gateways", edit each "T1-xxx" and under "Route Advertisement", select "All Connected Segments & Service Ports".

  • Validate learned BGP routes on physical router.

physical-router@lab3:~$ show ip bgp neighbors 20.20.20.2

BGP neighbor is 20.20.20.2, remote AS 1, local AS 2, external link

  BGP version 4, remote router ID 20.20.20.2

  BGP state = Established, up for 00:00:16

  <snip>

 

physical-router@lab3:~$ show ip bgp neighbors 20.20.20.2 routes

BGP table version is 0, local router ID is 192.168.52.1

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,

              r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

 

   Network          Next Hop            Metric LocPrf Weight Path

*> 10.1.1.0/24      20.20.20.2               0             0 1 ?

*> 10.1.2.0/24      20.20.20.2               0             0 1 ?

*> 10.2.1.0/24      20.20.20.2               0             0 1 ?

 

Total number of prefixes 3

  • Validate the BGP status of T0-Provider.
    Under "Networking - Connectivity - Tier-0 Gateways", expand "BGP", and click on "BGP Neighbors".
    And click on the "i" next to "Status" ("Connection State" should be "ESTABLISHED").
    The same check from the Edge Node CLI is sketched below.
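The BGP session can also be checked from the Edge Node CLI, where the Tier-0 Service Router runs; a minimal sketch (the VRF number varies per deployment, use the one reported for the T0-Provider SR):

EdgeNode1> get logical-routers                     <-- note the VRF number of the SERVICE_ROUTER_TIER0 (T0-Provider)
EdgeNode1> vrf 1                                   <-- enter that VRF ("1" is just an example value)
EdgeNode1(tier0_sr)> get bgp neighbor summary      <-- the 20.20.20.1 neighbor should show an Established state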

 

3.2.3. Validate Networking

 

  • Validate North/South connectivity from external to those VMs.
    From external client, validate communication to VMs

root@lab3-jumphost:~# ping 10.1.1.11

PING 10.1.1.11 (10.1.1.11) 56(84) bytes of data.

64 bytes from 10.1.1.11: icmp_seq=1 ttl=61 time=1.64 ms

64 bytes from 10.1.1.11: icmp_seq=2 ttl=61 time=1.20 ms

^C

--- 10.1.1.11 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 1.202/1.424/1.646/0.222 ms

root@lab3-jumphost:~# ping 10.2.1.11

PING 10.2.1.11 (10.2.1.11) 56(84) bytes of data.

64 bytes from 10.2.1.11: icmp_seq=1 ttl=61 time=8.01 ms

64 bytes from 10.2.1.11: icmp_seq=2 ttl=61 time=1.67 ms

 

--- 10.2.1.11 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 1.672/4.845/8.019/3.174 ms

  • Validate East/West connectivity from VMs to VMs.
    From VM3, validate communication to VM4, VM5, and VM7.

root@LS1-1-VM3:~# ping 10.1.1.12

PING 10.1.1.12 (10.1.1.12) 56(84) bytes of data.

64 bytes from 10.1.1.12: icmp_seq=1 ttl=64 time=1.82 ms

64 bytes from 10.1.1.12: icmp_seq=2 ttl=64 time=0.828 ms

^C

--- 10.1.1.12 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1002ms

rtt min/avg/max/mdev = 0.828/1.325/1.822/0.497 ms

root@LS1-1-VM3:~# ping 10.1.2.11

PING 10.1.2.11 (10.1.2.11) 56(84) bytes of data.

64 bytes from 10.1.2.11: icmp_seq=1 ttl=63 time=3.00 ms

64 bytes from 10.1.2.11: icmp_seq=2 ttl=63 time=0.469 ms

^C

--- 10.1.2.11 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 0.469/1.735/3.002/1.267 ms

root@LS1-1-VM3:~# ping 10.2.1.11

PING 10.2.1.11 (10.2.1.11) 56(84) bytes of data.

64 bytes from 10.2.1.11: icmp_seq=1 ttl=61 time=0.482 ms

64 bytes from 10.2.1.11: icmp_seq=2 ttl=61 time=0.596 ms

^C

--- 10.2.1.11 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 999ms

rtt min/avg/max/mdev = 0.482/0.539/0.596/0.057 ms

 

 

3.2.4. Configure and Validate Security (Micro-Segmentation)

 

Follow the procedure detailed in section 3.1.3 to implement the following Micro-Segmentation:


 

 

 

 

 

To simplify the configuration of micro-segmentation, NSX offers the ability to group workloads with static or dynamic membership criteria, such as VM name, tags, segment, etc.

Groups                Members
Group-Tenant1-LS1.1   Segment LS1.1
Group-Tenant1-LS1.2   Segment LS1.2
Group-Tenant1         Group-Tenant1-LS1.1 + Group-Tenant1-LS1.2
Group-Tenant2         Segment LS2.1
Group-AllTenants      Group-Tenant1 + Group-Tenant2
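Groups that contain Segments or other Groups (such as Group-Tenant1 above) can be defined in the Policy API with path-based expressions; a minimal sketch, assuming the child groups were created with the IDs shown:

root@lab3-jumphost:~# curl -k -u admin -X PATCH https://192.168.50.5/policy/api/v1/infra/domains/default/groups/Group-Tenant1 \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "Group-Tenant1",
        "expression": [ {
          "resource_type": "PathExpression",
          "paths": [
            "/infra/domains/default/groups/Group-Tenant1-LS1.1",
            "/infra/domains/default/groups/Group-Tenant1-LS1.2"
          ]
        } ]
      }'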

Micro-segmentation is defined in "Categories" (Emergency, Infrastructure, Environment, Application), with security "Sections" + "Rules" in each. The security rules in the different sections will be pushed to the relevant VMs' vNICs based on the "Applied To" defined in the Section and/or Rule.

Section: Tenant1 (Applied To = Group-Tenant1)

Rule-Name             Sources               Destinations          Services      Profiles   Applied To   Action
Internal-LS1.1 Deny   Group-Tenant1-LS1.1   Group-Tenant1-LS1.1   Any           None       DFW          Reject
Internal-LS1.2 Deny   Group-Tenant1-LS1.2   Group-Tenant1-LS1.2   Any           None       DFW          Reject
L3 East/West Allow    Group-Tenant1-LS1.1   Group-Tenant1-LS1.2   HTTP + ICMP   None       DFW          Allow
L3 East/West Deny     Group-Tenant1-LS1.1   Group-Tenant1-LS1.2   Any           None       DFW          Reject

Section: Tenant2 (Applied To = Group-Tenant2)

Rule-Name        Sources         Destinations    Services   Profiles   Applied To   Action
Internal allow   Group-Tenant2   Group-Tenant2   Any        None       DFW          Allow

Section: Cross-Tenants (Applied To = Group-AllTenants)

Rule-Name              Sources         Destinations    Services   Profiles   Applied To   Action
Cross-Tenants Allow1   Group-Tenant1   Group-Tenant2   HTTP       None       DFW          Allow
Cross-Tenants Allow2   Group-Tenant2   Group-Tenant1   HTTP       None       DFW          Allow
Cross-Tenants Deny1    Group-Tenant1   Group-Tenant2   Any        None       DFW          Reject
Cross-Tenants Deny2    Group-Tenant2   Group-Tenant1   Any        None       DFW          Reject

Section: External (Applied To = Group-AllTenants)

Rule-Name        Sources   Destinations       Services   Profiles   Applied To   Action
External Allow   Any       Group-AllTenants   HTTP       None       DFW          Allow
External Deny    Any       Group-AllTenants   Any        None       DFW          Reject

 

Here is a partial configuration view:


 

 

 

3.3. Operation Tools

In this section, you'll use 2 popular Operation tools that greatly help Network and Security admins:

  • Network Topology
  • Traceflow

The Operation evaluation done in this chapter is focusing on those 2 tools.
NSX-T offers more than those tools, such as Port Mirroring, IPFIX, Syslog, advanced status and statistics on its different services.


3.3.1. Network Topology

 

What has been created so far is the following logical topology:

Network Topology

 

 

NSX offers a graphical representation of its network topology.

Log on to the NSX-T Manager UI.

  • Display the NSX Network Topology.
    Under "Networking - Network Topology".


  • And specific Network elements, such as T0 information.



3.3.2. Traceflow

 

Traceflow allows you to inject a packet into the network and monitor its flow across the network.
Traceflow allows you to identify the path a packet takes to reach its destination or, conversely, where a packet is dropped along the way.
Each entity reports the packet handling on input and output, so you can determine whether issues occur when receiving a packet or when forwarding the packet.

  • Check the Traceflow from VM3 HTTP to VM7.

 

Under "Plan & Troubleshoot - Traceflow",
select the Source "LS1.1-VM3",
to Destination "LS2.1-VM7",
Protocol Type "TCP" with Source Port = "5000" to Destination Port = "80".


And click "Trace".

You can follow the path through the different Logical NSX Routing + Security elements on the top half of the screen.


You can also follow each step of the different NSX elements on the bottom half of the screen (and on which device it's running).


Note: Worth noting, even though that traffic is routed, it actually does not leave ESXi1 (192.168.50.21), thanks to the power of NSX service distribution :-)

  • Check the Traceflow from VM3 HTTP to VM1.

 

Under "Plan & Troubleshoot - Traceflow",
select the Source "LS1.1-VM3",
to Destination "VLANWeb-VM1",
Protocol Type "TCP" with Source Port = "5000" to Destination Port = "80".


And click "Trace".

You can follow the path through the different Logical NSX Routing + Security elements on the top half of the screen.


You can also follow each step of the different NSX elements on the bottom half of the screen (and on which device it's running).


Note: The traceflow tracks the packet through the different NSX elements until it reaches the physical fabric, and ends there.

 

 
