NSX-T Part 7 – VM Connectivity – How to check if it works?

To wrap up the series, a segment in the overlay and two VMs are required. Go to Networking, Connectivity, Segments, ADD SEGMENT:

Enter a name, the connected T1 GW, the Overlay Transport Zone, and the subnet information, then hit SAVE.
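
The subnet is entered as the gateway address of the segment in CIDR notation. In my lab that looks like the line below; the .1 as gateway is an assumption, it is simply the first address of the 10.10.11.0/24 network used throughout this part:

Subnet (Gateway CIDR): 10.10.11.1/24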

I used two PhotonOS 3 VMs configured for DHCP inside the segment. See my connectivity tests here:

photon-lin2 has 10.10.11.11:

I can ping the DGW in the overlay:

I can ping photon-lin1 which has 10.10.11.12:

I get an answer from http://www.google.de:
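
For reference, the tests inside photon-lin2 boil down to these commands; 10.10.11.1 as the DGW of the segment is an assumption based on my subnet, the other values come from the setup above:

ping -c 3 10.10.11.1
ping -c 3 10.10.11.12
curl -I http://www.google.de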

So that’s it for now – thank you for reading. Maybe this is helpful for IT people who do not come from a networking background.

NSX-T Part 6 – Create T0 and T1 Gateway

What does the Tier-0 Gateway do?

It handles the connection to the physical world, runs BGP as its routing protocol, and can be set up as an active/active or an active/standby cluster.

What does the Tier-1 Gateway do?

It connects to one Tier-0 gateway northbound and to one or more overlay segments southbound; when it is placed on an Edge Cluster, it runs as an active/standby pair. It is often called a tenant router, dedicated to a customer.

Network Topology

This is what it looks like in my environment. The IPs between T0 and T1 are managed by NSX-T (by default the transit link addresses come from 100.64.0.0/16).

Home Lab Use Case

In my environment, the VyOS router will act as my “physical world”. I will start with static routing, a single network, and two VMs, and later upgrade to dynamic routing with BGP peering.

The next steps are:

  1. Create a Segment
  2. Create T0 GW
  3. Create T1 GW
  4. Configure NAT in VyOS

Setup of a Segment

First, go to Networking, Connectivity, Segments, ADD SEGMENT. Enter a Segment Name, set Connected Gateway to None, select the VLAN Transport Zone, and enter 0 in the VLAN field:

This segment will be used for the Tier-0 configuration.

Setup of Tier-0 Gateway

Go to Networking, Connectivity, Tier-0 Gateways, ADD GATEWAY, Tier-0. Choose a name, the HA Mode, and the Edge Cluster, then hit SAVE.

Continue configuring the Tier-0 GW and go to INTERFACES:

Click ADD INTERFACE, add a Name (something descriptive makes it easier later), an IP address, and the Segment to connect to (VLAN 3009). Select an Edge Node and SAVE the configuration:

Next is ROUTING, Static Routes, Set, and ADD STATIC ROUTE:

The Next Hop (in VLAN 3009) points to my VyOS “physical world”.
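
In my lab this is simply a default route towards VyOS. The VyOS address in VLAN 3009 does not appear in this post, so the next hop below is only a placeholder:

Network: 0.0.0.0/0
Next Hop: 172.16.8.1 (VyOS interface in VLAN 3009)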

Setup of Tier-1 Gateway

Go to Networking, Connectivity, Tier-1 Gateways, ADD TIER-1 GATEWAY and enter a name, the linked T0-GW, and the Edge Cluster:

Route Advertisement needs to be configured:

At a minimum, “All Connected Segments & Service Ports” needs to be selected. I selected “All NAT IPs” and “All LB VIP Routes” as well for later.
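
If you prefer the API over the UI, the same Tier-1 settings can also be applied via the NSX-T Policy API. This is just a sketch from my lab: the manager FQDN, the gateway IDs T1-GW and T0-GW, and the admin user are placeholders, and curl will prompt for the password:

curl -k -u admin -X PATCH -H "Content-Type: application/json" \
  https://nsx-manager.lab.local/policy/api/v1/infra/tier-1s/T1-GW \
  -d '{"display_name": "T1-GW", "tier0_path": "/infra/tier-0s/T0-GW", "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_NAT", "TIER1_LB_VIP"]}'

Here TIER1_CONNECTED, TIER1_NAT, and TIER1_LB_VIP correspond to the three route advertisement options selected in the UI.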

Setup of VyOS as my “physical world”

To be able to connect a VM from the overlay segment to the Internet, additional configuration steps are required in VyOS. See the screenshot:

First, VyOS needs a static route for the 10.10.11.0/24 network that forwards traffic to 172.16.8.11 (the Tier-0 uplink interface) in VLAN 3009:

configure
set protocols static route 10.10.11.0/24 next-hop 172.16.8.11

The rest of the commands deal with the Source NAT configuration so that the VMs are NATed and can connect to the outside world:

set nat source rule 10 source address 10.10.11.0/24
set nat source rule 10 destination address 0.0.0.0/0
set nat source rule 10 translation address masquerade
commit
save
exit
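
Depending on the VyOS version, the commit may complain that the masquerade rule is missing an outbound interface. In that case add the Internet-facing interface to the rule; eth0 below is only a placeholder for whatever interface that is in your router:

configure
set nat source rule 10 outbound-interface eth0
commit
save
exit

Afterwards you can check the rule and the active translations from operational mode with “show nat source rules” and “show nat source translations”.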

NSX-T Part 5 – Overlay Network – How to check if it works?

After all the configuration is done, some checks are helpful to see if everything works well.

SSH to the edge via the management IP (here 172.16.10.51 in VLAN 3011) and log in as admin with your password:

If you want to disable the timeout of the CLI, enter “set cli-timeout 0” and hit Enter.

To get an overview of the interfaces, run “get interface”:

You can run “get route”, but this only shows the management network routing (you should be able to ping the management gateway from there) – it has nothing to do with the overlay.

To go deeper, enter “vrf 0”. From there, run “get forwarding”:

Here you are in the overlay. If you can ping the TEPs of the hosts, the DGW, and the TEP of the edge, all looks good:
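
Put together, such a check session looks roughly like this; the prompt and the addresses are examples, so replace them with the TEPs and the DGW of your own environment:

nsx-edge-01> set cli-timeout 0
nsx-edge-01> get interface
nsx-edge-01> get route
nsx-edge-01> vrf 0
nsx-edge-01(vrf)> get forwarding
nsx-edge-01(vrf)> ping <TEP of an ESXi host>
nsx-edge-01(vrf)> ping <DGW of the overlay segment>
nsx-edge-01(vrf)> exit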

Later on, we will check the communication of the VMs in the overlay with each other and, of course, the connection to the Internet.

NSX-T Part 4 – Fabric Configuration continued

Create Nodes continued

In System, Fabric, Nodes, Edge Clusters, add an Edge Cluster object, which is used later as a container for the Edge Transport Nodes. Choose a name and keep the default profile, which contains the BFD (Bidirectional Forwarding Detection) settings for high-availability checks.

Edge Cluster

An Edge Cluster is used later to place and balance centralized services, i.e. the Service Routers (SRs) of the gateways; the Distributed Routers (DRs) are distributed across all transport nodes.

Edge Transport Node

Here we are in the management network, VLAN 3011, in which the NSX-T Manager is located; the Edge Transport Nodes will get IPs from the same subnet. An FQDN for the node is required, so create a DNS entry first.

You could go with the Small form factor, but since I will continue to play and create a Load Balancer as well, I use the Medium form factor. Under Advanced Resource Reservations you could remove the memory reservation if needed (please do not change this in a production environment).

Enter credentials – for learning and/or troubleshooting purposes, you could activate SSH and Root SSH Login.

For the deployment of the Edge Node, choose the Edge Cluster.

In Configure Node Settings, the IP addresses entered are again from the management network in which the NSX-T Manager is located.

In this step, please see the information about the Edge Switch Name in the white box of the screenshot:

First, configure the Overlay Transport Zone:

We use the default “nsx-edge-single-nic-uplink-profile” here with IP addresses from the pool created earlier for VLAN 3010.

Second, configure the VLAN Transport Zone for the uplink to the VyOS router:

We use the default “nsx-edge-single-nic-uplink-profile” here. The port group of VLAN 3009 is used as the connection to the outside world. Once the Edge has been deployed, it needs to be added to the Edge Cluster object:

In this overview, the assignment of the vNICs of the Edge VM to the “fastpath” interfaces is visible; eth0 is always the management NIC.

You should deploy a second Edge Transport Node and add it to the Edge Cluster object as well.
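
Before moving on, it does not hurt to SSH into each Edge Node and check that it is connected to the management plane; “get managers” should list the NSX-T Manager(s) as Connected (the prompt below is just an example name):

nsx-edge-01> get managers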

NSX-T Part 3 – Fabric Preparation and Configuration

The next steps are:

  1. Create Transport Zones
  2. Create ESXi Host Uplink Profile
  3. Create ESXi Host Transport Nodes

and more …

Create Transport Zones

Overlay Transport Zone:

VLAN Transport Zone:

We need the transport zones later.

Create Profile

One uplink profile is required for the ESXi hosts for the overlay network; it defines how the Tunnel Endpoints (TEPs) of the hosts in the overlay are created.

The teaming policy Load Balance Source can be used to actively use, for example, two network cards in the host. The names in Active Uplinks are placeholders; I used “Host-TN-TEP-1” and “Host-TN-TEP-2”.

The VLAN needs to be configured here, but not the MTU. I have set my VDS to 9000 and NSX-T sets the MTU to 1600 in its configuration, so all is good so far.

ESXi Host Transport Nodes

Under System, Fabric, Nodes, Host Transport Nodes, Managed by <your VCSA>, pick one of the hosts in the resource cluster and start with CONFIGURE NSX:

Click NEXT and configure the settings:

Here we use the VDS and all the pre-created parts: the Overlay Transport Zone, the Uplink Profile, and the IP Address Pool.

Here you select the vmnics to be used by the ESXi host. Repeat the steps for the second host. Finally, it is looking good:
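
To double-check the host TEPs from the ESXi side, you can list the TEP vmkernel interfaces and ping the TEP of the other host through the overlay netstack. The vmkernel number and the remote TEP address are placeholders; -s 1572 verifies that the 1600-byte overlay MTU fits (1572 bytes payload plus 28 bytes ICMP/IP header):

esxcli network ip interface ipv4 get -N vxlan
vmkping ++netstack=vxlan -I vmk10 -d -s 1572 <TEP of the other host>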

OK, the next step is about the Edge Transport Nodes, which follows in Part 4.