My Homelabs

I work as a consultant and therefore need “some” lab environment available. Sometimes it is fine to go through demo labs, sometimes it is great to have a shared team lab, and sometimes Hands-on Labs are enough.

Most of the time, I am working in my own lab environment.

1. Introduction

I am a big fan of SUPERMICRO. They have so many motherboards to choose from. I am using several A2SDI-8C-HLN4F boards. Each comes with an Intel Atom C3758 processor with 8 cores (consuming little power) and supports up to 256 GB RAM.

For VMware Cloud Foundation, I am going with two X10DRi-T4+ boards and build it nested.

NSX-T Part 7 – VM Connectivity – How to check if it works?

To wrap up this part of the series, a segment in the overlay and two VMs are required. So go to Networking, Connectivity, Segments, ADD SEGMENT:

Enter a name, the connected T1 gateway, the overlay transport zone, and the subnet information, then hit SAVE.

I used two PhotonOS 3 VMs with DHCP setup (inside the segment). See my connectivity tests here:

photon-lin2 has 10.10.11.11:

I can ping the DGW in the overlay:

I can ping photon-lin1 which has 10.10.11.12:

I get an answer from http://www.google.de:
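For reference, this is roughly what I ran on photon-lin2. The overlay gateway address 10.10.11.1 is an assumption on my side here, use whatever you configured as the segment gateway:

ip addr show eth0 # shows 10.10.11.11 assigned via DHCP
ping -c 3 10.10.11.1 # default gateway of the overlay segment (assumed to be .1)
ping -c 3 10.10.11.12 # photon-lin1
curl -I http://www.google.de # an HTTP response proves routing and SNAT towards the Internet work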

So that’s it for now – thank you for reading. Maybe this is helpful for IT people who do not come from a networking background.

NSX-T Part 6 – Create T0 and T1 Gateway

What does the Tier-0 Gateway do?

It handles the connection to the physical world, runs BGP as its routing protocol, and can be set up as an active/active or an active/standby cluster.

What does the Tier-1 Gateway do?

It connects to one Tier-0 northbound and to one or more overlay segments southbound. When it provides centralized services on the Edge, it runs as an active/standby pair. It is often called a tenant router, dedicated to a customer.

Network Topology

This is how it will look in my environment. The IPs between T0 and T1 are managed by NSX-T.

Home Lab Use Case

In my environment, the VyOS router will act as my “physical world” and I will work with static routing with only one network and two VMs. I will later upgrade to dynamic routing and BGP peering.

The next part consists of

  1. Create a Segment
  2. Create T0 GW
  3. Create T1 GW
  4. Configure NAT in VyOS

Setup of a Segment

First, go to Networking, Connectivity, Segments, ADD SEGMENT. Enter a Segment Name, set Connected Gateway to None, select the Vlan Transport Zone, and enter 0 in the VLAN field:

This segment will be used for the Tier-0 configuration.
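If you prefer the API over the UI, this kind of VLAN-backed segment can also be created via the NSX-T Policy API. A rough sketch with curl – the manager FQDN, the password, the segment name uplink-vlan-3009, and the transport zone ID are placeholders of mine, not values from this lab; the vlan_ids entry of "0" matches the 0 entered in the UI:

curl -k -u 'admin:<password>' -X PATCH \
  https://<nsx-manager-fqdn>/policy/api/v1/infra/segments/uplink-vlan-3009 \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "uplink-vlan-3009", "vlan_ids": ["0"], "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>"}'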

Setup of Tier-0 Gateway

Go to Networking, Connectivity, Tier-0 Gateways, ADD GATEWAY, Tier-0. Choose a name and HA Mode as well as the Edge Cluster and hit SAVE.

Continue configuring the Tier-0 GW and go to INTERFACES:

Click ADD INTERFACE, add a Name (something descriptive makes it easier later), an IP address, and choose the Segment to connect to (VLAN 3009). Select an Edge Node and SAVE the configuration:

Next is ROUTING, Static Routes, Set, and ADD STATIC ROUTE:

The next hop (in VLAN 3009) is my VyOS “physical world” router.
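To check the result on the data plane later, you can SSH to one of the Edge Nodes (admin user) and look at the Tier-0 service router. A short sketch – the VRF number 1 is just an example, take the one listed for the Tier-0 SR in your output:

get logical-routers # note the VRF number of the Tier-0 service router (SR)
vrf 1 # enter that VRF context
get route # the static route and the connected interfaces should show up here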

Setup of Tier-1 Gateway

Go to Networking, Connectivity, Tier-1 Gateways, ADD TIER-1 GATEWAY and enter a name, the linked T0-GW, and the Edge Cluster:

Route Advertisement needs to be configured:

At a minimum, “All Connected Segments & Service Ports” needs to be selected. I selected “All NAT IP’s” and “All LB VIP Routes” as well for later.
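The same Tier-1 settings can be pushed via the Policy API as well. A sketch only – the gateway name t1-homelab, the Tier-0 ID, the manager FQDN, and the password are placeholders, and the Edge Cluster assignment (a locale-services child object) is left out here:

curl -k -u 'admin:<password>' -X PATCH \
  https://<nsx-manager-fqdn>/policy/api/v1/infra/tier-1s/t1-homelab \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "t1-homelab", "tier0_path": "/infra/tier-0s/<t0-id>", "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_NAT", "TIER1_LB_VIP"]}'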

Setup of VyOS as my “physical world”

To be able to connect a VM from the overlay segment to the Internet, additional configuration steps are required. First, see the screenshot:

VyOS needs a static route for the 10.10.11.0/24 network to forward traffic to 172.16.8.11 in VLAN 3009:

configure

set protocols static route 10.10.11.0/24 next-hop 172.16.8.11

The remaining commands deal with the Source NAT configuration so that the VMs are NATed and can reach the outside world:

set nat source rule 10 outbound-interface eth0 # eth0 is my uplink towards the FRITZ!Box; adjust to your topology

set nat source rule 10 source address 10.10.11.0/24

set nat source rule 10 destination address 0.0.0.0/0

set nat source rule 10 translation address masquerade

commit

save

exit
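After the commit, a quick check on VyOS does not hurt, for example:

show ip route # the 10.10.11.0/24 route via 172.16.8.11 should be listed
show nat source rules # rule 10 with masquerade should show up
show nat source translations # shows live translations once a VM talks to the Internet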

NSX-T Part 4 – Fabric Configuration continued

Create Nodes continued

In System, Fabric, Nodes, Edge Clusters, add an Edge Cluster object that is used later as a container for the Edge Transport Nodes. Choose a name and keep the default profile, which contains the BFD (Bidirectional Forwarding Detection) settings used for high availability checks.

Edge Cluster

An Edge Cluster is used later to host and balance centralized services, the Service Routers (SRs), while the Distributed Routers (DRs) are instantiated on all transport nodes.

Edge Transport Node

Here we are in the management network, VLAN 3011, in which the NSX-T Manager is located; the Edge Transport Nodes will get their management IPs from the same subnet. An FQDN for the node is required, so create a DNS entry first.

You could go with the Small form factor, but since I will keep playing and create a Load Balancer later, I use the Medium form factor. Under Advanced Resource Reservations you could remove the memory reservation if needed (please do not change this in a production environment).

Enter credentials – for learning and/or troubleshooting purposes, you could activate SSH and Root SSH Login.

For the deployment of the Edge Node, choose the Edge Cluster.

In Configure Node Settings, the IP addresses entered are again from the management network in which the NSX-T Manager is located.

In this step, please see the information about the Edge Switch Name in the white box of the screenshot:

First, configure the Overlay Transport Zone:

We use the default “nsx-edge-single-nic-uplink-profile” here with IP addresses from the pool created earlier for VLAN 3010.

Second, configure the Vlan Transport Zone for the uplink to the VyOS router:

We use the default “nsx-edge-single-nic-uplink-profile” here. The port group of VLAN 3009 is used as the connection to the outside world. Once the Edge has been deployed, it needs to be added to the Edge Cluster object:

In this overview, the assignment of the Edge VM’s vNICs to the “fastpath” is visible; eth0 is always the management NIC.

You should add a second Edge Transport Node and add it to the Edge-Cluster object as well.
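Before moving on, I like to take a quick look at the Edge Nodes themselves. SSH to an Edge as admin and run, as far as I know, plain nsxcli commands like these:

get managers # the connection to the NSX-T Manager should show as connected
get logical-routers # at this point only VRF 0 (TUNNEL) exists, which carries the TEP traffic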

NSX-T Part 3 – Fabric Preparation and Configuration

The upcoming steps are:

  1. Create Transport Zones
  2. Create ESXi Host Uplink Profile
  3. Create ESXi Host Transport Nodes

and more …

Create Transport Zones

Overlay Transport Zone:

Vlan Transport Zone:

We need the transport zones later.

Create Profile

One uplink profile is required for the ESXi hosts for the overlay network; each host gets a Tunnel Endpoint (TEP) in the overlay.

The teaming policy Load Balance Source can be used to actively use, for example, two network cards in the host. The names in Active Uplinks are placeholders; I used “Host-TN-TEP-1” and “Host-TN-TEP-2”.

The VLAN needs to be configured here, but not the MTU. I have set my VDS to 9000 and NSX-T sets the MTU to 1600 in its configuration, so all is good so far.

ESXi Host Transport Nodes

Under System, Fabric, Nodes, Host Transport Nodes, Managed by <your VCSA>, take one of the hosts in the resource cluster to start with CONFIGURE NSX:

Click NEXT and configure the settings:

I am using the VDS and all the previously created pieces here: the Overlay Transport Zone, the Uplink Profile, and the IP Address Pool.

Here you select the vmnics to be used by the ESXi host. Repeat the steps for the second host. Finally, it is looking good:
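With both hosts prepared, the TEP connectivity and the jumbo frame path can be tested directly from an ESXi shell. A sketch – replace the placeholder with the TEP IP of the other host (visible in the UI or via the first command):

esxcli network ip interface ipv4 get # lists the vmkernel interfaces, including the newly created TEP vmk
vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP> # -d = do not fragment; 1572 bytes plus headers roughly matches the 1600 MTU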

OK, the next step is around Edge Transport Nodes, which follows in part 4.

NSX-T Part 2 – NSX Manager Deployment and Compute Manager Registration

The next step is to install the NSX-T Manager. For demo purposes, one is sufficient. I will use VLAN 3011 for the NSX-T management network. We will later use two other VLANs (one for the Tunnel Endpoints and one as the uplink to the VyOS router).

Continue to populate the required information. See my entries here:

I created a DNS subdomain for my NSX-T Deployment which I use to organize the components:

After the deployment has finished, start your NSX-T manager and login. Keep in mind that there is a password expiration of 90 days in NSX-T Manager. You can change the expiration value.

  1. SSH to NSX-T manager
  2. set user admin password-expiration <choose whatever you need from 1 to 9999>
  3. Check out the VMware documentation for more information here: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/installation/GUID-89E9BD91-6FD4-481A-A76F-7A20DB5B916C.html
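As far as I know, the expiration can also be disabled completely instead of just extended:

clear user admin password-expiration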

After the initial login to NSX-T Manager, register the VCSA via System, Fabric, Compute Managers, ADD COMPUTE MANAGER:

Registered:

IP ADDRESS POOL

Before starting with the next tasks which are around Transport Zones, Profiles, and Nodes, an IP ADDRESS POOL is needed. Go to Networking, IP Management, IP Address Pools in NSX-T Manager and click ADD IP ADDRESS POOL:

Configure an IP Range using one of the VLANs created before:

and save the configuration.

The official Design Guide

How NSX-T 3.1 should be setup and configured strongly depends on business requirements and use cases. I am writing here how one can install NSX-T 3.1 in a home lab environment. I do not necessarily include best practices here.

The official design guide can be found on this page and the ones that follow: https://blogs.vmware.com/networkvirtualization/2019/09/nsx-t-design-guide.html/

NSX-T Part 1 – Basic Network Environment

Starting with the basic setup of my FRITZ!Box with a static route and the configuration of my VyOS router.

FRITZ!Box Static Route

First, I will add a static route for the 172.16.8.0/21 network (netmask 255.255.248.0). This will be divided into /24 networks, which gives us the networks 172.16.8.0/24 to 172.16.15.0/24.

As the gateway address I will use my VyOS virtual routing appliance. For that, I pick .253 on the LAN side and will take care of the WAN side later.

VyOS Deployment

After the deployment of VyOS, it needs to be configured.

After VyOS is up and running, the eight port groups need to be configured with their VLAN tags in vCenter Server, and the network adapters of the VyOS VM connected to them.

The first adapter is the one configured during the deployment; adapters 2 to 9 correspond to the networks mentioned at the beginning. Network adapter 10 will not be used.

VyOS Configuration

Running show interfaces shows that eth0 in VyOS is vnic1 in the VCSA:

The next steps are about the following topics:

  1. Default static route
  2. Interface configuration
  3. DNS configuration

Default static route

I recommend using SSH to VyOS to start the configuration:

configure

set protocols static route 0.0.0.0/0 next-hop 192.168.178.1 distance '1'

To save the configuration:

commit

save

Interface configuration

To set up the interfaces:

configure

set interfaces ethernet eth1 address 172.16.8.1/24 # do this for eth2 to eth8 with 172.16.9.1/24 up to 172.16.15.1/24

To save the configuration:

commit

save

Check the configuration:
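For example:

show interfaces # operational mode, lists all interfaces with addresses and state
run show interfaces # the same command from within configure mode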

To set the MTU size to 9000, the following command needs to be executed:

configure

set interfaces ethernet eth1 mtu 9000 # (example)

commit

save

DNS configuration

Setup of DNS domain name:

configure

set system domain-name myhomelab.local

commit

save

You need to leave the config level with exit.
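Optionally, if VyOS itself should be able to resolve names (handy for testing from the router), a name server can be set in the same way. The FRITZ!Box address is just my choice here, use whatever fits your lab:

configure
set system name-server 192.168.178.1
commit
save
exit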

Intro – NSX-T 3.1 Understandable

A step-by-step guide for non-networking engineers.

There are tons of blogs, videos, books, and documentation links around. I was always missing some information and got stuck somewhere. The following parts do not have specifics like Kubernetes, TKG, or Cloud Director in mind. It is just NSX-T 3.1 on vSphere 7 Update 1 in my home lab.

I will use two vSphere 7 clusters (one edge cluster and one resource cluster), each with two hosts. And I will use the VDS 7 to simplify the configuration a bit.

In Germany, the FRITZ!Box (https://avm.de/) is a very popular SOHO DSL router, and I have some of them in my home lab as well. Additionally, I am using a VyOS (https://vyos.io/) router, which I downloaded in the past as an .ova file, to provide some networks I need. I will come to the details.

Update on January 11, 2021:

To give you an idea how all is connected, please see this overview: