After the initial deployment of VMware Cloud Foundation, there are several steps you should consider for your VCF environment. For an overview, see the VMware documentation on “Deploying the Management Domain” from the link below.
I have already added the Repository Settings for the SDDC Manager and configured the backup for SDDC Manager, NSX-T Data Center, and vCenter Server.
After that, I created a Microsoft Certificate Authority based on Windows Server 2022 and integrated it by following the VMware documentation step by step here.
Generate Certificate Signing Requests (CSRs)
In the first screenshot you can see that the connection to the SDDC Manager is not secure.
Generate Signed Certificates
You should be able to continue with the process.
Install Certificates
The VMware documentation worked perfectly fine for me.
Cloud Builder has been imported and configured with an IP address on the native VM network I am using for VCF (in my case VLAN 1611), and the parameter sheet has been filled in.
Step by step:
I am using no special hardware; it is nested:
Checking prerequisites:
Upload workbook:
Starting validation:
Now I am seeing some issues which need to be fixed (Common name doesn’t match):
In my case, I needed to do the following:
Set correct FQDN
Regenerate certificates
Restart services
esxcli system hostname set --fqdn=<FQDN_ESXi_Host> # check before with esxcli system hostname get
/sbin/generate-certificates # to regenerate certificate
/etc/init.d/hostd restart && /etc/init.d/vpxa restart # restarting services
We are good to go:
Finally, it is deployed:
I can start the SDDC Manager:
To be clear, I had my challenges using a nested environment. I will dig into it and see what I can use from other people to speed up my basic deployment and configuration of ESXi hosts. First, I will check out the configuration of the components involved.
First you may ask: you are a VMware employee and you work with VyOS? Answer: why not? They are quite simple to set up, and I will need some of them later in my nested lab to configure BGP for VCF.
I. Basic Configuration
After downloading a current rolling release, create a VM based on Debian 11 (64-bit) with 1 GB RAM, 1 CPU, and a 10 GB disk. When the VM has booted its operating system, issue
install image
and follow the dialogue to make the image persistent across reboots.
Configure Interfaces
I will work with several VLANs as well as sub-interfaces (VIF) routed by the VyOS appliance.
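As a sketch, a routed sub-interface (VIF) on VyOS looks like this; the interface eth1, the VLAN ID 1611, and the 192.0.2.1/24 address are placeholders for illustration, not my actual addressing plan:

```
configure
set interfaces ethernet eth1 vif 1611 address 192.0.2.1/24
set interfaces ethernet eth1 vif 1611 description 'example VLAN 1611'
commit
save
```

The VIF creates a tagged sub-interface on eth1, so one vNIC can route several VLANs.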
Configure Static Route and first VyOS Appliance
My FRITZ!Box DSL router represents my core network. The VyOS router(s) provide services to my virtual machines in several VLANs and provide Internet connectivity.
First we configure one interface:
Then we give it one of our IP addresses in the network and configure the static route as well:
set interfaces ethernet eth0 address 192.168.178.253/24
set protocols static route 0.0.0.0/0 next-hop 192.168.178.1 distance '1'
Configure Interfaces
First VyOS instance looks like this now:
Second VyOS Appliance
The initial starting point here is exactly the same as in the first screenshot in this blog article. I will do the basic config followed by the corresponding screenshot. Make sure you are using different IP addresses for your interfaces. We will configure virtual IP addresses (VIPs) once we implement the Virtual Router Redundancy Protocol (VRRP).
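As a preview, a minimal VRRP sketch for the first appliance could look like the following; the group name LAN, VRID 10, the priority, and the .254 VIP are assumptions for illustration. The second appliance would get the same group with a lower priority:

```
configure
set high-availability vrrp group LAN vrid 10
set high-availability vrrp group LAN interface eth0
set high-availability vrrp group LAN priority 200
set high-availability vrrp group LAN virtual-address 192.168.178.254/24
commit
save
```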
I work as a consultant, so I need to have “some” lab environment available. Sometimes it is fine to go through demo labs, sometimes it is great to have a shared team lab, and sometimes Hands-on Labs are enough.
Most of the time, I am working in my own lab environment.
1. Introduction
I am a big fan of SUPERMICRO. They have so many motherboards to choose from. I am using some A2SDI-8C-HLN4F boards (click here to get the details). It comes with an Intel Atom C3758 processor with 8 cores (consuming less power) and supports up to 256 GB RAM.
For VMware Cloud Foundation, I am going with two X10DRi-T4+ boards (click here to get the details) and will build it nested.
What does the Tier-0 Gateway do?
It handles the connection to the physical world, runs BGP as its routing protocol, and can be set up as an active/active or an active/standby cluster.
What does the Tier-1 Gateway do?
It connects to one Tier-0 northbound and to one or more overlay networks southbound, and it can likewise be set up as an active/active or an active/standby cluster. It is often called a tenant router, dedicated to a customer.
Network Topology
This is how it looks in my environment. The IPs between T0 and T1 are managed by NSX-T.
Home Lab Use Case
In my environment, the VyOS router will act as my “physical world” and I will work with static routing with only one network and two VMs. I will later upgrade to dynamic routing and BGP peering.
The next part consists of
Create a Segment
Create T0 GW
Create T1 GW
Configure NAT in VyOS
Setup of a Segment
First, go to Networking, Connectivity, Segments, ADD SEGMENT. Enter a Segment Name, set Connected Gateway to None, add the VLAN Transport Zone, and enter 0 in the VLAN field:
This segment will be used for the Tier-0 configuration.
Setup of Tier-0 Gateway
Go to Networking, Connectivity, Tier-0 Gateways, ADD GATEWAY, Tier-0. Choose a name and HA Mode as well as the Edge Cluster and hit SAVE.
Continue configuring the Tier-0 GW and go to INTERFACES:
Click ADD INTERFACE, add a Name (something descriptive makes it easier for you), an IP address, and choose the Segment to connect to (VLAN 3009). Select an Edge Node and SAVE the configuration:
Next is ROUTING, Static Routes, Set, and ADD STATIC ROUTE:
The Next Hop (in VLAN 3009) points to my VyOS “physical world”.
Setup of Tier-1 Gateway
Go to Networking, Connectivity, Tier-1 Gateways, ADD TIER-1 GATEWAY and enter a name, the linked T0-GW, and the Edge Cluster:
Route Advertisement needs to be configured:
At a minimum, “All Connected Segments & Service Ports” needs to be selected. I selected “All NAT IP’s” and “All LB VIP Routes” as well for later.
Setup of VyOS as my “physical world”
To be able to connect a VM from the overlay segment to the Internet, additional configuration steps are required. First, see the screenshot:
First, VyOS needs a static route for the 10.10.11.0/24 network to forward traffic to 172.16.8.11 in VLAN 3009:
configure
set protocols static route 10.10.11.0/24 next-hop 172.16.8.11
The remaining commands deal with the Source NAT configuration so that the VMs are NATed and can connect to the outside world (note that the rule also needs an outbound interface; here I assume eth0 is the Internet-facing uplink):
set nat source rule 10 outbound-interface eth0
set nat source rule 10 source address 10.10.11.0/24
set nat source rule 10 destination address 0.0.0.0/0
set nat source rule 10 translation address masquerade
In System, Fabric, Nodes, Edge Clusters, add an Edge Cluster object used later as a container for the edge transport nodes. Choose a name and keep the default profile, which contains the high-availability check settings for BFD (Bidirectional Forwarding Detection).
Edge Cluster
An Edge Cluster is used to balance services like Service Routers (SRs) and Distributed Routers (DRs) later.
Edge Transport Node
Here we are in the management network, VLAN 3011, in which the NSX-T Manager is located; the IPs for the Edge Transport Nodes will come from the same subnet. An FQDN for the node is required, so create a DNS entry first.
You could go with the Small form factor, but since I will continue to play and create a Load Balancer as well, I use the Medium form factor. Check Advanced Resource Reservations, where you could remove the memory reservation if needed (please do not change this in a production environment).
Enter credentials – for learning and/or troubleshooting purposes, you could activate SSH and Root SSH Login.
For the deployment of the Edge Node, choose the Edge Cluster.
In Configure Node Settings, the IP addresses entered here are from the management network in which the NSX-T manager is located as well.
In this step, please see the information about the Edge Switch Name in the white box of the screenshot:
First, configure the Overlay Transport Zone:
We use the default “nsx-edge-single-nic-uplink-profile” here with IP addresses from the pool created earlier for VLAN 3010.
Second, configure the Vlan Transport Zone for the uplink to the VyOS router:
We use the default “nsx-edge-single-nic-uplink-profile” here. The port group of VLAN 3009 is used as the connection to the outside world. Once the Edge has been deployed, it needs to be added to the Edge Cluster object:
In this overview, the assignment of the vNICs of the Edge VM to the “fastpath” is visible; eth0 is always the management NIC.
You should add a second Edge Transport Node and add it to the Edge-Cluster object as well.
One profile is required for the ESXi hosts for the overlay network. This defines the Tunnel Endpoint (TEP) in the overlay.
The teaming policy Load Balance Source can be used to actively use, for example, two network cards in the host. The names in Active Uplinks are placeholders; I used “Host-TN-TEP-1” and “Host-TN-TEP-2”.
The VLAN needs to be configured here, but not the MTU. I have set my VDS to 9000 and NSX-T sets the MTU to 1600 in its configuration, so all is good so far.
ESXi Host Transport Nodes
Under System, Fabric, Nodes, Host Transport Nodes, Managed by <your VCSA>, take one of the hosts in the resource cluster to start with CONFIGURE NSX:
Click NEXT and configure the settings:
We use the VDS and all the pre-created parts like the Overlay Transport Zone, Profile, and Address Pool here.
Here you select the vmnics to be used by the ESXi host. Repeat the steps for the second host. Finally, it is looking good:
OK, the next step is around Edge Transport Nodes, which follows in part 4.
The next step is to install the NSX-T Manager. For demo purposes, one is sufficient. I will use VLAN 3011 for the management network of NSX-T. We will later use two other VLANs (one for Tunnel Endpoints and one as uplink to the VyOS router).
Continue to populate the required information. See my entries here:
I created a DNS subdomain for my NSX-T Deployment which I use to organize the components:
After the deployment has finished, start your NSX-T manager and login. Keep in mind that there is a password expiration of 90 days in NSX-T Manager. You can change the expiration value.
SSH to NSX-T manager
set user admin password-expiration <choose whatever you need from 1 to 9999>
After the initial login to NSX-T Manager, register the VCSA via System, Fabric, Compute Managers, ADD COMPUTE MANAGER.
Registered:
IP ADDRESS POOL
Before starting with the next tasks which are around Transport Zones, Profiles, and Nodes, an IP ADDRESS POOL is needed. Go to Networking, IP Management, IP Address Pools in NSX-T Manager and click ADD IP ADDRESS POOL:
Configure an IP Range using one of the VLANs created before:
and save the configuration.
The official Design Guide
How NSX-T 3.1 should be set up and configured strongly depends on business requirements and use cases. I am describing here how one can install NSX-T 3.1 in a home lab environment; I do not necessarily follow best practices here.
Starting with the basic setup of my FRITZ!Box with a static route and the configuration of my VyOS router.
FRITZ!Box Static Route
First, I will add a static route for the 172.16.8.0/21 network (netmask 255.255.248.0). This will be divided into /24 networks, which gives us the networks 172.16.8.0/24 to 172.16.15.0/24.
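To double-check the split, a quick shell loop (run on any workstation, not on VyOS) lists the eight /24 networks contained in 172.16.8.0/21:

```shell
# A /21 with base 172.16.8.0 covers third octets 8 through 15,
# i.e. eight /24 networks.
for i in $(seq 8 15); do
  echo "172.16.$i.0/24"
done
```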
I will use the gateway address for my VyOS virtual routing appliance. For that, I pick .253 for the LAN side and will change WAN later.
VyOS Deployment
After the deployment of VyOS, it needs to be configured.
After VyOS is up and running, the eight port groups need to be configured with VLAN tags in vCenter Server and the network adapters of the VyOS VM connected to them.
The first adapter is the one which has been configured during the deployment, adapters 2 to 9 correspond to the networks mentioned at the beginning. Network adapter 10 will not be used.
VyOS Configuration
Running show interfaces shows that eth0 in VyOS corresponds to vnic1 in VCSA:
The next steps are about the following topics:
Default static route
Interface configuration
DNS configuration
Default static route
I recommend using SSH to connect to VyOS and then starting the configuration:
configure
set protocols static route 0.0.0.0/0 next-hop 192.168.178.1 distance '1'
To save the configuration:
commit
save
Interface configuration
To set up the interfaces:
configure
set interfaces ethernet eth1 address 172.16.8.1/24 # do this for eth2 to eth8
To save the configuration:
commit
save
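Rather than typing the address command eight times, the commands for eth1 to eth8 can be generated with a small loop; this is a sketch run in an ordinary shell whose output you paste into the VyOS configure session, and it assumes the address plan eth1 → 172.16.8.1/24 up to eth8 → 172.16.15.1/24:

```shell
# Print one 'set interfaces' command per VyOS data interface:
# eth1 gets 172.16.8.1/24, eth2 gets 172.16.9.1/24, ... eth8 gets 172.16.15.1/24
for i in $(seq 1 8); do
  echo "set interfaces ethernet eth$i address 172.16.$((7 + i)).1/24"
done
```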
Check the configuration:
To set the MTU size to 9000, the following command needs to be executed:
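Assuming the data interfaces eth1 to eth8 from above, the command has this form (shown here for eth1; repeat it for the other interfaces, then commit and save):

```
configure
set interfaces ethernet eth1 mtu 9000
commit
save
```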