Configure Certificate Management in SDDC Manager (VCF 4.5)

After the initial deployment of VMware Cloud Foundation, there are several follow-up steps to consider for your VCF environment. For an overview, check the VMware documentation under “Deploying the Management Domain” from the link.

I have already added the Repository Settings in SDDC Manager and configured the backups for SDDC Manager, NSX-T Data Center, and vCenter Server.

After that, I created a Microsoft Certificate Authority based on Windows Server 2022 and integrated it by following the VMware documentation step by step here.

Generate Certificate Signing Requests (CSRs)

In the first screenshot you can see that the connection to the SDDC Manager is not secure.

Select resource type to generate csr
Details step 1
Checking Subject Alternative Name (SAN)
Summary overview
CSRs created successfully

Generate Signed Certificates

With the CSRs in place, you should be able to continue with the process.

Selecting my Microsoft Certificate Authority
Certificates created successfully

Install Certificates

Let the installation run
Finally, the connection is secure.
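To double-check the result outside of the browser, the certificate presented by SDDC Manager can be inspected from any client with OpenSSL installed. This is an optional sketch; the hostname is an example from my lab, adjust it to your environment:

```shell
# show subject, issuer, and validity of the certificate served by SDDC Manager
# (hostname is an example - replace with your SDDC Manager FQDN)
echo | openssl s_client -connect sddc-manager.myhomelab.local:443 -showcerts 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

The issuer should now be your Microsoft Certificate Authority instead of the self-signed default.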

The VMware documentation worked perfectly fine for me.

Deployment Of VCF 4.4.x On Nested ESXi

Cloud Builder has been imported, configured with an IP address in the native VM network I am using for VCF (in my case VLAN 1611), and the parameter sheet has been filled in.

Step by step:

Accept EULA

Using no special hardware, it is nested:

Platform selection

Checking prerequisites:

Prerequisites

Upload workbook:

Workbook upload

Starting validation:

Validation

Now I am seeing some issues which need to be fixed (the common name doesn’t match):

Common name issue

In my case, I needed to do the following:

  • Set correct FQDN
  • Regenerate certificates
  • Restart services
esxcli system hostname set --fqdn=<FQDN_ESXi_Host> # check before with esxcli system hostname get 
/sbin/generate-certificates # to regenerate certificate
/etc/init.d/hostd restart && /etc/init.d/vpxa restart # restarting services

We are good to go:

Validation successful

Finally, it is deployed:

Deployment successful

I can start the SDDC Manager:

Launch SDDC Manager

To be clear, I had my challenges with the nested environment. I will dig into it and see what I can borrow from other people to speed up my basic deployment and configuration of ESXi hosts. First, I will check out the configuration of the components involved.

Setup VyOS Router (Part I)

First you may ask: you are a VMware employee and you work with VyOS? Answer: why not? The appliances are quite simple to set up, and I will need some of them later in my nested lab to configure BGP for VCF.

I. Basic Configuration

After downloading a current rolling release, create a VM based on Debian 11 (64-bit) with 1 GB RAM, 1 CPU, and a 10 GB disk. When the VM has booted, issue

install image

and follow the dialogue to make the image persistent across reboots.

Configure Interfaces

I will work with several VLANs as well as sub-interfaces (VIF) routed by the VyOS appliance.

Command sh int to see the interfaces in the system
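The sub-interfaces mentioned above are created as VIFs on a parent interface in VyOS. A minimal sketch in configuration mode; the VLAN ID and address are example values from my numbering, adjust them to your environment:

```shell
configure
# create a tagged sub-interface (VIF) for VLAN 1611 on eth1
# (VLAN ID and address are examples - adjust to your environment)
set interfaces ethernet eth1 vif 1611 address 172.16.11.1/24
commit
save
```

Each VIF then shows up in sh int as eth1.1611 and can be routed like any other interface.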

Configure Static Route and first VyOS Appliance

My FRITZ!Box DSL router represents my core network. The VyOS router(s) serve my virtual machines in several VLANs and provide Internet connectivity.

First we configure one interface:

Configure

Then we give it one of our IP addresses in the network and also configure the default static route:

set interfaces ethernet eth0 address 192.168.178.253/24
set protocols static route 0.0.0.0/0 next-hop 192.168.178.1 distance '1'
eth0 and static route
First interface configured and Internet connectivity established

Configure Interfaces

First VyOS instance looks like this now:

Configuration of the interfaces on the first router

Second VyOS Appliance

The initial starting point here is exactly the same as in the first screenshot in this blog article. I will do the basic config followed by the corresponding screenshot. Make sure you are using different IP addresses for your interfaces. We will configure virtual IP addresses (VIPs) once we implement the Virtual Router Redundancy Protocol (VRRP).

Configuration of the interfaces on the second router
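For illustration, a VRRP group sharing a virtual IP between the two routers might look like the following on a recent VyOS release. This is only a sketch for the VIPs mentioned above; the group name, VRID, interface, and virtual address are all example values:

```shell
configure
# VRRP group announcing a shared virtual IP on eth1
# (group name, vrid, and address are examples - adjust to your environment)
set high-availability vrrp group LAN-8 vrid 10
set high-availability vrrp group LAN-8 interface eth1
set high-availability vrrp group LAN-8 virtual-address 172.16.8.254/24
commit
save
```

The same group is configured on both appliances; the one with the higher priority owns the VIP.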

My Homelabs

I am working as a consultant and therefore need to have “some” lab environment available. Sometimes it is OK to go through demo labs, sometimes it is great to have a shared team lab, and sometimes it is OK to go through Hands-on Labs.

Most of the time, I am working in my own lab environment.

1. Introduction

I am a big fan of SUPERMICRO. They have so many motherboards to choose from. I am using several A2SDI-8C-HLN4F boards (click here to get the details). The board comes with an 8-core Intel Atom C3758 processor (consuming little power) and supports up to 256 GB RAM.

For VMware Cloud Foundation, I am going with two X10DRi-T4+ (click here to get the details) and building it nested.

NSX-T Part 7 – VM Connectivity – How to check if it works?

To wrap up the series, a segment in the overlay and two VMs are required. Go to Networking, Connectivity, Segments, ADD SEGMENT:

Enter a name, the connected T1 gateway, the overlay transport zone, and the subnet information, then hit SAVE.
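The same segment could also be created through the NSX-T Policy API instead of the UI. A hedged sketch: the manager FQDN, credentials, segment ID, and Tier-1 path below are placeholders for my environment, and depending on your setup you may also need to set a transport zone path:

```shell
# create/update an overlay segment via the NSX-T Policy API (PATCH is idempotent)
# hostname, credentials, IDs, and paths are examples - adjust to your environment
curl -k -u admin:'VMware1!VMware1!' \
  -H 'Content-Type: application/json' \
  -X PATCH https://nsx.myhomelab.local/policy/api/v1/infra/segments/overlay-seg \
  -d '{
        "display_name": "overlay-seg",
        "connectivity_path": "/infra/tier-1s/t1-gw",
        "subnets": [ { "gateway_address": "10.10.11.1/24" } ]
      }'
```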

I used two PhotonOS 3 VMs with DHCP setup (inside the segment). See my connectivity tests here:

photon-lin2 has 10.10.11.11:

I can ping the DGW in the overlay:

I can ping photon-lin1 which has 10.10.11.12:

I get an answer from http://www.google.de:
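The tests from the screenshots can be reproduced on the shell of one of the Photon VMs; the IPs are taken from my segment above:

```shell
# run on photon-lin2 (10.10.11.11)
ping -c 3 10.10.11.1                       # default gateway of the overlay segment
ping -c 3 10.10.11.12                      # photon-lin1 in the same segment
curl -sI http://www.google.de | head -n 1  # Internet reachability via SNAT
```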

So that’s it for now – thank you for reading. Maybe this is helpful for non-native network IT-people.

NSX-T Part 6 – Create T0 and T1 Gateway

What does the Tier-0 Gateway do?

It handles the connection to the physical world, runs BGP as its routing protocol, and can be set up as an active/active or an active/standby cluster.

What does the Tier-1 Gateway do?

It connects to one Tier-0 northbound and to one or more overlay networks southbound, and can likewise be set up as an active/active or an active/standby cluster. It is often called a tenant router, as it is dedicated to a customer.

Network Topology

This is how it will look in my environment. The IPs between T0 and T1 are managed by NSX-T.

Home Lab Use Case

In my environment, the VyOS router will act as my “physical world” and I will work with static routing with only one network and two VMs. I will later upgrade to dynamic routing and BGP peering.

The next part consists of

  1. Create a Segment
  2. Create T0 GW
  3. Create T1 GW
  4. Configure NAT in VyOS

Setup of a Segment

First, go to Networking, Connectivity, Segments, ADD SEGMENT. Enter a segment name, set Connected Gateway to None, add the VLAN transport zone, and enter 0 in the VLAN field:

This segment will be used for the Tier-0 configuration.

Setup of Tier-0 Gateway

Go to Networking, Connectivity, Tier-0 Gateways, ADD GATEWAY, Tier-0. Choose a name and HA Mode as well as the Edge Cluster and hit SAVE.

Continue configuring the Tier-0 GW and go to INTERFACES:

Click ADD INTERFACE, add a Name (something descriptive may ease it for you), an IP address and choose the Segment to connect to (VLAN 3009). Select an Edge Node and SAVE the configuration:

Next is ROUTING, Static Routes, Set, and ADD STATIC ROUTE:

Next Hop (in VLAN 3009) to my VyOS “physical world”.

Setup of Tier-1 Gateway

Go to Networking, Connectivity, Tier-1 Gateways, ADD TIER-1 GATEWAY and enter a name, the linked T0-GW, and the Edge Cluster:

Route Advertisement needs to be configured:

At a minimum, “All Connected Segments & Service Ports” needs to be selected. I selected “All NAT IP’s” and “ALL LB VIP Routes” as well for later use.

Setup of VyOS as my “physical world”

To be able to connect a VM from the overlay segment to the Internet, additional configuration steps are required. First, see the screenshot:

VyOS needs a static route for the 10.10.11.0/24 network to forward traffic to 172.16.8.11 in VLAN 3009:

configure
set protocols static route 10.10.11.0/24 next-hop 172.16.8.11

The rest of the commands deal with the source NAT configuration so that the VMs are NATed and can connect to the outside world. Depending on your VyOS version, the rule may also require an outbound interface, e.g. set nat source rule 10 outbound-interface eth0:

set nat source rule 10 source address 10.10.11.0/24
set nat source rule 10 destination address 0.0.0.0/0
set nat source rule 10 translation address masquerade
commit
save
exit
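After committing, the new route and NAT rule can be verified from VyOS operational mode:

```shell
# operational mode (outside of configure)
show ip route 10.10.11.0/24    # static route towards the T0 uplink
show nat source rules          # SNAT rule 10 should be listed
show nat source translations   # active translations once VMs send traffic
```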

NSX-T Part 4 – Fabric Configuration continued

Create Nodes continued

In System, Fabric, Nodes, Edge Clusters, add an edge cluster object used later as a container for the edge transport nodes. Choose a name and keep the default profile which contains settings for high availability checks for BFD (Bidirectional Forwarding Detection).

Edge Cluster

An Edge Cluster is used to balance services like Service Routers (SRs) and Distributed Routers (DRs) later.

Edge Transport Node

Here, we are in the management network, VLAN 3011, in which the NSX-T Manager is located; IPs from the same subnet will be used for the Edge Transport Nodes. An FQDN for the node is required, so create a DNS entry first.

You could go with the Small form factor, but I will continue to play and create a Load Balancer as well, so I use the Medium form factor. Check Advanced Resource Reservations, where you could remove the memory reservation if needed (please do not change this in a production environment).

Enter credentials – for learning and/or troubleshooting purposes, you could activate SSH and Root SSH Login.

For the deployment of the Edge Node, choose the Edge Cluster.

In Configure Node Settings, the IP addresses entered here are from the management network in which the NSX-T manager is located as well.

In this step, please see the information about the Edge Switch Name in the white box of the screenshot:

First, configure the Overlay Transport Zone:

We use the default “nsx-edge-single-nic-uplink-profile” here with IP addresses from the pool created earlier for VLAN 3010.

Second, configure the Vlan Transport Zone for the uplink to the VyOS router:

We use the default “nsx-edge-single-nic-uplink-profile” here. The port group of VLAN 3009 is used as the connection to the outside world. Once the Edge has been deployed, it needs to be added to the Edge-Cluster object:

In this overview, the assignment of the vnics of the Edge VM to the “fastpath” is visible; eth0 is always the management NIC.

You should add a second Edge Transport Node and add it to the Edge-Cluster object as well.

NSX-T Part 3 – Fabric Preparation and Configuration

Next upcoming steps are:

  1. Create Transport Zones
  2. Create ESXi Host Uplink Profile
  3. Create ESXi Host Transport Nodes

and more …

Create Transport Zones

Overlay Transport Zone:

Vlan Transport Zone:

We need the transport zones later.

Create Profile

One profile is required for the ESXi hosts for the overlay network. This is a Tunnel Endpoint in the overlay.

The teaming policy Load Balance Source can be used to actively use, for example, two network cards in the host. The names in Active Uplinks are placeholders; I used “Host-TN-TEP-1” and “…-2”.

The VLAN needs to be configured here, but not the MTU. I have set my VDS to an MTU of 9000, and NSX-T sets the MTU to 1600 in its configuration, so all is good so far.

ESXi Host Transport Nodes

Under System, Fabric, Nodes, Host Transport Nodes, Managed by <your VCSA>, take one of the hosts in the resource cluster to start with CONFIGURE NSX:

Click NEXT and configure the settings:

Use the VDS and all the pre-created parts here: the overlay transport zone, the uplink profile, and the IP address pool.

Here you select the vmnics to be used by the ESXi host. Repeat the steps for the second host. Finally looking good:

OK, the next step is around Edge Transport Nodes, which follows in part 4.

NSX-T Part 2 – NSX Manager Deployment and Compute Manager Registration

The next step is to install NSX-T Manager. For demo purposes, one is sufficient. I will use VLAN 3011 for the management network of NSX-T. We will later use two other VLANs (one for tunnel endpoints and one as uplink to the VyOS router).

Continue to populate the required information. See my entries here:

I created a DNS subdomain for my NSX-T Deployment which I use to organize the components:

After the deployment has finished, start your NSX-T Manager and log in. Keep in mind that there is a password expiration of 90 days in NSX-T Manager. You can change the expiration value.

  1. SSH to NSX-T manager
  2. set user admin password-expiration <choose whatever you need from 1 to 9999>
  3. Check out the VMware documentation for more information here: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/installation/GUID-89E9BD91-6FD4-481A-A76F-7A20DB5B916C.html
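The configured value can be checked on the NSX-T CLI as well, assuming an SSH session as admin:

```shell
# on the NSX-T Manager CLI (nsxcli)
get user admin password-expiration   # shows the currently configured expiration in days
```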

After the initial login to NSX-T Manager, register VCSA through the tab System, then Fabric, Compute Managers, ADD COMPUTE MANAGER:

Registered:

IP ADDRESS POOL

Before starting with the next tasks which are around Transport Zones, Profiles, and Nodes, an IP ADDRESS POOL is needed. Go to Networking, IP Management, IP Address Pools in NSX-T Manager and click ADD IP ADDRESS POOL:

Configure an IP range using one of the VLANs created before:

and save the configuration.

The official Design Guide

How NSX-T 3.1 should be setup and configured strongly depends on business requirements and use cases. I am writing here how one can install NSX-T 3.1 in a home lab environment. I do not necessarily include best practices here.

The official design guide can be found here: https://blogs.vmware.com/networkvirtualization/2019/09/nsx-t-design-guide.html

NSX-T Part 1 – Basic Network Environment

Starting with the basic setup of my FRITZ!Box with a static route and the configuration of my VyOS router.

FRITZ!Box Static Route

First, I will add a static route for the 172.16.8.0/21 network (netmask 255.255.248.0). This will be divided into /24 networks, which gives us the networks 172.16.8.0/24 to 172.16.15.0/24.

The VyOS virtual routing appliance will act as the gateway for these networks. For that, I pick .253 on the LAN side and will change the WAN side later.
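The split of the /21 into its eight /24 networks can be enumerated quickly on any shell:

```shell
# list the /24 networks contained in 172.16.8.0/21
for i in $(seq 8 15); do
  echo "172.16.${i}.0/24"
done
```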

VyOS Deployment

After the deployment of VyOS, it needs to be configured.

After VyOS is up and running, the eight port groups need to be configured as VLAN tags in vCenter Server and the network adapters of the VyOS VM connected to them.

The first adapter is the one which has been configured during the deployment, adapters 2 to 9 correspond to the networks mentioned at the beginning. Network adapter 10 will not be used.

VyOS Configuration

Doing a show interfaces shows that eth0 in VyOS is vnic1 in VCSA:

The next steps are about the following topics:

  1. Default static route
  2. Interface configuration
  3. DNS configuration

Default static route

I recommend using SSH to VyOS and starting the configuration:

configure

set protocols static route 0.0.0.0/0 next-hop 192.168.178.1 distance '1'

To save the configuration:

commit

save

Interface configuration

To set up the interfaces:

configure

set interfaces ethernet eth1 address 172.16.8.1/24 # repeat for eth2 to eth8

To save the configuration:

commit

save

Check the configuration:

To set the MTU size to 9000, the following command needs to be executed:

configure

set interfaces ethernet eth1 mtu 9000 # (example)

commit

save

DNS configuration

Setup of DNS domain name:

configure

set system domain-name myhomelab.local

commit

save
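Besides the domain name, VyOS should also know which upstream resolver to use. A minimal sketch; the resolver IP is my FRITZ!Box, adjust it to your environment:

```shell
configure
# forward DNS lookups of the router itself to the FRITZ!Box (example IP)
set system name-server 192.168.178.1
commit
save
```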

You need to leave the config level with exit.