Script: TKGs Network Configuration

1. Does TKG Networking confuse you?
-----------------------------------
We are going to build the TKGs environment you see here, piece by piece, and use it to explain the network configuration steps required for TKGs. You'll note there are a lot of pieces here, all with important roles. If you have a mental picture of them, it may be easier to follow configuration documents such as the Quick Start guide, linked here. In a moment we will clear the page, take a simple application, and build it up into a modern application running on multiple machines.

2. Applications used to be simple
---------------------------------
Decades ago, an application was code running on a single machine. It took input from files or the keyboard and sent results to the screen or output files. A user might sit directly at the terminal... ...or log in over a network. We've simplified this early environment, of course, but it is a good baseline.

3. Modularization
-----------------
We now move forward several decades to modularized applications. For example, let's say we have an online store selling pies and baked goods. We'll fold input and output into a more generic category: communication among modules. Users will communicate through their browsers. And if we use a LAMP stack as an example, the application splits across Linux machines, with an Apache web front end, a MySQL database, and PHP for the store application itself.

4. VMware introduces virtualization
-----------------------------------
VMware introduced virtualization through ESXi. Virtual machines could share the same hardware but be tailored for particular apps. Virtualization optimized resource utilization; increased Reliability, Availability, and Serviceability; improved scalability through easier deployment, replication, and so forth; and provided uniform virtual platforms for developers and for production environments.

5. Proliferation
----------------
But scaling and proliferation bring their own challenges.
Applications sharded into many microservices, keeping the benefits of modularization but increasing the number of moving parts. We also saw an increase in the number of virtual machines and of physical hosts. The number of applications grew. Services and applications were rebalanced, updated, and relaunched with increasing frequency.

6. Containers and Kubernetes
----------------------------
We need to organize the many services, machines, and applications. Containers let us launch every service quickly and in exactly the right environment. Kubernetes orchestrates multiple containers. Related or coupled containers are grouped into pods. A number of pods can run on suitable worker machines, or nodes, until the nodes run out of capacity. The set of machines running an application's pods is called a cluster. Kubernetes also runs controller processes to monitor the pods, handle failures, balance loads, and so on.

7. Tanzu Kubernetes Grid Service (TKGs)
---------------------------------------
Tanzu is the VMware technology for modern applications. TKG, or Tanzu Kubernetes Grid, is technology to supply and manage multiple Kubernetes clusters. We are going to look at a variant called TKG Service, or TKGs. The first aspect of TKG we'll examine is the specialized VMs that we are labeling Tanzu VMs. They optimize the VM technology to spin up quickly and support only the functionality required for Kubernetes. Plus, they run lean images optimized for Kubernetes. In the case of TKGs, one Tanzu Kubernetes Cluster is designated a Supervisor. Its job is NOT to run any of your applications, but to manage all the other clusters that are running your applications. As you request new clusters, the Supervisor will create them on the hosts under its control. Clusters of virtual machines are very easy to prepare and launch compared to physical clusters. The Supervisor can spin the clusters up and down, ensure resource constraints are met, and much more.
8. Full TKGs environment
------------------------
Now let's flesh out the other components in a working TKGs environment. Everything runs on VMware ESXi hosts. vCenter Server Appliance, the familiar vSphere GUI, also lets you set up TKG. For TKGs, your networking options are some form of NSX or at least two VDS subnets. Like the Quick Start Guide, here we will use VDS: one subnet is for a management network and one for a workload network. This explanation also assumes the VDS and subnets are already set up before we begin configuring the TKGs networking. While admins will set up the infrastructure and the networks, we need to remember that other people, app developers and end users, will be using the TKG clusters and need network access. The last component TKGs requires is a load balancer. You can use NSX Advanced Load Balancer (Avi) or, as we will do here, HAProxy, which we will launch as an appliance running on a standard VM. It provides load balancing services and it proxies the addresses used by developers and end users, translating them into the actual addresses of the Tanzu VMs in the TKCs. The Supervisor Cluster manages the HAProxy at runtime.

9. Configuration — Planning
---------------------------
To prepare for TKGs network configuration, you need to note or decide upon several addresses and address ranges. In vCenter, navigate to the virtual distributed switch, select your management network, and note its gateway, its DNS, and its network address and mask. Next, note the same information for your workload network. Note the address of the vCenter Server Appliance. The HAProxy appliance, when you deploy it, will need an address on the workload network and another address, which we'll put on the management network as is done in the Quick Start Guide. You can work with your network admin to choose addresses not already in use. These must be static addresses.
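The planning step above can be sanity-checked in code. Here is a minimal sketch using Python's standard ipaddress module; the subnets and HAProxy addresses are hypothetical placeholders, not values from the guide — substitute the values you noted in vCenter.

```python
import ipaddress

# Hypothetical management and workload subnets (illustration only).
mgmt_net = ipaddress.ip_network("192.168.110.0/24")
workload_net = ipaddress.ip_network("192.168.130.0/24")

# Candidate static addresses for the HAProxy appliance, one per network.
haproxy_mgmt = ipaddress.ip_address("192.168.110.66")
haproxy_workload = ipaddress.ip_address("192.168.130.66")

# Each address must fall inside its intended subnet...
assert haproxy_mgmt in mgmt_net
assert haproxy_workload in workload_net

# ...and the two subnets must not overlap each other.
assert not mgmt_net.overlaps(workload_net)
print("HAProxy address plan is consistent")
```

A few lines like these catch the most common planning mistake early: an address typed into a dialog that does not actually belong to the subnet it is meant for.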
The Supervisor Cluster has three VMs but will need five addresses on the management network: three for the VMs, one for maintenance, and an extra, redundant one for availability. These must be five sequential, static addresses, never changed by a DHCP server. During configuration you will enter the first of these as the Starting IP Address. Again, you can work with your network admin to choose addresses not already in use. The Supervisor will also need three addresses on the workload network, but you don't have to choose them; services will assign them for you. All of the Tanzu VMs (including the three VMs in the Supervisor Cluster) will be assigned addresses from the workload network. You have to choose and provide a range of addresses, and you have the option to supply multiple ranges. Lastly, we have the Virtual IP range. The developers and end users will access the TKCs with addresses in this range. HAProxy will translate them into appropriate addresses assigned to actual VMs. Here, too, you have the option to supply multiple ranges. We have to use some of this information to configure the HAProxy appliance and all of it to configure the TKG Supervisor. In vCenter you will not see a main menu item called "TKG Supervisor" or "Tanzu"; Tanzu is set up under the "Workload Management" menu item.

10. Configuration — Example
---------------------------
If you have difficulty following the Tanzu Quick Start Guide, it may be in taking all the addresses you carefully planned and entering their correct values into the six dialog boxes you must fill out. So, let's start with the example addresses and ranges shown here. Note there are no conflicting addresses. The VDS gateways, DNS, network addresses, and masks can be found in vCenter as shown here. There are two dialogs to fill with our network information as we deploy the HAProxy appliance.
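The five sequential Supervisor addresses are easy to derive from the Starting IP Address. A small sketch follows; the Starting IP here is a hypothetical example, not a value from the guide.

```python
import ipaddress

# Hypothetical Starting IP Address entered during Supervisor configuration.
start = ipaddress.ip_address("192.168.110.101")

# TKGs reserves five consecutive static addresses beginning at the Starting IP:
# three for the Supervisor VMs, one for maintenance, one redundant for availability.
supervisor_ips = [start + i for i in range(5)]

print([str(ip) for ip in supervisor_ips])
# ['192.168.110.101', '192.168.110.102', '192.168.110.103',
#  '192.168.110.104', '192.168.110.105']
```

Because all five are derived from the one Starting IP you enter, the whole block of five must be reserved with your network admin and excluded from any DHCP scope.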
The first requires the VDS information from both networks, plus the addresses of the HAProxy appliance itself, one from each network. The second of the HAProxy deployment networking dialogs, here, requires the Virtual IP address range in CIDR format. This example uses a single range, but you can specify more than one. From the vCenter main menu, you select the Workload Management item to start setting up Tanzu and the TKG Supervisor Cluster. There is a series of dialog boxes to go through, and four of them require networking information. The first of them is where you select the vCenter URI. The second networking dialog box when setting up Workload Management is for load balancing. You supply the management network address of the HAProxy appliance. You also supply the VIP address ranges, with the first and last addresses of each range. Note that in this example we specify a slightly smaller range than the same range in CIDR format. The third Workload Management dialog box requires the first of the five sequential management network IP addresses for the Supervisor Cluster. You also supply the management network subnet mask, its gateway, and its DNS list. In the top of the fourth dialog, specify the workload network. And in the bottom of the fourth Workload Management dialog box, enter the range, or multiple ranges, of addresses on the workload network for the TKG clusters. You also supply the workload network subnet mask, its gateway, and the DNS list.

11. Tanzu — Developer View
--------------------------
All we have been discussing is relevant to the admin who has to set up the Tanzu environment. But for a developer, and especially for an end user, all of this is largely invisible. A user simply accesses services at some network address. And a developer, using the address from the admin, uses normal Kubernetes commands and files to access pods and clusters. Advanced users can use Tanzu add-ons to create new clusters of their own.
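Section 10 notes that the VIP range entered in Workload Management (as a first and last address) can be slightly smaller than the same range given to HAProxy in CIDR format. The relationship between the two forms can be sketched as follows, with hypothetical addresses:

```python
import ipaddress

# Hypothetical VIP range handed to the HAProxy appliance in CIDR form.
vip_cidr = ipaddress.ip_network("192.168.130.128/26")

# The full CIDR block spans these first and last addresses.
print(vip_cidr[0], "-", vip_cidr[-1])   # 192.168.130.128 - 192.168.130.191

# Workload Management takes a first/last pair, which may be a subset of the CIDR block.
wm_first, wm_last = vip_cidr[1], vip_cidr[-2]
print(wm_first, "-", wm_last)           # 192.168.130.129 - 192.168.130.190
```

Keeping the Workload Management pair inside the CIDR block given to HAProxy ensures every VIP the Supervisor hands out is one that HAProxy is actually prepared to serve.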