Networking and Security with NSX-T in vSphere with Tanzu

In this post we will explore how Networking and Security with NSX-T in vSphere with Tanzu works and how it can be utilized by Virtual Infrastructure admins as well as by DevOps engineers and Developers.

A secondary post about how Developers themselves can utilize Antrea, the network CNI that ships as the default with Tanzu, is also coming; stay tuned for that.

Introduction to vSphere with Tanzu and NSX-T:

With Tanzu in vSphere there is a possibility to utilize either the VMware vSphere networking stack, based on vDS (vSphere Distributed Switches) with an external load balancer, or VMware NSX-T, to provide connectivity and different services for the Tanzu Kubernetes control plane VMs, container workloads, and services (load balancers, ingress, etc.)

In my previous post I described the different components that constitute Tanzu on vSphere, so if you're not familiar with that, go here and read about it.

With NSX-T in vSphere with Tanzu we get the following features:

– IPAM
– Segmented Networking
– Firewall Isolation
– Load Balancing
– Visibility

This post will go more into detail around Firewall isolation and visibility.

Securing Tanzu Kubernetes Networks with NSX-T Firewall Isolation:

North-South traffic:

Traffic going in and out of the different namespaces is controlled with NSX-T on the Edge Gateways (T0/T1s). This is done by the administrators in the NSX-T UI or API, creating security policies to restrict traffic in and out of the Supervisor Clusters.

East-West traffic:

Inter-Namespace: Security isolation between different Supervisor Namespaces is enabled by default. There is a default rule created in NSX-T for every Namespace that denies network traffic between Namespaces.

Intra-Namespace: By default, traffic within each Namespace is allowed.

To be able to create rules allowing traffic into Namespaces, Kubernetes has something called Network Policies. See Kubernetes.io for more information: https://kubernetes.io/docs/concepts/services-networking/network-policies/

So whenever someone wants to create firewall rules for an application or a namespace, they create Network Policies with ingress and egress rules, defining which source, destination, and service to open up for, by selecting applications through the labels set on the application containers and pods matched by the network policy's pod selector. In turn, vSphere with Tanzu creates these policies as DFW (Distributed Firewall) rules in NSX-T.
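As a generic illustration of the mechanism (a minimal sketch, not one of the Guestbook policies built later in this post), a policy with an empty pod selector and no ingress rules acts as a default-deny for all inbound traffic in its namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound traffic is denied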

Let’s look at how that can be performed:

We will utilize the Redis PHP Guestbook from Kubernetes.io as an example; see the link here.

The Guestbook demonstrates how to build a multi-tier web application. The tutorial shows how to set up a guestbook web service on an external IP with a load balancer, and how to run a Redis cluster with a single leader and multiple replicas/followers.

The following diagram shows an overview of the application architecture along with the Network Policies in-place with NSX-T:

So let’s start building the application: 

Download the PHP Guestbook with GIT:

git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/guestbook

Download the docker images described in the yaml files to a private image registry; I use the Harbor registry from VMware, enabled by vSphere with Tanzu. That way, when creating the different application components later on, the images are pulled from the private registry.

Prerequisites:

Enable the Embedded Harbor Registry on the Supervisor Cluster 

Configure a Docker Client with the Embedded Harbor Registry Certificate 

Install the vSphere Docker Credential Helper and Connect to the Registry

The IP of the internal Harbor registry is 10.30.150.4.
The Namespace I am working in, and also the project where the images will be uploaded into repositories, is called: greenbag
Log in and connect with the vSphere Docker Credential Helper:

docker-credential-vsphere login
docker-credential-vsphere login 10.179.145.77
Username: a_jimmy@int.rtsvl.se
Password: INFO[0017] Fetched username and password
INFO[0017] Fetched auth token
INFO[0017] Saved auth token

Download all the images:
docker pull docker.io/redis:6.0.5
docker pull gcr.io/google_samples/gb-redis-follower:v2
docker pull gcr.io/google_samples/gb-frontend:v5
Tag Images to Embedded Harbor Registry:
docker tag docker.io/redis:6.0.5 10.30.150.4/greenbag/redis:6.0.5
docker tag gcr.io/google_samples/gb-redis-follower:v2 10.30.150.4/greenbag/gb-redis-follower:v2
docker tag gcr.io/google_samples/gb-frontend:v5 10.30.150.4/greenbag/gb-frontend:v5
Check the images in docker:
docker images
REPOSITORY                                  TAG                IMAGE ID       CREATED         SIZE

10.30.150.4/greenbag/gb-frontend            v5                 3efc9307f034   6 weeks ago     981MB
gcr.io/google_samples/gb-frontend           v5                 3efc9307f034   6 weeks ago     981MB
10.30.150.4/greenbag/gb-redis-follower      v2                 6148f7d504f2   3 months ago    104MB
gcr.io/google_samples/gb-redis-follower     v2                 6148f7d504f2   3 months ago    104MB
10.30.150.4/greenbag/redis                  6.0.5              235592615444   15 months ago   104MB
redis                                       6.0.5              235592615444   15 months ago   104MB
Push Images to Embedded Harbor Registry:
docker push 10.30.150.4/greenbag/redis:6.0.5
docker push 10.30.150.4/greenbag/gb-redis-follower:v2
docker push 10.30.150.4/greenbag/gb-frontend:v5

This is how it looks in Harbor after the images are pushed into the registry.

With all images uploaded into Harbor, it is time to download all the yaml files for the PHP Guestbook so we can change the image paths.
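A quick alternative to editing by hand is a sed one-liner per manifest (a sketch, using the image names and Harbor project from this post; adjust the file names to your checkout):

sed -i 's|docker.io/redis:6.0.5|10.30.150.4/greenbag/redis:6.0.5|' redis-leader-deployment.yaml
sed -i 's|gcr.io/google_samples/gb-redis-follower:v2|10.30.150.4/greenbag/gb-redis-follower:v2|' redis-follower-deployment.yaml
sed -i 's|gcr.io/google_samples/gb-frontend:v5|10.30.150.4/greenbag/gb-frontend:v5|' frontend-deployment.yaml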

Setting up the Redis leader:

Edit the yaml with the correct path to the Harbor registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "10.30.150.4/greenbag/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Run the following command to deploy the Redis leader:
kubectl apply -f redis-leader-deployment.yaml
Verify that the Redis leader Pod is running:
kubectl get pods

Create the Redis leader service:

Start the Redis leader Service by running:
kubectl apply -f redis-leader-service.yaml
Verify that the Service is created:
kubectl get service
NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
redis-leader                            ClusterIP      10.30.152.190   <none>         6379/TCP                     4h7m

Setting up the Redis followers, changing the path to the correct Harbor registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: 10.30.150.4/greenbag/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

To create the Redis follower Deployment, run:
kubectl apply -f redis-follower-deployment.yaml

Verify that the two Redis follower replicas are running by querying the list of Pods:
kubectl get pods

Create the Redis follower service:

kubectl apply -f redis-follower-service.yaml

Verify that the Service is created:
kubectl get service

Setting up the guestbook web frontend, changing the path to the correct Harbor registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
        app: guestbook
        tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: 10.30.150.4/greenbag/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

To create the guestbook web frontend Deployment, run:
kubectl apply -f frontend-deployment.yaml
kubectl get pods -l app=guestbook -l tier=frontend

NAME                        READY   STATUS    RESTARTS   AGE
frontend-6cbb49f8df-g5jjg   1/1     Running   0          4h3m
frontend-6cbb49f8df-k86bf   1/1     Running   0          4h3m
frontend-6cbb49f8df-rmggc   1/1     Running   0          4h3m

Expose the frontend on a LoadBalancer with an external IP address:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # type: LoadBalancer automatically creates an external load-balanced IP
  # for the frontend service on clusters that support it (NSX-T does here)
  type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend

To create the Service, run the following command:
kubectl apply -f frontend-service.yaml

Visiting the guestbook website:

kubectl get service frontend

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
frontend   LoadBalancer   10.30.152.165   10.30.150.10   80:31681/TCP   4h2m

Copy the IP address from the EXTERNAL-IP column, and load the page in your browser:

Create Network Policy rules:

Once all the functionality of the Guestbook is complete, it's time to build the Network Policies to isolate and control what traffic is allowed and denied.
This is done with ingress and egress rules for the 3 different services – frontend, follower and leader.

Frontend Network Policy:

Starting with the frontend: this is where connections from the outside come into the application.
It is exposed on port 80 through the service behind the load balancer.
So we create a Network Policy with an ingress rule allowing access for everyone against port 80 over TCP, applied to all pods tagged with the labels app=guestbook and tier=frontend.

To get the labels on all pods run the following:
kubectl get pods -o wide --show-labels

frontend-6cbb49f8df-g5jjg         1/1     Running   0          11m   10.30.160.101   esxi01   <none>           <none>            app=guestbook,pod-template-hash=6cbb49f8df,tier=frontend
frontend-6cbb49f8df-k86bf         1/1     Running   0          11m   10.30.160.102   esxi01   <none>           <none>            app=guestbook,pod-template-hash=6cbb49f8df,tier=frontend
frontend-6cbb49f8df-rmggc         1/1     Running   0          11m   10.30.160.100   esxi01   <none>           <none>            app=guestbook,pod-template-hash=6cbb49f8df,tier=frontend
redis-follower-7bd547b745-297jw   1/1     Running   0          15m   10.30.160.99    esxi01   <none>           <none>            app=redis,pod-template-hash=7bd547b745,role=follower,tier=backend
redis-follower-7bd547b745-ngk6s   1/1     Running   0          15m   10.30.160.98    esxi01   <none>           <none>            app=redis,pod-template-hash=7bd547b745,role=follower,tier=backend
redis-leader-7759fd599f-bfwdk     1/1     Running   0          21m   10.30.160.27    esxi01   <none>           <none>            app=redis,pod-template-hash=7759fd599f,role=leader,tier=backend

I created a new yaml called: redis-guestbook-networkpolicy-nsxt.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: guestbook-network-policy
spec:
  podSelector:
    matchLabels:
      app: guestbook
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - protocol: TCP
      port: 80

Create a Network Policy for the Redis Leaders:

The frontend needs to be able to access the leaders on port 6379 for reading and writing data, so we create ingress and egress rules for this port, in and out.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-leader-network-policy
spec:
  podSelector:
    matchLabels:
      app: redis
      role: leader
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - protocol: TCP
      port: 6379
  egress:
  - ports:
    - protocol: TCP
      port: 6379

Network Policy for the Redis Followers:

The frontend needs to be able to access the followers on port 6379 for reading and writing data, so we create ingress and egress rules for this port, in and out.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-follower-network-policy
spec:
  podSelector:
    matchLabels:
      app: redis
      role: follower
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - protocol: TCP
      port: 6379
  egress:
  - ports:
    - protocol: TCP
      port: 6379
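Assuming all three policies are kept in the redis-guestbook-networkpolicy-nsxt.yaml file created earlier, applying and verifying them is done with kubectl:

kubectl apply -f redis-guestbook-networkpolicy-nsxt.yaml
kubectl get networkpolicy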

Verifying in the NSX-T UI

We see rules are created corresponding to the ingress and egress rules we created earlier.

Digging into the greenbag-redis-follower-network-policy-whitelist policy section, we see that TCP.6379-ingress-allow allows anyone to talk to the follower pods with IPs 10.30.160.99 and .98 on port 6379. The group definition is based on the tags tier=backend, role=follower, app=redis.
The same pattern holds for the rest of the rules in the tiered application.

We also have Policy Sections for each part that Drops everything else that is not allowed:

Lastly, with NSX-T we can run a Traceflow between one of the frontend pods and a follower pod and see that traffic is allowed on port 6379

And denied on a different port:

We have now secured ingress and egress traffic into and between vSphere Pods in a Namespace using Network Policies.

Looking further ahead, Developers can also utilize Antrea for securing traffic flows between applications within Tanzu Kubernetes Grid clusters that run on top of the Supervisor layer. But more on that in a different post.

Happy securing your Modern Applications!

vSphere with Tanzu

In this post I intend to explore vSphere with Tanzu and why the product is interesting for customers to choose, compared to running Kubernetes in a manual setup and maintaining it yourself, or choosing another solution to orchestrate Kubernetes, for delivering a platform that can serve as infrastructure for containerized applications.

What is vSphere with Tanzu?
VMware's own description explains it best.

vSphere with Tanzu is a developer-ready infrastructure that delivers:

  • The fastest way to get started with Kubernetes – get Kubernetes infrastructure in an hour: 
    • Configure an enterprise-grade Kubernetes infrastructure leveraging your existing networking and storage in as little as an hour *
    • Simple, fast, self-service provisioning of Tanzu Kubernetes Grid clusters in just a few minutes
  • A seamless developer experience: IT admins can provide developers with self-service access to Kubernetes namespaces and clusters, allowing developers to integrate vSphere with Tanzu with their development process and CI/CD pipelines.
  • Kubernetes to the fingertips of millions of IT admins: Kubernetes can be managed through the familiar environment and interface of vSphere. This allows vSphere admins to leverage their existing tooling and skillsets to manage Kubernetes-based applications.  Moreover, it provides vSphere admins with the ability to easily grow their skillset in and around the Kubernetes ecosystem.

vSphere with Tanzu Architecture:
You enable vSphere with Tanzu on a VMware vSphere ESXi cluster. This creates a Kubernetes control plane inside the hypervisor layer. This layer contains objects that enable the capability to run Kubernetes workloads within ESXi.

A VMware vSphere ESXi cluster that is enabled for vSphere with Tanzu is called a Supervisor Cluster.
It runs on top of the SDDC layer that consists of ESXi for compute, NSX-T Data Center or vSphere networking, and vSAN or another shared storage solution.
Shared storage is used for persistent volumes for vSphere Pods, VMs running inside the Supervisor Cluster, and pods in a Tanzu Kubernetes cluster. After a Supervisor Cluster is created, as a vSphere administrator you can create namespaces within the Supervisor Cluster, called Supervisor Namespaces. Each is reflected in vSphere as a Resource Pool, so that IT Ops can control the quota of CPU, RAM, and storage resources a Developer may get.
As a DevOps engineer, you can run workloads consisting of containers running inside vSphere Pods and also create Tanzu Kubernetes clusters.


Difference between the vSphere Supervisor Cluster and Tanzu Kubernetes Clusters (TKG):

The vSphere Supervisor Cluster is Kubernetes set up by vSphere directly on the ESXi hosts, making the ESXi hosts worker nodes in a Kubernetes cluster. The enablement of Tanzu on vSphere also creates three Kubernetes control plane VMs in the cluster for managing the Kubernetes environment.

  • The three control plane VMs are load balanced as each one of them has its own IP address. Additionally, a floating IP address is assigned to one of the VMs. vSphere DRS determines the exact placement of the control plane VMs on the ESXi hosts and migrates them when needed. vSphere DRS is also integrated with the Kubernetes Scheduler on the control plane VMs, so that DRS determines the placement of vSphere Pods. When as a DevOps engineer you schedule a vSphere Pod, the request goes through the regular Kubernetes workflow then to DRS, which makes the final placement decision. 
  • Spherelet. An additional process called Spherelet is created on each host. It is a kubelet that is ported natively to ESXi and allows the ESXi host to become part of the Kubernetes cluster. 
  • Container Runtime Executive (CRX). CRX is similar to a VM from the perspective of Hostd and vCenter Server. CRX includes a paravirtualized Linux kernel that works together with the hypervisor. CRX uses the same hardware virtualization techniques as VMs and it has a VM boundary around it. A direct boot technique is used, which allows the Linux guest of CRX to initiate the main init process without passing through kernel initialization. This allows vSphere Pods to boot nearly as fast as containers.
  • The Virtual Machine Service, Cluster API, and VMware Tanzu™ Kubernetes Grid™ Service are modules that run on the Supervisor Cluster and enable the provisioning and management of Tanzu Kubernetes clusters.

TKG, Tanzu Kubernetes Grid:
A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes software that is packaged, signed, and supported by VMware. In the context of vSphere with Tanzu, you can use the Tanzu Kubernetes Grid Service to provision Tanzu Kubernetes clusters on the Supervisor Cluster. You can invoke the Tanzu Kubernetes Grid Service API declaratively by using kubectl and a YAML definition. A Tanzu Kubernetes cluster resides in a Supervisor Namespace. You can deploy workloads and services to Tanzu Kubernetes clusters the same way and by using the same tools as you would with standard Kubernetes clusters.

vSphere Supervisor and Tanzu Kubernetes Grid Use Cases:
What to choose when setting up applications/containers, the vSphere Supervisor layer or the Tanzu Kubernetes Grid layer, depends on what kind of functionality you as a Developer or IT Operations would like.
Below are example Use Cases when to choose which:
Supervisor Cluster:
Strong Security and Resource Isolation
Performance Advantages
Serverless Experience

Tanzu Kubernetes Cluster:
Cluster Level Tenancy Model
Fully Conformant to Upstream k8s
Configurable k8s Control Plane
Flexible Lifecycle, including upgrades
Install and customize favorite tools easily

So let's say I am a Developer that would like to utilize the Supervisor Cluster as Production and a Tanzu Kubernetes Grid cluster as a Development area for my application:

Let’s go into vSphere first and check out the UI:

I’m logged in as the Developer now and I can see my namespace called tkg01 created by the IT Operations team.
I have also created a Tanzu Kubernetes Grid Cluster for my development testing and to be able to myself lifecycle manage the cluster.

Which IP/DNS I as a developer should run my kubectl commands against is found by checking out the Link to CLI Tools in the picture:

Starting my Ubuntu machine with kubectl installed, I log in to the vSphere Supervisor cluster:

To see which Kubernetes versions IT Operations has made available you run the command: kubectl get tanzukubernetesreleases

I have created a yaml file containing the specifications for setting up a TKG cluster:
In the file I specify kind: TanzuKubernetesCluster.
The file also contains the name, namespace, version of Kubernetes, and the number of control plane and worker nodes I want.

apiVersion: run.tanzu.vmware.com/v1alpha1      #TKGS API endpoint
kind: TanzuKubernetesCluster                   #required parameter
metadata:
  name: tkg01-cl02          #cluster name, user defined
  namespace: tkg01          #vsphere namespace
spec:
  distribution:
    version:  1.20.2        #Resolves to the latest v1.20 image
  topology:
    controlPlane:
      count: 1              #number of control plane nodes
      class: best-effort-small    #vmclass for control plane nodes
      storageClass: tanzu-storage-policy  #storageclass for control plane
    workers:
      count: 3                 #number of worker nodes
      class: best-effort-small  #vmclass for worker nodes
      storageClass: tanzu-storage-policy #storageclass for worker nodes

To set up a new Kubernetes cluster, it is as easy as running:

ubuntu@tanzu-cli:~/Tanzu$ kubectl apply -f tkg-cluster.yaml 
tanzukubernetescluster.run.tanzu.vmware.com/tkg01-cl02 created

We see in vSphere the cluster has been created
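Provisioning progress can also be followed from kubectl while the cluster comes up (a quick sketch using the names from my yaml file):

kubectl get tanzukubernetescluster tkg01-cl02 -n tkg01
kubectl describe tanzukubernetescluster tkg01-cl02 -n tkg01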

Now let’s create a deployment and expose it to the world:
I have a simple nginx web application:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-test
    run: nginx
  name: nginx-test
spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx-test
    template:
      metadata:
        labels:
          app: nginx-test
      spec:
        containers:
        - image: nginx:latest
          name: nginx

and a service of type LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-test
    run: nginx
  name: nginx-test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-test
  type: LoadBalancer
status:
  loadBalancer: {}

I first log in to my newly created TKG cluster tkg01-cl02 to deploy my test application in my development area:
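The login uses the same kubectl vsphere plugin as for the Supervisor cluster, but pointed at the TKG cluster and its namespace (a sketch reusing the Supervisor endpoint and user shown later in this post):

kubectl vsphere login --server=10.30.150.1 -u a_jimmy@int.rtsvl.se \
  --tanzu-kubernetes-cluster-namespace tkg01 \
  --tanzu-kubernetes-cluster-name tkg01-cl02
kubectl get nodes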

We see that we are logged in and that the control plane node and worker nodes are Ready and on version 1.20.2, as I specified in my TKG cluster yaml file.

Before we can deploy the application we need to set the permissions for our authenticated user:
The following kubectl command creates a ClusterRoleBinding that grants authenticated users access to run a privileged set of workloads using the default PSP vmware-system-privileged.

kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

Now let’s create the deployment and service: Since I have NSX-T integrated with vSphere 7 and enabled during the Workload enablement of Tanzu. NSX-T will create a Loadbalancer and expose it to the world on IP network 10.30.150.0/24.

We get the services in the cluster and see that we have an External IP set on the LoadBalancer for nginx-test:

ubuntu@tanzu-cli:~/Tanzu/nginx$ kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        23m
nginx-test   LoadBalancer   10.98.212.238   10.30.150.6   80:31971/TCP   6s
supervisor   ClusterIP      None            <none>        6443/TCP       23m

Let’s use a browser and look what we got:

Nice! We have our Web up and running in our Development TKG Tanzu Cluster 02.
Now let’s say we are satisfied with our development and would like to create the deployment in production. This is supposed to be placed on the SuperVisor Cluster in vSphere. Logging into this is performed with the following command:

kubectl vsphere login -u a_jimmy@int.rtsvl.se --server=10.30.150.1

Let’s get the nodes: We see the ESXi hosts as the workers and we have 3 Supervisor Control VMs.

Let’s deploy the application in production:

Now we verify that the pods are running, that the service is created, and that the External IP is set to 10.30.150.8.

Lastly let’s check with a browser again:

Awesome, we have our frontend web up and running in production on the Supervisor Cluster.
Looking in vSphere we also see the vSphere Pods created for the deployment.

I am excited about how Tanzu and vSphere are always improving to help me and my customers work in the hybrid cloud. I will continue posting new technical and product information about Tanzu.

Join me by following my blog directly.
Thank you.

Scheduled PostgreSQL backup of Cloud Director Embedded DB

In this post I want to describe how a backup of the PostgreSQL database for Cloud Director can be set up on a schedule.

In the VMware documentation there is only a description of how to take a manual backup of the Postgres DB. In normal operations it can be convenient to have this automated and scheduled.
I want to create a daily backup of the database and save it on the NFS Transfer Store, in order to be able to create an image backup of the NFS server.

To start with, I use an old tool called Cron that's included in the Cloud Director cell.
Cron runs its schedules from the directory /etc/cron.d/.
So in order to create a backup for Postgres we simply create a new file in that location, called: vcdpostgres_db_backup

In the file we add the following lines, which create a backup every day at 15:00; this can be customized to once every week or whatever is suitable.

#m  h  dom mon dow user     command
00 15  *   *   *   postgres  /opt/vmware/vpostgres/10/bin/pg_dump vcloud > /opt/vmware/vcloud-director/data/transfer/pgdb-backup/$(date +\%F)_vcloud_postgresdbdump.tgz
00 15  *   *   *   root      find /opt/vmware/vcloud-director/data/transfer/pgdb-backup/*_vcloud* -mtime +5 -type f -delete

The command runs as the Postgres user and creates a dump of the vcloud database to the location of /opt/vmware/vcloud-director/data/transfer/pgdb-backup.
It finally checks if there are any files that are older than 5 days and deletes them.
Be sure to check that the cron job works by tailing /var/log/cron.
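Two quick ways to verify it (a sketch; the manual run writes to a hypothetical test file name):

# Watch cron fire the job around 15:00:
tail -f /var/log/cron

# Or trigger a dump manually as the postgres user first:
sudo -u postgres bash -c "/opt/vmware/vpostgres/10/bin/pg_dump vcloud > /opt/vmware/vcloud-director/data/transfer/pgdb-backup/manual-test_vcloud_postgresdbdump.tgz"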

VMware Certified Master Specialist – Cloud Native 2020 – Passed

So last week 17 September 2020, I passed the VMware Certified Master Specialist Cloud Native 2020 exam.

I can agree with my colleagues who have also passed this exam that it is not an easy test; this was my second attempt.

For me the reason it was a bit hard is that I have a background in infrastructure, working with VMware products handling Software Defined Data Centers and NSX-T.
Understanding a whole new set of tools and ways of working as a Developer/DevOps engineer is challenging, but a new and very exciting arena for me.

I believe that the future of working with VMware products and also open-source communities is the way to go: not getting stuck with just legacy OSes and applications that were not created as cloud native apps, but a mixture of cloud native apps, legacy applications, and backend systems, all with the philosophy that lifecycle handling, tooling, automation, and orchestration are key elements to surviving in the fast-moving tech world we live in.

So, to give some help to you reading this in how you can walk the walk and talk the talk in getting your own VMware Certified Master Specialist Cloud Native badge, below are the links I followed during this chapter that has just begun.

Prerequisites:

CKA/CKAD
First and foremost it is a requirement to pass one of the exams for CKA, Certified Kubernetes Administrator or CKAD Certified Kubernetes Application Developer. I went with CKA since that’s my background.

Read and get to know the full documentation at Kubernetes.io.
Create an account at CNCF before taking the CKA/CKAD exam: https://www.cncf.io/certification/cka/

I took the course Certified Kubernetes Administrator (CKA) with Practice Tests at Udemy
https://www.udemy.com/share/101WmEAEMaeFtWTHwH/

Preparations for the VMware Certified Master Specialist Cloud Native 2020 exam:
The site that explains all needed to pass the exam: VMware

Read up on the Studyguide and learn each of the links in the document listed. https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/certification/vmw-ms-cloud-native-2019.pdf

I read some blogs to prepare and followed their very good advice:
https://itq.eu/knowledge/vmware-cloud-native-master-specialist-exam/
https://johannstander.com/2020/02/26/vmware-cloud-native-master-specialist/

I also did some HOL (Hands-on Labs) at VMware covering some of the tools the test asks about:
HOL Cloud Native Apps
HOL-2033-01-CNA

The test does not ask any questions regarding PKS Enterprise, contrary to what the exam description says, so disregard that. I hope VMware will change this because it creates confusion.
The certification asks questions regarding Docker and Docker build, such as how to create a Dockerfile in the correct order, etc.
There is also a whole bunch of questions on OPA (Open Policy Agent) with Rego, Pod Security Policies, conformance tests with Sonobuoy, monitoring and writing exporters with Prometheus, backup with Velero, and log forwarding with Fluent Bit.

I would also advise to create a Kubernetes cluster and test all of the mentioned apps, tools and functionality on your own to get to know how and what they do.

As a bonus go and learn Tanzu and about Kubernetes at the following sites:
https://www.modernapps.ninja/
https://kube.academy

Good Luck!

Automatic Failover of the VMware Cloud Director 10.1 Appliance

In this post I wanted to describe how a setup of Cloud Director 10.1 with the embedded PostgreSQL DB can be configured for automatic failover, as described in the VMware documentation; see link.

Starting with VMware Cloud Director 10.1, automatic failover functionality has been added for the roles related to the database embedded in the appliances. If for some reason the appliance holding the primary DB role of the PostgreSQL cluster fails, you would prefer that it fail over automatically, so you do not have to do it manually, as was required before the 10.1 release.

For some reason the failover mode is set to manual by default. With the release of Cloud Director 10.1 there is now also an Appliance API in VMware Cloud Director. See the VMware Cloud Director Appliance API 1.0 Schema Reference.

I have a setup of 3 Cloud Director appliances.
1 Primary and the minimum required number of 2 Standby.

With a browser against one of my cell appliances I check the status of the DB cluster

Starting up the Postman client against the Cloud Director Appliance API, I perform a GET to list all the nodes in my cluster. Below we notice that the failover mode is set to manual.
The command to run against the cell appliance API is:
https://cell-name:5480/api/1.0.0/nodes
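If you prefer curl over Postman, the same call looks roughly like this (assuming the appliance API accepts basic auth with the appliance root account):

curl -k -u root https://cell-name:5480/api/1.0.0/nodes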

So let’s change the mode to automatic.
By running the command according to the API Guide:
https://cell-name:5480/api/1.0.0/nodes/failover/automatic
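With curl, and the same authentication assumption as above, that would be:

curl -k -u root -X POST https://cell-name:5480/api/1.0.0/nodes/failover/automatic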

Then with a new GET as before we see that the mode is now set to automatic.

Verifying with the browser, we see the mode is changed in the UI as well.

This concludes this post on how to change the failover mode from manual to automatic.

How to put Cloud Director 10.1 Multi-cell appliances with embedded DB into Maintenance-mode.

In this short post I wanted to describe the procedure for putting your Cloud Director 10.1 appliances with the embedded PostgreSQL DB into maintenance mode – both for the VCD service, and for moving the primary DB role if the cell is a Primary cell in the DB cluster.

A reason for going into maintenance can be that you need to perform a planned upgrade or decommission a cell. If the appliance cell is holding the Primary PostgreSQL DB role, you also fail the primary role over to a Standby DB cell. Execute the following commands:

On all Cells that are members of the DB Cluster run the below command to put them in Maintenance mode:

/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --maintenance true

On a Cell that is DB Standby:

sudo -i -u postgres
/opt/vmware/vpostgres/current/bin/repmgr standby switchover -f /opt/vmware/vpostgres/current/etc/repmgr.conf --siblings-follow

Finally, remove all cells from Maintenance mode:

/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --maintenance false

Then you can access the UI again.

The wording on the VMware documentation site is a bit confusing on this point; that's the reason I wanted to explain it a bit more.
Link

NSX-T integration with vCloud Director 10

Hi!
In this post I will detail the process of what is needed in order to consume network resources from NSX-T Data Center. As of today, with vCloud Director 10 and NSX-T 2.5, there are restrictions, requirements, and design decisions related to both vCloud Director and NSX-T that must be kept in mind before deciding to go NSX-T only. Tomas Fojta has created a great feature comparison on his blog between NSX-V and NSX-T and what kind of functionality you get by choosing NSX-T today. There are some things that at the moment are not working with NSX-T from a vCloud Director perspective, and that is important to consider when planning and choosing NSX-T in your design. NSX-V is the release that has been around for a long time, and features that exist in that platform are not yet fully functional with NSX-T and vCloud Director, so keep that in mind.

Starting any deployment, there is a need to create a design. The image below displays how you could set up NSX-T and vCloud Director with SDDC components, with a separate Management cluster and a shared Edge and Compute cluster.

The Edge and Compute cluster is managed by its own vCenter Server. It is to this cluster that we will connect vCloud Director and NSX-T Manager, and also place the NSX-T Edge appliances hosting the T0 and T1 Gateways that provide tenant N/S traffic, routing functionality, and stateful services, e.g. Edge Firewall and NAT services. Tenant workloads will also reside in this cluster.
In the shared Edge and Compute cluster, vCloud Director will create the Provider vDC Resource Pool needed to consume the resources that the cluster provides (CPU, RAM, Storage, and NSX-T resources such as logical network segments and gateways).
Inside of the PvDC there will be tenant Organizations created, and for each Organization there can be one or many Organization Virtual Datacenters, OvDCs.
In order for the tenants' OvDCs to connect their vApp and virtual machine networks and have the traffic flow N/S, a T0 Gateway first needs to be created in NSX-T. It is to this T0 Gateway that OvDC tenants connect their T1 Gateways.

NOTE: The following link to VMware Documentation describes the process that is needed to prepare NSX-T.


I will go through the What to do Next process in this post.

So WHAT TO DO NEXT?
After you install vCloud Director, you:
– Register the NSX-T Manager
– Create a Geneve Network Pool that is Backed by NSX-T transport zone.
– Import the T0 Gateway, create an External Network and bind it to the pre-created T0 Gateway in vCD
– Create an OvDC Edge T1 Gateway and connect it to the External Network
– Create an OvDC Routed Network and connect it to the OvDC T1 Gateway
– Create a SNAT and DNAT rule for the External IP to the internal Virtual Machine Overlay Segment IP and test ping.
– Connect a vAPP Virtual Machine to the OvDC Routed Network

Register the NSX-T Manager

Registering the NSX-T Manager is done by logging into vCloud Director provider portal and going to vSphere Resources.

Create a Geneve Network Pool that is Backed by NSX-T transport zone.

Next we create a Network Pool that is backed by an NSX-T Geneve transport zone.

VMware docs link: Create a Network Pool Backed by an NSX-T Data Center Transport Zone

Import the T0 Gateway create an External Network and bind it to the pre-created T0 Gateway in vCD

In the External Network section we now create the External Network that is provided by the T0 Gateway created earlier in NSX-T. We set a name for the network, and also the configuration for the gateway and the static IP pool that is meant to be provided for the PvDC.
VMware docs link: Add an External Network That Is Backed by an NSX-T Data Center Tier-0 Logical Router

Create an OvDC Edge T1 Gateway and connect it to the External Network

We now create an OvDC Edge T1 gateway and connect it to the External Network T0 Gateway.
The NSX-T Data Center edge gateway provides a routed organization VDC network with connectivity to external networks and can provide services such as network address translation, and firewall.
VMware docs link: Add an NSX-T Data Center Edge Gateway

Create an OvDC Routed Network and connect it to the OvDC T1 Gateway

Now, logging in as a Tenant Organization administrator, we can see the OvDC, and here we can create a routed network and connect it to the OvDC T1 Gateway edge. We can also go to the NSX-T Manager UI and check that the T1 Gateway has the new segment created and attached.
VMware docs link: Add a Routed Organization Virtual Data Center Network

Create a SNAT and DNAT rule for the External IP to the internal Virtual Machine Overlay Segment IP and test ping

Next we can create Source NAT and Destination NAT rules for the External IP we have received and forward traffic to and from the test VM called Ubuntu_Test01 in the OvDC.
VMware docs link: Add an SNAT or a DNAT Rule to an NSX-T Edge Gateway

Going forward, VMware will release more and more NSX-T and vCloud Director features. I am hoping for more functionality for creating load balancers and VPNs from the UI in vCD.

Have a nice Chanukah and Xmas.
/Jimmy….

NSX-T 2.5 Custom Monitoring Dashboard

In this post I wanted to explain something that is not well documented by VMware today: how to create a custom dashboard in NSX-T based on widgets.

In the NSX-T Manager UI there are different monitoring dashboards out of the box that one can view and get information from. These dashboards display details about system status, networking and security, and compliance reporting. You will find the dashboards by logging into NSX-T Manager and going to Home -> Monitoring Dashboards.

Here we have some already system defined dashboards:

  • System:
    • Status of the NSX Manager cluster and resource (CPU, memory, disk) consumption.
    • NSX-T fabric, including host and edge transport nodes, transport zones, and compute managers.
    • NSX-T backups, if configured. It is strongly recommended that you configure scheduled backups that are stored remotely to an SFTP site.
    • Status of endpoint protection deployment.
  • Networking & Security
    • Status of groups and security policies
    • Status of Tier-0 and Tier-1 gateways.
    • Status of network segments
    • Status of the load balancer VMs.
    • Status of VPN, virtual private networks.
  • Advanced Networking & Security
    • Status of the load balancer services, load balancer virtual servers, and load balancer server pools
    • Status of firewall, and shows the number of policies, rules, and exclusions list members.
    • Status of virtual private networks and the number of IPSec and L2 VPN sessions open
    • Shows the status of logical switches and logical ports, including both VM and container ports.
  • Compliance Report
    • Displays information regarding if objects are in compliance with set values.
  • Custom
    • Empty dashboard

In the NSX-T REST API Guide there is a section called Management Plane API: Dashboard, which contains the information needed to create a custom dashboard, called a VIEW, along with a widget configuration.
Link to NSX-T 2.5 API Reference Guide

You will need Postman or any other API client that can GET, POST, and PUT information to the NSX-T Manager. Below is my first GET command, which lists all the views that are in place and already created by the system.

GET https://nsx-t-manager/policy/api/v1/ui-views/
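The same call with curl would look roughly like this (assuming basic auth with the NSX-T admin account):

curl -k -u admin https://nsx-t-manager/policy/api/v1/ui-views/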

By looking in the API Guide you can create the first POST command, which we send to the NSX-T Manager to create a new view with some widgets:
POST https://nsx-t-manager/policy/api/v1/ui-views/

{
  "display_name": "My Own Custom View",
  "weight": 101,
  "shared": true,
  "description": "My own created custom view, with all my favorite widgets and monitoring endpoints",
  "widgets": [
    {
      "label": {
        "text": "Groups"
      },
      "widget_id": "DonutConfiguration_Groups-Status",
      "weight": 1000
    },
    {
      "label": {
        "text": "Logical Switches Admin Status",
        "hover": false
      },
      "widget_id": "StatsConfiguration_Switching-Logical-Switches-Admin-Status",
      "weight": 9531,
      "alignment": "LEFT",
      "separator": false
    },
    {
      "label": {
        "text": "Tier-1 Gateways"
      },
      "widget_id": "DonutConfiguration_Networks-Status",
      "weight": 3020
    }
  ]
}

After getting an OK from Postman, we can look in the NSX-T UI once again and see that we have a new dashboard in the dropdown list called My Own Custom View.

Clicking on it allows us to see the custom widgets that I chose in my API call to add to the dashboard view.

If you for some reason need to delete a widget from the dashboard, you first do a GET call to list the widget IDs for the view, and then you can do a DELETE call to remove that widget from the view.

So list all views GET https://nsx-t-manager/policy/api/v1/ui-views/

We see that the ID for my custom view is View_7a09f510-4d8f-4132-b371-337408004096.
Doing a GET call against the view returns more information about it: GET https://nsx-t-manager/policy/api/v1/ui-views/View_7a09f510-4d8f-4132-b371-337408004096


We can now do a DELETE call and remove the widget configuration for Groups, since that widget was of no interest. Note that you need to add /widgetconfigurations/<widget_id> after the view ID:
DELETE https://nsx-t-manager/policy/api/v1/ui-views/View_7a09f510-4d8f-4132-b371-337408004096/widgetconfigurations/DonutConfiguration_Groups-Status

Refreshing the NSX-T UI we see the widget is now removed.

Micro-Segmentation and Security Design Planning with vRealize Network Insight, vRNI

This blogpost has been prepared to describe Micro-Segmentation and security conceptual design planning utilizing VMware vRealize Network Insight, vRNI. It can act as support for anyone who wishes to know how to think about and implement micro-segmentation in a VMware based environment, either with NSX-V or NSX-T. Some of the text in this post is borrowed and referenced from an official document by VMware: Data Center Security and Networking Assessment.

My design is based upon the findings, utilizing the network assessment tool performed by VMware vRealize Network Insight.

Details:

You can deploy a Micro-Segmentation security architecture, bearing in mind to:

  • Deploy firewalls to protect the traffic flowing East-West (e.g., from server to server). The vast majority of the network traffic in a VMware based SDDC is East-West. Unprotected East-West traffic seriously compromises data center security by allowing threats to easily spread throughout the data center.
  • Implement a solution that can filter all traffic within the virtualized part of the data center, as well as firewall the traffic between systems on the same Layer 2 segment (VLAN). My analysis showed a vast majority of traffic is VM-to-VM, and a significant amount is between systems on the same VLAN.

About VMware NSX and vRealize Network Insight

Because of its unique position inside the hypervisor layer, VMware NSX is able to have deep visibility into traffic patterns on the network – even when this traffic flows entirely in the virtualized part of the data center. Combining this intelligence with advanced analytics, vRNI Visibility and Operations Platform provides insight for IT managers, enabling them to make better decisions on what and how to protect critical assets.

Security in the Data Center Today

The standard approach to securing data centers has emphasized strong perimeter protection to keep threats on the outside of the network. However, this model is ineffective for handling new types of threats – including advanced persistent threats and coordinated attacks. What's needed is a better model for data center security: one that assumes threats can be anywhere and probably are everywhere, then acts accordingly. Micro-Segmentation, powered by VMware NSX, not only adopts such an approach, but also delivers the operational agility of network virtualization that is foundational to a modern software defined data center.

Threats to Today’s Data Centers

Cyber threats today are coordinated attacks that often include months of reconnaissance, vulnerability exploits, and “sleeper” malware agents that can lie dormant until activated by remote control. Despite increasing types of protection at the edge of data center networks – including advanced firewalls, intrusion prevention systems, and network based malware detection – attacks are succeeding in penetrating the perimeter, and breaches continue to occur.

The primary issue is that once an attack successfully gets past the data center perimeter, there are few lateral controls to prevent threats from traversing inside the network. The best way to solve this is to adopt a stricter, micro-granular security model with the ability to tie security to individual workloads and the agility to provision policies automatically.

The Solution: VMware NSX & Micro-Segmentation

VMware NSX is a network virtualization platform that for the first time makes micro-segmentation economically and operationally feasible. NSX provides the networking and security foundation for the software defined data center (SDDC), enabling the three key functions of micro-segmentation: isolation, segmentation, and segmentation with advanced services. Businesses gain key benefits with micro-segmentation:

  • Network security inside the data center: flexible security policies aligned to virtual network, VM, OS type, dynamic security tag, and more, for granularity of security down to the virtual NIC
  • Automated deployment for data center agility: security policies are applied when a VM spins up, are moved when a VM is migrated, and are removed when a VM is de-provisioned – no more stale firewall rules.
  • Integration with leading networking and security infrastructure: NSX is the platform enabling an ecosystem of partners to integrate – adapting to constantly changing conditions in the data center to provide enhanced security. Best of all, NSX runs on existing data center networking infrastructure.

So I started out by drawing up a conceptual design of the test environment.

Conceptual Layout of Test Environment

The conceptual layout included some sample Applications and Server communication in the Test environment and the systems that were added in is to show just how multifaceted an environment can be.

Figure 1. Conceptual Layout of Environment

  • We have the System1 system that needs access to the Database server, DB.
  • We have the System2 system that needs access to the Shared Infrastructure Services.
  • We have a Jumphost that connects to the System1 server and the System2 server.
  • We are going to connect all the systems to the organization's Shared Infrastructure Services: Active Directory, DNS, NTP, SCCM, SCOM, MDM and RDGW.

Security Framework

Provide a Zero Trust security model using Micro-segmentation around the organization's data center applications. Facilitate only the necessary communications, both to the applications and between the components of the applications.

The security framework is described below:

  • The blacklist rules at the top will block communication from certain IP addresses from accessing the SDDC environment.
  • Allow bi-directional communication between the Shared Infrastructure Services and all applications that require access to those services
  • Deny traffic from one environment (TEST) from communicating to another environment (PROD).
  • Allow SYSTEM1 Application to communicate with DB Server running on the default ports.
  • Allow DB Server to communicate with SYSTEM1 Application Server
  • Allow All Clients to communicate with SYSTEM1 and SYSTEM2 Servers
  • Block any unknown communications except the actual application traffic to and from the SYSTEM1 application.
  • Block any unknown communications except the actual application traffic and restrict access to the SYSTEM2 application.
  • Allow the rest of the traffic until Microsegmentation has been performed in the whole environment, then change to Deny the rest of the traffic.

The goal of the security framework is to deny traffic based on certain criteria, explicitly permit what is required, and allow by default until micro-segmentation has been performed throughout the whole environment. The firewall rules to deny traffic from environment to environment and organization to organization are required. For example, if the deny Application-to-Application rule is missing, an app server from SYSTEM1 can communicate with an application server from SYSTEM2 by hitting the allow-all-traffic-to-SYSTEM2-servers rule.

There are different permutations and multiple scenarios to handle, so there are many potential firewall rules that are not known up front. Applications can also be running on non-standard ports. In that case, you can manually open up the firewall rules that are needed and deny those that are not.

Overall Security Design Decisions

In order to be modular and scalable when creating firewall rules, security groups will be based on NSX security tags on the VMs inside the SDDC, and IP Sets will be created for items outside the data center. Firewall rules will then be applied using these security groups. Each VM can be tagged with at least 3 security tags, with 1 of them in each category.

The security tags are classified into 3 categories and each category has a prefix to identify it:

The names illustrated below are a small subset of the actual names to exemplify the NSX security design.

  • Environment Management
    • ST-TEST
    • ST-PROD
  • Organization
    • FG-A
    • FG-B
    • FG-C
  • Tier
    • ST-PROD-INFRA-AD
    • ST-PROD-INFRA-SCOM
    • ST-PROD-INFRA-MDM
    • ST-PROD-INFRA-SCCM
    • ST-PROD-INFRA-FS
    • ST-PROD-INFRA-NTP
    • ST-PROD-INFRA-RDGW
    • ST-TEST-APP-SYSTEM1
    • ST-TEST-DB

For the tier category, a VM can belong to multiple tiers. For example, a VM can be tagged with all 3 security tags in the tier category.

For example, a VM can have the following tags:

  • ST-TEST
  • FG-A
  • ST-TEST-APP-SYSTEM1

This VM can immediately be identified as a TEST VM belonging to the FG-A Organization and the SYSTEM1 application. Using such classification, you could create your security groups accordingly.

To create micro-segmentation for systems outside the data centers, IP Sets can be used. IP Sets may contain any combination of individual IP addresses, IP ranges, and/or subnets to be used as sources and destinations for firewall rules, or as members of security groups.

Below lists down some of the security groups:

  • SG-PROD – Include VMs with a tag that contain ST-PROD
  • SG-TEST – Include VMs with a tag that contain ST-TEST
  • SG-FG-A – Include VMs with a tag that contain ST-FG-A
  • SG-FG-B – Include VMs with a tag that contain ST-FG-B
  • SG-PROD-INFRA-ALL – Include all Infra VMs that are AD/DNS servers
  • SG-PROD-INFRA-AD – include IP Set of VMs that are AD/DNS servers
  • SG-PROD-INFRA-NTP – include IP Set of NTP servers or VMs hosting NTP service
  • SG-PROD-INFRA-SCOM – include IP Set of SCOM servers or VMs hosting SCOM services
  • SG-PROD-INFRA-SCCM – include IP Set of SCCM servers or VMs hosting SCCM services
  • SG-PROD-INFRA-MDM – include IP Set of SNOW servers or VMs hosting SNOW services
  • SG-PROD-INFRA-RDGW – include IP Set of RDGW servers or VMs hosting RDGW service
  • SG-PROD-INFRA-FS – include IP Set of FS servers or VMs hosting FS services
  • SG-TEST-APP-SYSTEM1 – Include VMs that belong to the SYSTEM1 application
  • SG-TEST-DB – Include the DB VMs that belong to the application
  • SG-KLIENT-ALL – Include the IP Sets for all external clients
  • SG-WindowsServers – Include VMs whose OS starts with Microsoft Windows Server
  • SG-LinuxServers – Include VMs whose OS contains CentOS, Red Hat etc

A service is a protocol-port combination, and a service group is a group of services or other service groups. Below are some of the NSX service groups and services that can be created and used in combination with security groups when creating firewall rules in NSX:

SERVICE GROUP NAME    SERVICE GROUP CONTAINS
SVG-INFRA-AD          SV-INFRA-FS, SV-INFRA-NTP, SV-INFRA-DNS
SVG-WEBPORTS          http/https 80/443
SV-INFRA-FS           445
SV-INFRA-NTP          123
SV-INFRA-DNS          53
SV-SQL-1433           tcp/udp 1433

SYSTEM1 Analysis and Rule Building

Requirements for SYSTEM1

  • Allow SYSTEM1 Application to communicate with DB Server running on the default ports.
  • Allow DB Server to communicate with SYSTEM1 Application Server.
  • Allow Clients to communicate with SYSTEM1 Servers.
  • Block any unknown communications except the actual application traffic to and from the SYSTEM1 application.

To start building firewall rules, vRNI is needed. To 'Plan Security' for the VMs, utilize vRNI and start by examining the flows of the SYSTEM1 VM to/from other VMs.

Analysis of flows is done by selecting a scope and segmenting the flows accordingly, based on entities such as VLAN/VXLAN, Security Group, Application, Tier, Folder, Subnet, Cluster, virtual machine (VM), Port, Security Tag, and IPSet. The micro-segmentation dashboard provides the analysis details with a topology diagram. This dashboard consists of the following sections:

  • Micro-Segments: This widget provides the diagram for topology planning. You can select the type of group and flows. Based on your inputs, you can view the corresponding topology planning diagram.
  • Traffic Distribution: This widget provides the details of the traffic distribution in bytes.
  • Top Ports by Bytes: This widget lists the top 100 ports that record the highest traffic. The metrics for the flow count and the flow volume are provided. You can view the flows for a particular port by clicking the count of flows corresponding to that port.

vRNI displays all flows that are inbound, outbound, and bi-directional going to the SYSTEM1 server.

By selecting the SYSTEM1 wedge in the circle it is possible to go deeper and see the actual flows between the application and other servers and services.

Detailed in this section are the services the SYSTEM1 VM is using, the number of external services that are accessed (49), the number of flows going to/from it (60), and also the recommended firewall rules (14) that can be created to micro-segment the server. vRNI recommends 14 rules to accommodate micro-segmentation for the SYSTEM1 application.

The option exists to export all the recommended rules as CSV for further processing, manually or by automation if needed.

The exported table is listed below for the SYSTEM1 recommended firewall rules.

Source | Destination | Services | Protocols | Action | Related Flows | Type
SYSTEM1 | Others_Internet | 53 [dns], 137 [netbios-ns], 138 [netbios-dgm], 389 [ldap], 5355 | UDP | ALLOW | 9 | Virtual
Others_Internet | SYSTEM1 | 80 [http], 443 [https] | TCP | ALLOW | 4 | Virtual
Others_Internet | SYSTEM1 | 443 [https] | TCP | ALLOW | 1 | Virtual
Others_Internet | SYSTEM1 | 123 [ntp] | UDP | ALLOW | 2 | Virtual
SYSTEM1 | Others_Internet | 80 [http], 88 [kerberos], 135 [epmap], 389 [ldap], 443 [https], 445 [microsoft-ds], 1433 [ms-sql-server], 3268 [msft-gc], 5723, 8530, 10000-19999, 40000-49999, 49155, 49158 | TCP | ALLOW | 34 | Virtual
SYSTEM1 | Others_Internet | 80 [http] | TCP | ALLOW | 1 | Virtual

By continuing the procedure detailed above for the remaining servers, applications, shared infrastructure services, and environments with vRNI – going through each application's traffic flows and exporting the recommended firewall rules – micro-segmentation can be implemented with NSX.

A sample build of firewall rules was conceptually created based on what was gathered during the collection and processing of the data

Firewall Rules: The table below shows the firewall rules based on the security framework and the requirements described above:

Next Steps

When the structure is in order, it is possible to start building the Security Groups, Security Tags, Services, and Service Groups in NSX. The next step, when creating rules and all the objects needed to accomplish micro-segmentation, is to go through and check the communications with the servers and applications and verify they are all still working correctly per the given requirements.

I would also like to show a table from VMware regarding segmentation strategies. Make sure to start small and work your way through your environments and systems.

Start with MacroSegmentation: find out what environments can/cannot communicate with other environments.

When that is completed, set up MesoSegmentation: go through which applications within your environments can/cannot communicate with other applications inside the same or outside the environment.

And lastly do the MicroSegmentation: go through which systems inside the application can/cannot communicate with other systems inside the applications and inside the environments. Inception thinking is needed 🙂

A good idea when drawing out the different environments, applications, and systems within each application is to build a Segmentation Flow Chart. With it you get a picture of how things are connected and interact with each other, which also makes it much easier to establish what can/cannot communicate with what.

A micro-segmentation approach powered by VMware NSX can address the inadequacy of East-West security controls that affects most data centers. The vRNI visibility and operations software helps jumpstart the journey to micro-segmentation by providing actionable insights into how workloads in a data center communicate, so you can plan the segmentation accordingly.

Thanks for this time! /Jimmy

vExpert 2019!

Happy to update that I’m now a 2nd Time vExpert 2019

I also would like to congratulate all the other returning vExpert NSX members and welcome to all new members joining for the 1st time!

Link to the Announcements!

https://blogs.vmware.com/vexpert/2019/03/07/vexpert-2019-award-announcement/
