Automatic Failover of the VMware Cloud Director 10.1 Appliance

In this post I want to describe how a VMware Cloud Director 10.1 setup with the embedded PostgreSQL DB can be configured for automatic failover, as described in the VMware documentation (see link).

Starting with VMware Cloud Director 10.1, automatic failover functionality has been added for the database roles in the appliances' embedded PostgreSQL cluster. If the appliance holding the primary DB role of the PostgreSQL cluster fails, you would prefer that the role fails over automatically so you do not have to do it by hand, as was required before the 10.1 release.

For some reason the failover mode is set to manual by default. With the release of Cloud Director 10.1 there is now also an Appliance API in VMware Cloud Director; see the VMware Cloud Director Appliance API 1.0 Schema Reference.

I have a setup of 3 Cloud Director appliances.
1 Primary and the minimum required 2 Standby cells.

Using a browser against the management UI of one of my cell appliances, I check the status of the DB cluster.

Starting up the Postman client against the Cloud Director Appliance API, I perform a GET to list all the nodes in my cluster. Below we notice that the failover mode is set to manual.
The command to run against the cell appliance API is:
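As a hedged sketch outside Postman, the same GET can be issued with curl. The host name below is a placeholder; the 5480 port and /api/1.0.0/nodes path are my reading of the Appliance API 1.0 reference, so verify them against the schema:

```shell
# Placeholder appliance host; substitute one of your own cells.
VCD_APPLIANCE="vcd-cell-01.example.com"
NODES_URL="https://${VCD_APPLIANCE}:5480/api/1.0.0/nodes"
# Authenticates as the appliance root user; -k skips certificate
# validation for the self-signed appliance certificate:
# curl -k -u root "${NODES_URL}"
echo "${NODES_URL}"
```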

So let's change the mode to automatic by running the command according to the API guide:
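Sketched with curl, the mode change is a call against the nodes failover resource. The method and exact path below are assumptions from my reading of the Appliance API schema reference; verify them for your build:

```shell
VCD_APPLIANCE="vcd-cell-01.example.com"   # placeholder cell host
# Assumed endpoint; check the Appliance API 1.0 schema for the
# exact path and HTTP method in your version:
FAILOVER_URL="https://${VCD_APPLIANCE}:5480/api/1.0.0/nodes/failover/automatic"
# curl -k -u root -X POST "${FAILOVER_URL}"
echo "${FAILOVER_URL}"
```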

Then a new GET, as before, shows that the mode is now set to automatic.

Verifying in the browser, we see that the mode has changed in the UI as well.

This concludes this post on how to change the failover mode of the database roles from manual to automatic.

How to put Cloud Director 10.1 Multi-cell appliances with embedded DB into Maintenance-mode.

In this short post I wanted to describe a procedure for putting your Cloud Director 10.1 appliances with the embedded PostgreSQL DB into maintenance mode.
This covers both the VCD service and moving the DB primary role if the cell is the primary cell in the DB cluster.

A reason for going into maintenance can be that you need to perform a planned upgrade or decommission a cell. If the appliance cell holds the primary PostgreSQL DB role, you also fail the primary role over to a standby DB cell and execute the following commands:

On all Cells that are members of the DB Cluster run the below command to put them in Maintenance mode:

/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --maintenance true

On a Cell that is DB Standby:

sudo -i -u postgres
/opt/vmware/vpostgres/current/bin/repmgr standby switchover -f /opt/vmware/vpostgres/current/etc/repmgr.conf --siblings-follow
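After the switchover you can verify which cell now holds the primary role with repmgr's standard `cluster show` subcommand, run as the postgres user. This sketch only assembles the command (the paths match the appliance layout used above):

```shell
# Run on any DB cluster member as the postgres user.
REPMGR="/opt/vmware/vpostgres/current/bin/repmgr"
CONF="/opt/vmware/vpostgres/current/etc/repmgr.conf"
CMD="${REPMGR} cluster show -f ${CONF}"
# "${REPMGR}" cluster show -f "${CONF}"
echo "${CMD}"
```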

Finally, remove all cells from Maintenance mode:

/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --maintenance false

Then you can access the UI again.

The wording on the VMware documentation site is a bit confusing here, which is why I wanted to explain it in more detail.

NSX-T integration with vCloud Director 10

In this post I will detail what is needed in order to consume network resources from NSX-T Data Center. As of today, with vCloud Director 10 and NSX-T 2.5, there are restrictions, requirements and design decisions related to both vCloud Director and NSX-T that must be kept in mind before deciding to go NSX-T only. Tomas Fojta has created a great feature comparison on his blog between NSX-V and NSX-T, showing what functionality you can get by choosing NSX-T today. Some things are, from a vCloud Director perspective, not yet working with NSX-T, which is important to consider when planning a design around NSX-T. NSX-V has been around for a long time, and features that exist in that platform are not yet fully functional with NSX-T and vCloud Director, so keep that in mind.

Starting any deployment there is a need to create a design. Below image displays how you could setup NSX-T and vCloud Director with SDDC components with a separate Management and shared Edge and Compute Cluster.

The Edge and Compute cluster is managed by its own vCenter server. It is to this cluster that we will connect vCloud Director and NSX-T Manager, and also place the NSX-T Edge appliances hosting the T0 and T1 gateways that provide tenant N/S traffic, routing functionality and stateful services, e.g. edge firewall and NAT services. Tenant workloads will also reside in this cluster.
In the shared Edge and Compute cluster vCloud Director will create the Provider vDC Resource Pool needed to consume the resources that the cluster provides. (CPU, RAM, Storage, NSX-T Resources (Logical Network Segments, Gateways etc.)).
Inside of the PvDC there will be Tenant Organizations created and for each Organization there can be one or many Organisation Virtual Datacenters, OvDC.
In order for the tenants' OvDCs to connect their vApp and virtual machine networks and have traffic flow N/S, a T0 Gateway first needs to be created in NSX-T. It is to this T0 Gateway that OvDC tenants connect their T1 Gateways.

NOTE: The following link to VMware Documentation describes the process that is needed to prepare NSX-T.

I will go through the What to do Next process in this post.

After you install vCloud Director, you:
– Register the NSX-T Manager
– Create a Geneve Network Pool that is Backed by NSX-T transport zone.
– Import the T0 Gateway: create an External Network and bind it to the pre-created T0 Gateway in vCD
– Create an OvDC Edge T1 Gateway and connect it to the External Network
– Create an OvDC Routed Network and connect it to the OvDC T1 Gateway
– Create a SNAT and DNAT rule for the External IP to the internal Virtual Machine Overlay Segment IP and test ping.
– Connect a vAPP Virtual Machine to the OvDC Routed Network

Register the NSX-T Manager

Registering the NSX-T Manager is done by logging into vCloud Director provider portal and going to vSphere Resources.
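Registration can also be scripted against the vCD admin extension API instead of the provider portal. The endpoint path, content type, and XML element names below are assumptions from my reading of the vCD API reference, so verify them against the schema for your version:

```shell
VCD_HOST="vcd.example.com"                         # placeholder cell
NSXT_MANAGER_URL="https://nsx-t-manager.example.com"
# Assumed payload shape for the legacy admin extension API:
PAYLOAD="<NsxTManager xmlns=\"http://www.vmware.com/vcloud/extension/v1.5\">
  <Name>nsxt-mgr-01</Name>
  <Url>${NSXT_MANAGER_URL}</Url>
  <Username>admin</Username>
  <Password>changeme</Password>
</NsxTManager>"
# With a valid session token in TOKEN:
# curl -k -H "x-vcloud-authorization: ${TOKEN}" \
#   -H "Content-Type: application/vnd.vmware.admin.nsxTmanager+xml" \
#   -d "${PAYLOAD}" "https://${VCD_HOST}/api/admin/extension/nsxtManagers"
echo "${PAYLOAD}"
```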

Create a Geneve Network Pool that is Backed by NSX-T transport zone.

Next we create a Network Pool that is backed by an NSX-T Geneve transport zone.

VMware docs link: Create a Network Pool Backed by an NSX-T Data Center Transport Zone

Import the T0 Gateway: create an External Network and bind it to the pre-created T0 Gateway in vCD

In the External Network section we now create the External Network that is provided by the T0 Gateway created earlier in NSX-T. We set a name for the network, and also the configuration for the gateway and the static pool that is meant to be provided to the PvDC.
VMware docs link: Add an External Network That Is Backed by an NSX-T Data Center Tier-0 Logical Router

Create an OvDC Edge T1 Gateway and connect it to the External Network

We now create an OvDC Edge T1 gateway and connect it to the External Network T0 Gateway.
The NSX-T Data Center edge gateway provides a routed organization VDC network with connectivity to external networks and can provide services such as network address translation, and firewall.
VMware docs link: Add an NSX-T Data Center Edge Gateway

Create an OvDC Routed Network and connect it to the OvDC T1 Gateway

Now logging in as a Tenant Organization administrator we can see the OvDC and here we can create a routed network and connect it to the OvDC T1 Gateway edge. We may also go to NSX-T Manager UI and check that the T1 Gateway has got the new Segment created and attached.
VMware docs link: Add a Routed Organization Virtual Data Center Network

Create a SNAT and DNAT rule for the External IP to the internal Virtual Machine Overlay Segment IP and test ping

Next we can create Source NAT and Destination NAT rules for the External IP we have received and forward traffic to and from the test VM called Ubuntu_Test01 in the OvDC.
VMware docs link: Add an SNAT or a DNAT Rule to an NSX-T Edge Gateway

Going forward VMware will release more and more NSX-T and vCloud Director features. I am hoping for more functionality regarding creating Load Balancers and VPN from the UI in vCD.

Have a nice Channukah and Xmas.

NSX-T 2.5 Custom Monitoring Dashboard

In this post I wanted to explain something that is not well documented by VMware today. The topic is how to create a custom, widget-based dashboard in NSX-T.

In the NSX-T Manager UI there are different monitoring dashboards out of the box that one can view and get information from. These dashboards display details about system status, networking and security, and compliance reporting. You will find the dashboards by logging into NSX-T Manager and going to Home -> Monitoring Dashboards.

Here we have the system-defined dashboards:

  • System:
    • Status of the NSX Manager cluster and resource (CPU, memory, disk) consumption.
    • NSX-T fabric, including host and edge transport nodes, transport zones, and compute managers.
    • NSX-T backups, if configured. It is strongly recommended that you configure scheduled backups that are stored remotely to an SFTP site.
    • Status of endpoint protection deployment.
  • Networking & Security
    • Status of groups and security policies
    • Status of Tier-0 and Tier-1 gateways.
    • Status of network segments
    • Status of the load balancer VMs.
    • Status of VPNs, virtual private networks.
  • Advanced Networking & Security
    • Status of the load balancer services, load balancer virtual servers, and load balancer server pools
    • Status of firewall, and shows the number of policies, rules, and exclusions list members.
    • Status of virtual private networks and the number of IPSec and L2 VPN sessions open
    • Shows the status of logical switches and logical ports, including both VM and container ports.
  • Compliance Report
    • Displays information regarding if objects are in compliance with set values.
  • Custom
    • Empty dashboard

In the NSX-T REST API Guide there is a section called Management Plane API: Dashboard that contains the information needed to create a custom dashboard (called a View in the API) along with a widget configuration.
Link to NSX-T 2.5 API Reference Guide

You will need Postman or any other API Client that can GET, POST and PUT information to the NSX-T Manager. Below is my first GET command that lists all the Views that are in place and already created by the system.

GET https://nsx-t-manager/policy/api/v1/ui-views/

By looking in the API Guide you can create the first POST command that we will send to NSX-T Manager to create a new view with some widgets
POST https://nsx-t-manager/policy/api/v1/ui-views/

{
  "display_name": "My Own Custom View",
  "weight": 101,
  "shared": true,
  "description": "My own created custom view, with all my favorite widgets and monitoring endpoints",
  "widgets": [
    {
      "label": { "text": "Groups" },
      "widget_id": "DonutConfiguration_Groups-Status",
      "weight": 1000
    },
    {
      "label": { "text": "Logical Switches Admin Status", "hover": false },
      "widget_id": "StatsConfiguration_Switching-Logical-Switches-Admin-Status",
      "weight": 9531,
      "alignment": "LEFT",
      "separator": false
    },
    {
      "label": { "text": "Tier-1 Gateways" },
      "widget_id": "DonutConfiguration_Networks-Status",
      "weight": 3020
    }
  ]
}
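Outside Postman, the same POST can be issued with curl. Host and credentials are placeholders, and the payload here is a trimmed version of the view body above with a single widget:

```shell
NSX_MANAGER="nsx-t-manager.example.com"   # placeholder manager FQDN
PAYLOAD='{
  "display_name": "My Own Custom View",
  "weight": 101,
  "shared": true,
  "widgets": [
    { "widget_id": "DonutConfiguration_Groups-Status", "weight": 1000,
      "label": { "text": "Groups" } }
  ]
}'
# curl -k -u admin -H "Content-Type: application/json" \
#   -d "${PAYLOAD}" "https://${NSX_MANAGER}/policy/api/v1/ui-views/"
echo "${PAYLOAD}"
```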

After getting an OK from Postman, we can look in the NSX-T UI once again and see that we have a new dashboard in the dropdown list called My Own Custom View.

Clicking on it will show the custom widgets that I chose in my API call to add to the dashboard view.

If you for some reason need to delete a widget from the dashboard, you first do a GET call to list the widget IDs for the view, and then a DELETE call to remove that widget from the view.

So, to list all views: GET https://nsx-t-manager/policy/api/v1/ui-views/

We see the ID for my custom view is View_7a09f510-4d8f-4132-b371-337408004096
So a GET call against the view will return more information about it: GET https://nsx-t-manager/policy/api/v1/ui-views/View_7a09f510-4d8f-4132-b371-337408004096

We can now do a DELETE call and remove the widget configuration for Groups, since that widget was of no interest. Note that you will need to append /widgetconfigurations/<widget_id> after the view_id.
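As a sketch, the DELETE URL is built from the view id and the widget id. The widget id below is illustrative; list the real ids with a GET on the view first:

```shell
NSX_MANAGER="nsx-t-manager.example.com"   # placeholder manager FQDN
VIEW_ID="View_7a09f510-4d8f-4132-b371-337408004096"
WIDGET_ID="DonutConfiguration_Groups-Status"   # hypothetical id
DELETE_URL="https://${NSX_MANAGER}/policy/api/v1/ui-views/${VIEW_ID}/widgetconfigurations/${WIDGET_ID}"
# curl -k -u admin -X DELETE "${DELETE_URL}"
echo "${DELETE_URL}"
```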

Refreshing the NSX-T UI we see the widget is now removed.

Micro-Segmentation and Security Design Planning with vRealize Network Insight, vRNI

This blog post describes a conceptual micro-segmentation and security design utilizing VMware vRealize Network Insight, vRNI. It can act as support for anyone who wants to know how to approach and implement micro-segmentation in a VMware-based environment, with either NSX-V or NSX-T. Some of the text in this post is borrowed and referenced from an official VMware document: Data Center Security and Networking Assessment.

My design is based upon the findings from the network assessment performed with VMware vRealize Network Insight.


You can deploy a Micro-Segmentation security architecture, bearing in mind to:

  • Deploy firewalls to protect the traffic flowing East-West (e.g., from server to server). The vast majority of the network traffic in a VMware-based SDDC is East-West based. Unprotected East-West traffic seriously compromises data center security by allowing threats to easily spread throughout the data center.
  • Implement a solution that can filter all traffic within the virtualized part of the data center, as well as firewall the traffic between systems on the same Layer 2 segment (VLAN). My analysis showed a vast majority of traffic is VM-to-VM, and a significant amount is between systems on the same VLAN.

About VMware NSX and vRealize Network Insight

Because of its unique position inside the hypervisor layer, VMware NSX is able to have deep visibility into traffic patterns on the network – even when this traffic flows entirely in the virtualized part of the data center. Combining this intelligence with advanced analytics, vRNI Visibility and Operations Platform provides insight for IT managers, enabling them to make better decisions on what and how to protect critical assets.

Security in the Data Center Today

The standard approach to securing data centers has emphasized strong perimeter protection to keep threats on the outside of the network. However, this model is ineffective for handling new types of threats – including advanced persistent threats and coordinated attacks. What’s needed is a better model for data center security: one that assumes threats can be anywhere and probably are everywhere, then acts accordingly. Micro-Segmentation, powered by VMware NSX, not only adopts such an approach, but also delivers the operational agility of network virtualization that is foundational to a modern software defined data center.

Threats to Today’s Data Centers

Cyber threats today are coordinated attacks that often include months of reconnaissance, vulnerability exploits, and “sleeper” malware agents that can lie dormant until activated by remote control. Despite increasing types of protection at the edge of data center networks – including advanced firewalls, intrusion prevention systems, and network based malware detection – attacks are succeeding in penetrating the perimeter, and breaches continue to occur.

The primary issue is that once an attack successfully gets past the data center perimeter, there are few lateral controls to prevent threats from traversing inside the network. The best way to solve this is to adopt a stricter, micro granular security model with the ability to tie security to individual workloads and the agility to provision policies automatically.

The Solution: VMware NSX & Micro-Segmentation

VMware NSX is a network virtualization platform that for the first time makes micro-segmentation economically and operationally feasible. NSX provides the networking and security foundation for the software defined data center (SDDC), enabling the three key functions of micro-segmentation: isolation, segmentation, and segmentation with advanced services. Businesses gain key benefits with micro-segmentation:

  • Network security inside the data center: flexible security policies aligned to virtual network, VM, OS type, dynamic security tag, and more, for granularity of security down to the virtual NIC
  • Automated deployment for data center agility: security policies are applied when a VM spins up, are moved when a VM is migrated, and are removed when a VM is de-provisioned – no more stale firewall rules.
  • Integration with leading networking and security infrastructure: NSX is the platform enabling an ecosystem of partners to integrate – adapting to constantly changing conditions in the data center to provide enhanced security. Best of all, NSX runs on existing data center networking infrastructure.

So I started out by drawing up a conceptual design of the test environment.

Conceptual Layout of Test Environment

The conceptual layout includes some sample applications and server communication in the test environment; the systems added are there to show just how multifaceted an environment can be.

Figure 1. Conceptual Layout of Environment

  • We have the System1 system that needs access to the database server, DB.
  • We have the System2 system that needs access to the Shared Infrastructure Services.
  • We have a Jumphost that connects to the System1 server and the System2 server.
  • We are going to connect all the systems to the organization's Shared Infrastructure Services: Active Directory, DNS, NTP, SCCM, SCOM, MDM and RDGW.

Security Framework

Provide a Zero Trust security model using micro-segmentation around the organization's data center applications. Facilitate only the necessary communications, both to the applications and between the components of the applications.

The security framework is described below:

  • The blacklist rules at the top will block communication from certain IP addresses from accessing the SDDC environment.
  • Allow bi-directional communication between the Shared Infrastructure Services and all applications that require access to those services
  • Deny traffic from one environment (TEST) from communicating to another environment (PROD).
  • Allow SYSTEM1 Application to communicate with DB Server running on the default ports.
  • Allow DB Server to communicate with SYSTEM1 Application Server
  • Allow All Clients to communicate with SYSTEM1 and SYSTEM2 Servers
  • Block any unknown communications except the actual application traffic to and from the SYSTEM1 application.
  • Block any unknown communications except the actual application traffic and restrict access to the SYSTEM2 application.
  • Allow the rest of the traffic until Microsegmentation has been performed in the whole environment, then change to Deny the rest of the traffic.

The goal of the security framework is to deny traffic based on certain criteria, explicitly permit what is required, and allow by default until micro-segmentation has been performed throughout the whole environment. The firewall rules that deny traffic from environment to environment and organization to organization are required. For example, if the deny Application-to-Application rule is missing, an app server from SYSTEM1 can communicate with an application server from SYSTEM2 by hitting the allow-all-traffic-to-SYSTEM2-servers rule.

There are different permutations and multiple scenarios to handle, so there are many potential firewall rules that are not known yet. Applications can also run on non-standard ports; in that case, you can manually open the required firewall rules and deny the rest.

Overall Security Design Decisions

In order to be modular and scalable when creating firewall rules, security groups will be based on NSX security tags for the VMs inside the SDDC, and IP Sets will be created for items outside the data center. Firewall rules will then be applied using these security groups. Each VM can be tagged with at least 3 security tags, 1 from each category.

The security tags are classified into 3 categories and each category has a prefix to identify it:

The names illustrated below are a small subset of the actual names to exemplify the NSX security design.

  • Environment Management
    • ST-TEST
    • ST-PROD
  • Organization
    • FG-A
    • FG-B
    • FG-C
  • Tier
    • ST-TEST-DB

For the tier category, a VM can belong to multiple tiers and can therefore carry several tier tags at once.

For example, a VM can have the following tags:

  • ST-TEST
  • FG-A
  • ST-TEST-SYSTEM1

This VM can immediately be identified as a TEST VM belonging to the FG-A Organization and the SYSTEM1 application. Using such classification, you could create your security groups accordingly.

To create micro-segmentation for systems outside the data centers, IP Sets can be used. IP Sets may contain any combination of individual IP addresses, IP ranges and/or subnets, to be used as sources and destinations in firewall rules or as members of security groups.

Below are some of the security groups:

  • SG-PROD – Include VMs with a tag that contain ST-PROD
  • SG-TEST – Include VMs with a tag that contain ST-TEST
  • SG-FG-A – Include VMs with a tag that contain ST-FG-A
  • SG-FG-B – Include VMs with a tag that contain ST-FG-B
  • SG-PROD-INFRA-ALL – Include all Infra VMs that are AD/DNS servers
  • SG-PROD-INFRA-AD – include IP Set of VMs that are AD/DNS servers
  • SG-PROD-INFRA-NTP – include IP Set of NTP servers or VMs hosting NTP service
  • SG-PROD-INFRA-SCOM – include IP Set of SCOM servers or VMs hosting SCOM services
  • SG-PROD-INFRA-SCCM – include IP Set of SCCM servers or VMs hosting SCCM services
  • SG-PROD-INFRA-MDM – include IP Set of SNOW servers or VMs hosting SNOW services
  • SG-PROD-INFRA-RDGW – include IP Set of RDGW servers or VMs hosting RDGW service
  • SG-PROD-INFRA-FS – include IP Set of FS servers or VMs hosting FS services
  • SG-TEST-APP-SYSTEM1 – Include VMs that belong to the SYSTEM1 application
  • SG-TEST-DB – Include the DB VMs that belong to the TEST environment
  • SG-KLIENT-ALL – Include the IP Sets for all external clients
  • SG-WindowsServers – Include VMs whose OS starts with Microsoft Windows Server
  • SG-LinuxServers – Include VMs whose OS contains CentOS, Red Hat etc

A service is a protocol-port combination, and a service group is a group of services or other service groups. Below are some of the NSX service groups and services that can be created and used in combination with security groups when creating firewall rules in NSX:

  • SVG-WEBPORTS – HTTP/HTTPS, TCP 80/443
  • SV-SQL-1433 – TCP/UDP 1433

SYSTEM1 Analysis and Rule Building

Requirements for SYSTEM1

  • Allow SYSTEM1 Application to communicate with DB Server running on the default ports.
  • Allow DB Server to communicate with SYSTEM1 Application Server.
  • Allow Clients to communicate with SYSTEM1 Servers.
  • Block any unknown communications except the actual application traffic to and from the SYSTEM1 application.

To start building firewall rules, vRNI is needed. To "Plan Security" for the VMs, utilize vRNI, starting by examining the flows of the SYSTEM1 VM to/from other VMs.

Analysis of flows is done by selecting a scope and segmenting the flows based on entities such as VLAN/VXLAN, Security Group, Application, Tier, Folder, Subnet, Cluster, virtual machine (VM), Port, Security Tag, and IPSet. The micro-segmentation dashboard provides the analysis details with a topology diagram. This dashboard consists of the following sections:

  • Micro-Segments: This widget provides the diagram for topology planning. You can select the type of group and flows. Based on your inputs, you can view the corresponding topology planning diagram.
  • Traffic Distribution: This widget provides the details of the traffic distribution in bytes.
  • Top Ports by Bytes: This widget lists the top 100 ports that record the highest traffic. The metrics for the flow count and the flow volume are provided. You can view the flows for a particular port by clicking the count of flows corresponding to that port.

vRNI displays all inbound, outbound and bi-directional flows for the SYSTEM1 server.

By selecting the SYSTEM1 wedge in the circle, it is possible to drill in deeper and see the actual flows between the application and other servers and services.

Detailed in this section are the services the SYSTEM1 VM uses, the number of external services that are accessed (49), the number of flows to/from the VM (60), and the recommended firewall rules (14) that can be created to micro-segment the server. vRNI recommends 14 rules to accommodate micro-segmentation of the SYSTEM1 application.

The option exists to export all the recommended rules as CSV for further processing, manually or by automation, if needed.

The exported table is listed below for the SYSTEM1 recommended firewall rules.

| Source | Destination | Services | Protocols | Action | Related Flows | Type |
| --- | --- | --- | --- | --- | --- | --- |
| SYSTEM1 | Others_Internet | 53 [dns] 137 [netbios-ns] 138 [netbios-dgm] 389 [ldap] 5355 | UDP | ALLOW | 9 | Virtual |
| Others_Internet | SYSTEM1 | 80 [http] 443 [https] | TCP | ALLOW | 4 | Virtual |
| Others_Internet | SYSTEM1 | 443 [https] | TCP | ALLOW | 1 | Virtual |
| Others_Internet | SYSTEM1 | 123 [ntp] | UDP | ALLOW | 2 | Virtual |
| SYSTEM1 | Others_Internet | 80 [http] 88 [kerberos] 135 [epmap] 389 [ldap] 443 [https] 445 [microsoft-ds] 1433 [ms-sql-server] 3268 [msft-gc] 5723 8530 10000-19999 40000-49999 49155 49158 | TCP | ALLOW | 34 | Virtual |
| SYSTEM1 | Others_Internet | 80 [http] | TCP | ALLOW | 1 | Virtual |
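For the automated-processing route, the exported CSV can be filtered with standard tools. A minimal sketch with awk, using illustrative sample data in the same column layout as the export (file name and contents are hypothetical):

```shell
# Write a small sample export; column order mirrors the table above.
cat > /tmp/vrni-rules.csv <<'EOF'
Source,Destination,Services,Protocols,Action,Related Flows,Type
SYSTEM1,Others_Internet,53 [dns],UDP,ALLOW,9,Virtual
Others_Internet,SYSTEM1,80 [http] 443 [https],TCP,ALLOW,4,Virtual
EOF
# Keep only TCP ALLOW rules (field 4 = Protocols, field 5 = Action):
TCP_ALLOW=$(awk -F',' '$4 == "TCP" && $5 == "ALLOW"' /tmp/vrni-rules.csv)
echo "${TCP_ALLOW}"
```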

By continuing the procedure for the remaining servers, applications, shared infrastructure services and environments with vRNI (going through each application's traffic flows and exporting the recommended firewall rules), micro-segmentation can be implemented with NSX.

A sample set of firewall rules was conceptually created based on what was gathered during the collection and processing of the data.

Firewall Rules

The table below shows the firewall rules based on the security framework and the requirements described above:

Next Steps

When the structure is in order, it is possible to start building the Security Groups, Security Tags, Services, and Service Groups in NSX. Once that has been implemented and the rules and all needed objects for micro-segmentation have been created, it is important to go through and check the communications of the servers and applications and verify that they are all still working correctly per the given requirements.

I would also like to show a table from VMware regarding segmentation strategies. Make sure to start small and work your way through your environments and systems.

Start with MacroSegmentation: find out which Environments can/cannot communicate with other Environments.

When that is completed, set up MesoSegmentation: go through which Applications within your Environments can/cannot communicate with other Applications inside or outside the Environment.

And lastly, do the MicroSegmentation: go through which Systems inside an Application can/cannot communicate with other Systems inside the Applications and inside the Environments. Inception thinking is needed 🙂

It is also a good idea, when drawing out the different Environments, Applications and Systems within each Application, to build a Segmentation Flow Chart. With it you get a picture of how things are connected and interact with each other, which makes it much easier to establish what can/cannot communicate with what.

A micro-segmentation approach powered by VMware NSX can address the inadequacy of East-West security controls that affect most data centers. The vRNI Visibility and Operations software helps to jumpstart the journey to micro-segmentation by providing actionable insights into how workloads in a data center communicate and plan the segmentation accordingly.

Thanks for this time! /Jimmy

vExpert 2019!

Happy to update that I’m now a 2nd Time vExpert 2019

I also would like to congratulate all the other returning vExpert NSX members and welcome to all new members joining for the 1st time!

Link to the Announcements!

VMware NSX Data Center for vSphere 6.4.4 Released

So the latest patch update for NSX Data Center for vSphere has been released as of 14 December 2018.

The latest release finally includes even more support in the HTML UI in vSphere.

NSX User Interface

  • VMware NSX – Functionality Updates for vSphere Client (HTML): The following VMware NSX features are now available through the vSphere Client: Logical Switches, Edge Appliance Management, Edge Services (DHCP, NAT), Edge Certificates, Edge Grouping Objects. For a list of supported functionality, please see VMware NSX for vSphere UI Plug-in Functionality in vSphere Client.

Networking and Edge Services

  • Static Routes per Edge Service Gateway: increases from 2048 to 10,240 static routes for Quad Large and X-Large Edge Service Gateways.

Some other issues have also been resolved with the latest fix. Please stop by the release notes page to read about them: Release Notes 6.4.4

This means that we can manage more of the NSX features from vSphere. There is still no functionality to access the Edges' firewall, VPN and routing, but in time that will hopefully be released.

Have a great winter Holiday

VMware Specialist – Cloud Provider 2019

Today I received the latest certification, called VMware Specialist – Cloud Provider 2019, after passing the exam last week.

The certification validates my expertise in deploying and managing VMware vCloud Director and demonstrates knowledge of the overall Cloud Provider Platform.

VMWARE VCLOUD DIRECTOR 9.5 Released and what’s new?

The new version of vCloud Director has been released, and I wanted to do a quick write-up on what to expect from the new version and all the features that will be available.

Most interesting from my standpoint, if you are a developer working in a vCD cloud, are the new integrations with NSX-T and Kubernetes: being able to provision containers into the vCD cloud your company might have in place today. NSX-T is only at an initial integration level at the moment, but will surely get full integration as the product moves forward.

Read the full blog post about the new version, and check out the datasheet here.

What’s new in vCloud Director 9.5?

Deeper Integration with NSX

• Integrated into vCD: universal transport zone, universal logical switch and universal logical router now integrated into vCD
• Local egress is supported, active-active or active-standby
• Stretch L2 network across org VDCs in different vCenters/PVDCs in the same site and across different sites
• Each network can be stretched across up to four Org VDCs
• IP address management (static and DHCP) for cross-VDC networks

Initial Integration with NSX-T

• NSX-T and NSX-V managers in the same vCD instance
• Regular vSwitch and DPDK vSwitch (ENS) for VLAN and Overlay
• Directly connected networks (imported from NSX-T logical switches)
• Provider Virtual Datacenters allow clusters of hosts with and without ENS


Complete tenant user experience, including:
• User Management
• RBAC Management
• Organization Management

• Expanded Provider Portal
• RBAC Management
• Organization VDC Management

Improved RBAC

• Cascading levels of access
• Implement a flat, consistent, intuitive set of rights

What are the key features of VMware vCloud Director?

• Multi-tenant Resource Pooling: easily create virtual datacenters from common infrastructure to cater to heterogeneous enterprise needs. Policy- driven approach ensures enterprises have isolated virtual resources, independent role-based authentication and fine-grained access control.

• Multi-site Management: stretch data centers across sites and geographies and monitor these resources across sites from a single pane of glass.

• 3rd-party ISV Services: vCloud Director has an extensible UI that can be leveraged by 3rd-parties and Cloud Providers to natively integrate and publish services on the vCloud Director UI. For example, Dell EMC Avamar has natively integrated their Data Protection capabilities right onto the vCD UI.

• Datacenter Extension and Cloud Migration: enable simple, secure VM migration and data center extension with vCloud Director Extender. Allows for true hybridity, enterprise-driven workflows, seamless connectivity and cold or warm migration options.

• Operational Visibility and Insights: refreshed dashboard for centralized multi-tenant cloud management views. Leverage vRealize Operations’ native integration with vCloud Director using advanced analytics, chargeback and more for deep visibility into enterprise environments.

• Containers-as-a-Service: vCloud Director provides an easy on-ramp for enterprises, by delivering containers and VMs in the same virtual datacenter and faster time-to-consumption for Kubernetes, using the Container Services Extension.

2018 vExpert NSX

Happy to update that I’m now a vExpert NSX 2018.

I also would like to congratulate all the other returning vExpert NSX members and welcome to all new members joining for the 1st time!

Link to the Announcements:
