All of this information is freely available in a few whitepapers that are part of the VMware vCloud Architecture Toolkit (vCAT) 2.0. This set of documents is very in-depth and offers a great learning experience for anyone looking into vCloud Director. Note: I'm not discovering anything new here; I'm just pointing out some of the caveats and design considerations that may come up.
vCloud Director extends the capabilities of the vSphere layer and focuses on delivering an IaaS model whereby consumers can request resources from a cloud environment. vCloud Director also ships with the vCloud API, which allows custom applications to be written to talk to a vCloud instance.
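To make that concrete, the vCloud API is consumed over REST: you authenticate by POSTing to the sessions endpoint with HTTP Basic credentials in user@organization form, and the server hands back a session token in a response header for use on subsequent calls. A minimal sketch in Python (the hostname and credentials are placeholders, and you should verify header details against the vCloud API docs for your version):

```python
import base64

API_VERSION = "1.5"

def build_login_request(host, user, org, password):
    """Build the pieces of a vCloud API login call.

    vCloud authenticates a POST to /api/sessions with HTTP Basic
    credentials in user@org form; the Accept header pins the API version.
    """
    credentials = f"{user}@{org}:{password}"
    token = base64.b64encode(credentials.encode()).decode()
    return {
        "url": f"https://{host}/api/sessions",
        "method": "POST",
        "headers": {
            "Authorization": f"Basic {token}",
            "Accept": f"application/*+xml;version={API_VERSION}",
        },
    }

# On success the server replies with an x-vcloud-authorization header
# whose value is passed on every subsequent request.
req = build_login_request("vcloud.example.com", "admin", "System", "secret")
print(req["url"])
```

From there, every other call (querying vDCs, deploying vApps, and so on) is just an HTTP request carrying that token.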
Let's dive into the first standout feature of vCloud 1.5: SQL database support. Originally, vCloud Director was only supported on Oracle databases, which may have been a big factor in its lack of early adoption. I have written an article called Installing vCloud Director 1.5 With SQL Server 2008 that details the steps to install vCloud Director using SQL Server. Design considerations to take into account now are your SQL database VM sizing and perhaps having multiple SQL VMs. Many components in a vCloud design utilize a SQL database: vCloud Director, vCenter(s), VUM(s), Chargeback(s), vCenter Orchestrator, and more. The size of your VM is greatly affected if all of these databases live on a single VM. It can certainly be done, but you also have the option of splitting them out across multiple VMs. A constraint to keep in mind for running vCloud on SQL or Oracle is cross-compatibility if you ever decide to switch: moving from Oracle to SQL isn't an easy process, as indicated by a 167-page document on the topic. VMware recommends a 4 vCPU VM with 16GB of RAM and 100GB of storage.
vCloud Director 1.5 supports vSphere 4.0 Update 2 and later. A design scenario to keep in mind is that if you deploy an ESXi version below vSphere 5, you will not be able to use some of the newly introduced features. These new features include Fast Provisioning (Linked Clones), Stateless ESXi, VXLAN (the Nexus 1000v version 1.5 will support VXLAN on vSphere 4 hosts), Hardware Version 8, and more.
Elastic Provider Virtual Datacenters (PvDCs) are brand new in vCloud 1.5 and allow compute, network, and storage resources from multiple resource pools or clusters to be combined into a single offering. This type of offering can only back Pay-As-You-Go Organization Virtual Datacenters within vCloud Director.
The introduction of Fast Provisioning, or as it's known in the View world, Linked Clones, might be one of the biggest gotchas in vCloud 1.5. The switch from legacy HA (AAM) to Fault Domain Manager (FDM) in vSphere 5 still kept the 32-host cluster limit. Depending on your company policy or SLA, you may or may not be able to use Fast Provisioning inside of vCloud. If Fast Provisioning doesn't apply to you, then you can continue to abide by the vSphere limit of 32 hosts in a single cluster. If Fast Provisioning is something you can take advantage of, then you cannot exceed 8 hosts in a cluster using VMFS datastores; this is the same limit used by VMware View when utilizing Linked Clones. When using NFS datastores, your Fast Provisioned clusters can be larger than 8 hosts and will adhere to the vSphere maximums. This feature is going to greatly influence your design decisions: you will need to better estimate the amount and types of workloads that will run on those servers, and keep in mind that elasticity by adding new servers to the cluster isn't an option with VMFS, so your choice of storage protocol is now influenced as well. You should also create separate datastores dedicated to Fast Provisioned VMs, and note that the maximum number of Linked Clones that can be attached to a shadow VM is 30. In addition, you will need to account for enough capacity to provide failover resources. Another great option for Fast Provisioning is the ability to use it as a service tier in a portfolio offering. The Fast Provisioning model is chosen during Organization vDC creation.
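To put rough numbers on the sizing exercise above, here is some back-of-the-napkin math. Only the 30-clones-per-shadow-VM limit comes from vCloud; the per-VM disk figures are assumptions you would replace with estimates for your own workloads:

```python
import math

MAX_CLONES_PER_SHADOW = 30  # vCloud 1.5 limit on linked clones per shadow VM

def fast_provisioning_estimate(clones_needed, shadow_vm_gb, clone_delta_gb):
    """Estimate shadow-VM count and raw datastore capacity for linked clones.

    shadow_vm_gb (base image size) and clone_delta_gb (expected delta
    growth per clone) are assumptions; size them for your environment.
    """
    shadows = math.ceil(clones_needed / MAX_CLONES_PER_SHADOW)
    capacity_gb = shadows * shadow_vm_gb + clones_needed * clone_delta_gb
    return shadows, capacity_gb

# e.g. 100 clones from a 40 GB base image with ~2 GB of delta growth each:
shadows, gb = fast_provisioning_estimate(100, 40, 2)
print(shadows, gb)  # 4 shadow VMs, 360 GB before failover headroom
```

Numbers like these feed directly into the datastore-sizing and failover-capacity points above; remember to leave IOPS headroom as well, since many clones share the shadow VM's base disk.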
Stateless ESXi hosts are compatible with vCloud Director 1.5. The benefits of running stateless include adding servers on the fly and recovering a malfunctioning server very quickly. The constraint is ultimately being able to predict the spin-up of hosts and cluster sizes. Stateless hosts must be configured for DHCP, and you must attach image profiles via Auto Deploy and add rules with Image Builder using PowerCLI. When performing this action, you need to add the vCloud Director vSphere Installation Bundle (VIB), which includes the vCloud Agent, to the image profile. To quote the VMware documentation: "Currently, this is a manual process as there is no API call to register VIBs in an image profile. The vCloud Director VIB is loaded automatically when the host boots up. For preparation and un-preparation of stateless hosts, vCloud Director configures the agent using a host profile with an associated answer file. If the host is rebooted, the appropriate image profile is reloaded when the host starts back up. vCloud Director detects the state change, and the configuration is re-pushed to the host."
VXLAN and MAC-in-IP (MAC-in-UDP) encapsulation enable the scaling of networks beyond the 4094-VLAN barrier. The design consideration is that all network devices this traffic will traverse need to support jumbo frames with an MTU of 1600 or greater. I find it easier to change the MTU to 9000 on all the network equipment (including the vDS) that needs jumbo frame support. The constraint is whether your network has equipment that does not support jumbo frames; leaving such equipment in the path will cause packet fragmentation that can kill network IO.
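The 1600 figure isn't arbitrary: VXLAN wraps each original frame in outer Ethernet, IP, UDP, and VXLAN headers, which adds roughly 50 bytes on top of a standard 1500-byte frame. The quick arithmetic (header sizes are the standard ones; 1600 is the recommended minimum, rounded up for safety):

```python
# VXLAN encapsulation overhead on top of a standard 1500-byte Ethernet frame.
OUTER_ETHERNET = 14  # outer Ethernet header (untagged)
OUTER_IP = 20        # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
VXLAN_HEADER = 8     # VXLAN header itself

overhead = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER
required_mtu = 1500 + overhead
print(overhead, required_mtu)  # 50 bytes of overhead -> 1550, hence MTU >= 1600
```

This is also why bumping everything to 9000 is the low-maintenance choice: any jumbo-capable MTU comfortably clears the encapsulation overhead.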
Currently, VXLAN with MAC-in-IP encapsulation is not part of vCD 1.5. MAC-in-MAC encapsulation is still the vCD-NI default, but setting jumbo frames across all your network equipment is still a requirement.
IPv6 is almost 100% cross-functional with vCloud Director. Every piece of the stack is compatible with IPv6 except for vShield Edge, which still requires IPv4 addressing for fenced vApps and NAT. If you feel that IPv6 is the way you want to go, you can connect vApps directly to external networks supporting IPv6 and bypass vShield Edge completely.
Storage vMotion and Storage DRS are not one and the same as in base vSphere 5. Within vSphere 5, Storage vMotion is a component of Storage DRS, moving workloads based on IO or size. Within vCloud Director, a Storage vMotion can only be accomplished if the target datastore belongs to the same Organization vDC as the vApp. In addition, you cannot Storage vMotion a Fast Provisioned VM within the vSphere Client: the operation may fail and end up ballooning the disk. Instead, you must use a REST API Relocate_VM call; currently, this cannot be done in the GUI. Also, vCloud Director is unaware of Storage Clusters utilizing SDRS, so SDRS is not supported under vCloud Director 1.5 and should be disabled at the vSphere layer. Considering datastore size for Fast Provisioning operations is going to be a design criterion going forward: Fast Provisioning will allow you to store more copies, you just need to make sure the IOPS and capacity are available within those LUNs.
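For reference, a hedged sketch of what assembling that relocate call might look like. The endpoint shape, XML element names, and media type below should all be verified against the vCloud API reference for your exact version, and the host, VM id, and datastore href are pure placeholders:

```python
# Hedged sketch of relocating a Fast Provisioned VM via the vCloud REST API.
# Element names, media type, and endpoint are assumptions to check against
# the vCloud API reference for your version; all identifiers are placeholders.
VCLOUD_NS = "http://www.vmware.com/vcloud/v1.5"

def build_relocate_request(host, vm_id, datastore_href):
    """Assemble the URL, headers, and XML body for a VM relocate call."""
    body = (
        f'<RelocateParams xmlns="{VCLOUD_NS}">'
        f'<Datastore href="{datastore_href}" />'
        f'</RelocateParams>'
    )
    return {
        "url": f"https://{host}/api/vApp/{vm_id}/action/relocate",
        "method": "POST",
        "headers": {
            # The x-vcloud-authorization token from login goes here as well.
            "Content-Type": "application/vnd.vmware.vcloud.relocateVmParams+xml",
        },
        "body": body,
    }
```

The key point from the paragraph above stands regardless of the exact payload: the move has to be driven through the vCloud API, not the vSphere Client, so vCloud Director stays aware of where the Fast Provisioned disks live.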
Account lockouts have been implemented in vCloud 1.5 for additional security. These can be configured at the system level to specify the number of failed attempts and the timeout period, further securing the vCloud. By default this is not enabled, so a good design practice is to enable it and make it available to organization accounts.
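The mechanics behind those two settings are simple. Here is a toy model of the policy to make the knobs concrete; this is an illustration of the general lockout pattern, not vCloud Director's actual implementation, and the default values are placeholders:

```python
import time

class LockoutPolicy:
    """Toy model of an account-lockout policy: after max_failures consecutive
    bad attempts, the account is locked for lockout_seconds. Illustration
    only; not vCloud Director's actual implementation."""

    def __init__(self, max_failures=5, lockout_seconds=600):
        self.max_failures = max_failures        # "number of failed attempts" knob
        self.lockout_seconds = lockout_seconds  # "timeout period" knob
        self.failures = {}      # account -> consecutive failure count
        self.locked_until = {}  # account -> unlock timestamp

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(account, 0) > now

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_failures:
            self.locked_until[account] = now + self.lockout_seconds
            self.failures[account] = 0

    def record_success(self, account):
        # A successful login resets the consecutive-failure counter.
        self.failures.pop(account, None)
```

Tuning the two values is the usual trade-off: a low attempt threshold and long timeout hardens against brute force but increases helpdesk load from legitimately locked-out users.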