Please go to the updated version: Design vCloud Director on VCE Vblock Systems - 2.0
Over the past few months, I've been working on a collaborative project with Chris Colotti and Sumner Burkart of VMware, along with Jeramiah Dooley and Sony Francis of VCE, to design a best practice for running VMware vCloud Director on a Vblock.
It all began when I took the class VMware vCloud: Architecting the VMware Cloud and had the urge to create a design that would let Vblocks be deployed in a standard fashion, so you could have vCD up and running on a Vblock in little time. There were many design considerations to take into account because the Vblock has many components, each of which has its own constraints. I remembered reading a blog post from Chris Colotti called vCloud on Vblock – Findings from the field, so I got in touch with him via Twitter to see if he would want to collaborate on such an effort. Once he was on board, Jeramiah and Sony reached out to me internally to see if they could join in. We decided it was best to draft something that addresses everything we see today on current Vblock models along with vCloud Director 1.0.x and associated technology. This paper doesn't address the recently announced vCloud Director 1.5 components, but most of it will remain applicable; only one or two sections may change depending on how the related technologies evolve in the upcoming months.
This document is a joint effort between five people and doesn't reflect official support from our employers. It is merely a reference architecture for running vCloud Director on a Vblock and its associated technologies, such as VNX, FAST Cache, Fully Automated Storage Tiering (FAST), Cisco Nexus 1000V, vNetwork Distributed Switch (vDS), vShield Edge, and more.
Please feel free to reach out to any of the authors if you have any questions related to the material.
8/22 - Version 1.1 - Added physical network considerations under the vCloud Management Pod section.
10/7 - Version 1.2 - Updated to reflect VCE's official whitepaper stating that the Nexus 1000V is optional in a vCloud 1.0.x design. More to follow on changes with vCloud 1.5 and Nexus 1000V version 1.5.
vCloud on Vblock Design Considerations
Document Version 1.0, August 2011

Contributing Authors:
Chris Colotti, Consulting Architect - VMware
Kendrick Coleman, Senior vArchitect - VCE
Jeramiah Dooley, Principal Solutions vArchitect, SP & Verticals Group - VCE
Sumner Burkart, Senior Consultant - VMware
Sony Francis, Platform Engineering - VCE

Table of Contents
Executive Summary
  Disclaimer
  Document Goals
  Target Audience
Assumptions
Requirements
Management Infrastructure
  Advanced Management Pod (AMP)
  VMware vCloud Director Management Requirements
    Additional Servers
    Existing vSphere Instance
    Consuming Vblock Blades
  vCloud Management
  Why Two VMware vCenter Servers?
    AMP VMware vCenter
      AMP Cluster Datacenter
      vCloud Director Management Cluster Datacenter
    Vblock VMware vCenter
  vCenter Protection
Networking Infrastructure
  The Cisco Nexus 1000V
  Networking Solution for VMware vCloud Director and Vblock
Storage Infrastructure
  Overview
  FAST VP
    Use Case #1: Standard Storage Tiering
    Use Case #2: FAST VP-Based Storage Tiering
    Tiering Policies
  FAST Cache
  Storage Metering and Chargeback
VMware vCloud Director and Vblock Scalability
Reference Links

Executive Summary

Disclaimer

Although this paper deals with some design considerations, it should be noted that the opinions and ideas expressed in this paper are those of the authors and not of their respective companies. The contributing authors work in the field and have collectively discussed ideas to help customers with this particular solution. The ideas presented may not be 100% supported by VCE and/or VMware and are simply offered as options to solve the challenge of running vCloud solutions on Vblock hardware.

Document Goals

The purpose of this document is to provide guidance and insight into some areas of interest when building a VMware vCloud solution on top of a Vblock hardware infrastructure.
Both technologies provide flexibility in different areas to enable an organization, or service provider, to successfully deploy a VMware vCloud environment on VCE Vblock™ Infrastructure Platforms. To ensure proper architecture guidelines are followed, certain design considerations must be taken into account when combining Vblock and vCloud Director. This solution brief is intended to provide guidance on properly architecting and managing the infrastructure, virtual and physical networking, storage configuration, and scalability of any VMware vCloud Director on Vblock environment. As VMware vCloud Director is increasingly deployed on VCE Vblock, employees, partners, and customers have been seeking additional information specific to the combined solution, which requires some additional considerations. We address them in the following four target areas:
• Management Infrastructure
• Networking Infrastructure
• Storage Infrastructure
• Scalability

Target Audience

The target audience of this document is the individual with a highly technical background who will be designing, deploying, managing, or selling a vCloud Director on Vblock solution, including, but not limited to: technical consultants, infrastructure architects, IT managers, implementation engineers, partner engineers, sales engineers, and potentially customer staff. This solution brief is not intended to replace or override existing certified designs for either VMware vCloud Director or VCE Vblock; instead, it is meant to supplement them and provide additional guidelines for deploying or modifying any environment that uses the two in unison.
Assumptions

The following is a list of overall assumptions and considerations to review before using the information contained in this document:
• Any reader designing or deploying should already be familiar with both VMware vCloud Director and VCE Vblock reference architectures and terminology
• All readers should have a sufficient understanding of the following subject areas or products:
o Cisco Nexus 1000V administration
o vNetwork Distributed Switch (vDS) administration
o vSphere best practices and principles, including, but not limited to:
  - HA and DRS clusters
  - Fault Tolerance
o EMC storage included as part of a Vblock:
  - FAST pools
  - Storage tiering
  - Disk technologies such as EFD, FC, and SATA
o Physical and virtual networking, including VLANs, subnets, routing, and switching
o Database server administration (or access to existing enterprise database resources, including administration staff)
• Any extra components needed are not standardized in the VCE Vblock bill of materials

Please note that vCloud Director API integration will not be addressed in this document.

Requirements

Recommendations contained throughout this document take the following design requirements and/or constraints into account:
• VCE Vblock ships with one of the following AMP cluster configurations:
o Mini AMP
o HA AMP
• The most recent version of the highly available (HA) AMP cluster utilizes a standalone EMC VNXe 3100
• A Vblock definition in UIM addresses only a single UCSM domain, which is a maximum of 64 UCS blades
• Every VMware vCenter instance must be made highly available
• A Cisco Nexus 1000V will be included in the design
• EMC Ionix UIM will be used to provision the VMware vSphere hosts that are members of each vCloud Director resource group

Management Infrastructure

The management infrastructure of both VMware vCloud Director and Vblock is critical to the availability of each individual component.
The VCE Vblock management cluster controls the physical layer of the solution, while the VMware vCloud Director management cluster controls the virtual layer. Each layer is equally important and has its own special requirements; it is therefore imperative to understand which components manage each layer when designing a unified architecture.

Advanced Management Pod (AMP)

The AMP cluster is included with every Vblock instance, and the desired AMP configuration for VMware vCloud Director on the Vblock platform is the HA AMP. The HA AMP is comprised of two (2) Cisco C200 rack-mount servers and hosts all virtual machines necessary to manage the VCE Vblock hardware stack. Vblock virtual machine server components consist of, but aren't necessarily limited to, EMC's Ionix UIM, PowerPath licensing, Unisphere, and VMware's vCenter and Update Manager. Currently, this cluster is configured with an EMC VNXe 3100, providing storage for all AMP management VMs. Since the AMP cluster is a design element of the VCE Engineering Vblock Reference Architecture, it should not be modified or removed, in order to stay true to the original design. Changing the configuration of the AMP cluster requires additional validation, input, and review from various internal parties, and ultimately would not provide a timely solution. Additional justifications for not modifying this cluster include:
• The Cisco C200 servers are not cabled and connected to Vblock SAN storage
• An AMP cluster of only two nodes does not satisfy the N+1 availability requirements of VMware vSphere
• Utilizing the AMP cluster as a host platform for vCloud Director could result in downtime and should be avoided

VMware vCloud Director Management Requirements

The current VMware vCloud Director Reference Architecture calls for separate management and compute clusters in order to provide a scalable VMware vCloud Director infrastructure.
With the requirement for a dedicated and highly available vCloud management cluster, the solution is to create a second management cluster. Given the need to leave the AMP management cluster unchanged, this second management cluster can be built in three different configurations. As shown in Figure 2 below, a significant number of virtual machines are called for by the vCloud Director infrastructure; some are mandatory, others optional, depending on the overall solution and existing infrastructure. In addition, the VMware vCloud Director Reference Architecture dictates that any vCenter Server attached to vCloud Director have additional security roles assigned to it in order to protect the virtual machines deployed into it. This becomes very difficult if everything is managed by a single vCenter Server, so two instances should be provided.

Additional Servers

The first scenario deploys four (4) Cisco C200 hosts to support vCloud Director. This vCloud management cluster ties into the existing VCE Vblock fabric, which allows the four C200 servers hosting the vCloud management virtual machines to use the EMC SAN for storage. All network connections must be made fully redundant by attaching them to the Cisco Nexus 5000 and MDS 9000 series switches. The four C200 servers can be packaged with the Cisco Nexus 1000V and EMC PowerPath/VE components, but this is not required. This is the recommended approach for running vCloud Director on Vblock because it allows greater scalability and resiliency to failures.

Existing vSphere Instance

Many customers adopting vCloud Director may already have an existing vSphere server farm. If the customer chooses to do so, they may use that existing farm to provide resources for the vCloud management components. The existing vSphere instance must be fully redundant and have high-bandwidth connections to the Vblock.
The existing vSphere farm must also follow all the guidelines above, providing at least three (3) to four (4) hosts dedicated to management to satisfy N+1 or N+2 redundancy. For customers pursuing this route, the vCenter instance controlling the Vblock will reside in the customer's existing vSphere environment and needs to be migrated from the AMP. This is perfectly acceptable for a vCloud Director design because the Vblock becomes dedicated to vCloud resources to be consumed.

Consuming Vblock Blades

The final option is to use four (4) Cisco B-Series blades inside the Vblock. The blades used for vCloud management can be any standard blade pack offered by VCE. This approach requires the four servers in the cluster to come from a minimum of two (2) different chassis. The blades will automatically be packaged with the Cisco Nexus 1000V and EMC PowerPath/VE components. This approach is not recommended because it limits scalability: consuming four blades as a management cluster ultimately removes the ability to scale vCloud resources in a single Vblock to their full potential.
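The N+1/N+2 sizing rule above can be sketched as a simple capacity check. This is an illustrative helper, not from the paper; the function names and the assumption that management VMs fit on a fixed number of hosts are hypothetical simplifications.

```python
# Hypothetical sketch: verify a management cluster can tolerate host
# failures while still carrying its VM load (N+1 / N+2 redundancy).

def usable_hosts(total_hosts: int, failures_tolerated: int) -> int:
    """Hosts left to run workloads after reserving N+x failover capacity."""
    if failures_tolerated >= total_hosts:
        raise ValueError("cannot reserve more hosts than exist")
    return total_hosts - failures_tolerated

def meets_redundancy(total_hosts: int, hosts_needed_for_load: int,
                     failures_tolerated: int) -> bool:
    """True if the cluster still fits the load after x host failures."""
    return usable_hosts(total_hosts, failures_tolerated) >= hosts_needed_for_load

# A four-host vCloud management cluster whose VMs fit on two hosts
# satisfies both N+1 and N+2:
print(meets_redundancy(4, 2, 1))  # True
print(meets_redundancy(4, 2, 2))  # True
# A three-host cluster with the same load satisfies N+1 but not N+2:
print(meets_redundancy(3, 2, 2))  # False
```

This is why three hosts is the floor for N+1 and four hosts is recommended when N+2 or maintenance-mode headroom is desired.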
vCloud Management

Justifications supporting a second vCenter instance and management cluster:
• The approach aligns with the VMware vCloud Director Reference Architecture, which calls for a separate management cluster (or "pod")
• It provides maximum scalability within the vCloud Director management cluster through the addition of individual components
• It ensures proper vSphere HA capacity for both N+1 redundancy and maintenance mode
• The additional network and SAN port requirements cannot be satisfied with the existing AMP cluster design
• Adding Cisco C200 servers is a much simpler solution than modifying the existing approved AMP cluster design
• Creating a separate vCloud Director pod removes any contention for resources or potential conflicts that could arise if all management virtual machines were hosted in a single HA AMP cluster

The separation of each tier of management allows greater control of the Vblock, isolates VCE AMP management from VMware vCloud Director management, and preserves the current configuration(s) with added flexibility. Although other designs may satisfy all requirements, the recommended approach is to separate the two environments completely.

Why Two VMware vCenter Servers?

Based on the architecture suggested above, and aligning with the vCloud Director Reference Architecture, we want to make sure readers understand not only where each vCenter is hosted, but also which vCenter Server manages which ESXi hosts and virtual machines.

AMP VMware vCenter

The first instance of VMware vCenter Server is hosted inside the Advanced Management Pod. This vCenter serves two primary functions and is organized into two separate datacenter objects for separation.

AMP Cluster Datacenter

This datacenter has a single cluster object housing the two (2) AMP C200 servers. Essentially, this vCenter will be managing itself, since it also runs in that same cluster.
It provides vCenter functions to these two servers, such as Update Manager, templates, and cloning. This datacenter object has one set of access roles and permissions. (Customers may or may not have access to these ESXi hosts depending on their agreement with VCE.)

vCloud Director Management Cluster Datacenter

This datacenter also has a single cluster object, made up of four (4) Cisco C200 rack servers or the customer's chosen vCloud Management Pod configuration, as stated previously. This cluster may have access roles and permissions distinct from the first cluster. The customer's vSphere administrators will generally need full access to it to manage the virtual machines in the management pod. This cluster, however, is managed by a vCenter Server that runs outside of it, providing out-of-cluster management, which is a generally accepted best practice in vSphere architecture.

Vblock VMware vCenter

The second VMware vCenter instance is hosted inside the vCloud management pod, whose hosts are in turn managed by the AMP vCenter instance. Simply put, this vCenter is a virtual machine, visible in the AMP vCenter inventory, running on the four-node VMware vCloud management cluster. It may contain multiple datacenter and/or cluster objects depending on the number of UCS blades initially deployed and scaled up over time. Per the VMware vCloud Reference Architecture, this instance will only manage vCloud hosts and virtual machines. UIM will also point to this vCenter as it provisions UCS blades for consumption by VMware vCloud Director. Lastly, this vCenter instance will have completely separate permissions to protect vCloud-controlled objects from being mishandled.

vCenter Protection

vCenter is critical in a vCloud Director implementation because it is now a secondary layer in the vCloud Director stack: the vCloud Director servers sit a layer higher in the management stack and control the vCenter servers.
The recommended approach is to protect the vCenter instance hosted inside the vCloud Management Pod by utilizing vCenter Heartbeat, though this is not a required component of the vCloud Director on Vblock design.

Networking Infrastructure

VMware vCloud Director provides Layer-2 networks as isolated entities that can be provisioned on demand and consumed by tenants in the cloud. These isolated entities are created from network pools, which are used to create organization networks that vApps rely on. vApps are the core building block for deploying a preset number of virtual machines configured for a specific purpose. When deployed, there are three different types of networks that can be connected:
• External (public) networks
• External org networks (direct-connected or NAT-routed to external networks)
• Internal org networks (isolated, direct-connected, or NAT-routed to external networks)

The virtual machines within a vApp can be placed on one or more of the networks presented, for varying levels of connectivity based on each use case. In addition, vCloud Director uses three types of network pools to create these networks.
Below is a basic comparison of the three network pool types (for more detailed information, please refer to the VMware vCloud Director documentation):
• Port group-backed pools
o Benefits: supported by all three virtual switch types (Cisco Nexus 1000V, VMware vDS, and vSwitch)
o Constraints: manual provisioning; vSphere-backed switches have to be pre-configured; port groups must be available on every host in the cluster
• VLAN-backed pools
o Benefits: separation of traffic through the use of VLAN tagging
o Constraints: currently only supported by the VMware vDS; consumes a VLAN ID for every network
• vCD-NI-backed pools
o Benefits: automated provisioning of networks; consumption of just one VLAN ID
o Constraints: currently only supported by the VMware vDS; maximum performance requires an MTU size of at least 1524 on physical network ports (both host and directly attached switches)

The Cisco Nexus 1000V

The Cisco Nexus 1000V is an integral part of the Vblock platform, bringing the advanced feature set of Cisco NX-OS into the virtual space. NX-OS gives network administrators the ability to see deeper into, and inspect, the traffic that traverses the network. The Nexus 1000V interoperates with VMware vCloud Director, extending the benefits of NX-OS features, feature consistency, and Cisco's non-disruptive operational model to enterprise private clouds and service provider-hosted public clouds managed by VMware vCloud Director. VMware vCloud Director Network Isolation (vCD-NI) is a VMware technology that provides isolated Layer-2 networks for multiple tenants of a cloud without consuming VLAN address space. vCD-NI provides Layer-2 network isolation by means of a network overlay technology utilizing MAC-in-MAC encapsulation and is not available with the Cisco Nexus 1000V at the time of this writing. The Cisco Nexus 1000V requires port groups to be pre-provisioned for use by VMware vCloud Director.
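The 1524-byte MTU requirement for vCD-NI follows directly from the MAC-in-MAC encapsulation: the overlay adds 24 bytes of header to each frame, so a standard 1500-byte guest frame needs 1524 bytes on the wire to avoid fragmentation. A minimal sketch of that arithmetic (the function name is ours, not from any product API):

```python
# vCD-NI wraps each guest Ethernet frame in a MAC-in-MAC header.
# The 24-byte figure is inferred from the document's 1524 vs. 1500 MTU numbers.
VCDNI_OVERHEAD_BYTES = 24

def required_physical_mtu(guest_mtu: int = 1500) -> int:
    """Minimum MTU on physical ports so encapsulated guest frames fit unfragmented."""
    return guest_mtu + VCDNI_OVERHEAD_BYTES

print(required_physical_mtu())      # 1524, matching the constraint above
print(required_physical_mtu(9000))  # 9024, if guests were using jumbo frames
```

The same logic applies to any overlay encapsulation: the physical MTU must exceed the guest MTU by the overlay's header size.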
Networking Solution for VMware vCloud Director and Vblock

The Vblock solution for VMware vCloud Director takes an approach where the Cisco Nexus 1000V and the VMware vNetwork Distributed Switch (vDS) are used in conjunction with each other, so the logical Vblock platform build process is done slightly differently for VMware vCloud Director on Vblock: every ESXi host gets both a Nexus 1000V and a VMware vDS. Every Cisco UCS half-width blade inside the Vblock platform comes with one M81KR (Palo) Virtual Interface Card, while Cisco UCS full-width blades are configured with two. The M81KR is unique because each card has two 10GbE interfaces whose resources can be carved into virtual interfaces. The vCloud Director on Vblock solution uses the M81KR adapter to present four (4) virtual 10GbE adapters to each ESXi host. This doesn't mean every host has 40Gb of available throughput; the four virtual network interfaces share 20Gb of available bandwidth. Two (2) virtual 10GbE adapters are given to each virtual switch, which allows for simultaneous use and full redundancy. This changes slightly when using a Cisco B-Series full-width blade: since there are two M81KR cards in each blade, there are four physical 10GbE interfaces to draw from. The solution again presents four (4) virtual 10GbE adapters to each ESXi host, with two (2) adapters, one from each M81KR card, given to each virtual switch type, again allowing simultaneous use and full redundancy.

The VMware vCloud on Vblock solution uses the Cisco Nexus 1000V, assigned to port group-backed network pools, for everything entering and exiting the Vblock on external networks. This approach allows the network team to control everything on the network up to the Vblock components. Currently, the Cisco Nexus 1000V capability extends only as far as pre-provisioned configuration of port groups in vSphere.
VMware vCloud Director external networks must be created manually in VMware vCloud Director and then associated with a pre-provisioned vSphere port group. All external port groups need to be created on the Cisco Nexus 1000V by the network administrator and assigned as needed inside VMware vCloud Director. This approach allows the network team to maintain control of every packet that is external to the VMware vCloud Director cloud.

The VMware vDS is responsible for all external organization and internal organization networks, which are internal to the cloud. This allows VMware vCloud Director to natively automate the creation of new port groups backed by either VLAN-backed or vCD-NI-backed pools. The VMware vDS gives cloud administrators the ability to dynamically create the isolated vCloud networks with little to no intervention by the network team. It is also recommended that vCD-NI pools be used, since they provide the greatest flexibility with the least number of required VLANs. External org and/or internal org networks using network pools backed by VLAN or vCD-NI port groups are Layer-2 segments that span hosts in the same VMware cluster. When a vApp (or a VM inside a vApp) needs to access an external network, the traffic is routed internally on the ESXi host from the VMware vDS to the Cisco Nexus 1000V by a vShield Edge appliance in a NAT-routed configuration. The vShield Edge appliance is configured with two NICs, one connected to an organization network on the vNetwork Distributed Switch and one connected to an external network on the Cisco Nexus 1000V, bridging the two networks together. Additionally, a vApp could be configured to directly access an external network for a specific use case, in which case it would only be attached to the Cisco Nexus 1000V.
The first diagram below illustrates basic connectivity of a NAT-routed vApp with VMware vShield Edge. The second, alternative configuration, where either the vApp (internal org) or external org network is directly attached to the external (public) network, is shown below it. In this case, virtual machines inside a vApp are essentially directly connected to the external network; they cannot take advantage of the NAT and/or firewall functionality provided by vShield Edge, and they consume IP addresses from the external network pool.

Storage Infrastructure

Overview

Storage is a key design element in a VMware vCloud environment, both at the physical infrastructure level and at the Provider Virtual Datacenter (VDC) level. The functionality of the storage layer can improve performance, increase scalability, and provide more options in the service creation process. EMC arrays are at the heart of the VCE Vblock Infrastructure Platform and offer a number of features that can be leveraged in a vCloud environment, including FAST VP, FAST Cache, and the ability to provide a unified storage platform that serves both file and block storage.

FAST VP

VNX FAST VP is a policy-based auto-tiering solution. The goal of FAST VP is to utilize storage tiers efficiently: it lowers the overall cost of the storage solution by moving "slices" of colder data to high-capacity disks, and it increases performance by keeping hotter slices of data on performance drives. In a VMware vCloud environment, FAST VP is a way for providers to offer a blended storage offering, reducing the cost of a traditional single-type offering while allowing for a wider range of customer use cases and accommodating a larger cross-section of VMs with different performance characteristics.
Use Case #1: Standard Storage Tiering

In a non-FAST VP-enabled array, multiple storage tiers are typically presented to the vCloud environment, and each of these offerings is abstracted into a separate Provider VDC. For example, a provider may choose to provision an EFD (SSD/Flash) tier, an FC/SAS tier, and a SATA tier, and then abstract these into Gold, Silver, and Bronze Provider VDCs. The customer then chooses resources from these for use in their Organization VDC. This provisioning model is limited for a number of reasons:
• VMware vCloud Director doesn't provide a non-disruptive way to move VMs from one Provider VDC to another, meaning the customer must plan for downtime if a vApp needs to be moved to a more appropriate tier
• For workloads with a variable I/O personality, there is no mechanism to automatically migrate them to a more appropriate tier of disk
• With the cost of EFDs still significant, creating an entire tier of them can be prohibitively expensive, especially when few workloads have an I/O pattern that takes full advantage of this particular storage medium

One way in which the standard storage tiering model can be a benefit is when multiple arrays are being utilized to provide different kinds of storage to support different I/O workloads.

Use Case #2: FAST VP-Based Storage Tiering

On a Vblock platform licensed for FAST VP, there are ways to provide more flexibility and a more cost-effective platform than a standard tiering model. Rather than using a single disk type per Provider VDC, companies can blend the cost and performance characteristics of multiple disk types. Some examples include:
• Creating a FAST VP pool that contains 20% EFD and 80% FC/SAS disks as a "Performance Tier" offering for customers who may need the performance of EFD during certain times, but who don't want to pay for that performance all the time.
• Creating a FAST VP pool that contains 50% FC/SAS disks and 50% SATA disks as a "Production Tier" where most standard enterprise apps can take advantage of standard FC/SAS performance, yet the ability to de-stage cold data to SATA disk brings down the overall cost of the storage per GB.
• Creating a FAST VP pool that contains 90% SATA disks and 10% FC/SAS disks as an "Archive Tier" where mostly near-line data is stored, with the FC/SAS disks used for those instances where the customer needs to go to the archive to recover data, or for customers who are dumping a significant amount of data into the tier.

Tiering Policies

FAST VP offers a number of policy settings governing how data is placed, how often data is promoted, and how data movement is managed. In a vCloud Director environment, the following policy settings are recommended to best accommodate the types of I/O workloads produced:
• By default, the Data Relocation Schedule is set to migrate data 7 days a week between 11pm and 6am, reflecting the standard business day, and to use a Data Relocation Rate of "Medium," which can relocate 300-400 GB of data per hour. In a vCloud environment, VCE recommends opening up the Data Relocation window to run 24 hours a day but reducing the Data Relocation Rate to "Low." This allows constant promotion and demotion of data, yet limits the impact on host I/O.
• By default, FAST VP-enabled LUNs/pools are set to use the "Auto-Tier" policy, spreading data across all tiers of disk evenly. In a vCloud environment, where customers are generally paying for the lower tier of storage but leveraging the ability to promote workloads to higher-performing disk when needed, the VCE recommendation is to use the "Lowest Available Tier" policy. This places all data onto the lowest tier of disk initially, keeping the higher tiers free for data that needs them.
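The blended-pool examples above (Performance, Production, and Archive tiers) come down to a weighted average of per-GB costs across disk types. A minimal sketch, with made-up placeholder prices — the dollar figures below are purely illustrative and not from the paper or any price list:

```python
# Illustrative sketch of how blending disk types changes the cost profile
# of a FAST VP pool. The per-GB prices are hypothetical placeholders.
COST_PER_GB = {"EFD": 20.0, "FC": 4.0, "SATA": 1.0}

def blended_cost_per_gb(mix: dict) -> float:
    """mix maps disk type -> fraction of pool capacity (fractions sum to 1)."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(COST_PER_GB[t] * frac for t, frac in mix.items())

performance_tier = blended_cost_per_gb({"EFD": 0.2, "FC": 0.8})   # 20% EFD / 80% FC
production_tier  = blended_cost_per_gb({"FC": 0.5, "SATA": 0.5})  # 50% FC / 50% SATA
archive_tier     = blended_cost_per_gb({"SATA": 0.9, "FC": 0.1})  # 90% SATA / 10% FC
print(round(performance_tier, 2))  # 7.2
print(round(production_tier, 2))   # 2.5
print(round(archive_tier, 2))      # 1.3
```

Even with placeholder numbers, the shape of the result is the point: a 20% EFD blend costs a fraction of an all-EFD tier while still giving FAST VP headroom to promote hot slices.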
FAST Cache

FAST Cache is an industry-leading feature, supported by all 300-series Vblock platforms, which extends the VNX array's read/write cache and ensures that unpredictable I/O spikes are serviced at EFD speeds, which is of particular benefit in a vCloud environment. Multiple VMs on multiple VMFS datastores spread across multiple hosts can generate a very random I/O pattern, placing stress on both the storage processors and the DRAM cache. FAST Cache, a standard feature on all Vblocks, mitigates the effects of this kind of I/O by extending the DRAM cache for both reads and writes, increasing the overall cache performance of the array, improving I/O during usage spikes, and dramatically reducing the overall number of dirty pages and cache misses. Because FAST Cache is aware of the EFD disk tiers available in the array, FAST VP and FAST Cache work in concert to improve array performance: data that has been promoted to an EFD tier will never be cached inside FAST Cache, ensuring that both options are leveraged in the most efficient way. In a vCloud Director environment, VCE recommends a minimum of 100 GB of FAST Cache, with the amount increasing as the number of VMs increases. The following table details the recommendations from VCE:

# of VMs    FAST Cache Configuration
0-249       100GB total (2 x 100GB, RAID 1)
250-499     400GB total (4 x 200GB, RAID 1)
500-999     600GB total (6 x 200GB, RAID 1)
1000+       1000GB total (10 x 200GB, RAID 1)

The combination of FAST VP and FAST Cache allows the vCloud environment to scale better, support more VMs and a wider variety of service offerings, and protect against I/O spikes and bursting workloads in a way that is unique in the industry. These two technologies in tandem are a significant differentiator for the VCE Vblock Infrastructure Platform.
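The sizing table above is a step function of VM count, which makes it easy to encode as a lookup helper. This is just a convenience sketch of the table; the function name is ours:

```python
# Sketch of the VCE FAST Cache sizing guidance from the table above.
# Each entry: (lowest VM count, highest VM count or None for open-ended, config).
FAST_CACHE_SIZING = [
    (0,    249,  "100GB total (2 x 100GB, RAID 1)"),
    (250,  499,  "400GB total (4 x 200GB, RAID 1)"),
    (500,  999,  "600GB total (6 x 200GB, RAID 1)"),
    (1000, None, "1000GB total (10 x 200GB, RAID 1)"),
]

def fast_cache_recommendation(vm_count: int) -> str:
    """Return the recommended FAST Cache configuration for a VM count."""
    for low, high, config in FAST_CACHE_SIZING:
        if vm_count >= low and (high is None or vm_count <= high):
            return config
    raise ValueError("vm_count must be non-negative")

print(fast_cache_recommendation(300))   # 400GB total (4 x 200GB, RAID 1)
print(fast_cache_recommendation(1500))  # 1000GB total (10 x 200GB, RAID 1)
```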
Storage Metering and Chargeback

Having flexibility in how you deliver storage offerings is important, but in a vCloud environment the ability to meter and charge for that storage is equally critical. While not a required component of VMware vCloud Director on Vblock, this design uses the VMware vCenter Chargeback product in conjunction with the vCloud Director Data Collector and vShield Manager Data Collector. Configuration of this product is outside the scope of this paper, but resources can be found on the VMware website. After Chargeback is configured properly, Organizations created in vCloud Director are imported into vCenter Chargeback, including all of the Organization VDCs, media and template files, vApps, virtual machines, and networks. Each level of the customer organization is represented in the vCenter Chargeback hierarchy, allowing reporting with as much granularity as necessary.

VMware vCloud Director and Vblock Scalability

To understand the scalability of VMware vCloud Director on VCE Vblock, we need to address the items that affect decisions and recommendations. First, every Vblock ships with EMC Ionix Unified Infrastructure Manager (UIM). EMC's UIM software is the hardware-provisioning piece used to deploy physical hardware in a Vblock platform. Second, while every Vblock also uses VMware vCenter to manage the vSphere layer, in vCloud deployments the vSphere layer is actually controlled by VMware vCloud Director. EMC Ionix UIM communicates with VMware vCenter to provision physical blades with VMware ESX or ESXi and integrate them into vSphere objects that VMware vCloud Director can then consume. These can either be existing vSphere cluster objects or completely new objects located in the same VMware vCenter. An existing VMware vCenter instance managing Vblock resources for vCloud Director can scale to the maximums set by VMware, which, based on current documentation, is 1000 hosts.
In the past, as more provisioned blades were needed, another UIM instance was created along with a new VMware vCenter instance. Since UIM is crucial to the orchestration of hosts, and the maximums of each product differ, recommendations should be based on individual customer requirements and the specific use case for UIM. As each new Vblock is deployed, orchestration workflows can discover the new Vblock, create UIM service offerings, associate them with specific vCenter instances, initiate new services on top of the Vblock, and provision new ESX hosts into vCenter clusters. Each additional Vblock, from a hardware perspective, mirrors the configuration of the first, with the exception that each new one does not require a new vCenter. UIM, on the other hand, is directed at the original vCenter service available in the vCloud Director management stack, and new blades are provisioned and added to a new VMware cluster. As additional Vblocks are added to vCenter for vCloud Director capacity, the recommended maximum host configuration stands at 640 blades. Once the 640-blade maximum has been reached, a new vCenter instance becomes necessary and new Vblocks are then assigned to it.

The design philosophy of architecting a minimal number of VMware vCenter Servers, each representing a building block, enables customers to realize the strengths of Vblock scalability while reducing VMware vCenter and vCloud environment complexity. Customers simply purchase more compute resources (in the form of Vblocks) and add them to their VMware vCloud Director environment in a quick and convenient manner, especially in UIM-based deployments. By leveraging the rapid hardware provisioning of EMC Ionix UIM and the elasticity of VMware vCloud Director, the best of both worlds are joined to provide consistent, readily available, and scalable resource deployment for cloud consumers.
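The scaling numbers above (at most 64 blades per UCSM domain/Vblock, a recommended 640-blade ceiling per vCloud vCenter) imply a simple building-block calculation. A back-of-the-envelope sketch, with function names of our own choosing:

```python
import math

# Constants taken from the scaling discussion above.
BLADES_PER_VBLOCK = 64    # one UCSM domain per Vblock definition in UIM
BLADES_PER_VCENTER = 640  # recommended maximum blades per vCloud vCenter

def max_vblocks_per_vcenter() -> int:
    """Fully populated Vblocks one vCloud vCenter instance can absorb."""
    return BLADES_PER_VCENTER // BLADES_PER_VBLOCK

def vcenters_needed(total_blades: int) -> int:
    """How many vCloud vCenter instances a given blade count calls for."""
    return max(1, math.ceil(total_blades / BLADES_PER_VCENTER))

print(max_vblocks_per_vcenter())  # 10 fully populated Vblocks per vCenter
print(vcenters_needed(640))       # 1
print(vcenters_needed(700))       # 2 - past 640 blades, add a vCenter
```

In other words, a single vCloud vCenter building block comfortably covers ten fully populated Vblocks before a second instance is required.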
Reference Links
http://www.vmware.com/files/pdf/VMware-Architecting-vCloud-WP.pdf
http://www.emc.com/collateral/software/white-papers/h8058-fast-vp-unified-storage-wp.pdf
http://www.emc.com/collateral/hardware/white-papers/h8217-introduction-vnx-wp.pdf
http://www.emc2.ro/collateral/hardware/white-papers/h8220-fast-suite-sap-vnx-wp.pdf
Cisco Nexus 1000V Integration with VMware vCloud Director