
VCE Mega Launch Technical Details for 2/21 Product Announcements

On Thursday, February 21st, VCE announced their mega-launch. There are a bunch of cool releases that were announced. I'm going to cover a few topics:

  • What VCE set out to do, and our value over reference architecture
  • Vblock 100 Release
  • Vblock 200 Release
  • Vblock 300 & 700 Refreshes
  • Specialized Vblock Systems for SAP HANA
  • VCE's new Vision Intelligent Operations software

 

What VCE set out to do, and our value over reference architecture

VCE is now in its third generation of Vblock products. I have been here since the beginning and have seen the products change over the past three years. It's been an amazing roller coaster of fun, and it's even better to start seeing the market shift. VCE now has some actual competitors that try to do what we do. Heck, even Gartner came out with a new category that clearly defines fabric-based infrastructure, and guess what you don't see in there? VSPEX and FlexPod. Finally, everyone is starting to see that those two compete with one another, not with Vblock. Even more so, customers are starting to realize that there is a new evolution in data center operations. It's no longer about building out infrastructure; instead it's about moving to a cloud infrastructure where orchestration plays a key role and monitoring the overall operations is more important. That means you need to quit focusing on hardware and start thinking much higher in the stack. Customers turn to VCE because of our track record of success, with over 1,000 Vblocks in the field; the effort our QA teams spend testing versions of software and firmware so you don't have to; the speed to market VCE delivers; having over 1,200 employees worldwide who focus only on Vblock (find another company that focuses that strongly on converged infrastructure); and a world-class support organization that assists in life cycle support. If you don't know much about VCE and the value we bring, please read my previous post, The VCE Certification Matrix - Ensuring Integration, and reach out to me to learn more. VCE is going to skyrocket this year with these new announcements, so get ready to see a Vblock in a data center near you.


Vblock 100 Release

The Vblock 100 might be VCE's worst-kept secret. This product was announced at EMC World in 2011 but was still under wraps for quite a while and was never available to our partner community. Now that has all changed. The Vblock 100 is GA and available to everyone. So why the Vblock 100?

 

The Vblock 100 is a solution for small to mid-sized data centers and enterprise distributed remote locations. With pre-defined fixed configurations, the Vblock System 100 can be shipped to your location within approximately 30 days of ordering, and fully operational in as little as two more.

 

So what's in the Vblock 100?

 

There are two models of the Vblock 100: BX and DX. The BX is a half-height 24RU cabinet, while the DX is the standard 42RU cabinet.

The servers:

  • The Vblock System 100 BX is pre-configured with 3-4 Cisco UCS C220 M3 servers, each equipped with 2x 2.50GHz six-core E5-2640 processors and 64GB of memory. 2 configurations total.
  • The Vblock System 100 DX is pre-configured with 3-8 Cisco UCS C220 M3 servers, each equipped with 2x 2.50GHz six-core E5-2640 processors and 96GB of memory. 6 configurations total.
  • Fixed memory configuration of 64GB for the BX model or 96GB for the DX model
  • VCE pre-installs VMware ESXi on the C220 server
  • The C220 Servers boot from SAN using iSCSI
  • Bare-metal install is supported on DX models only.
  • Equipped with six 1Gb NICs for vSphere connectivity to the network

The network:

  • The Vblock System 100 BX is pre-configured with 2 Catalyst 3750-X 24-port switches
  • The Vblock System 100 DX is pre-configured with 2 Catalyst 3750-X 48-port switches
  • Switches provide both iSCSI and NFS storage protocols.
  • Configured in pairs using StackWise technology
  • Cisco Nexus 1000v Essentials Virtual Switch
  • 8x 1Gb uplinks are used to connect into the core network

The storage:

  • The Vblock System 100 BX is pre-configured with a VNXe 3150 Dual (2x1Gbps+2x10Gbps/SP)
  • The Vblock System 100 DX is pre-configured with a VNXe 3300 Dual (4x1Gbps+2x10Gbps/SP)
  • Serial attached SCSI (SAS) 600 GB 15K RPM 3.5” (Performance)
  • Nearline SAS (NL SAS) 2 TB 7.2K RPM 3.5” (Capacity)
  • Can be configured in RAID-5 (5(4+1) for the 100BX or 7(6+1) for the 100DX) or RAID-6 (4 drives + 2 parity)
  • 10Gb links are used from the storage to the network for storage traffic
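The RAID options above translate directly into usable capacity. As a quick illustrative sketch (the helper function is my own, not a VCE tool; drive sizes and RAID layouts are taken from the bullets above):

```python
def usable_capacity_gb(drive_gb, data_drives, parity_drives, groups=1):
    """Usable capacity of a RAID pack: only the data drives contribute."""
    return drive_gb * data_drives * groups

# Vblock 100BX performance tier: RAID-5 as 5(4+1) with 600GB SAS drives
bx = usable_capacity_gb(600, data_drives=4, parity_drives=1, groups=5)

# Vblock 100DX performance tier: RAID-5 as 7(6+1) with 600GB SAS drives
dx = usable_capacity_gb(600, data_drives=6, parity_drives=1, groups=7)

print(bx, dx)  # 12000 25200
```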

The hypervisor:

  • VMware vSphere Enterprise Plus

The management:

  • Logical AMP (all management VMs live on the Vblock)
  • Vision Intelligent Operations

 

Vblock 200 Release

I'll be honest, this one gets my system going. This is pretty much the best bang for the buck for any virtualized datacenter instance. The Vblock 100 is great when you need to order 20 of them and don't need enterprise-class performance. The Vblock 100 is great for those applications that need to be pushed near the edge for remote or branch offices, like Cisco UC, VMware Mirage, Exchange, a few VDI desktops, etc. The Vblock 200 can be used for those use cases as well, but it is geared toward being the 300's little brother. The Vblock 200 is perfect for those businesses that want to step into 10Gb and higher performance as their core system for running virtualized workloads. So what's in the Vblock 200?

 

There is only one model of the Vblock 200, but it has a wide range of configurations. It comes in our standard 42RU rack.

The servers:

  • Four to 12 C220 M3 servers in increments of one server
  • VIC-1225 for converged networking (All interfaces are 10 Gb DCE)
  • Processors: three CPU options to closely match those offered in Vblock 300/Vblock 700 (Single CPU configurations are available)
  • Memory Options : 64GB, 96GB, 128GB, 192GB, 256GB
  • vSphere NMP or EMC PowerPath/VE multipath options
  • Boot from SAN using Fibre Channel

The network:

  • Two Nexus 5548UP switches
  • 10 GbE connectivity for servers
  • Unified Storage configs include 2x 10Gb Ethernet
  • 8 Gb FC connectivity for storage
  • Flexible customer uplink options using 8x copper/optical, 1Gb/10Gb
  • One Catalyst 3750 switch for management
  • Cisco Nexus 1000v virtual switch supporting Advanced Edition and optional Essential Edition.

The storage:

  • VNX 5300 array
  • Unified is optional but 4U rack space is reserved
  • FAST Cache is mandatory
  • All other software modules are optional
  • Drive configuration has considerable flexibility
  • External NFS/CIFS is supported for unified configurations.  Dedicated storage must be allocated on the array (note – unlike the Vblock System 300, x-blades are shared for internal and external NAS connections)
  • Base configuration comes with 25-drive DPE, 8-drive vault, and FAST Cache
  • Add up to two more 25-drive DAEs for EFD and/or SAS drives
  • Add up to two 15-drive DAEs for NL-SAS drives
  • Boot LUNs are allocated out of production data pool for efficiency, which means no more dedicated boot LUN packs
  • RAID options: 4+1, 8+1, 6+2 and 14+2, 4+4
  • Drive options: 100/200GB 2.5” solid-state drive (extreme performance), 300/600/900GB 10K RPM 2.5” SAS (performance), 1/2/3TB 7.2K RPM 3.5” NL-SAS (capacity)

The hypervisor:

  • VMware vSphere Enterprise Plus

The management:

  • Logical AMP (all management VMs live on the Vblock)
  • Vision Intelligent Operations

 

Vblock 300 & 700 Refreshes

I'm not going to talk about everything inside of these Vblocks. These two workhorses have been the staple for VCE, and they both scream "enterprise performance." I'll tell you what has changed. Remember, this covers only the new stuff; everything from before still applies.

 

The servers:

  • More blade models have been added to the VCE matrix such as B22s
  • Higher RAM densities are now supported (768GB in a single half-width blade)
  • Cisco 6248/6296UP Fabric Interconnects are now supported.
  • The addition of the 6296UP means we can have up to 16 UCS chassis in a single Vblock 300 (except EX), which is a total of 128 blades with 8 uplinks per chassis. In a Vblock 700, VCE still supports up to 384 blades in a single Vblock.
  • Cisco 2204XP/2208XP IOM are both supported. The 2208XP allows you to run 16 uplinks to each chassis to support high bandwidth throughput with very little over subscription compared to the 2204 with 8 uplinks.
  • VIC1280 now provides 40Gb of bandwidth to each blade (not supported in the B250)
  • VIC1240 is designed for the M3 family, with 40Gb of bandwidth to the FIs (20Gb per fabric, 40Gb per server)
  • Up to 50% bare-metal support to deploy non-virtualized workloads
  • Disjoint layer 2 support for Vblock 700
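The IOM uplink bullet above is easy to put into numbers. A small illustrative sketch (the helper is my own; blade bandwidth and uplink counts come from the bullets above, assuming 8 half-width blades per chassis and 10Gb per uplink):

```python
def oversubscription(blades, gbps_per_blade, uplinks, gbps_per_uplink=10):
    """Ratio of potential blade bandwidth to total chassis uplink bandwidth."""
    return (blades * gbps_per_blade) / (uplinks * gbps_per_uplink)

# 8 half-width blades, each with a VIC1280 at 40Gb
r_2204 = oversubscription(8, 40, uplinks=8)    # 2204XP: 8 uplinks per chassis
r_2208 = oversubscription(8, 40, uplinks=16)   # 2208XP: 16 uplinks per chassis

print(r_2204, r_2208)  # 4.0 2.0
```

The 2208XP halves the oversubscription ratio compared to the 2204XP, which is the "very little oversubscription" point made above.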

The network:

  • Support for both Nexus 5548UP (960Gbps throughput) and 5596UP (1920Gbps throughput)
  • Nexus 7010 inside the Vblock 700
  • MDS 9513 support for Vblock 700

The architecture:

  • Current - Uses MDS 9148 for SAN connectivity. Uses Nexus 55xx for Ethernet.
  • New Unified - Uses 55xx for both SAN and Ethernet connectivity

The storage:

  • New RAID configurations - RAID5 (8+1) and RAID6 (14+2)
  • Mixed RAID types in FAST (Fully Automated Storage Tiering) pools
  • Vblock 700 supports modular configuration with VMAX 10K, 20K, and 40K

 


 

Specialized Vblock Systems for SAP HANA

VCE is breaking new ground. VCE has spun off a new internal group called "Specialized Systems." This group is responsible for making Vblock products that are pure appliance-based models of computing. The same great stuff applies to these Vblocks: single support, certification matrices, etc. You might think it's not that hard, but believe me when I tell you there is so much happening behind the scenes to make an appliance-based Vblock. VCE's first appliance-based Vblock is for SAP HANA, and SAP has certified this Vblock appliance for HANA. There are only a handful of SAP HANA certified infrastructures; in fact, there are only two that I know of, and one of them is VCE's.

 

VCE's aim is to architect a modern approach to real-time business. You get all the same value that VCE brings with a Vblock, without having to worry about building the infrastructure yourself. So what's so different about SAP HANA?

 

SAP HANA is a modern platform for real-time analytics and applications. It enables organizations to analyze business operations based on large volumes and a wide variety of detailed data in real time, as it happens. In addition to real-time analytics, SAP is also delivering a new class of real-time applications powered by the SAP HANA platform. The platform can be deployed as an appliance or delivered via the cloud. SAP in-memory computing is the core technology underlying the SAP HANA platform.

 

VCE's SAP HANA appliance comes in two flavors, big and small. SAP HANA on Vblock Systems can be configured in a 3 Node Base (+1 Standby) or 7 Node Base (+1 Standby) based on Vblock System 300.

 

VCE provides factory pre-installation of HANA appliance hardware (server, network and storage), operating system, and SAP HANA Platform Edition Software:

  • Manufacturing also performs final HANA HA and throughput testing before shipping
  • VCE or partners finalize installation with an on site setup and configuration of the SAP HANA components, including:
  • Deployment in a customer’s data center
  • Connectivity to networks
  • SAP Solution Manager
  • Support for Secure Socket Layer (SSL)
  • Connectivity for SAP Support with an SAP software program known as SAP Router
  • Included is the EMC Secure Remote Support (ESRS) solution:
  • Software-based secure access point for remote support activities between EMC and HANA on Vblock Systems
  • IP-based connection enables fast, remote diagnosis and repair of potential problems before they impact production
  • 256-bit encryption and RSA digital certificates ensure highest security
  • Detailed audit capabilities enable compliance with regulatory and internal business requirements

 

Having a SAN-based architecture enables all nodes to have visibility to the full HANA database, and alleviates a lot of data copying and movement from one HANA node to another in situations where persistent storage is located in isolated storage systems on each node. The copying and movement of data not only consumes significant CPU resources on the nodes, but also consumes bandwidth on the interconnect fabric between nodes. Thus, in a node-based arrangement, recovery times will be significantly longer, and HANA database performance will be seriously degraded during the recovery process.

 

The use of a filesystem adds another software layer to the stack, which increases CPU utilization for I/O operations vs block-based storage. Any I/O to the persistent storage by the HANA system will require consumption of CPU resources as a result of the added software layer and will detract from the HANA appliance’s primary role -- to process in-memory data. Thus block-based implementation is a superior reference design to HANA systems that require a filesystem like NFS or GPFS.

 

The addition of the EMC storage APIs adds another powerful offload opportunity, pushing I/O operations onto the storage system CPUs. The storage systems in a SAN-based architecture have powerful processors custom designed for manipulating, moving, and replicating large amounts of data. The APIs enable the HANA system to rapidly offload CPU-intensive I/O operations to the storage system controllers, and allow the HANA system to allocate all available resources to manipulating in-memory data. This offload acceleration becomes crucial during periods of high I/O to/from persistent storage, as is the case in HA and DR scenarios, but it offers significant value during normal operations as well.

 

The dirty technical details:

3 Node Base (+1 standby)

  • 4 x B440 Blades & 1 x VNX 5300
  • 1 x DPE – SAS 2.5” 300GB 10k
  • 13 x 300GB SAS used for VNX Vault, SAN Boot (SLES), and HANA binaries
  • 3 x DAE – SAS 2.5” 600GB 10k
  • 40 x 600GB SAS 4+1 RAID5 pack used for HANA LOG and DATA LUNs
  • 2 x 600GB Hot-spares

 

7 Node Base (+1 standby)

  • 8 x B440 Blades & 2 x VNX 5300’s
  • 2 x DPE – SAS 2.5” 300GB 10k
  • 26 x 300GB SAS used for VNX Vault, SAN Boot (SLES), and HANA binaries
  • 6 x DAE – SAS 2.5” 600GB 10k
  • 80 x 600GB SAS 4+1 RAID5 pack used for HANA LOG and DATA LUNs
  • 4 x 600GB Hot-spares

 

VCE's new Vision Intelligent Operations software

I saved this one for last, and there is a very good reason. For the longest time, people have always wondered, "what makes Vblock so different?" Today, many customers already see the benefit VCE brings in all of our value (see the first paragraph). You may think that because you own V+C+E, you can build your own Vblocks. That could be the furthest thing from the truth. Just because you own V+C+E does not mean you are capable of delivering all the operational benefits that VCE drives. VCE is now introducing software that is a new foundation layer for everything you could want to do in your data center. Introducing VCE Vision Intelligent Operations.

 

Previously, operations spent its time analyzing logs from every piece of infrastructure, and there were always a multitude of systems competing for your attention. VCE is setting out to change that by creating an overall abstraction layer for the Vblock. I want to put a stake in the ground and state that VCE is NOT creating another window or another piece of management software. Instead, the goal is to allow existing management applications such as vCenter, vCOPs, CA, BMC, Nagios, Xangati, VKernel, VMTurbo, and anything else to no longer be reliant on individual elements, but instead see the Vblock as a single piece of converged infrastructure.

 

 

Let's examine how this all works. First, there is a discovery engine that collects both physical and logical characteristics of the Vblock and brings everything into an in-memory object database. This database, or "system library," is what makes all the fun actually happen. Individual element managers are polled at intervals (based on time and events) to determine whether there have been any physical or logical changes. This can be firmware/software updates, cables being removed or plugged in, new UCS chassis added to FIs, new DAEs, removal of disks, etc. More technically, SMI-S and CLI from the storage processors, CIMC from UCS C servers, SNMP traps from Nexus and MDS switches, XML and SNMP from UCS Manager for blade servers, and CIM from vCenter all contribute to the system library. This discovery happens at both the physical and logical levels. It is a completely agentless process that always maintains an up-to-date object model and determines a health state. Each component inside the Vblock essentially has its own system library, but we combine all those pieces to make everything else happen as a single unit of converged infrastructure.
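The polling loop described above can be sketched in a few lines. This is a hypothetical illustration, not Vision's actual code: the manager names, fields, and stub pollers stand in for real SMI-S/SNMP/XML queries.

```python
def poll_element_managers(managers):
    """Poll each element manager agentlessly and merge the results
    into a single 'system library' object model with health states."""
    library = {"components": {}, "health": {}}
    for name, poll in managers.items():
        snapshot = poll()  # in reality an SMI-S, SNMP, CIMC, or XML API call
        library["components"][name] = snapshot
        library["health"][name] = snapshot.get("state", "unknown")
    return library

# Stubbed pollers standing in for the real element-manager interfaces
managers = {
    "ucs":     lambda: {"chassis": 2, "blades": 16, "state": "ok"},
    "storage": lambda: {"daes": 4, "disks": 105, "state": "ok"},
    "nexus":   lambda: {"switches": 2, "state": "degraded"},
}

lib = poll_element_managers(managers)
print(lib["health"]["nexus"])  # degraded
```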

 

With the system library, we are now able to poll it and get any sort of variable or characteristic we want. This makes the Vblock an intelligent informer to the operator. Later on, we can also push to it to make any changes we want (aka Orchestration using vCAC, vCO, CIAC, OpenStack, Cloupia, etc).

 

If we look more closely at the discovery portion, the system library is polled to determine "is this a Vblock?" The polling first verifies that all of the physical components are there. Second, it verifies that the system cabling and port map, which is described in an XML file, is correctly aligned. This is where we really determine the difference between V+C+E and a VCE Vblock. Every Vblock is cabled the same way, and VCE knows which ports are designated for certain functions. There is a relationship for the mappings.
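The port-map check can be sketched like this. The XML schema below is invented purely for illustration (VCE's actual port-map format is not public in this post); the idea is simply to diff observed cabling against the expected map.

```python
import xml.etree.ElementTree as ET

# Hypothetical port-map XML; the real file's schema will differ.
PORT_MAP = """
<portmap>
  <link switch="n5k-a" port="Eth1/1" device="fi-a" device_port="1/1"/>
  <link switch="n5k-a" port="Eth1/2" device="fi-b" device_port="1/1"/>
</portmap>
"""

def validate_cabling(xml_text, observed):
    """Return the (switch, port) pairs whose observed far end
    does not match the expected port map."""
    expected = {
        (l.get("switch"), l.get("port")): (l.get("device"), l.get("device_port"))
        for l in ET.fromstring(xml_text).iter("link")
    }
    return [key for key, dev in expected.items() if observed.get(key) != dev]

# Second cable plugged into the wrong FI port
observed = {("n5k-a", "Eth1/1"): ("fi-a", "1/1"),
            ("n5k-a", "Eth1/2"): ("fi-b", "1/2")}

print(validate_cabling(PORT_MAP, observed))  # [('n5k-a', 'Eth1/2')]
```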

 

Even though we now have four Vblock models in our product line, there is a single Vision Intelligent Operations code base that covers all Vblock models. This allows new developments to happen in a very repeatable manner while covering all the core platforms.

 

The identification stage is really simple. After the Vblock has been discovered, we can assign a label to that Vblock. In addition, the discovery then applies a model based on what is found, for instance Vblock 300HX. Instead of having to know which storage array, servers, and network components are all interacting with one another and creating a label for each, we can put a Vblock stamp across all these pieces and correlate that to a label such as "Serial #3451" or "Houston" or "Tiles 23-25" or "SQL, SharePoint, & Exchange". Now we can create a unique single system identity for every Vblock. One piece of software is able to correlate all these components into a single logical unit. Every Vblock by default is given a serial number. This makes support easier by being able to look up all the components of your Vblock under one entry. The support staff knows when the Vblock was delivered and how the base system was configured. In addition, this "identity" is made available to consumers of the API or SNMP. For large data centers, this is extremely helpful. Think about how many Nexus 5548s there are in your DC; some shops may have 100+. How are you able to manage the SNMP traps coming from those Nexus switches, and how do you know what each is connected to and responsible for? This is where identification creates a masking layer for the Nexus switches in the Vblock, so you know exactly what each one belongs to, where it's located, and what it's responsible for.

 

The validation principle is pretty fascinating stuff. Remember the top paragraph where I talked about The VCE Certification Matrix - Ensuring Integration? This is where the automation of "checking against" the matrix comes into play. It automatically ensures the reliability and performance of a Vblock system without you having to go check every element manager and piece of the stack individually. This accelerates the resolution of problems by VCE support and validates successful matrix upgrades. There is an API available for the validation that can be exported to any other software. The API allows anyone to customize the matrix or add new benchmarks. These scans can all be scheduled and filtered using search criteria. A screenshot of this will be shown later on. The module for this software was written in a very "templatable" manner. In the future, we can explore other templates such as PCI, HIPAA, SOX, the vSphere Hardening Guide, etc., and benchmark against those. As this develops, things such as an "easy button" for applying the entire vSphere Hardening Guide can be done.
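At its core, "checking against" the matrix is a version diff. A minimal sketch, assuming a made-up subset of a release matrix (the component names and version strings here are illustrative, not the real matrix contents):

```python
# Illustrative subset of a certification matrix for one release.
MATRIX_302 = {
    "vsphere": "5.1",
    "ucsm": "2.1(1a)",
    "flare": "05.32.000.5.207",
}

def validate(installed, matrix):
    """Return components whose installed version deviates from the matrix,
    mapped to (installed, expected) pairs."""
    return {c: (installed.get(c), want)
            for c, want in matrix.items()
            if installed.get(c) != want}

installed = {"vsphere": "5.1", "ucsm": "2.0(4b)", "flare": "05.32.000.5.207"}
print(validate(installed, MATRIX_302))  # {'ucsm': ('2.0(4b)', '2.1(1a)')}
```

A real implementation would also weight deviations by criticality, which is how the scored results shown later are produced.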

 

Health monitoring goes hand in hand with the discovery engine. Every discovery, based on either time or events, generates health-based information. All the information collected is tested against VCE's design practices to determine the health score. We know what state components are in and can correlate that upstream to identify any issues for operators. This leads to faster troubleshooting, and the time spent is higher quality because there is a streamlined information flow. The short list of things considered when computing the score includes the FI, UCS chassis, blade, DAE, DPE, disk, SP, 1000v, 9148, and N5K states. This is all represented below by vCOPs.
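One way to picture the scoring is a weighted average over component states. The weights and state values below are invented for illustration (VCE's actual scoring model is not published in this post), but the shape follows the description above:

```python
# Hypothetical weights: shared/critical components count more.
WEIGHTS = {"fi": 3, "chassis": 2, "blade": 1, "dae": 2, "sp": 3, "n5k": 3}
STATE_SCORE = {"ok": 1.0, "degraded": 0.5, "failed": 0.0}

def health_score(states):
    """Weighted average of component states: 1.0 is healthy, 0.0 is down."""
    total = sum(WEIGHTS[c] for c in states)
    return sum(WEIGHTS[c] * STATE_SCORE[s] for c, s in states.items()) / total

states = {"fi": "ok", "chassis": "ok", "blade": "degraded",
          "dae": "ok", "sp": "ok", "n5k": "ok"}

print(round(health_score(states), 3))  # 0.964
```

A single degraded blade barely dents the score, while a failed SP or FI would drag it down sharply, which matches the idea of weighting by criticality.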

 

Logging is important to everyone. Tracking changes is critical to any enterprise. The problem exists when you get a flood of logs from many different IPs and need to determine who belongs to what. What does your syslog server look like when it's receiving information from a pod infrastructure? 32 servers, 2 Ethernet switches, 2 SAN switches, blade chassis, and multiple SPs probably look like a mess. VCE takes the identity configured for your Vblock and correlates that into all the logging of all the sub-components. A couple of the ways we convey logging information are via syslog, SNMP, and AMQP streams. This allows any monitoring software today to monitor a Vblock. Nothing is private to the system library, so anything it knows about, you can know about. There is functionality built in to segregate application logs from AAA logs. SNMP traps are displayed as Vblock traps. Syslog data streams are passed through unaltered. The AMQP stream is for system-related events. Each one of these takes multiple streams from all the sub-components and presents them as one. The cool part about this: monitoring protocols that didn't exist for individual components before, well, now they do. The AMQP stream is an event-level alerting mechanism, and almost no hardware product offers this type of alerting. Even vSphere doesn't offer AMQP events. Now we can utilize the vCO AMQP plug-in to trigger orchestrated events from any event we see as pertinent in the Vblock. Let your mind start going wild with all the things you can do.
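To make the AMQP idea concrete, here's a hypothetical consumer that decides whether an incoming system event should kick off an orchestration workflow. The JSON message shape is invented; real messages would follow VCE's published event schema, and the routing would hand off to something like the vCO AMQP plug-in.

```python
import json

def route_event(raw):
    """Decide whether a Vblock system event should trigger orchestration.
    Returns (action, component) for the event."""
    event = json.loads(raw)
    if event.get("severity") in ("critical", "major"):
        return ("trigger-workflow", event["component"])
    return ("log-only", event["component"])

# A made-up event as it might arrive off the AMQP stream
msg = '{"system": "Houston", "component": "sp-a", "severity": "critical"}'
print(route_event(msg))  # ('trigger-workflow', 'sp-a')
```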

 

The last piece to this is the Open API. VCE wants to allow extensibility for developers to be creative. This will spawn new management tools, applications, and software services. Using the SNMP or REST interface, developers can create tools, applications, or plug-ins for existing applications to report Vblock system performance or do overall health monitoring. You can create orchestrated events by responding to the AMQP bus. Look to the future for using this API to intelligently control the Vblock using products such as vCO, vCAC, CIAC, Cloupia, BMC, CA, OpsWare, and maybe even OpenStack. Think of the "easy button" to provision a new VLAN that touches every piece of your infrastructure. Inside the SDK, there is going to be a "virtual Vblock" for developing applications without even owning a Vblock. The SDK contains documentation, Java libraries, databases (for VB100, VB200, VB300, and VB700 systems), and sample code (REST, AMQP client and publisher, and System Assurance examples). The SDK will be distributed through a developer portal going live soon at http://www.vce.com/developer.
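As a flavor of what a REST consumer might look like: the response fields below are invented for illustration (the real API's resource paths and schema live in the SDK documentation); a real client would issue an HTTP GET and parse the JSON body the same way this does.

```python
import json

# Canned response standing in for a GET against the Vblock system resource.
SAMPLE_RESPONSE = """{
  "identity": {"label": "MRO Red VB700", "serial": "3451"},
  "model": "Vblock 700",
  "health": "critical",
  "components": [{"name": "n5k-a", "health": "ok"},
                 {"name": "sp-b", "health": "critical"}]
}"""

def unhealthy_components(body):
    """Return the names of components whose health is not 'ok'."""
    system = json.loads(body)
    return [c["name"] for c in system["components"] if c["health"] != "ok"]

print(unhealthy_components(SAMPLE_RESPONSE))  # ['sp-b']
```

This is exactly the kind of drill-down the vCenter and vCOPs plug-ins described below perform through the same API.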

 

VCE is introducing two new products that use the Vision Intelligent Operations API. First is the vCenter plug-in. On the home screen of the vSphere Web Client, there will be a new Vblock logo. After clicking that logo, you will be brought to the tree view where you can see your physical Vblock inside of vCenter. This is where we begin to bridge the gap between the physical and virtual worlds. You can see how each is tied to the other. In this first view, we can see where we have assigned an identity to our Vblock as "MRO Red VB700". A serial number has also been assigned to the Vblock as a whole. The system health state is also defined as Critical, which allows us to drill down even deeper to see where the issue lies. These screenshots are taken from the beta product that we are announcing today. Look forward to seeing even more information when this product goes GA.

 

We can continue to expand our Vblock view to dig further into the sub-components. We are pulling information related to every component and can see each one's system health, serial number, firmware versions, etc. A lot of good information can be pulled from here. At the same time, we can also see that Chassis 2 Blade 4 is the r7mx2 ESXi host. That's much easier to decipher than through typical means such as UCSM.

 

Speaking of UCSM, from this screen we are also able to launch individual element managers. Instead of having to remember a boatload of IP addresses, a simple right-click brings up a new menu. We can launch UCSM directly from the vSphere Web Client to make quick changes to service profiles, rename anything we want in the stack, as well as launch the VCE Help menu.

 

Remember when we talked about automating the matrix-checking process? Well, here it is. On the root Vblock menu item, there is a "Manage" tab. Within it, you can schedule runs or run a check immediately. This sample shows us checking a Vblock 700 against certification matrix 3.0.2. We can see that a lot of components don't match up correctly. The score is weighted toward the most critical components, such as having your FLARE code not match the supported vSphere version.

 

If we go into each individual tab, we can see what passed and what failed. If we drill into the network tab, we can clearly see all the networking components.

 

Lastly, we have the VCE Vision for vCenter Operations plug-in. This plug-in is able to monitor everything from individual switch ports to fans inside the Vblock. It's a very powerful plug-in that lets you drill very far down to find the root cause of issues. We can also get an overall health status of our Vblock and each individual sub-component. This heat map shows what we need to inspect in our Vblock to get it back to peak performance.

 

I'm merely scratching the surface of what's inside of these plug-ins. I would encourage you to get in touch with a VCE partner to learn more and request demos of the software.

 

To sum it up, Vision is an API for the Vblock that creates a new layer of abstraction for intelligent operations. But why is this so important? Let's think bigger picture. I hope by now you're reading this and realizing that infrastructure is boring. Sure, I gave you some pretty cool specs on new Vblock models, but if you really think spending your time architecting vSphere environments has career longevity, I have some bad news for you. vSphere administrators need to start thinking higher in the stack. Higher in the stack means pushing towards a cloud model. A cloud model can't be achieved with silos focused on implementing different infrastructure pieces. If you remove all the time spent on delivering core infrastructure and start figuring out how to deliver IaaS, PaaS, etc. utilizing cloud products and orchestration, you will become more streamlined in your operations. Vblocks help you achieve that efficiency by letting you spend quality time focused on operational value. Vision gets you further and enables operational efficiency by collecting the sum of its parts and intelligently informing you of what's important and where to focus your attention.
