Recently, while working on a project to implement a new datacenter infrastructure, I had the chance to architect and implement the final design of the solution. I would like to share some high-level configuration details for people utilizing these technologies as I begin blogging my experiences in the data center.
This architecture included the following solutions in the final configuration and will run VMware vSphere:
- HP BLc7000 Chassis with redundant Onboard Administrator cards.
- (2) HP Flex 10/10D Virtual Connect
- (4) HP BL460c – 12-Core Xeon E5-2697 v2 – 384GB RAM – 8GB microSD (vSphere Embedded)
- (2) HP 5800-24G (JC099A) – IRF-Stack
- Nimble CS-460g (2.4TB SSD Cache – 25-50TB Usable)
- (2) Cisco 3750X-24G
- vSphere 5.1 Enterprise Plus
“The BladeSystem c7000 enclosure provides all the power, cooling, and I/O infrastructure needed to support modular server, interconnect, and storage components today and throughout the next several years. The enclosure is 10U high and holds up to 16 server and/or storage blades plus optional redundant network and storage interconnect modules.”
In this design I worked to provide the highest availability possible, minimizing downtime and allowing workloads to be balanced and shifted on demand. A vCenter Server was already in place, so this new infrastructure was added to the existing vCenter environment. It was set up as its own cluster within the existing VMware infrastructure to segment the new resources.
HP Virtual Connect Flex-10/10D
Utilizing the HP Virtual Connect Flex-10/10D module for the c-Class BladeSystem allowed me to wire once, then add, move, and change network connections to thousands of virtual servers in minutes instead of days or weeks, all from one console and without affecting the LAN and SAN.
In this design we carved out dedicated port speeds so that our vSphere hosts would have the appropriate management, VM network, vMotion, storage, and DMZ networks across the 10Gb links to our core and storage networks. The bandwidth segmentation within the HP chassis breaks out as follows:
- Management / VM Network – 2Gb
- SAN Network – 4Gb
- DMZ- 1Gb
- vMotion – 3Gb
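Each Flex-10/10D downlink splits a blade's 10Gb adapter port into FlexNICs, and the allocations above must fit within that 10Gb ceiling (2 + 4 + 1 + 3 = 10Gb). A sketch of how the split maps onto one adapter port (the FlexNIC labels are illustrative, not the actual profile names):

```
LOM:1-a  Management / VM Network   2Gb
LOM:1-b  SAN (iSCSI)               4Gb
LOM:1-c  DMZ                       1Gb
LOM:1-d  vMotion                   3Gb
#        Total                    10Gb  (fills the 10Gb Flex-10/10D downlink)
```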
Again, the main benefit of Virtual Connect is being able to reallocate this port speed on the fly; nothing is set in stone, and we can modify the allocations at any time as the business requires.
HP 5800 Switch
Adding onto an existing cluster, we joined the new HP 5800 switches to an existing IRF (Intelligent Resilient Framework) configuration in the secondary rack. This design offers simplicity and a flatter topology with scalable performance. The IRF fabric can be configured for full N+1 redundancy, while mission-critical virtualization capabilities such as live migration and application mobility remain available across the IRF domain.
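On Comware-based switches like the 5800, joining an IRF fabric comes down to renumbering the new member, binding physical 10GbE links to IRF ports, and activating the IRF port configuration. A rough sketch (the member IDs, priority, and port numbers are assumptions for this environment, not the actual values):

```
irf member 1 renumber 2                           # give the new switch a unique member ID (reboot required)
irf member 2 priority 16                          # keep the existing master's priority higher
irf-port 2/1
 port group interface Ten-GigabitEthernet2/0/25   # physical IRF interconnect link
irf-port 2/2
 port group interface Ten-GigabitEthernet2/0/26
irf-port-configuration active
```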
We utilized link aggregation to our Nimble CS-460g infrastructure to maximize availability and "on-demand" throughput performance. Should a network failure occur, IRF can deliver rapid recovery and network reconvergence in under 50 milliseconds, much faster than the several seconds required for STP.
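Because the two 5800s act as one logical switch under IRF, the aggregation group toward the array can span both members, so losing a single switch does not drop the bundle. An illustrative Comware fragment (the interface numbers and aggregation group are assumptions):

```
interface Bridge-Aggregation1
 description To-Nimble-CS460g
 link-aggregation mode dynamic      # LACP
interface GigabitEthernet1/0/1      # member port on IRF member 1
 port link-aggregation group 1
interface GigabitEthernet2/0/1      # member port on IRF member 2 for resiliency
 port link-aggregation group 1
```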
In-Service Software Upgrade: IRF delivers a network-based In-Service Software Upgrade (ISSU) capability that allows an individual IRF-enabled switch to be taken offline for servicing or software upgrades without affecting traffic to the other switches in the IRF domain.
See below for a diagram of the IRF interconnects.
Nimble Storage
According to Nimble: “CS-Series arrays are based on Nimble’s patented Cache Accelerated Sequential Layout (CASL) architecture. CASL has been designed from the ground up to optimize performance, capacity, and data protection – at an affordable price. CASL innovations accelerate read and write performance while optimizing capacity and snapshot efficiency. Built in data protection simplifies backup and disaster recovery.”
In this configuration I leveraged the availability and isolation of our SAN network to provide redundancy to each controller within the array. The Nimble device uses several IP addresses, depending on how it is configured. Ultimately, all of this will be iSCSI provisioned to our VMware cluster utilizing jumbo frames across the stack.
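Jumbo frames only help if the MTU is raised end to end: on the vSwitch, on the iSCSI VMkernel ports, and on the physical switch ports in between. A minimal sketch using vSphere 5.x esxcli from the host (the vSwitch name and vmk number are assumptions for this environment):

```
# Raise the MTU on the iSCSI vSwitch
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Raise the MTU on the iSCSI VMkernel interface
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
# Verify end-to-end jumbo support: 8972 = 9000 minus 28 bytes of IP/ICMP headers,
# with -d set so the packet cannot be fragmented along the path
vmkping -d -s 8972 <nimble-data-ip>
```

If the vmkping fails while a standard-size ping succeeds, something in the path (a switch port or the array interface) is still at the default 1500 MTU.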
Look for more to come around the configuration of this growing environment.