CVD: FlexPod Reference Architecture for a 2000 Seat Virtual Desktop Infrastructure with Citrix XenDesktop 7.1 on VMware vSphere 5.1: Architecture

This CVD provides a 2000-seat Virtual Desktop Infrastructure using Citrix XenDesktop 7.1, built on Cisco UCS B200 M3 blades with NetApp FAS 3200-series storage and the VMware vSphere ESXi 5.1 hypervisor platform.

The architecture deployed is highly modular. While each customer’s environment might vary in its exact configuration, once the reference architecture contained in this document is built, it can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within a UCS Domain) and out (adding additional UCS Domains and NetApp FAS Storage arrays).

The 2000-user XenDesktop 7 solution includes Cisco networking, Cisco UCS, and NetApp FAS storage, all of which fit into a single data center rack, including the access-layer network switches.

[Figure: CVD rack layout]

The workload is hosted on the following hardware:

  • Two Cisco Nexus 5548UP Layer 2 Access Switches
  • Two Cisco UCS 6248UP Series Fabric Interconnects
  • Two Cisco UCS 5108 Blade Server Chassis with two 2204XP IO Modules per chassis
  • Four Cisco UCS B200 M3 Blade Servers with Intel E5-2680v2 processors, 384 GB RAM, and VIC1240 mezzanine cards for the 550 hosted Windows 7 virtual desktop workloads, with N+1 server fault tolerance
  • Eight Cisco UCS B200 M3 Blade Servers with Intel E5-2680v2 processors, 256 GB RAM, and VIC1240 mezzanine cards for the 1450-user hosted shared Windows Server 2012 desktop workloads, with N+1 server fault tolerance
  • Two Cisco UCS B200 M3 Blade Servers with Intel E5-2650 processors, 128 GB RAM, and VIC1240 mezzanine cards for the virtualized infrastructure workloads
  • Two-node NetApp FAS 3240 dual-controller storage system running clustered Data ONTAP, with four disk shelves and with converged and 10 GbE ports for FCoE and NFS/CIFS connectivity, respectively
  • (Not shown) One Cisco UCS 5108 Blade Server Chassis housing three UCS B200 M3 Blade Servers with Intel E5-2650 processors, 128 GB RAM, and VIC1240 mezzanine cards for the Login VSI launcher infrastructure

Our design goal is a highly available, high-performance, and highly efficient end-to-end solution. The following sections explain how we achieve this goal at the server, network, and storage layers.

Server

The logical architecture of the validated design supports 2000 users within two chassis and fourteen blades, which provides physical redundancy for the chassis and for the blade servers of each workload.

In vCenter, we created three resource pools and followed an N+1 high-availability design to achieve our end-to-end server, network, and storage resiliency goal (a PowerCLI sketch follows the list below):

  • 2 UCS servers for infrastructure VMs
  • 6 for the 550 hosted VDI VMs
  • 8 for the 64 hosted shared VMs serving 1450 users
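
For illustration only, the three pools could be created with VMware PowerCLI along the following lines; the vCenter address, cluster name, and pool names here are hypothetical placeholders, not values from the CVD:

    # Connect to vCenter (hypothetical address; credentials prompted interactively)
    Connect-VIServer -Server vcenter.example.com

    # Locate the compute cluster (a single cluster is assumed here for simplicity)
    $cluster = Get-Cluster -Name "XD-Cluster"

    # One resource pool per workload type, matching the N+1 layout above
    New-ResourcePool -Location $cluster -Name "Infrastructure"
    New-ResourcePool -Location $cluster -Name "HostedVDI"
    New-ResourcePool -Location $cluster -Name "HostedSharedDesktops"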

Network

We configured a fully redundant and highly available network. The configuration guidelines indicate which redundant component, A or B, is being configured in each step; for example, Nexus A and Nexus B identify the pair of Cisco Nexus switches being configured. The Cisco UCS Fabric Interconnects are configured in the same way.
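
FlexPod designs typically join such a Nexus pair with a virtual port channel (vPC) domain so that UCS and storage uplinks stay active across both switches. This post does not reproduce the exact switch configuration, so the following is only a rough sketch on Nexus A, with hypothetical keepalive addresses:

    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 10.1.1.2 source 10.1.1.1
    interface port-channel10
      description vPC peer link to Nexus B
      switchport mode trunk
      vpc peer-link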

For best performance, we use 10 GbE with jumbo frames on the network between UCS and storage.
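
On the Nexus 5548UP, jumbo frames are enabled through a network-qos policy rather than per interface; a minimal sketch of that approach (not copied verbatim from the CVD) looks like this:

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo

The matching MTU (9000) also has to be set end to end, on the UCS vNICs and QoS system classes as well as on the ESXi vSwitches and VMkernel ports that carry storage traffic.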

[Figure: CVD network topology]

Five VLANs are configured to ensure QoS on the UCS fabric and the switches.
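
The actual VLAN IDs and names are listed in the CVD's configuration tables; purely as a hypothetical illustration, defining a handful of such VLANs on each Nexus switch looks like this:

    vlan 60
      name IB-Mgmt
    vlan 61
      name Infra-NFS
    vlan 62
      name VDI
    vlan 63
      name vMotion
    vlan 64
      name Launcher

The same VLANs are then allowed on the trunk port channels toward the fabric interconnects and defined in UCS Manager so that the vNIC templates can carry them.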

Storage

This is the first Clustered Data ONTAP CVD.

With the release of NetApp clustered Data ONTAP (clustered ONTAP), NetApp was the first to market with enterprise-ready, unified scale-out storage. Developed from a solid foundation of proven Data ONTAP technology and innovation, clustered ONTAP is the basis for virtualized shared storage infrastructures that are architected for nondisruptive operations over the lifetime of the system. For details on how to configure clustered Data ONTAP with VMware® vSphere™, refer to TR-4068: VMware vSphere 5 on NetApp Data ONTAP 8.x Operating in Cluster-Mode.
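
To give a flavor of the clustered Data ONTAP side, the sketch below creates an NFS datastore volume and a data LIF for ESXi to mount; the SVM, aggregate, node, port, and address values are hypothetical placeholders rather than the CVD's actual values:

    volume create -vserver vdi-svm -volume infra_datastore_1 -aggregate aggr01_node01 -size 500GB -state online -junction-path /infra_datastore_1 -policy default

    network interface create -vserver vdi-svm -lif nfs_lif01 -role data -data-protocol nfs -home-node fas3240-01 -home-port a0a-61 -address 192.168.61.10 -netmask 255.255.255.0

Because the volume and the LIF belong to a storage virtual machine rather than to a physical controller, both can later be moved between nodes nondisruptively, which is the property the principles below rely on.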

All clustering technologies follow a common set of guiding principles. These principles include the following:

  • Nondisruptive operation. The key to efficiency and the basis of clustering is the ability to make sure that the cluster does not fail—ever.
  • Virtualized access is the managed entity. Direct interaction with the nodes that make up the cluster is in and of itself a violation of the term cluster. During the initial configuration of the cluster, direct node access is a necessity; however, steady-state operations are abstracted from the nodes as the user interacts with the cluster as a single entity.
  • Data mobility and container transparency. The end result of clustering—that is, the nondisruptive collection of independent nodes working together and presented as one holistic solution—is the ability of data to move freely within the boundaries of the cluster.
  • Delegated management and ubiquitous access. In large, complex clusters, the ability to delegate or segment features and functions into containers that can be acted upon independently of the cluster means the workload can be isolated; it is important to note that the cluster architecture itself must not impose this isolation. This should not be confused with security concerns around the content being accessed.

I will discuss the details of the storage architecture, solution best practices, and test results for this solution in future blog posts.
