Note: Treat this as a concept design; I'm not saying you should build it, but the speed has been amazing.
Quick explanation of the diagram:
To continue reading this article, please view it at the source: http://www.applicationdelivery.co.uk/i-got-99-problems-but-iops-aint-one/
posted by Guest - 02/13/2011
Re: vDisk cached in PVS Ram
Server 2008 R2 will do it by default as long as the vDisk is attached via block-level storage.

posted by Guest - 09/29/2011
Re: Re: vDisk cached in PVS Ram
Hi, I have a physical Server 2008 R2 PVS server with block-based storage (an iSCSI LUN) presented to it from a NetApp filer. It is not caching the vDisk into RAM: 48GB is available and only 2GB is utilized. Can anyone confirm that this works with 2008 R2? The Citrix best practices guide is ambiguous: http://support.citrix.com/servlet/KbServlet/download/25649-102-649146/Provisioning%20Services%205%206%20Best%20Practices%20External%201.2.pdf

vDisk Store:
• Use a disk subsystem that causes the Windows Server to cache the vDisk
  o not NFS or Windows 2008 R2

Does this imply that a Windows 2008 R2 server with block-based storage does not work? Or does it mean that using a CIFS share on a Windows 2008 R2 server for the vDisk will not work?
Cheers, Chris.
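Whether the OS is actually serving the vDisk from RAM can be probed crudely by timing repeated sequential reads of a file: once it sits in the system file cache, a second read is served from memory. The sketch below is a generic, OS-level illustration of that idea, not a Citrix or NetApp tool; on a real PVS host you would watch the Cache Bytes counter in Performance Monitor instead. File name and size here are made up for the demo.

```python
import os
import time

def timed_read(path, block=1024 * 1024):
    """Read a file sequentially and return the elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    return time.perf_counter() - start

# A small stand-in for a vDisk (16 MB; real vDisks are tens of GBs).
path = "cache_probe.bin"
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))

cold = timed_read(path)  # may hit disk, or cache if the write is still resident
warm = timed_read(path)  # should be served from the OS file cache
print(f"first read: {cold:.4f}s, second read: {warm:.4f}s")
os.remove(path)
```

If the second read is not dramatically faster on a mostly idle box with free RAM, the OS is likely not caching the file, which matches Chris's 48GB-free-but-2GB-used symptom.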
posted by Guest - 02/01/2011
Re: What? No HA?
He has both write caches set as D:\ - PVS will switch in HA.

posted by Guest - 03/01/2011
Re: Re: What? No HA?
This configuration won't support active failover. If PVS1 goes down, you would need to cycle any target devices connected to PVS1 so their write cache kicks over to PVS2 on reboot. PVS is only capable of active failover for HA vDisks when the write cache is set to Local Device HD or shared between PVS servers (an SSD SAN, for example). Since he's using FusionIO cards in the PVS hosts, that's local PVS storage (Cache on Server Disk), which will not support active failover. Feel free to test this theory by marking the server down or stopping the stream service. Do the targets fail over to the second PVS box?

One strategy to support active HA failover is FusionIO in the hypervisors, with VHDs presented to the VMs for the write cache on the Local HD option. Blazing fast performance, but you have to use a hypervisor that guarantees driver support for your FusionIO cards. This can be an issue if you're booting hypervisors from manufacturer-provided USB that isn't on the latest hypervisor release, so be sure to check and double-check for support! You also have to weigh the cost of storage (i.e. FusionIO cards in the hypervisors), the size of the VHD for the write cache, and the scalability of provisioned VMs per host. Feel free to poke holes in my logic if you see any! In the end, we're all working toward a better tomorrow, after all. :-) Thanks, DY
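DY's failover rule boils down to a simple condition: a target can switch PVS servers mid-session only if its write cache is still reachable after the server dies, i.e. it lives on the target itself or on storage shared between PVS servers. A minimal sketch of that rule, with illustrative location names that are not a Citrix API:

```python
# Cache locations that survive the loss of one PVS server (illustrative names):
# on the target device itself, or on storage shared between PVS servers.
SHARED_CACHE_LOCATIONS = {"target_device_hd", "shared_storage"}

def supports_active_failover(write_cache_location: str) -> bool:
    """Per the comment above: active failover works only when the write
    cache does not live on a single PVS server's local disk."""
    return write_cache_location in SHARED_CACHE_LOCATIONS

print(supports_active_failover("target_device_hd"))  # True
print(supports_active_failover("server_disk"))       # False: targets must reboot
```

FusionIO inside a PVS host falls into the "server_disk" case, which is why the configuration in the article cannot actively fail over.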
posted by Guest - 02/01/2011
Re: this seems like a pretty complex architecture
What is the cost per IOPS versus this? The cost of 135,000 write IOPS here is around 10k. How many SAS disks would you need to provide that?
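The commenter's arithmetic can be worked through directly. The 135,000 write IOPS for roughly 10k comes from the comment; the SAS figures below are assumptions for illustration (around 180 IOPS per 15k RPM SAS spindle at roughly 300 per disk, before enclosures, controllers, and RAID write penalties, which would make SAS look even worse):

```python
# Figures from the comment above.
fusionio_cost = 10_000
fusionio_iops = 135_000

# Assumed SAS figures (15k RPM spindles, street price, no RAID penalty).
sas_iops_per_disk = 180
sas_cost_per_disk = 300

disks_needed = -(-fusionio_iops // sas_iops_per_disk)  # ceiling division
sas_cost = disks_needed * sas_cost_per_disk

print(f"FusionIO: {fusionio_cost / fusionio_iops:.4f} per write IOPS")
print(f"SAS: {disks_needed} disks, roughly {sas_cost} total")
```

Under these assumptions you would need 750 spindles to match the card's write IOPS, which is the point the commenter is making.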
posted by Guest - 01/31/2011
Re: I use VMware and have no problems
The hypervisor is irrelevant.

posted by Guest - 09/29/2011
Re: Re: I use VMware and have no problems
The hypervisor is not irrelevant. Try to mount any SSD card into a XenServer: XenServer requires 32-bit drivers for the SSD because the XenServer control domain (dom0) is still 32-bit, even though it is marketed as a 64-bit OS/hypervisor. In the design above, you remove this constraint by attaching the SSD (Fusion-io, which does not have a 32-bit driver) to an OS that does have drivers for it. So yes, using the above architecture, it is hypervisor independent.
Cheers, Chris.
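Chris's constraint is a word-size match: a kernel-mode driver can only load into a kernel of the same bitness, so a vendor that ships only a 64-bit driver cannot support a 32-bit dom0. A toy illustration of that rule (all names hypothetical; this is not how XenServer actually probes drivers):

```python
def driver_loadable(kernel_bits, driver_builds):
    """A kernel-mode driver loads only if the vendor ships a build
    matching the kernel's word size."""
    return kernel_bits in driver_builds

xenserver_dom0 = 32      # 32-bit control domain, despite 64-bit marketing
windows_2008_r2 = 64     # the PVS host OS used in the design above

fusionio_builds = {64}   # per the comment, no 32-bit driver exists

print(driver_loadable(xenserver_dom0, fusionio_builds))   # False
print(driver_loadable(windows_2008_r2, fusionio_builds))  # True
```

This is why moving the card out of the hypervisor and into a 64-bit Windows PVS host sidesteps the problem entirely.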