
Storage QoS on VMs in Hyper-V 2012 R2

With Windows Server 2012 R2, Hyper-V includes the ability to set certain quality-of-service (QoS) parameters for storage on the virtual machines. Storage QoS provides storage performance isolation in a multitenant environment and mechanisms to notify you when the storage I/O performance does not meet the defined threshold to efficiently run your virtual machine workloads.

Storage QoS allows you to plan for, and gain acceptable performance from, your investment in storage resources. With Storage QoS you can:

  • Specify the maximum IOPS allowed for a virtual hard disk that is associated with a virtual machine.
  • Receive a notification when the specified minimum IOPS for a virtual hard disk is not met.
  • Monitor storage-related metrics through the virtual machine metrics interface.

Benefits

Storage QoS provides the ability to specify a maximum input/output operations per second (IOPS) value for your virtual hard disk. An administrator can throttle the storage I/O to stop a tenant from consuming excessive storage resources that may impact another tenant.

An administrator can also set a minimum IOPS value and be notified when the IOPS for a specified virtual hard disk falls below the threshold needed for its optimal performance.

The virtual machine metrics infrastructure is also updated with storage-related parameters, allowing the administrator to monitor performance and gather chargeback-related data.

Maximum and minimum values are specified in terms of normalized IOPS, where every 8 KB of data is counted as one I/O; a single 32 KB request, for example, counts as four normalized I/Os.

Requirements

Storage QoS requires that the Hyper-V role is installed. The Storage QoS feature cannot be installed separately. When you install Hyper-V, the infrastructure is enabled for defining QoS parameters associated with your virtual hard disks.

NOTE: Storage QoS is not available if you are using shared virtual hard disks.

How to use

Virtual hard disk maximum IOPS

Storage QoS provides the following features for setting maximum IOPS values (or limits) on virtual hard disks for virtual machines:

  • You can specify a maximum setting that is enforced on the virtual hard disks of your virtual machines. You can define a maximum setting for each virtual hard disk.
  • Virtual disk maximum IOPS settings are specified in terms of normalized IOPS. IOPS are measured in 8 KB increments.
  • You can use the WMI interface to control and query the maximum IOPS value you set on your virtual hard disks for each virtual machine.
  • Windows PowerShell enables you to control and query the maximum IOPS values you set for the virtual hard disks in your virtual machines (see the sketch after this list).
  • Any virtual hard disk that does not have a maximum IOPS limit defined defaults to 0, which means no limit is enforced.
  • The Hyper-V Manager user interface is available to configure maximum IOPS values for Storage QoS.
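
As an illustration, here is a minimal PowerShell sketch of setting and reading back a maximum IOPS limit with Set-VMHardDiskDrive. The VM name "SQL01" and the controller location are assumptions for the example, not values from this article.

  # A minimal sketch, assuming a VM named "SQL01" with a data disk at SCSI controller 0, location 1.
  # Cap the disk at 1000 normalized IOPS; setting the value back to 0 removes the limit.
  Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -ControllerNumber 0 `
      -ControllerLocation 1 -MaximumIOPS 1000

  # Read the current QoS settings back for every disk attached to the VM.
  Get-VMHardDiskDrive -VMName "SQL01" | Select-Object VMName, Path, MinimumIOPS, MaximumIOPS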

Virtual hard disk minimum IOPS

Storage QoS provides the following features for setting minimum values (or reserves) on virtual hard disks for virtual machines:

  • You can define a minimum IOPS value for each virtual hard disk, and an event-based notification is generated when the minimum IOPS value is not met.
  • Virtual hard disk minimum values are specified in terms of normalized IOPS. IOPS are measured in 8 KB increments.
  • You can use the WMI interface to query the minimum IOPS value you set on your virtual hard disks for each virtual machine.
  • Windows PowerShell enables you to control and query the minimum IOPS values you set for the virtual hard disks in your virtual machines (see the sketch after this list).
  • Any virtual hard disk that does not have a minimum IOPS value defined defaults to 0, which means no minimum is reserved and no notification is generated.
  • The Hyper-V Manager user interface is available to configure minimum IOPS settings for Storage QoS.
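
A similar hedged sketch for the minimum side: the reserve is set with the same cmdlet, and the storage-related metrics mentioned above can be pulled through resource metering. The VM name and disk location are again placeholders.

  # A minimal sketch, assuming the same hypothetical VM "SQL01" and disk as above.
  # Reserve 200 normalized IOPS; Hyper-V raises an event-based notification when the reserve is not met.
  Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -ControllerNumber 0 `
      -ControllerLocation 1 -MinimumIOPS 200

  # Resource metering exposes the storage-related metrics through the VM metrics interface.
  Enable-VMResourceMetering -VMName "SQL01"
  Measure-VM -VMName "SQL01" | Format-List *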

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

Overview of Storage with Windows Server 2012 R2

Storage solutions play a critical role in the modern datacenter. Windows Server 2012 R2 was designed with a strong focus on storage capabilities, from the foundation of the storage stack up, to improvements ranging from provisioning storage to how data is clustered, transferred across the network, and ultimately accessed and managed. With flexible capabilities that can be combined to meet your business needs, Windows Server 2012 R2 storage solutions deliver the efficiency, performance, resiliency, availability, and versatility you need at every level.

High-performance storage on industry-standard hardware

Windows Server 2012 R2 provides a rich set of storage features allowing you to take advantage of lower-cost industry-standard hardware rather than purpose-built storage devices, without you having to compromise on performance or availability.

For example, Storage Spaces provides sophisticated virtualization enhancements to the storage stack that you can use to pool multiple physical hard disk units together and provide feature-rich, highly resilient, and reliable storage arrays to your workloads. You can use Storage Spaces to create storage pools, which are virtualized administration units that are aggregates of physical disk units. With these storage pools, you can enable storage aggregation, elastic capacity expansion, and delegated administration. You can also create virtual disks with associated attributes that include a desired level of resiliency, thin or fixed provisioning, and automatic or controlled allocation on diverse storage media.
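For example, a storage pool and a resilient virtual disk can be created with a few cmdlets from the Storage module. The pool name, disk name, size, and resiliency setting below are illustrative choices, not recommendations.

  # A minimal sketch: pool the local disks that are eligible for pooling.
  New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "*Storage Spaces*" `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

  # Carve a thinly provisioned, mirrored virtual disk out of the pool.
  New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
      -ResiliencySettingName Mirror -ProvisioningType Thin -Size 500GB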

Storage tiering, a new feature in Windows Server 2012 R2, is a great example of how storage performance can be dramatically enhanced while using lower-cost industry-standard hardware. With storage tiering, low-cost, high-capacity spinning disks are used to store less frequently used data, while high-speed solid-state disks are reserved for frequently used data. Storage tiering builds on storage virtualization with Storage Spaces by assigning solid-state drives (SSDs) and hard disk drives (HDDs) to the same storage pool and using them as different tiers in the same tiered space. Windows Server 2012 R2 recognizes the tiers and optimizes them by moving often-used “hot” data to the SSD tier. Windows Server 2012 R2 tracks data temperature and moves data at the sub-file level; only the “hot” regions of a file (such as a VHD or database) need to move to the SSDs, while the “cold” regions can reside on the HDDs.
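
A minimal sketch of how the tiers might be defined in PowerShell follows; it assumes the pool created above ("Pool01") contains both SSDs and HDDs, and the tier names and sizes are examples only.

  # Define one tier per media type inside the existing pool.
  $ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
  $hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

  # Create a mirrored, tiered virtual disk: 100 GB of SSD capacity plus 1 TB of HDD capacity.
  New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredData01" `
      -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 1TB `
      -ResiliencySettingName Mirror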

Since Windows Server 2012, with a feature referred to as SMB Direct, the SMB protocol has provided support for Remote Direct Memory Access (RDMA) network adapters, which allows storage performance capabilities that rival Fibre Channel. RDMA network adapters enable this performance capability by operating at full speed with very low latency due to the ability to bypass the kernel and perform write and read operations directly to and from memory. This capability is possible since reliable transport protocols are implemented on the adapter hardware and allow for zero-copy networking with kernel bypass. With this capability, applications, including SMB, can perform data transfers directly from memory, through the adapter, to the network, and then to the memory of the application requesting data from the file share.
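
SMB Direct needs no special configuration when RDMA-capable adapters are present, so verification is mostly a matter of read-only queries; a sketch of the checks you might run is below.

  # Adapters that report RDMA capability on this server.
  Get-NetAdapterRdma

  # On the file server: interfaces (and their RDMA capability) that SMB will advertise to clients.
  Get-SmbServerNetworkInterface

  # On the SMB client: active connections, including whether SMB Multichannel selected an RDMA path.
  Get-SmbMultichannelConnection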

Continuous application availability and robust recovery

Windows Server 2012 R2 reduces server downtime and application disruption by letting you store server application data on file shares and obtain a similar level of reliability, availability, manageability, and high performance that would typically be expected from a high-end Storage Area Network (SAN).

Introduced in Windows Server 2012, SMB Transparent Failover allows you to transparently move SMB file shares between file server cluster nodes without noticeable interruption of service for the SMB client. This is useful for planned events (for example, when you need to perform maintenance on a node) or unplanned events (for example, when a hardware failure causes a node to fail), and it is achieved regardless of the kind of operation that was underway when the failure occurred.
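
Continuous availability, the basis of SMB Transparent Failover, is a per-share property. A hedged sketch looks like this; the share name, path, and the "CONTOSO\Hyper-V-Hosts" group are placeholders.

  # Create a continuously available share on a clustered file server (placeholder names and path).
  New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
      -FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable $true

  # On a cluster node, list the SMB clients currently protected by the witness service.
  Get-SmbWitnessClient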

One of the main advantages of file storage over block storage is the ease of configuration, paired with the ability to configure folders that can be shared by multiple clients. Windows Server 2012 took file-based storage one step further by introducing the SMB Scale-Out feature, which provides the ability to share the same folders from multiple nodes of the same cluster. This is made possible by the use of Cluster Shared Volumes (CSV), which have supported file sharing since Windows Server 2012. New in Windows Server 2012 R2, SMB sessions can now also be managed per share (not just per file server), increasing flexibility. And SMB Scale-Out now also offers finer-grained load distribution by distributing workloads from a single client across multiple nodes of a scale-out file server.
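
Creating the scale-out role itself is a single cmdlet once the failover cluster exists; the role name, share name, path, and group below are examples.

  # Add the Scale-Out File Server role to an existing failover cluster.
  Add-ClusterScaleOutFileServerRole -Name "SOFS01"

  # Shares placed on Cluster Shared Volumes are then served by every node of the cluster at once.
  New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" -FullAccess "CONTOSO\Hyper-V-Hosts"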

Another innovation related to Windows Server 2012 R2 is Windows Azure Hyper-V Recovery Manager, a companion service that provides a robust recovery solution built on Hyper-V Replica.

For organizations with two or more datacenters looking to protect vital workloads running in their datacenter, Windows Azure Hyper-V Recovery Manager enables them to combine Windows Azure, System Center Virtual Machine Manager, and Hyper-V Replica to deliver planned and cost-effective business continuity of workloads. With Windows Azure Hyper-V Recovery Manager, datacenters can be protected by automating the replication of the virtual machines that compose them to a secondary location. Windows Azure Hyper-V Recovery Manager also provides continuous health monitoring of the primary datacenter, and it helps automate the orderly recovery of services in the event of a site outage at the primary datacenter. Virtual machines are started in an orchestrated fashion to help restore service quickly. This process can also be used for testing recovery without disruption to services, or for temporarily transferring services to the secondary location.

Comprehensive storage management and backup

Windows Server 2012 R2 provides great management and backup capabilities that help you better manage your storage capacity, whether you have a single server or multiple servers, one class of storage or a variety of storage solutions, and a Windows-only or a heterogeneous environment.

Storage QoS is a new feature in Windows Server 2012 R2 that allows you to restrict disk throughput for overactive or disruptive virtual machines, and it can be configured dynamically while the virtual machine is running. For maximum thresholds, it provides strict policies that throttle the I/O of a given virtual machine to a defined maximum. For minimum thresholds, it provides warning policies that alert you when an I/O-starved virtual machine does not receive its minimum threshold.

Also, to help improve storage management efficiency and offset that cost, Windows Server 2012 R2 comes with a set of storage management APIs and provider interfaces that enables administrators to centrally manage disparate storage resources and solutions, such as SANs and storage arrays, from a centralized “single pane of glass” interface. Manageable resources can include SANs that are SMI-S compliant, storage devices with proprietary hardware that has compatible third-party storage management providers, or storage devices that are already being allocated through the use of Storage Spaces. This storage management capability will allow administrators to configure and manage all of the storage devices throughout their organization or management sphere through an easy-to-use management interface that they are already familiar with, the Server Manager in Windows Server.

By using Server Manager, administrators can populate server groups with file servers or storage clusters that leverage Storage Spaces, or reach out to populate manageable devices that have SMI-S agents enabled.
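
The same storage management surface is also scriptable through the Storage module in Windows PowerShell; the output of these read-only queries depends entirely on which providers and subsystems have been registered on the management server.

  # SMI-S and SMP providers known to this server.
  Get-StorageProvider

  # Arrays and Storage Spaces subsystems those providers expose.
  Get-StorageSubSystem

  # Pools surfaced through the discovered subsystems.
  Get-StorageSubSystem | Get-StoragePool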

If you have a small number of servers to protect and you currently have no backup solution or you are using the inbox Windows Server Backup tool on these servers, Windows Azure Backup is a separate offering that extends the capabilities of Windows Server Backup and System Center Data Protection Manager to deliver simple and reliable off-site data protection at the cost of cloud storage. It is suitable for any workload, such as file servers, SharePoint, SQL, Exchange, and others.

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

How SCVMM 2012 manages storage

The key piece of the new storage architecture in SCVMM 2012 is the Storage Service. This service is part of SCVMM 2012 and is not available separately. The Storage Service provides the communication capabilities for talking to the storage subsystems through SMI-S providers, which are supplied by the storage vendor. This model ensures that the SAN vendor's tools still work and that SCVMM 2012 does not conflict with those tools.

With SCVMM 2012 and its private cloud support, it was important for SCVMM 2012 to provide more management of the underlying fabric. With respect to storage, SCVMM 2012 has taken a strategic, standards-based approach, and the Storage Management Initiative Specification (SMI-S) was selected. The storage management had to be fully integrated into SCVMM 2012 and consistent across array vendors, and using SMI-S exposes the array functionality required by SCVMM 2012. All the features and value-add that a storage vendor offers can still be exposed using the vendor's native tools or through PowerShell support. SMI-S neither enhances nor hinders a storage array's capabilities.

SCVMM 2012 still supports iSCSI, Fibre Channel, and N_Port ID Virtualization (NPIV) on a Fibre Channel SAN. NPIV uses Host Bus Adapter (HBA) technology, which creates virtual HBA ports on hosts by abstracting the underlying physical port. This support enables a single physical Fibre Channel HBA port to function as multiple logical ports, each with its own identity. Each virtual machine can then attach to its own virtual HBA port and be independently zoned to a distinct and dedicated World Wide Port Name (WWPN). For more information about NPIV and HBA technology, refer to the documentation of your HBA vendor.

Architecturally, the storage service in SCVMM 2012 uses WMI to communicate with the higher levels of the storage subsystem and into SMI-S. SCVMM 2012 communicates with the vendor-supplied SMI-S provider through CIM-XML. The vendor-supplied component either runs in the storage controller itself or, more commonly, requires a separate Windows OS instance. The vendor's SMI-S provider then talks directly to the storage controller and the disk arrays. Microsoft does not recommend running the provider on the SCVMM 2012 server.
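
Registering a vendor's SMI-S provider with VMM is typically done with the VMM PowerShell cmdlets; in this sketch the provider address, port, names, and the Run As account are all assumptions.

  # A minimal sketch using the VMM PowerShell module (placeholder names throughout).
  $runAs = Get-SCRunAsAccount -Name "SmisAdmin"

  Add-SCStorageProvider -Name "ArrayProvider01" `
      -NetworkDeviceName "https://smis01.contoso.com" -TCPPort 5989 `
      -RunAsAccount $runAs

  # Rescan the provider to refresh the objects it has discovered.
  Get-SCStorageProvider -Name "ArrayProvider01" | Read-SCStorageProvider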

 

The SCVMM 2012 host agent has always collected the following storage information from the host:

  • Disk
  • Volumes
  • Host Bus Adapters
  • iSCSI
  • Fibre Channel

Once the HBA initiator logs on to the fabric through Fibre Channel or iSCSI, discovery can begin. The storage can be queried to supply all the discoverable endpoints that are available to the host. The system does not supply any array or model information.

Therefore the storage available to a host is queried and can be viewed within SCVMM 2008 R2. However, because no detail about where the disk resides is returned, the administrator does not have sufficient information to understand the relationship between the virtual machine, the physical host, and the physical storage.

With SCVMM 2012 and the use of the SMI-S management protocol, much deeper queries into the storage can be initiated. This enables much greater management and reporting information to be returned to SCVMM 2012. There are two levels of discovery.

The Level 1 discovery with the SMI-S protocol returns the following additional information:

  • The Storage Array
  • Storage Groups and Initiators
  • Endpoints
  • Hardware IDs
  • Pools

 

Once the pool information has been enumerated and SCVMM 2012 has been configured with the pools you would like it to manage, level 2 discovery occurs. At that point SCVMM 2012 enumerates the LUNs and then starts mapping LUNs to the hosts through storage groups.

Storage groups are stored in the array and are a collection of LUNs and initiators. When a LUN is masked or unmasked from a host, the storage group is being modified, thereby enabling or disabling the host's ability to see that LUN. With this information SCVMM 2012 now has the ability to report, from the LUN, which hosts are connected to it.

If the LUN is a CSV, all nodes in the cluster that have access to the LUN are returned.
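
To make the two discovery levels concrete, here is a hedged sketch of how a discovered pool might be brought under management and a LUN assigned to a host with the VMM cmdlets; the pool, classification, LUN, and host names are placeholders.

  # Bring a discovered pool under VMM management by giving it a classification (placeholder names).
  $classification = Get-SCStorageClassification -Name "Gold"
  $pool = Get-SCStoragePool -Name "Pool01"
  Set-SCStoragePool -StoragePool $pool -StorageClassification $classification

  # After level 2 discovery enumerates the LUNs, assign (unmask) one to a host; VMM records this
  # as a change to the array's storage group.
  $lun    = Get-SCStorageLogicalUnit -Name "LUN01"
  $vmHost = Get-SCVMHost -ComputerName "HV01"
  Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmHost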