GetVirtual – Virtualization 4 All

My vision of what Microsoft virtualization and cloud computing bring to the IT world


Capacity Planner for Hyper-V Replica

Hyper-V in Windows Server 2012 includes a new capability called Hyper-V Replica. Hyper-V Replica allows administrators to replicate their virtual machines from a primary server/cluster to a replica server/cluster. The Capacity Planner for Hyper-V Replica guides the IT administrator to design the server, storage and network infrastructure which is required to successfully deploy Hyper-V Replica.

[screenshot]

After reviewing the license terms, click 'I accept the license terms' and then click 'Next'.

[screenshot]

Before proceeding from this page, ensure that a Hyper-V Replica server/cluster has been enabled to receive replication traffic from this primary server/cluster. As part of collecting various metrics, the capacity planner attempts to send a temporary VHD from the primary server/cluster to the replica server/cluster. This allows the tool to study the network characteristics of the link between the primary and replica server.

If your primary or replica server is part of a cluster, ensure that the Hyper-V Replica Broker role is added to the cluster.
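If the replica server has not yet been configured to receive replication, this can be done in Hyper-V Manager (Hyper-V Settings > Replication Configuration) or with Windows PowerShell. A minimal sketch, assuming Kerberos authentication over HTTP and an example storage path; run it on the replica server and adjust the values for your environment:

# Allow this server to receive replication traffic from any authenticated primary server
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\ReplicaStorage"
# Open the in-box firewall rule for replication over HTTP (rule name as shipped in Windows Server 2012)
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"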

[screenshot]

Specify the following parameters in this screen and click ‘Next’:

Primary Server/Cluster details:

a. For a standalone primary server, enter the server name or FQDN.

b. If your primary server is part of a cluster, enter the FQDN of the (primary cluster) Hyper-V Replica Broker Client Access Point (CAP).

Replica Server/Cluster details:

a. For a standalone replica server, enter the server name or FQDN.

b. If your replica server is part of a cluster, enter the FQDN of the (replica cluster) Hyper-V Replica Broker Client Access Point.

Estimated WAN Bandwidth:

a. Enter the estimated WAN bandwidth link speed between the primary and replica server/cluster.

Duration of collecting metrics:

a. Enter an appropriate interval for which the metrics need to be collected. It is highly recommended that the tool be run during production hours, which ensures that the most representative data is collected. Running the tool for a short duration (e.g., 10 minutes) may not give quality data.

[screenshot]

The tool connects to the primary server and enumerates the virtual machines which are running on the primary. Ensure the following:

1) You are an administrator on the primary server/cluster. Remote-WMI is used to enumerate the virtual machines on the primary server – ensure that the right set of firewalls and permissions are set to allow this call to execute.

2) Ensure that replication has not been enabled on any of the VMs which are on the primary server/cluster.

3) Ensure that the VMs on the primary server/cluster are running.
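A quick way to validate points 1 through 3 before starting the collection is to enumerate the virtual machines remotely with the Hyper-V module; a minimal sketch (the server name is an example):

# List the VMs on the primary server and confirm that none already has replication enabled
Get-VM -ComputerName PrimaryServer01 | Select-Object Name, State, ReplicationState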

The following details need to be provided on this page:

1) Temporary VM location: As part of collecting various metrics, the tool creates a temporary VM on your primary server/cluster and enables replication on the VM. This allows the tool to study the network characteristics between the primary and replica server. Provide a location on the primary server/cluster in which this VHD/VM can be created. In a clustered deployment, ensure that the location is accessible from all the nodes in the cluster.

2) (Optional) Certificate: If your primary and replica servers are in a workgroup, or if certificate-based authentication is being used in your Hyper-V Replica environment, you should provide the required certificate on this page.

3) Select VMs and VHDs: You can select the VMs and VHDs on which the metrics need to be collected. If you are not planning to enable replication on any specific VM/VHD, you can uncheck the VM in this screen.

Click ‘Next’ after providing all the inputs.

[screenshot]

The tool now captures the metrics in the background and runs for a few minutes beyond the specified duration. You can continue to use your VMs while the metrics are being collected. Once completed, the screen will look as follows:

[screenshot]

Click on ‘View Report’ to go over the recommendations.
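Once the report has been reviewed and the infrastructure sized accordingly, replication itself is enabled per virtual machine, either in Hyper-V Manager or with Windows PowerShell. A minimal sketch, assuming Kerberos authentication over port 80 and example names:

# Enable replication of VM01 to the replica server
Enable-VMReplication -VMName VM01 -ReplicaServerName replica01.contoso.com -ReplicaServerPort 80 -AuthenticationType Kerberos
# Optionally start the initial replication right away
Start-VMInitialReplication -VMName VM01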

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

Statistics of a NIC Team

If the UI window is sufficiently tall, a statistics tile appears at the bottom of the Teams tile and the Adapters and Interfaces tile. These statistics windows reflect the traffic of the selected team and the selected team member. If you don't see the statistics, try making the UI window a little taller.

[screenshot]

Viewing statistics for a team interface

If the Team Interfaces tab is selected in the Adapters and Interfaces tile, the statistics at the bottom of that tile will be those of the selected team interface.

[screenshot]

Setting frequency of Statistics updates

The frequency of statistics updates and other updates can be set by selecting Settings in the Servers tile's Tasks menu. Selecting this item brings up the General Settings dialog box.

[screenshot]

The two drop-down lists in this dialog box allow the user to change how often the UI is refreshed. The settings apply equally to all servers in the servers list.

This menu also allows the administrator to decide whether adapters that cannot be part of a team should be shown in the UI. By default, these non-teamable adapters are not shown.
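The same counters are available outside the UI through the NetAdapter cmdlets, which is convenient for monitoring scripts; a minimal sketch, assuming the team is named Team1 and its team interface kept the default name:

# Traffic counters for each member of Team1
Get-NetLbfoTeamMember -Team Team1 | ForEach-Object { Get-NetAdapterStatistics -Name $_.Name }
# Counters for the team interface itself
Get-NetAdapterStatistics -Name Team1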

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

Modifying a NIC Team through GUI and PowerShell

Modifying a team through the UI

Within the UI, modifications to the team can be made by selecting a team in the Teams tile, right-clicking on the team, and selecting the Modify Team action. Selecting Modify Team will pop up the Team properties dialog box. This dialog box is very similar to the New Team dialog box used when the team was created.

In the Team properties dialog box the following actions can be accomplished:

  • Rename the team: Select the team name and edit it.
  • Add team members: Select additional adapters in the Member Adapters tile.
  • Remove team members: De-select adapters in the Member Adapters tile. At least one adapter must remain selected.

[screenshot]

If the Additional properties drop-down item is selected then the Teaming mode and Load distribution mode may also be modified. This Additional properties drop-down also allows the administrator to select a standby adapter when active-standby mode is desired.

[screenshot]

Modifying a team through Windows PowerShell
Renaming a team

To rename Team1 and give it the name TeamA, the Windows PowerShell is:

Rename-NetLbfoTeam Team1 TeamA

Changing the teaming mode

The Windows PowerShell options for teaming mode are:

  • SwitchIndependent
  • Static
  • LACP

To change Team1 to an 802.1ax LACP team, the Windows PowerShell is:

Set-NetLbfoTeam Team1 -TeamingMode LACP

The “-TeamingMode” flag can be abbreviated “-TM”, as in

Set-NetLbfoTeam Team1 -TM LACP

Note: For security reasons teams created in VMs may only operate in SwitchIndependent mode.

Changing the load distribution algorithm

The Windows PowerShell options for load distribution algorithm are:

  • TransportPorts
  • IPAddresses
  • MacAddresses
  • HyperVPort

To change Team1’s Load balancing algorithm to Hyper-V Ports, the Windows PowerShell is:

Set-NetLbfoTeam Team1 -LoadBalancingAlgorithm HyperVPort

The “-LoadBalancingAlgorithm” flag can be abbreviated “-LBA”, as in

Set-NetLbfoTeam Team1 -LBA HyperVPort

To change the Teaming mode and Load balancing algorithm at the same time,

Set-NetLbfoTeam Team1 -TM LACP -LBA HyperVPort

Note: Teams created in VMs may not use the HyperVPort load distribution algorithm.

Adding new members to the team

To add NIC1 to Team1 the Windows PowerShell command is:

Add-NetLbfoTeamMember NIC1 Team1

Removing members from the team

To remove NIC1 from Team1 the Windows PowerShell command is:

Remove-NetLbfoTeamMember NIC1 Team1

Setting a team member to be the Standby Adapter

A team member can be set as the Standby Adapter through Windows PowerShell:

Set-NetLbfoTeamMember NIC4 -AdministrativeMode Standby

At most one team member may be in standby mode at any point in time. If a different team member is already in standby mode that team member must be returned to active mode before this Windows PowerShell cmdlet will succeed.
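To move the standby role to a different member, first return the current standby adapter to active mode; a minimal sketch using the same cmdlet (NIC names are examples):

# Return NIC4 to active mode, then place NIC3 in standby
Set-NetLbfoTeamMember NIC4 -AdministrativeMode Active
Set-NetLbfoTeamMember NIC3 -AdministrativeMode Standby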

Adding new interfaces to the team

To add a new interface to the team select the Team in the Teams Tile and the Team Interfaces tab in the Adapters and Interfaces tile. Select the Tasks menu in the Adapters and Interfaces tile, then select Add Interface.

[screenshot]

Selecting the Add Interface action item pops up the New team interface dialog box.

[screenshot]

Since only one team interface, the primary team interface, can be in Default mode, the new team interface must have a specific VLAN value. As the specific VLAN value is entered the name of the interface will be modified to be the team name followed by the VLAN value of this team interface. The interface name can be modified to any other name (duplicates are not allowed) if the administrator chooses to do so.

Selecting OK will create the new team interface.

[screenshot]

The Windows PowerShell to add a team interface with VLAN 42 to Team1 is

Add-NetLbfoTeamNIC Team1 42

Modifying team interfaces

There are only two modifications that can be done to a team interface:

  • change the team interface name and/or
  • change the VLAN ID.

To modify the team interface VLAN ID select and then right-click the team interface in the Team Interfaces tab. Select the Properties action item.

[screenshot]

This pops up the Network Adapter Properties dialog box. This dialog box has some useful information about the team interface. It also has the box where the new VLAN ID can be entered. If a new VLAN ID is entered and the team interface name is still the one the system provided when the team interface was created, the team interface name will be changed to reflect the new VLAN ID. If the team interface name has been previously changed, the name will not be changed when the new VLAN ID is entered.

[screenshot]

To modify a team interface’s VLAN ID in Windows PowerShell

Set-NetLbfoTeamNIC "Team1 - VLAN 42" -VlanID 15

Just as in the UI, changing the VLAN ID will cause the team interface name to change if the team interface name is still the same as the one the system created when the team interface was created. I.e., if the team interface name is <teamName ‑ VLAN xx> where xx is the VLAN ID of the team interface, then the VLAN ID portion of the team interface name will be modified to reflect the new VLAN ID.

Removing interfaces from the team

To delete a team interface, select and then right-click the team interface in the Team Interfaces tab. Select the Delete team interface action item. A confirmation dialog box will pop up. Once confirmed, the team interface is deleted.

The Primary team interface (i.e., the one that was created when the team was created) can’t be deleted except by deleting the team.

To delete a team interface in Windows PowerShell

Remove-NetLbfoTeamNIC "Team1 - VLAN 42"

Deleting a team

To delete a team from the server select the team in the Teams tile. Right-click the team and select the Delete team action item.

[screenshot]

A confirmation dialog box will be displayed. Once confirmed the team will be deleted.

To delete a team in Windows PowerShell

Remove-NetLbfoTeam Team1

To remove all teams from the server in Windows PowerShell (i.e., to clean up the server),

Get-NetLbfoTeam | Remove-NetLbfoTeam

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

Creating a NIC team

There are two ways to invoke the New Team dialog box:

  • Select the Tasks menu in the Teams tile and then select New Team, or
  • Right click on an available adapter in the Network Adapters tab and select the Add to new team item. Multi-select works for this: you can select multiple adapters, right-click on one, select Add to new team, and they will all be pre-marked in the New Team dialog box.

Both of these will cause the New Team dialog box to pop up.

[screenshot]

When the New Team dialog box pops-up there are two actions that MUST be taken before the team can be created:

  • A Team name must be provided, and
  • One or more adapters must be selected to be members of the team

Optionally, the administrator may select the Additional properties item and configure the teaming mode, load distribution mode, and the name of the first (primary) team interface.

[screenshot]

In Additional properties the Load distribution mode drop-down provides only two options: Address Hash and Hyper-V Port. The Address Hash option in the UI is the equivalent of the TransportPorts option in Windows PowerShell. To select additional Address hashing algorithms use Windows PowerShell as described below.

This is also the place where administrators who want a Standby adapter can set one. Selecting the Standby adapter drop-down will give a list of the team members, and the administrator can set one of them to be the Standby adapter. A Standby adapter is not used by the team unless and until another member of the team fails. Standby adapters are only permitted in Switch Independent mode; changing the team to any Switch Dependent mode will cause all members to become active members.

When the team name, the team members, and optionally any additional properties (including the Primary team interface name or standby adapter) have been set to the administrator’s choices, the administrator will click on the OK button and the team will be created. Team creation may take several seconds and the NICs that are becoming team members will lose communication for a very short time.

Teams can also be created through Windows PowerShell. The Windows PowerShell to do exactly what these figures have shown is:

New-NetLbfoTeam Team1 NIC1,NIC2

Teams can be created with custom advanced properties.

New-NetLbfoTeam Team1 NIC1,NIC2 -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort

If the team is being created in a VM, you MUST follow the instructions to allow guest teaming as described in a previous post (NIC teaming on Virtual Machines).

Checking the status of a team

Whenever the NIC Teaming UI is active, the current status of all NICs in the team, the status of the team, and the status of the server will be shown. In the picture below, in the Network Adapters tab of the Adapters and Interfaces tile, NIC 3 shows as faulted. The reason given is Media Disconnected (i.e., the cable is unplugged). This causes the team, Team1, to show a Warning, as it is still operational but degraded. If all the NICs in the team were faulted, it would show Fault instead of Warning. The server, DONST-R710, now shows Warning. If the team were not operational, the server indication would be Fault. This makes it easy to scan the list of servers to see if there are any problems.
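The same health information can be read with Windows PowerShell, which is useful for checking many servers at once; a minimal sketch using the NetLbfo cmdlets shown earlier (property names as exposed by the NetLbfo module on Windows Server 2012):

# Team status (Up, Degraded, Down) and member status (Active, Standby, Failed)
Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm, Status
Get-NetLbfoTeamMember | Select-Object Name, Team, AdministrativeMode, OperationalStatus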

[screenshot]

     

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    The components of the NIC Teaming Management UI

    The NIC Teaming management UI consists of 3 primary windows (tiles):

    • The Servers tile
    • The Teams tile
    • The Adapters and Interfaces tile
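The UI itself is opened from Server Manager (the NIC Teaming link on the Local Server page) or can be launched directly; a minimal sketch, assuming the in-box executable name:

# Launch the NIC Teaming management UI directly
lbfoadmin.exe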

[screenshot]

    The Adapters and Interfaces tile is shared by two tabs:

    • The Network Adapters tab
    • The Team Interfaces tab

Each tile or tab has a set of columns that can be shown or hidden. The column chooser menus are made visible by right-clicking on any column header. (For illustrative purposes the screen shot in the picture below shows a column chooser in every tile. Only one column chooser can be active at a time.)

Contents of any tile may be sorted by any column. To sort by a particular column, left-click on the column title. In the picture below, the Servers tile is sorted by server name; the indication is the little triangle in the Name column title in the Servers tile.

[screenshot]

Each tile also has a Tasks dropdown menu and a right-click context menu. The Tasks menu can be opened by clicking on the Tasks box at the top right corner of the tile, and then any available task in the list can be selected. The right-click context menus are activated by right-clicking in the tile. The menu options will vary based on context. (For illustrative purposes the screen shot in the picture below shows all the Tasks menus and a right-click menu in every tile. Only one right-click menu or Tasks menu can be active at any time.)

The picture below shows the Tasks menu and right-click menu for the Team Interfaces tab.

[screenshot]

     

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    Windows Server 2012 NIC Teaming tools for troubleshooting

NIC Teaming and the administration tools in Windows Server 2012 are powerful, and they can be misused or misconfigured in ways that cause loss of connectivity if the administrator isn't careful. Here are some common issues:

    Using VLANs

    VLANs are another powerful tool. There are a few rules for using VLANs that will help to make the combination of VLANs and NIC Teaming a very positive experience.

    1) Anytime you have NIC Teaming enabled, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering.

2) Anytime you have NIC Teaming enabled, you must not set VLAN filters on the NICs using the NICs' advanced properties settings. Let the teaming software or the Hyper-V switch (if present) do the filtering.

    VLANs in a Hyper-V host

    1) In a Hyper-V host VLANs should be configured only in the Hyper-V switch, not in the NIC Teaming software. Configuring team interfaces with VLANs can easily lead to VMs that are unable to communicate on the network due to collisions with VLANs assigned in the Hyper-V switch.
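For example, rather than creating a VLAN-specific team interface, the VLAN can be assigned to the VM's port on the Hyper-V switch; a minimal sketch (VM name and VLAN ID are examples):

# Tag VM01's traffic with VLAN 12 at the Hyper-V switch port
Set-VMNetworkAdapterVlan -VMName VM01 -Access -VlanId 12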

    VLANs in a Hyper-V VM

    1) The preferred method of supporting multiple VLANs in a VM is to provide the VM multiple ports on the Hyper-V switch and associate each port with a VLAN. Never team these ports in the VM as it will certainly cause communication problems.

    2) If the VM has multiple SR-IOV VFs make sure they are on the same VLAN before teaming them in the VM. It’s easily possible to configure the different VFs to be on different VLANs and, like in the previous case, it will certainly cause communication problems.

    3) The only safe way to use VLANs with NIC Teaming in a guest is to team Hyper-V ports that are

    a. Each connected to a different Hyper-V switch, and

    b. Each configured to be associated with the same VLAN (or all associated with untagged traffic only).

c. If you must have more than one VLAN exposed into a guest OS, consider renaming the ports in the guest to indicate what the VLAN is. E.g., if the first port is associated with VLAN 12 and the second port is associated with VLAN 48, rename the first interface vEthernet to be vEthernetVLAN12 and the other to be vEthernetVLAN48. (Renaming interfaces is easy using the Windows PowerShell Rename-NetAdapter cmdlet or by going to the Network Connections panel in the guest and renaming the interfaces.)
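A minimal sketch of the renaming suggested in point c, run inside the guest (the adapter names are examples; check Get-NetAdapter for the actual names in your VM):

# Rename the guest adapters to indicate which VLAN each one carries
Rename-NetAdapter -Name "vEthernet" -NewName "vEthernetVLAN12"
Rename-NetAdapter -Name "vEthernet 2" -NewName "vEthernetVLAN48"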

    Interactions with other teaming solutions

    Some users will want to use other NIC teaming solutions for a variety of reasons. This can be done but there are some risks that the system administrator should be aware of.

    1. If the system administrator attempts to put a NIC into a 3rd party team that is presently part of a Microsoft NIC Teaming team, the system will become unstable and communications may be lost completely.

    2. If the system administrator attempts to put a NIC into a Microsoft NIC Teaming team that is presently part of a 3rd party teaming solution team the system will become unstable and communications may be lost completely.

    As a result it is STRONGLY RECOMMENDED that no system administrator ever run two teaming solutions at the same time on the same server. The teaming solutions are unaware of each other’s existence resulting in potentially serious problems.

    In the event that an administrator violates these guidelines and gets into the situation described above the following steps may solve the problem.

    1. Reboot the server. Forcibly power-off the server if necessary to get it to reboot.

    2. When the server has rebooted run this Windows PowerShell cmdlet:

    Get-NetLbfoTeam | Remove-NetLbfoTeam

    3. Use the 3rd party teaming solution’s administration tools and remove all instances of the 3rd party teams.

    4. Reboot the server again.

    Microsoft continues its longstanding policy of not supporting 3rd party teaming solutions. If a user chooses to run a 3rd party teaming solution and then encounters networking problems, the customer should call their teaming solution provider for support. If the issue is reproducible without the 3rd party teaming solution in place, please report the problem to Microsoft.

    Disabling and Enabling with Windows PowerShell

    The most common reason for a team to not be passing traffic is that the team interface is disabled. We’ve seen a number of cases where attempts to use the power of Windows PowerShell have resulted in unintended consequences. For example, the sequence:

    Disable-NetAdapter *

    Enable-NetAdapter *

does not enable all the network adapters that it disabled. This is because disabling all the underlying physical member NICs causes the team interface to be removed and no longer show up in Get-NetAdapter. Thus Enable-NetAdapter * will not enable the team NIC, since that adapter has been removed. It will, however, enable the member NICs, which will then cause the team interface to reappear. The team interface will still be in a "disabled" state since you have not enabled it. Enabling the team interface will cause traffic to begin to flow again.
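In practice this means the team interface has to be enabled explicitly after such a sequence; a minimal sketch, assuming the team (and its default team interface) is named Team1 and its members are NIC1 and NIC2:

# Re-enable the member NICs, then enable the team interface that reappears
Enable-NetAdapter -Name NIC1,NIC2
Enable-NetAdapter -Name Team1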

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    NIC teaming on Virtual Machines

    NIC Teaming in a VM only applies to VM-NICs connected to external switches. VM-NICs connected to internal or private switches will show as disconnected when they are in a team.

[figure]

    NIC teaming in Windows Server 2012 may also be deployed in a VM. This allows a VM to have virtual NICs (synthetic NICs) connected to more than one Hyper-V switch and still maintain connectivity even if the physical NIC under one switch gets disconnected. This is particularly important when working with Single Root I/O Virtualization (SR-IOV) because SR-IOV traffic doesn’t go through the Hyper-V switch and thus cannot be protected by a team in or under the Hyper-V host. With the VM-teaming option an administrator can set up two Hyper-V switches, each connected to its own SR-IOV-capable NIC.

    · Each VM can have a virtual function (VF) from one or both SR-IOV NICs and, in the event of a NIC disconnect, fail-over from the primary VF to the back-up adapter (VF).

    · Alternately, the VM may have a VF from one NIC and a non-VF VM-NIC connected to another switch. If the NIC associated with the VF gets disconnected, the traffic can fail-over to the other switch without loss of connectivity.

[figure]

Note: Because fail-over between NICs in a VM might result in traffic being sent with the MAC address of the other VM-NIC, each Hyper-V switch port associated with a VM that is using NIC Teaming must be set to allow teaming. There are two ways to enable NIC Teaming in the VM:

    1) In the Hyper-V Manager, in the settings for the VM, select the VM’s NIC and the Advanced Settings item, then enable the checkbox for NIC Teaming in the VM.

[screenshot]

    2) Run the following Windows PowerShell cmdlet in the host with elevated (Administrator) privileges.

    Set-VMNetworkAdapter -VMName <VMname> -AllowTeaming On

Teams created in a VM can only run in the Switch Independent configuration with Address Hash distribution (or one of the specific address hashing modes). Only teams where each of the team members is connected to a different external Hyper-V switch are supported.

    Teaming in the VM does not affect Live Migration. The same rules exist for Live Migration whether or not NIC teaming is present in the VM.

    No teaming of Hyper-V ports in the Host Partition

Hyper-V virtual NICs exposed in the host partition (vNICs) must not be placed in a team. Teaming of virtual NICs (vNICs) inside the host partition is not supported in any configuration or combination. Attempts to team vNICs may result in a complete loss of communication in the event that network failures occur.

    Feature compatibilities

    NIC teaming is compatible with all networking capabilities in Windows Server 2012 with five exceptions: SR-IOV, RDMA, Native host Quality of Service, TCP Chimney, and 802.1X Authentication.

    · For SR-IOV and RDMA, data is delivered directly to the NIC without passing it through the networking stack (in the host OS in the case of virtualization). Therefore, it is not possible for the team to look at or redirect the data to another path in the team.

    · When QoS policies are set on a native or host system and those policies invoke minimum bandwidth limitations, the overall throughput through a NIC team will be less than it would be without the bandwidth policies in place.

    · TCP Chimney is not supported with NIC teaming in Windows Server 2012 since TCP Chimney has the entire networking stack offloaded to the NIC.

    · 802.1X Authentication should not be used with NIC Teaming and some switches will not permit configuration of both 802.1X Authentication and NIC Teaming on the same port.

The feature-by-feature compatibility is summarized below:

· Datacenter bridging (DCB): Works independently of NIC Teaming, so it is supported if the team members support it.

· IPsec Task Offload (IPsecTO): Supported if all team members support it.

· Large Send Offload (LSO): Supported if all team members support it.

· Receive side coalescing (RSC): Supported in hosts if any of the team members support it. Not supported through Hyper-V switches.

· Receive side scaling (RSS): NIC Teaming supports RSS in the host. The Windows Server 2012 TCP/IP stack programs the RSS information directly to the team members.

· Receive-side checksum offloads (IPv4, IPv6, TCP): Supported if any of the team members support it.

· Remote Direct Memory Access (RDMA): Since RDMA data bypasses the Windows Server 2012 protocol stack, team members will not also support RDMA.

· Single root I/O virtualization (SR-IOV): Since SR-IOV data bypasses the host OS stack, NICs will no longer expose the SR-IOV feature while they are members of a team. Teams can be created in VMs to team SR-IOV virtual functions (VFs).

· TCP Chimney Offload: Not supported through a Windows Server 2012 team.

· Transmit-side checksum offloads (IPv4, IPv6, TCP): Supported if all team members support it.

· Virtual Machine Queues (VMQ): Supported when teaming is installed under the Hyper-V switch.

· QoS in host/native OSs: Use of minimum bandwidth policies will degrade throughput through a team.

· Virtual Machine QoS (VM-QoS): VM-QoS is affected by the load distribution algorithm used by NIC Teaming. For best results, use the HyperVPort load distribution mode.

· 802.1X authentication: Not compatible with many switches; should not be used with NIC Teaming.

    NIC Teaming and Virtual Machine Queues (VMQs)

    VMQ and NIC Teaming work well together; VMQ should be enabled anytime Hyper-V is enabled. Depending on the switch configuration mode and the load distribution algorithm, NIC teaming will either present VMQ capabilities to the Hyper-V switch that show the number of queues available to be the smallest number of queues supported by any adapter in the team (Min-queues mode) or the total number of queues available across all team members (Sum-of-Queues mode). Specifically,

    · if the team is in Switch-Independent teaming mode and the Load Distribution is set to Hyper-V Port mode, then the number of queues reported is the sum of all the queues available from the team members (Sum-of-Queues mode);

    · Otherwise the number of queues reported is the smallest number of queues supported by any member of the team (Min-Queues mode).

    Here’s why.

    · When the team is in switch independent/Hyper-V Port mode the inbound traffic for a VM will always arrive on the same team member. The host can predict which member will receive the traffic for a particular VM so NIC Teaming can be more thoughtful about which VMQ Queues to allocate on a particular team member. NIC Teaming, working with the Hyper-V switch, will set the VMQ for a VM on exactly one team member and know that inbound traffic will hit that queue.

    · When the team is in any switch dependent mode (static teaming or LACP teaming), the switch that the team is connected to controls the inbound traffic distribution. The host’s NIC Teaming software can’t predict which team member will get the inbound traffic for a VM and it may be that the switch distributes the traffic for a VM across all team members. As a result the NIC Teaming software, working with the Hyper-V switch, programs a queue for the VM on every team member, not just one team member.

    · When the team is in switch-independent mode and is using an address hash load distribution algorithm, the inbound traffic will always come in on one NIC (the primary team member) – all of it on just one team member. Since other team members aren’t dealing with inbound traffic they get programmed with the same queues as the primary member so that if the primary member fails any other team member can be used to pick up the inbound traffic and the queues are already in place.

    There are a few settings that will help the system perform even better.

    Each NIC has, in its advanced properties, values for *RssBaseProcNumber and *MaxRssProcessors.

    · Ideally each NIC should have the *RssBaseProcNumber set to an even number greater than or equal to two (2). This is because the first physical processor, Core 0 (logical processors 0 and 1), typically does most of the system processing so the network processing should be steered away from this physical processor. (Some machine architectures don’t have two logical processors per physical processor so for such machines the base processor should be greater than or equal to 1. If in doubt assume your host is using a 2 logical processor per physical processor architecture.)

    · If the team is in Sum-of-Queues mode the team members’ processors should be, to the extent possible, non-overlapping. For example, in a 4-core host (8 logical processors) with a team of 2 10Gbps NICs, you could set the first one to use base processor of 2 and to use 4 cores; the second would be set to use base processor 6 and use 2 cores.

    · If the team is in Min-Queues mode the processor sets used by the team members must be identical.
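These advanced keywords can be set from Windows PowerShell as well as from the adapter's Advanced property page; a minimal sketch for the Sum-of-Queues example above (NIC names and processor numbers are illustrative):

# Give each team member a non-overlapping set of processors, steering work away from core 0
Set-NetAdapterAdvancedProperty -Name NIC1 -RegistryKeyword "*RssBaseProcNumber" -RegistryValue 2
Set-NetAdapterAdvancedProperty -Name NIC1 -RegistryKeyword "*MaxRssProcessors" -RegistryValue 4
Set-NetAdapterAdvancedProperty -Name NIC2 -RegistryKeyword "*RssBaseProcNumber" -RegistryValue 6
Set-NetAdapterAdvancedProperty -Name NIC2 -RegistryKeyword "*MaxRssProcessors" -RegistryValue 2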

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    Overview of NIC Teaming (LBFO) in Windows Server 2012

    NIC teaming, also known as Load Balancing/Failover (LBFO), allows multiple network adapters to be placed into a team for the purposes of

    · bandwidth aggregation, and/or

    · traffic failover to maintain connectivity in the event of a network component failure.

    This feature has long been available from NIC vendors but until now NIC teaming has not been included with Windows Server.

    The following sections address:

    · NIC teaming architecture

    · Bandwidth aggregation (also known as load balancing) mechanisms

    · Failover algorithms

    · NIC feature support – stateless task offloads and more complex NIC functionality

· A detailed walkthrough of how to use the NIC Teaming management tools

NIC teaming is available in all editions of Windows Server 2012, in both Server Core and Full Server installations. NIC teaming is not available in Windows 8; however, the NIC Teaming user interface and the NIC Teaming Windows PowerShell cmdlets can both be run on Windows 8, so a Windows 8 PC can be used to manage teaming on one or more Windows Server 2012 hosts.

    Existing architectures for NIC teaming

    Today virtually all NIC teaming solutions on the market have an architecture similar to that shown in Figure 1.


    Figure 1 – Standard NIC teaming solution architecture and Microsoft vocabulary

    One or more physical NICs are connected into the NIC teaming solution common core, which then presents one or more virtual adapters (team NICs [tNICs] or team interfaces) to the operating system. There are a variety of algorithms that distribute outbound traffic between the NICs.

    The only reason to create multiple team interfaces is to logically divide inbound traffic by virtual LAN (VLAN). This allows a host to be connected to different VLANs at the same time. When a team is connected to a Hyper-V switch all VLAN segregation should be done in the Hyper-V switch instead of in the NIC Teaming software.

    Configurations for NIC Teaming

    There are two basic configurations for NIC Teaming.

    Switch-independent teaming. This configuration does not require the switch to participate in the teaming. Since in switch-independent mode the switch does not know that the network adapter is part of a team in the host, the adapters may be connected to different switches. Switch independent modes of operation do not require that the team members connect to different switches; they merely make it possible.

  • Active/Standby Teaming: Some administrators prefer not to take advantage of the bandwidth aggregation capabilities of NIC Teaming. These administrators choose to use one NIC for traffic (active) and one NIC to be held in reserve (standby) to come into action if the active NIC fails. To use this mode, create a switch-independent team and set one member as the standby adapter (a sketch appears at the end of this section). Active/Standby is not required to get fault tolerance; fault tolerance is always present anytime there are at least two network adapters in a team.
Switch-dependent teaming. This configuration requires the switch to participate in the teaming. Switch-dependent teaming requires all the members of the team to be connected to the same physical switch.

    There are two modes of operation for switch-dependent teaming:

    Generic or static teaming (IEEE 802.3ad). This mode requires configuration on both the switch and the host to identify which links form the team. Since this is a statically configured solution there is no additional protocol to assist the switch and the host to identify incorrectly plugged cables or other errors that could cause the team to fail to perform. This mode is typically supported by server-class switches.

    Dynamic teaming (IEEE 802.1ax, LACP). This mode is also commonly referred to as IEEE 802.3ad as it was developed in the IEEE 802.3ad committee before being published as IEEE 802.1ax. IEEE 802.1ax works by using the Link Aggregation Control Protocol (LACP) to dynamically identify links that are connected between the host and a given switch. This enables the automatic creation of a team and, in theory but rarely in practice, the expansion and reduction of a team simply by the transmission or receipt of LACP packets from the peer entity. Typical server-class switches support IEEE 802.1ax but most require the network operator to administratively enable LACP on the port.

    Both of these modes allow both inbound and outbound traffic to approach the practical limits of the aggregated bandwidth because the pool of team members is seen as a single pipe.
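As an example of the Active/Standby arrangement described above, the team is created switch-independent and one member is placed in standby; a minimal sketch using the cmdlets covered earlier in this archive (adapter and team names are examples):

# Create a switch-independent team and hold NIC2 in reserve
New-NetLbfoTeam Team1 NIC1,NIC2 -TeamingMode SwitchIndependent
Set-NetLbfoTeamMember NIC2 -AdministrativeMode Standby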

    Algorithms for traffic distribution

    Outbound traffic can be distributed among the available links in many ways. One rule that guides any distribution algorithm is to try to keep all packets associated with a single flow (TCP-stream) on a single network adapter. This rule minimizes performance degradation caused by reassembling out-of-order TCP segments.

    NIC teaming in Windows Server 2012 supports the following traffic distribution algorithms:

    Hyper-V switch port. Since VMs have independent MAC addresses, the VM’s MAC address or the port it’s connected to on the Hyper-V switch can be the basis for dividing traffic. There is an advantage in using this scheme in virtualization. Because the adjacent switch always sees a particular MAC address on one and only one connected port, the switch will distribute the ingress load (the traffic from the switch to the host) on multiple links based on the destination MAC (VM MAC) address. This is particularly useful when Virtual Machine Queues (VMQs) are used as a queue can be placed on the specific NIC where the traffic is expected to arrive. However, if the host has only a few VMs, this mode may not be granular enough to get a well-balanced distribution. This mode will also always limit a single VM (i.e., the traffic from a single switch port) to the bandwidth available on a single interface. Windows Server 2012 uses the Hyper-V Switch Port as the identifier rather than the source MAC address as, in some instances, a VM may be using more than one MAC address on a switch port.

    Address Hashing. This algorithm creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters. Usually this mechanism alone is sufficient to create a reasonable balance across the available adapters.

    The components that can be specified as inputs to the hashing function include the following:

    • Source and destination MAC addresses
    • Source and destination IP addresses
    • Source and destination TCP ports and source and destination IP addresses

    The TCP ports hash creates the most granular distribution of traffic streams resulting in smaller streams that can be independently moved between members. However, it cannot be used for traffic that is not TCP or UDP-based or where the TCP and UDP ports are hidden from the stack, such as IPsec-protected traffic. In these cases, the hash automatically falls back to the IP address hash or, if the traffic is not IP traffic, to the MAC address hash.

    Interactions between Configurations and Load distribution algorithms

    Switch Independent configuration / Address Hash distribution

    This configuration will send packets using all active team members distributing the load through the use of the selected level of address hashing (defaults to using TCP ports and IP addresses to seed the hash function).

    Because a given IP address can only be associated with a single MAC address for routing purposes, this mode receives inbound traffic on only one team member (the primary member). This means that the inbound traffic cannot exceed the bandwidth of one team member no matter how much is getting sent.

    This mode is best used for:

    a) Native mode teaming where switch diversity is a concern;

    b) Active/Standby mode teams; and

    c) Teaming in a VM.

    It is also good for:

    d) Servers running workloads that are heavy outbound, light inbound workloads (e.g., IIS).

    Switch Independent configuration / Hyper-V Port distribution

    This configuration will send packets using all active team members distributing the load based on the Hyper-V switch port number. Each Hyper-V port will be bandwidth limited to not more than one team member’s bandwidth because the port is affinitized to exactly one team member at any point in time.

Because each VM (Hyper-V port) is associated with a single team member, this mode receives inbound traffic for the VM on the same team member the VM's outbound traffic uses. This also allows maximum use of Virtual Machine Queues (VMQs) for better performance overall.

    This mode is best used for teaming under the Hyper-V switch when

    a) The number of VMs well-exceeds the number of team members; and

    b) A restriction of a VM to not greater than one NIC’s bandwidth is acceptable

    Switch Dependent configuration / Address Hash distribution

    This configuration will send packets using all active team members distributing the load through the use of the selected level of address hashing (defaults to 4-tuple hash).

    Like in all switch dependent configurations, the switch determines how to distribute the inbound traffic among the team members. The switch is expected to do a reasonable job of distributing the traffic across the team members but it has complete independence to determine how it does so.

    Best used for:

    a) Native teaming for maximum performance and switch diversity is not required; or

    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver.

    Switch Dependent configuration / Hyper-V Port distribution

    This configuration will send packets using all active team members distributing the load based on the Hyper-V switch port number. Each Hyper-V port will be bandwidth limited to not more than one team member’s bandwidth because the port is “affinitized” to exactly one team member at any point in time.

    Like in all switch dependent configurations, the switch determines how to distribute the inbound traffic among the team members. The switch is expected to do a reasonable job of distributing the traffic across the team members but it has complete independence to determine how it does so.

    Best used when:

a) Hyper-V teaming when the number of VMs on the switch well exceeds the number of team members; and

b) When policy calls for switch dependent (e.g., LACP) teams; and

c) When a restriction of a VM to no more than one NIC's bandwidth is acceptable.

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter:  @mdnoga

    VLANs with Hyper-V Network Virtualization

    Isolating different departments’ virtual machines can be a challenge on a shared network. When entire networks of virtual machines must be isolated, the challenge becomes even greater. Traditionally, VLANs have been used to isolate networks, but VLANs are very complex to manage on a large scale. The following are the primary drawbacks of VLANs:

    · Cumbersome reconfiguration of production switches is required whenever virtual machines or isolation boundaries must be moved. Moreover, frequent reconfigurations of the physical network to add or modify VLANs increases the risk of an outage.

· VLANs have limited scalability because typical switches support no more than 1,000 VLAN IDs (out of a maximum of 4,094).

    · VLANs cannot span multiple subnets, which limits the number of nodes in a single VLAN and restricts the placement of virtual machines based on physical location.

    In addition to these drawbacks, virtual machine IP address assignment presents other key issues when organizations move to the cloud:

    · Required renumbering of service workloads.

    · Policies that are tied to IP addresses.

    · Physical locations that determine virtual machine IP addresses.

    · Topological dependency of virtual machine deployment and traffic isolation.

    The IP address is the fundamental address that is used for layer-3 network communication because most network traffic is TCP/IP. Unfortunately, when moving to the cloud, the addresses must be changed to accommodate the physical and topological restrictions of the datacenter. Renumbering IP addresses is cumbersome because all associated policies that are based on IP addresses must also be updated.

    The physical layout of a datacenter influences the permissible potential IP addresses for virtual machines that run on a specific server or blade that is connected to a specific rack in the datacenter. A virtual machine provisioned and placed in the datacenter must adhere to the choices and restrictions regarding its IP address. The typical result is that datacenter administrators assign IP addresses to the virtual machines and force virtual machine owners to adjust all the policies that were based on the original IP address. This renumbering overhead is so high that many enterprises choose to deploy only new services into the cloud and leave legacy applications unchanged.

    To solve these problems, Windows Server 2012 introduces Hyper-V Network Virtualization, a new feature that enables you to isolate network traffic from different business units or customers on a shared infrastructure, without having to use VLANs. Network Virtualization also lets you move virtual machines as needed within your virtual infrastructure while preserving their virtual network assignments. You can even use Network Virtualization to transparently integrate these private networks into a pre-existing infrastructure on another site.

    Hyper-V Network Virtualization extends the concept of server virtualization to permit multiple virtual networks, potentially with overlapping IP addresses, to be deployed on the same physical network. With Network Virtualization, you can set policies that isolate traffic in a dedicated virtual network independently of the physical infrastructure. The following figure illustrates how you can use Network Virtualization to isolate network traffic that belongs to two different customers. In the figure, a Blue virtual machine and a Yellow virtual machine are hosted on a single physical network, or even on the same physical server. However, because they belong to separate Blue and Yellow virtual networks, the virtual machines cannot communicate with each other even if the customers assign these virtual machines IP addresses from the same address space.

[figure]

    To virtualize the network, Hyper-V Network Virtualization uses the following elements:

    · Two IP addresses for each virtual machine.

    · Generic Routing Encapsulation (GRE).

    · IP address rewrite.

    · Policy management server.

    IP addresses

    Each virtual machine is assigned two IP addresses:

    · Customer Address (CA) is the IP address that the customer assigns based on the customer’s own intranet infrastructure. This address lets the customer exchange network traffic with the virtual machine as if it had not been moved to a public or private cloud. The CA is visible to the virtual machine and reachable by the customer.

    · Provider Address (PA) is the IP address that the host assigns based on the host’s physical network infrastructure. The PA appears in the packets on the wire exchanged with the Hyper-V server hosting the virtual machine. The PA is visible on the physical network, but not to the virtual machine.

    The layer of CAs is consistent with the customer’s network topology, which is virtualized and decoupled from the underlying physical network addresses, as implemented by the layer of PAs. With Network Virtualization, any virtual machine workload can be executed without modification on any Windows Server 2012 Hyper-V server within any physical subnet, if Hyper-V servers have the appropriate policy settings that can map between the two addresses.

    This approach provides many benefits, including cross-subnet live migration, customer virtual machines running IPv4 while the host provider runs an IPv6 datacenter or vice-versa, and using IP address ranges that overlap between customers. But perhaps the biggest advantage of having separate CAs and PAs is that it lets customers move their virtual machines to the cloud with minimal reconfiguration.

    Generic Routing Encapsulation

    GRE is a tunneling protocol (defined by RFC 2784 and RFC 2890) that encapsulates various network layer protocols inside virtual point-to-point links over an Internet Protocol network. Hyper-V Network Virtualization in Windows Server 2012 uses GRE IP packets to map the virtual network to the physical network. The GRE IP packet contains the following information:

    · One customer address per virtual machine.

    · One provider address per host that all virtual machines on the host share.

    · A Tenant Network ID embedded in the GRE header Key field.

    · Full MAC header.

    The following figure illustrates GRE in a Network Virtualization environment.

[figure]

    IP Address Rewrite

    Hyper-V Network Virtualization uses IP Address Rewrite to map the CA to the PA. Each virtual machine CA is mapped to a unique host PA. This information is sent in regular TCP/IP packets on the wire. With IP Address Rewrite, there is little need to upgrade existing network adapters, switches, and network appliances, and it is immediately and incrementally deployable today with little impact on performance. The following figure illustrates the IP Address Rewrite process.

[figure]
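On a Windows Server 2012 host these CA-to-PA mappings are expressed as network virtualization policy records. A minimal sketch using the in-box NetVirtualization cmdlets, with addresses borrowed from the example later in this post; the VM name, MAC address, and virtual subnet ID are illustrative, and the exact parameter set should be verified against your environment:

# Map a customer address to its provider address using the IP rewrite rule
New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.1" -ProviderAddress "192.168.1.10" -VirtualSubnetID 5001 -MACAddress "00155D010101" -Rule TranslationMethodNat -VMName "SQL"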

    Policy management server

    The setting and maintenance of Network Virtualization capabilities require using a policy management server, which may be integrated into the management tools used to manage virtual machines.

     

     

    Network Virtualization example

Blue Corp and Yellow Corp are two companies that want to move their Microsoft SQL Server infrastructures into the cloud, but they want to maintain their current IP addressing. Thanks to the new Network Virtualization feature of Hyper-V in Windows Server 2012, the hosting provider is able to accommodate this request, as shown in the following figure.

[figure]

    Before moving to the hosting provider’s shared cloud service:

    · Blue Corp ran a SQL Server instance (named SQL) at the IP address 10.1.1.1 and a web server (named WEB) at the IP address 10.1.1.2, which uses its SQL server for database transactions.

· Yellow Corp ran a SQL Server instance, also named SQL and assigned the IP address 10.1.1.1, and a web server, also named WEB and also at the IP address 10.1.1.2, which uses its SQL server for database transactions.

    Both Blue Corp and Yellow Corp move their respective SQL and WEB servers to the same hosting provider’s shared IaaS service where they run the SQL virtual machines in Hyper-V Host 1 and the WEB virtual machines in Hyper-V Host 2. All virtual machines maintain their original intranet IP addresses (their CAs):

    · CAs of Blue Corp virtual machines: SQL is 10.1.1.1, WEB is 10.1.1.2.

    · CAs of Yellow Corp virtual machines: SQL is 10.1.1.1, WEB is 10.1.1.2.

    Both companies are assigned the following PAs by their hosting provider when the virtual machines are provisioned:

    · PAs of Blue Corp virtual machines: SQL is 192.168.1.10, WEB is 192.168.1.12.

    · PAs of Yellow Corp virtual machines: SQL is 192.168.1.11, WEB is 192.168.1.13.

    The hosting provider creates policy settings that consist of an isolation group for Yellow Corp that maps the CAs of the Yellow Corp virtual machines to their assigned PAs, and a separate isolation group for Blue Corp that maps the CAs of the Blue Corp virtual machines to their assigned PAs. The provider applies these policy settings to both Hyper-V Host 1 and Hyper-V Host 2.

    When the Blue Corp WEB virtual machine on Hyper-V Host 2 queries its SQL server at 10.1.1.1, the following occurs:

    · Hyper-V Host 2, based on its policy settings, translates the addresses in the packet from:
    Source: 10.1.1.2 (the CA of Blue Corp WEB)
    Destination: 10.1.1.1 (the CA of Blue Corp SQL)
    to
    Source: 192.168.1.12 (the PA for Blue Corp WEB)
    Destination: 192.168.1.10 (the PA for Blue Corp SQL)

    · When the packet is received at Hyper-V Host 1, based on its policy settings, Network Virtualization translates the addresses in the packet from:
    Source: 192.168.1.12 (the PA for Blue Corp WEB)
    Destination: 192.168.1.10 (the PA for Blue Corp SQL)
    back to
    Source: 10.1.1.2 (the CA of Blue Corp WEB)
    Destination: 10.1.1.1 (the CA of Blue Corp SQL) before delivering the packet to the Blue Corp SQL virtual machine.

    When the Blue Corp SQL virtual machine on Hyper-V Host 1 responds to the query, the following happens:

    · Hyper-V Host 1, based on its policy settings, translates the addresses in the packet from:
    Source: 10.1.1.1 (the CA of Blue Corp SQL)
    Destination: 10.1.1.2 (the CA of Blue Corp WEB)
    to
    Source: 192.168.1.10 (the PA for Blue Corp SQL)
    Destination: 192.168.1.12 (the PA for Blue Corp WEB)

· When Hyper-V Host 2 receives the packet, based on its policy settings, Network Virtualization translates the addresses in the packet from:
Source: 192.168.1.10 (the PA for Blue Corp SQL)
Destination: 192.168.1.12 (the PA for Blue Corp WEB)
to
Source: 10.1.1.1 (the CA of Blue Corp SQL)
Destination: 10.1.1.2 (the CA of Blue Corp WEB) before delivering the packet to the Blue Corp WEB virtual machine.

    A similar process for traffic between the Yellow Corp WEB and SQL virtual machines uses the settings in the Yellow Corp isolation group. With Network Virtualization, Yellow Corp and Blue Corp virtual machines interact as if they were on their original intranets, but they are never in communication with each other, even though they are using the same IP addresses. The separate addresses (CAs and PAs), the policy settings of the Hyper-V hosts, and the address translation between CA and PA for inbound and outbound virtual machine traffic, all act to isolate these two sets of servers from each other.


    Two techniques are used to virtualize the IP address of the virtual machine. The preceding example with Blue Corp and Yellow Corp shows IP Rewrite, which modifies the CA IP address of the virtual machine’s packets before they are transferred on the physical network. IP Rewrite can provide better performance because it is compatible with existing Windows networking offload technologies such as VMQs.

    The second IP virtualization technique is GRE Encapsulation (RFC 2784). With GRE Encapsulation, all virtual machines packets are encapsulated with a new header before being sent on the wire. GRE Encapsulation provides better network scalability because all virtual machines on a specific host can share the same PA IP address. Reducing the number of PAs means that the load on the network infrastructure associated with learning these addresses (IP and MAC) is greatly reduced.

    Requirements

    Network Virtualization requires Windows Server 2012 and the Hyper-V server role.

    Summary

    With Network Virtualization, you now can isolate network traffic from different business units or customers on a shared infrastructure, without having to use VLANs. Network Virtualization also lets you move virtual machines as needed within your virtual infrastructure while preserving their virtual network assignments. Finally, you can use Network Virtualization to transparently integrate these private networks into a pre-existing infrastructure on another site.

    Network Virtualization benefits include:

    · Tenant network migration to the cloud with minimum reconfiguration or effect on isolation. Customers can keep their internal IP addresses while they move workloads onto shared IaaS clouds, minimizing the configuration changes needed for IP addresses, DNS names, security policies, and virtual machine configurations. In software-defined, policy-based datacenter networks, network traffic isolation does not depend on VLANs, but is enforced within Hyper-V hosts, based on multitenant isolation policies. Network administrators can still use VLANs for traffic management of the physical infrastructure if the topology is primarily static.

    · Tenant virtual machine deployment anywhere in the datacenter. Services and workloads can be placed or migrated to any server in the datacenter while keeping their IP addresses, without being limited to physical IP subnet hierarchy or VLAN configurations.

· Simplified network and improved server/network resource use. The rigidity of VLANs, along with the dependency of virtual machine placement on physical network infrastructure, results in overprovisioning and underuse. By breaking this dependency, Hyper-V Network Virtualization increases the flexibility of virtual machine workload placement, thus simplifying network management and improving server and network resource use. Server workload placement is simplified because migration and placement of workloads are independent of the underlying physical network configurations. Server administrators can focus on managing services and servers, while network administrators can focus on overall network infrastructure and traffic management.

    · Works with today’s hardware (servers, switches, appliances) to maximize performance. Network Virtualization can be deployed in today’s datacenter, and yet is compatible with emerging datacenter “flat network” technologies, such as TRILL (Transparent Interconnection of Lots of Links), an IETF standard architecture intended to expand Ethernet topologies.

· Full management through Windows PowerShell and WMI. You can use Windows PowerShell to script and automate administrative tasks easily. Windows Server 2012 includes Windows PowerShell cmdlets for Network Virtualization that let you build command-line tools or automated scripts for configuring, monitoring, and troubleshooting network isolation policies.

     

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    Storage and Continuous Availability Enhancements in Windows Server 2012

Windows Server 2012 will provide a continuum of availability options to protect against a wide range of failure modes in different tiers – storage, network, and compute. These options will enable higher levels of availability and cost-effectiveness, as well as easier deployment for all customers – from small business to mid-market to enterprises – and across single servers, multiple servers, and multi-site cloud environments. Windows Server 2012 delivers on continuous availability by efficiently utilizing industry-standard storage, network, and server components. That means many IT organizations will have capabilities they couldn't previously afford or manage.

SMB 2.2 transparent failover (SMB 2.2 was renamed SMB 3.0 in the final release), along with SMB 2.2 Multichannel and SMB 2.2 Direct, enables customers to deploy storage for workloads such as Hyper-V and SQL Server on cost-efficient, continuously available, high-performance Windows Server 2012 file servers.

    Below are some of the key features we’re delivering in Windows Server 2012 involving SMB 2.2.

  • Transparent Failover and node fault tolerance with SMB 2.2. Supporting business-critical server application workloads requires the connection to the storage back end to be continuously available. The new SMB 2.2 server and client cooperate to provide transparent failover to an alternative cluster node for all SMB 2.2 operations, for both planned moves and unplanned failures.
  • Fast data transfers and network fault tolerance with SMB 2.2 Multichannel. With Windows Server 2012, customers can store application data (such as Hyper-V and SQL Server) on remote SMB 2.2 file shares. SMB 2.2 Multichannel provides better throughput and multiple redundant paths from the server (e.g., Hyper-V or SQL Server) to the storage on a remote SMB 2.2 share. Network path failures are automatically and transparently handled without application service disruption.
  • Scalable, fast and efficient storage access with SMB 2.2 Direct. SMB 2.2 Direct (SMB over RDMA) is a new storage protocol in Windows Server 2012. It enables direct memory-to-memory data transfers between server and storage, with minimal CPU utilization, while using standard RDMA-capable NICs. SMB 2.2 Direct is supported on all three available RDMA technologies (iWARP, InfiniBand and RoCE). Minimizing the CPU overhead for storage I/O means that servers can handle larger compute workloads (e.g., Hyper-V can host more VMs) with the saved CPU cycles.
  • Active-Active file sharing with SMB 2.2 Scale Out. Taking advantage of the single namespace functionality provided by Cluster Shared Volumes (CSV) v2, the File Server in Windows Server 2012 can provide simultaneous access to shares, with direct I/O to a shared set of drives, from any node in a cluster. This allows utilization of all the network bandwidth into a cluster and load balancing of the clients, in order to optimize the client experience.
  • Volume Shadow Copy Service (VSS) for SMB 2.2 file shares. Remote VSS provides application-consistent shadow copies for data stored on remote file shares to support application backup and restore scenarios.
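As a concrete example, a continuously available file share for Hyper-V or SQL Server data can be created on a Windows Server 2012 file server cluster with the SMB cmdlets; a minimal sketch (share name, path, and account are examples):

# Create a transparent-failover (continuously available) share for application data
New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\VMStore -FullAccess 'CONTOSO\HyperVHost01$' -ContinuouslyAvailable $true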

    Alongside the SMB 2.2 Server implementation in Windows Server 2012, Microsoft is working with two leading storage companies, NetApp and EMC, to enable them to fully integrate SMB 2.2 into their stacks and provide Hyper-V over SMB 2.2 solutions. Having NetApp and EMC on board not only demonstrates strong industry support of SMB 2.2 as a protocol of choice for various types of customers, but also highlights how the industry is aligned with our engineering direction and its support for our Windows Server 2012 storage technology.