GetVirtual – Virtualization 4 All

My vision of what Microsoft Virtualization and Cloud Computing bring to the IT world

Category Archives: System Center 2012 R2 Virtual Machine Manager

How to plan your System Center Virtual Machine Manager Networks

Several times I have ended up explaining how we should handle networks in SCVMM in different scenarios. Here is the baseline that I use when planning SCVMM networks for all scenarios.

SCVMM provides many options when you plan to connect your virtual machines to a physical network. You can use these options on their own or in a mixed environment, depending on your needs.

  • VLAN-based configuration – You can use familiar virtual local area network (VLAN) technology for network isolation. You can manage those networks as they are, using SCVMM to simplify the management process.
  • No isolation – You can get direct access to the logical network with a VM network. This is the simplest configuration, where the VM network is the same as the logical network on which it is configured. This configuration is appropriate for a network through which you will manage a host.
  • Network virtualization – You can support multiple tenants (also called clients or customers) with their own networks, isolated from the networks of others. With this isolation, your tenants can use any IP addresses that they want for their virtual machines, regardless of the IP addresses that are used on other VM networks. Also, you can allow your tenants to configure some aspects of their own networks, based on limits that you specify. Network virtualization abstracts the physical address space and presents a virtual address space of the tenants.
  • Use external networks – You can use a vendor network-management console that allows you to configure settings on your forwarding extension, for example, settings for logical networks, network sites, and VM networks. SCVMM will import those settings.
  • No virtual networking – Networks that don’t require access by VMs do not use VM networks. For example, storage networks.

Networking Level

How SCVMM networking can be used

Physical Fabric

Fabric administrators can maintain network hardware (such as network adapters and switches) without requiring other administrators or users to understand it. Fabric administrators can maintain a stable physical network configuration while still being able to provide flexibility to others who need specific IP address spaces for their virtual machines.

Logical Networks and Logical Switches

Fabric administrators can create logical networks and logical switches as an underlying configuration that is straightforward to maintain and is not visible to tenant administrators or users.

VM Networks

Tenant administrators can create VM networks easily, making it easy to respond when users need additional or different IP address spaces. (Tenant administrators can also control resource usage through user role quotas.)
Self-service users can create virtual machines and connect them to VM networks without having to involve tenant administrators.
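To make the fabric/tenant layering concrete, here is a minimal PowerShell sketch of how it might be scripted with the VMM cmdlets. This is a sketch under assumptions: the VMM console cmdlets are installed, and all server, network, and subnet names below are hypothetical placeholders.

```powershell
# Assumes the SCVMM PowerShell module is available and you can reach
# a VMM server. All names below are hypothetical placeholders.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01.contoso.com"

# Fabric admin: create the underlying logical network
$logicalNet = New-SCLogicalNetwork -Name "Contoso-LN"

# Fabric admin: define a network site with a subnet/VLAN for it
$subnetVlan = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
New-SCLogicalNetworkDefinition -Name "Contoso-LN-Site1" `
    -LogicalNetwork $logicalNet -SubnetVLan $subnetVlan `
    -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")

# Tenant-facing: create a VM network on top. "NoIsolation" maps the
# VM network 1:1 to the logical network (the simplest scenario above)
New-SCVMNetwork -Name "Contoso-VMNet" -LogicalNetwork $logicalNet `
    -IsolationType "NoIsolation"
```

For an isolated tenant network you would instead create the VM network with network virtualization enabled and let the tenant bring their own IP address space.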

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

Windows Server 2012 R2 Hyper-V vs VMware vSphere 5.5

 

While searching for a comparison between the newest version of VMware vSphere and Hyper-V 2012 R2, I found this chart, which I think summarizes what you need to know about the two hypervisors.

How to compare?


Rather than simply comparing feature-by-feature using just simple check-marks in each category, I’ll try to provide as much detail as possible for you to intelligently compare each area.  For each comparison area, I’ll rate the related capabilities with the following color coded rankings:

  • Supported – Fully supported without any additional products or licenses
  • Limited Support – Significant limitations when using related feature, or limitations in comparison to the competing solution represented
  • Not Supported – Not supported at all or without the addition of other product licensing costs

Licensing

 

Microsoft 
Windows Server 2012 R2
+ System Center 2012 R2 Datacenter Editions

VMware 
vSphere 5.5 Enterprise Plus + vCenter Server 5.5

Notes

# of Physical CPUs per License

2

1

With Microsoft, each Datacenter Edition license provides licensing for up to 2 physical CPUs per Host.  Additional licenses can be “stacked” if more than 2 physical CPUs are present.

With VMware, a vSphere 5.5 Enterprise Plus license must be purchased for each physical CPU.  This difference in CPU licensing is one of the factors that can contribute to increased licensing costs.  In addition, a minimum of one license of vCenter Server 5.5 is required for vSphere deployments.

# of Managed OSE’s per License

Unlimited

Unlimited

Both solutions provide the ability to manage an unlimited number of Operating System Environments per licensed Host.

# of Windows Server VM Licenses per Host

Unlimited

0

With VMware, Windows Server VM licenses must still be purchased separately. In environments virtualizing Windows Server workloads, this can contribute to a higher overall cost when virtualizing with VMware.

VMware does include licenses for an unlimited # of VMs running SUSE Linux Enterprise Server per Host.

Includes Anti-virus / Anti-malware protection

Yes – System Center Endpoint Protection agents included for both Host and VMs with System Center 2012 R2

Yes – Includes vShield Endpoint Protection which deploys as EPSEC thin agent in each VM + separate virtual appliance.

 

Includes full SQL Database Server licenses for management databases

Yes – Includes all needed database server licensing to manage up to 1,000 hosts and 25,000 VMs per management server.

No – Must purchase additional database server licenses to scale beyond managing 100 hosts and 3,000 VMs with vCenter Server Appliance.

VMware licensing includes an internal vPostgres database that supports managing up to 100 hosts and 3,000 VMs via vCenter Server Appliance.

Includes licensing for Operations Monitoring and Management.

Yes – Included in System Center 2012 R2

No – Operations Monitoring and Management requires separate license for vCenter Operations Manager or upgrade to vSphere with Operations Management

 

Includes licensing for Private Cloud Management capabilities – pooled resources, self-service, delegation, automation, elasticity, chargeback/showback

Yes – Included in System Center 2012 R2

No – Private Cloud Management capabilities require additional cost of VMware vCloud Suite.

 

 

Virtualization Scalability

 

Microsoft 
Windows Server 2012 R2 
+ System Center 2012 R2 Datacenter Editions

VMware 
vSphere 5.5 Enterprise Plus + vCenter Server 5.5

Notes

Maximum # of Logical Processors per Host

320

320

With vSphere 5.5 Enterprise Plus, VMware has “caught up” to Microsoft in terms of Maximum # of Logical Processors supported per Host.

Maximum Physical RAM per Host

4TB

4TB

With vSphere 5.5 Enterprise Plus, VMware has “caught up” to Microsoft in terms of Maximum Physical RAM supported per Host.

Maximum Active VMs per Host

1,024

512

 

Maximum Virtual CPUs per VM

64

64

When using VMware FT, only 1 Virtual CPU per VM can be used.

Hot-Adjust Virtual CPU Resources to VM

Yes – Hyper-V provides the ability to increase and decrease Virtual Machine limits for processor resources on running VMs.

Yes – Can Hot-Add virtual CPUs for running VMs on selected Guest Operating Systems and adjust Limits/Shares for CPU resources.

VMware Hot-Add CPU feature requires supported Guest Operating System. Check VMware Compatibility Guide for details.

VMware Hot-Add CPU feature not supported when using VMware FT
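On the Hyper-V side, the runtime adjustment refers to processor limits, reserves, and weights rather than adding vCPUs to a running VM. As a hedged sketch using the in-box Hyper-V module (the VM name is a placeholder):

```powershell
# Hyper-V in Windows Server 2012 R2: the vCPU *count* is fixed while a
# VM runs, but its processor resource controls can be adjusted live.
# "App01" is a hypothetical VM name.
Import-Module Hyper-V

# Cap the running VM at 75% of its vCPU capacity, guarantee 10%,
# and raise its scheduling weight relative to other VMs on the host
Set-VMProcessor -VMName "App01" -Maximum 75 -Reserve 10 -RelativeWeight 200
```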

Maximum Virtual RAM per VM

1TB

1TB

When using VMware FT, only 64GB of Virtual RAM per VM can be used.

Hot-Add Virtual RAM to VM

Yes ( Dynamic Memory )

Yes

Requires supported Guest Operating System.

Dynamic Memory Management

Yes ( Dynamic Memory )

Yes ( Memory Ballooning ) – Note that memory overcommit is not supported for VMs that are configured as an MSCS VM Guest Cluster.

VMware vSphere 5.5 also supports another memory technique: Transparent Page Sharing (TPS).  While TPS was useful in the past on legacy server hardware platforms and operating systems, it is no longer effective in many environments due to modern servers and operating systems supporting Large Memory Pages (LMP) for improved memory performance.
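For reference, Dynamic Memory on the Hyper-V side is configured per VM. A hedged sketch with illustrative values (the VM name is a placeholder; the startup value is typically set while the VM is off):

```powershell
# Enable Dynamic Memory on a VM with illustrative sizing values.
# "App01" is a hypothetical VM name.
Import-Module Hyper-V
Set-VMMemory -VMName "App01" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB `
    -Buffer 20 -Priority 80
```

The buffer is the percentage of extra memory Hyper-V tries to keep available for the VM, and the priority influences how memory is distributed when the host is under pressure.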

Guest NUMA Support

Yes

Yes

NUMA = Non-Uniform Memory Access.  Guest NUMA support is particularly important for scalability when virtualizing large multi-vCPU VMs on Hosts with a large number of physical processors.

Maximum # of physical Hosts per Cluster

64

32

 

Maximum # of VMs per Cluster

8,000

4,000

 

Virtual Machine Snapshots

Yes – Up to 50 snapshots per VM are supported.

Yes – Up to 32 snapshots per VM chain are supported, but VMware only recommends 2-to-3.

In addition, VM Snapshots are not supported for VMs using an iSCSI initiator.

 

Integrated Application Load Balancing for Scaling-Out Application Tiers

Yes – via System Center 2012 R2 VMM

No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.

 

Bare metal deployment of new Hypervisor hosts and clusters

Yes – via System Center 2012 R2 VMM

Yes – VMware Auto Deploy and Host Profiles support bare metal deployment of new hosts into an existing cluster, but do not support bare metal deployment of new clusters.

 

Bare metal deployment of new Storage hosts and clusters

Yes – via System Center 2012 R2 VMM

No

 

GPU Virtualization for Advanced VDI Graphics

Yes – Server GPUs can be virtualized and shared across VDI VMs via RemoteFX.

Yes – via virtual GPU support.

 

Virtualization of USB devices

Yes – Client USB devices can be passed to VMs via Remote Desktop connections. Direct redirection of USB storage from Host possible with Windows-to-Go certified devices.  Direct redirection of other USB devices possible with third-party solutions.

Yes – via USB Pass-through support.

 

 

VM Portability, High Availability and Disaster Recovery

 

Microsoft 
Windows Server 2012 R2 
+ System Center 2012 R2 Datacenter Editions

VMware 
vSphere 5.5 Enterprise Plus + vCenter Server 5.5

Notes

Live Migration of running VMs

Yes – Unlimited concurrent Live VM Migrations.  Provides flexibility to cap at a maximum limit that is appropriate for your datacenter architecture.

Yes – but limited to 4 concurrent vMotions per host when using 1GbE network adapters and 8 concurrent vMotions per host when using 10GbE network adapters.

 

Live Migration of running VMs without shared storage between hosts

Yes – Supported via Shared Nothing Live Migration

Yes – Supported via Enhanced vMotion.

 

Live Migration using compression of VM memory state

Yes – Supported via Compressed Live Migration, providing up to a 2X increase in Live Migration speeds.

No

 

Live Migration over RDMA-enabled network adapters

Yes – Supported via SMB-Direct Live Migration, providing up to a 10X increase in Live Migration speeds.

No

 

Live Migration of VMs Clustered with Windows Server Failover Clustering (MSCS Guest Cluster)

Yes – by configuring relaxed monitoring of MSCS VM Guest Clusters.

No – based on documented vSphere MSCS Setup Limitations

 

Highly Available VMs

Yes – Highly available VMs can be configured on a Hyper-V Host cluster.  If the application running inside the VM is cluster aware, a VM Guest Cluster can also be configured via MSCS for faster application failover times.

Yes – Supported by VMware HA, but with the limitations listed above when using MSCS VM Guest Clusters.

 

Failover Prioritization of Highly Available VMs

Yes – Supported by clustered priority settings on each highly available VM.

Yes

 

Affinity Rules for Highly Available VMs

Yes – Supported by preferred cluster resource owners and anti-affinity VM placement rules.

Yes

 

Cluster-Aware Updating for Orchestrated Patch Management of Hosts.

Yes – Supported via included Cluster-Aware Updating (CAU) role service.

Yes – Supported by vSphere 5.5 Update Manager, but if using vCenter Server Appliance, need separate 64-bit Windows OS license for Update Management server.  If supporting more than 5 hosts and 50 VMs, also need a separate SQL database server.

 

Guest OS Application Monitoring for Highly Available VMs

Yes

Yes – Provided by vSphere App HA, but limited to only the following applications: Apache Tomcat, IIS, SQL Server, Apache HTTP Server, SharePoint, SpringSource Runtime.

 

VM Guest Clustering via Shared Virtual Hard Disk files

Yes – Provided via native Shared VHDX support for VM Guest Clusters

Yes – But only Single-Host VM Guest Clustering supported via Shared VMDK files.  For VM Guest Clusters that extend across multiple hosts, must use RDM instead.
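As a hedged sketch of the Shared VHDX side (paths and VM names are hypothetical placeholders), the same .vhdx file is attached to each guest-cluster node with persistent-reservation support enabled; the file must sit on a CSV or SMB 3.0 share:

```powershell
# Attach one VHDX to both guest-cluster nodes as a shared disk.
# Path and VM names are hypothetical; the VHDX must live on a
# Cluster Shared Volume or an SMB 3.0 file share.
Import-Module Hyper-V
$disk = "C:\ClusterStorage\Volume1\Shared\quorum.vhdx"
Add-VMHardDiskDrive -VMName "GuestNode1" -Path $disk -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "GuestNode2" -Path $disk -SupportPersistentReservations
```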

 

Intelligent Placement of new VM workloads

Yes – Provided via Intelligent Placement in System Center 2012 R2

Yes – Provided via vSphere DRS, but without ability to intelligently place fault tolerant VMs using VMware FT.

 

Automated Load Balancing of VM Workloads across Hosts

Yes – Provided via Dynamic Optimization in System Center 2012 R2

Yes – Provided via vSphere DRS, but without ability to load-balance VM Guest Clusters using MSCS.

 

Power Optimization of Hosts when load-balancing VMs

Yes – Provided via Power Optimization in System Center 2012 R2

Yes – Provided via vSphere DRS, with the same limitations listed above for Automated Load Balancing.

 

Fault Tolerant VMs

No – The vast majority of application availability needs can be supported via Highly Available VMs and VM Guest Clustering on a more cost-effective and more-flexible basis than software-based fault tolerance solutions.  If required for specific business applications, hardware-based fault tolerance server solutions can be leveraged where needed.

Yes – Supported via VMware FT, but there are a large number of limitations when using VMware FT, including no support for the following when using VMware FT: VM Snapshots, Storage vMotion, VM Backups via vSphere Data Protection, Virtual SAN, Multi-vCPU VMs, More than 64GB of vRAM per VM.

Software-based fault tolerance solutions, such as VMware FT, generally have significant limitations.  If applications require more comprehensive fault tolerance than provided via Highly Available VMs and VM Guest Clustering, hardware-based fault tolerance server solutions offer an alternative choice without the limits imposed by software-based fault tolerance solutions.

Backup VMs and Applicatons

Yes – Provided via included System Center 2012 R2 Data Protection Manager with support for Disk-to-Disk, Tape and Cloud backups.

Yes – Only supports Disk-to-Disk backup of VMs via vSphere Data Protection.  Application-level backup integration requires separately purchased vSphere Data Protection Advanced.

 

Site-to-Site Asynchronous VM Replication

Yes – Provided via Hyper-V Replica with 30-second, 5-minute or 15-minute replication intervals. Minimum RPO = 30-seconds.

Hyper-V Replica also supports extended replication across three sites for added protection.

Yes – Provided via vSphere Replication with minimum replication interval of 15-minutes. Minimum RPO = 15-minutes.

In VMware solution, Orchestrated Failover of Site-to-Site replication can be provided via separately licensed VMware SRM.

In Microsoft solution, Orchestrated Failover of Site-to-Site replication can be provided via included PowerShell at no additional cost. Alternatively, a GUI interface for orchestrating failover can be provided via the separately licensed Windows Azure HRM service.
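Enabling Hyper-V Replica for a single VM can itself be done in a couple of PowerShell lines. A hedged sketch, assuming the replica host has already been configured to accept replication; the VM and server names are hypothetical:

```powershell
# Replicate a VM to a replica server with a 30-second interval
# (WS2012 R2 added the 30s / 5min / 15min interval choices).
# Names are hypothetical placeholders.
Import-Module Hyper-V
Enable-VMReplication -VMName "App01" `
    -ReplicaServerName "replica01.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 30

# Kick off the initial copy over the network
Start-VMInitialReplication -VMName "App01"
```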

 

Storage

 

Microsoft 
Windows Server 2012 R2 
+ System Center 2012 R2 Datacenter Editions

VMware 
vSphere 5.5 Enterprise Plus + vCenter Server 5.5

Notes

Maximum # Virtual SCSI Hard Disks per VM

256 ( Virtual SCSI )

60 ( PVSCSI )
120 ( Virtual SATA )

 

Maximum Size per Virtual Hard Disk

64TB

62TB

vSphere 5.5 support for 62TB VMDK files is limited to when using VMFS5 and NFS datastores only. 

In vSphere 5.5, VMFS3 datastores are still limited to 2TB VMDK files. 

In vSphere 5.5, Hot-Expand, VMware FT, Virtual Flash Read Cache and Virtual SAN are not supported with 62TB VMDK files.

Boot VM from Virtual SCSI disks

Yes ( Generation 2 VMs )

Yes

 

Hot-Add Virtual SCSI VM Storage for running VMs

Yes

Yes

 

Hot-Expand Virtual SCSI Hard Disks for running VMs

Yes

Yes – but not supported with new 62TB VMDK files.

 

Hot-Shrink Virtual SCSI Hard Disks for running VMs

Yes

No

 

Storage Quality of Service

Yes ( Storage QoS )

Yes ( Storage IO Control )

In VMware vSphere 5.5, Storage IO Control is not supported for RDM disks.

In Windows Server 2012 R2, Storage QoS is not supported for Pass-through disks.

Virtual Fibre Channel to VMs

Yes ( 4 Virtual FC ports per VM )

Yes ( 4 Virtual FC ports per VM )

vSphere 5.5 Enterprise Plus also includes a software initiator for FCoE support for VMs. 

While not included inbox in Windows Server 2012 R2, a no-cost ISV solution is available here to provide FCoE support for Hyper-V VMs.

Live Migrate Virtual Storage for running VMs

Yes – Unlimited concurrent Live Storage migrations. Provides flexibility to cap at a maximum limit that is appropriate for your datacenter architecture.

Yes – but only up to 2 concurrent Storage vMotion operations per host / only up to 8 concurrent Storage vMotion operations per datastore.  Storage vMotion is also not supported for MSCS VM Guest Clusters.

 

Flash-based Read Cache

Yes – Using SSDs in Tiered Storage Spaces, limited up to 160 physical disks and 480 TB total capacity.

Yes – but only up to 400GB of cache per virtual disk / 2TB cumulative cache per host for all virtual disks.

 

Flash-based Write-back Cache

Yes – Using SSDs in Storage Spaces for Write-back Cache.

No

 

SAN-like Storage Virtualization using commodity hard disks.

Yes – Included in Windows Server 2012 R2 Storage Spaces.

No

VMware provides Virtual SAN which is included as an experimental feature in vSphere 5.5.  You can test and experiment with Virtual SAN, but VMware does not expect it to be used in a production environment.

Automated Tiered Storage between SSD and HDD using commodity hard disks.

Yes – Included in Windows Server 2012 R2 Storage Spaces.

No

VMware provides Virtual SAN which is included as an experimental feature in vSphere 5.5.  You can test and experiment with Virtual SAN, but VMware does not expect it to be used in a production environment.

Can consume storage via iSCSI, NFS, Fibre Channel and SMB 3.0.

Yes

Yes – Except no support for SMB 3.0.

 

Can present storage via iSCSI, NFS and SMB 3.0.

Yes – Available via included iSCSI Target Server, NFS Server and Scale-out SMB 3.0 Server support.  All roles can be clustered for High Availability.

No

VMware provides vSphere Storage Appliance as a separately licensed product to deliver the ability to present NFS storage.

Storage Multipathing

Yes – via MPIO and SMB Multichannel

Yes – via VAMP

 

SAN Offload Capability

Yes – via ODX

Yes – via VAAI

 

Thin Provisioning and Trim Storage

Yes – Available via Storage Spaces Thin Provisioning and NTFS Trim Notifications.

Yes – but trim operations must be manually processed by running esxcli vmfs unmap command to reclaim disk space.

 

Storage Encryption

Yes – via BitLocker

No

 

Deduplication of storage used by running VMs

Yes – Available via included Data Deduplication role service.

No

 

Provision VM Storage based on Storage Classifications

Yes – via Storage Classifications in System Center 2012 R2

Yes – via Storage Policies, formerly called Storage Profiles, in vCenter Server 5.5

 

Dynamically balance and re-balance storage load based on demands

Yes – Storage IO load balancing and re-balancing is automatically handled on-demand by both SMB 3.0 Scale Out File Server and Automated Storage Tiers in Storage Spaces.

Yes – Performed via Storage DRS, but limited in load-balancing frequency.  The default DRS load-balance interval only runs at 8-hour intervals and can be adjusted to run load-balancing only as often as every 1-hour.

 

Integrated Provisioning and Management of Shared Storage

Yes – System Center 2012 R2 VMM includes storage provisioning and management of SAN Zoning, LUNS and Clustered Storage Servers.

No

 

 

Networking

 

Microsoft 
Windows Server 2012 R2 
+ System Center 2012 R2 Datacenter Editions

VMware 
vSphere 5.5 Enterprise Plus + vCenter Server 5.5

Notes

Distributed Switches across Hosts

Yes – Supported by Logical Switches in System Center 2012 R2

Yes

 

Extensible Virtual Switches

Yes – Several partners offer extensions today, such as Cisco, NEC, Inmon and 5nine. Windows Server 2012 R2 offers new support for co-existence of Network Virtualization and Switch Extensions.

Replaceable, not extensible – VMware virtual switch is replaceable, not incrementally extensible with multiple 3rd party solutions concurrently

 

NIC Teaming

Yes – Up to 32 NICs per NIC Team.  Windows Server 2012 R2 provides a new Dynamic Load Balancing mode using flowlets to provide efficient load balancing even with a small number of hosts.

Yes – Up to 32 NICs per Link Aggregation Group

 

Private VLANs (PVLAN)

Yes

Yes

 

ARP Spoofing Protection

Yes

No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.

 

DHCP Snooping Protection

Yes

No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.

 

Router Advertisement Guard Protection

Yes

No – Requires additional purchase of vCloud Network and Security (vCNS) or vCloud Suite.

 

Virtual Port ACLs

Yes – Windows Server 2012 R2 adds support for Extended ACLs that include Protocol, Src/Dst Ports, State, Timeout & Isolation ID

Yes – via new Traffic Filtering and Marking policies in vSphere 5.5 distributed switches

 

Trunk Mode to VMs

Yes

Yes

 

Port Monitoring

Yes

Yes

 

Port Mirroring

Yes

Yes

 

Dynamic Virtual Machine Queue

Yes

Yes

 

IPsec Task Offload

Yes

No

 

Single Root IO Virtualization (SR-IOV)

Yes

Yes – SR-IOV is supported by vSphere 5.5 Enterprise Plus, but without support for vMotion, Highly Available VMs or VMware FT when using SR-IOV.

 

Virtual Receive Side Scaling ( Virtual RSS )

Yes

Yes ( VMXNet3 )

 

Network Quality of Service

Yes

Yes

 

Network Virtualization

Yes – Provided via Hyper-V Network Virtualization based on NVGRE protocol and in-box Site-to-Site NVGRE Gateway.

No – Requires additional purchase of VMware NSX

 

Integrated Network Management of both Virtual and Physical Network components

Yes – System Center 2012 R2 VMM supports integrated management of virtual networks, Top-of-Rack (ToR) switches and integrated IP Address Management

No

 

Guest Operating Systems

 

Microsoft 
Windows Server 2012 R2 
+ System Center 2012 R2 Datacenter Editions

VMware 
vSphere 5.5 Enterprise Plus + vCenter Server 5.5

Notes

Windows Server 2012 R2

Yes

Yes

 

Windows 8.1

Yes

Yes

 

Windows Server 2012

Yes

Yes

 

Windows 8

Yes

Yes

 

Windows Server 2008 R2 SP1

Yes

Yes

 

Windows Server 2008 R2

Yes

Yes

 

Windows 7 with SP1

Yes

Yes

 

Windows 7

Yes

Yes

 

Windows Server 2008 SP2

Yes

Yes

 

Windows Home Server 2011

Yes

No

 

Windows Small Business Server 2011

Yes

No

 

Windows Vista with SP2

Yes

Yes

 

Windows Server 2003 R2 SP2

Yes

Yes

 

Windows Server 2003 SP2

Yes

Yes

 

Windows XP with SP3

Yes

Yes

 

Windows XP x64 with SP2

Yes

Yes

 

CentOS 5.7, 5.8, 6.0 – 6.4

Yes

Yes

 

CentOS Desktop 5.7, 5.8, 6.0 – 6.4

Yes

Yes

 

Red Hat Enterprise Linux 5.7, 5.8, 6.0 – 6.4

Yes

Yes

 

Red Hat Enterprise Linux Desktop 5.7, 5.8, 6.0 – 6.4

Yes

Yes

 

SUSE Linux Enterprise Server 11 SP2 & SP3

Yes

Yes

 

SUSE Linux Enterprise Desktop 11 SP2 & SP3

Yes

Yes

 

OpenSUSE 12.1

Yes

Yes

 

Ubuntu 12.04, 12.10, 13.10

Yes

Yes – Currently 13.04 in the 13.x distros

 

Ubuntu Desktop 12.04, 12.10, 13.10

Yes

Yes – Currently 13.04 in the 13.x distros

 

Oracle Linux 6.4

Yes – Oracle has certified its supported products to run on Hyper-V and Windows Azure

Yes – However, per this Oracle article, Oracle has not certified any of its products to run on VMware. Oracle will only provide support for issues that are either known to occur on the native OS, or can be demonstrated not to be a result of running on VMware.

 

Mac OS X 10.7.x & 10.8.x

No

Yes – However, see note to the right.  Based on current Apple EULA, this configuration may not be legally permitted in your environment.

Note that according to the Apple EULA for Mac OS X, it is not permitted to install Mac OS X on any platform that is not Apple-branded hardware. If you choose to virtualize Mac OS X on non-Apple hardware platforms, it’s my understanding that you’re violating the terms of the Apple EULA.

Sun Solaris 10

No

Yes – However, per this Oracle article, Oracle has not certified any of its products to run on VMware. Oracle will only provide support for issues that are either known to occur on the native OS, or can be demonstrated not to be a result of running on VMware.

 

If you’re looking for the full list of Guest Operating Systems supported by each platform, you can find the full details at the following locations:

Managing Heterogeneous Hypervisor Environments

In certain scenarios, you may find that a mix of virtualization platforms is needed to cost-effectively support all the features and Guest Operating Systems you're looking for. In that case, you'll be pleased to find that Microsoft System Center 2012 R2 also supports Private Cloud management across heterogeneous hypervisors, including Hyper-V, VMware vSphere and Citrix XenServer. 

Summary

As you can see, both Microsoft Windows Server 2012 R2 / System Center 2012 R2 and VMware vSphere 5.5 offer lots of enterprise-grade virtualization features.  Hopefully this comparison was useful to you in more granularly evaluating each platform for your environment.

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

 

 

 

Note: See the full article here

System Center 2012 R2 Infrastructure overview

Infrastructure provisioning is about enabling enterprises and service providers to provision physical, virtual, and cloud infrastructure that meets key requirements such as workload scale and performance, multi-tenancy, and chargeback.

Enterprise-class performance

When virtualizing top-tier applications, you need a virtualization platform and virtualization management solution that can provide the necessary scale and performance to meet your business requirements. Many virtualization efforts do not realize their full potential; in many instances this is due to the lack of adequate datacenter management, which can lead to uncontrolled VM sprawl. At the same time, the datacenter management solution has to be flexible enough to build on your existing infrastructure investments. For example, applications might be deployed on physical servers and consume SAN-based storage. Also, most customers have to support a diverse datacenter infrastructure environment to deliver on the requirements of their application counterparts.

System Center 2012 R2 delivers best-in-class management for Windows Server environments by supporting the scale and performance delivered by Windows Server 2012 R2. In this context, customers should note that Microsoft is slated to deliver System Center 2012 R2 simultaneously with Windows Server 2012 R2 so that you can plan your infrastructure deployments with the confidence and knowledge that System Center will enable them to take maximum advantage of native platform capabilities. The Virtual Machine Manager (VMM) component of System Center 2012 R2 plays a critical role in enabling virtualization-management scale – for instance, a single VMM server can support up to 1,000 hosts and up to 25,000 virtual machines. As another example, VMM enables Dynamic Memory changes as well as snapshots of running VMs without downtime.


To enable maximum flexibility and operational efficiency for customers, VMM enables storage management across a variety of storage approaches, spanning file and block storage. For those who have invested in block-based storage like SAN, VMM supports VM connectivity to SANs through virtual fibre channel switches. This enables IT staff to virtualize the most demanding workloads and connect them directly to the highest tier storage platforms.

Microsoft developed System Center to provide robust support for heterogeneous datacenter management – Dynamic Memory support for Linux VMs being an example. In fact, approximately 25% of System Center instances deployed today also manage Linux operating environments.

Simplified provisioning and migration

As a next step, organizations should consider industry-standard server technologies as an alternative to specialty hardware for big-budget infrastructure spending such as storage and disaster recovery. These technologies have advanced to the point where they offer many of the capabilities and the performance of specialty hardware, for a fraction of the price. To ensure that scarce IT staff can focus on strategic IT projects rather than just keeping the trains running, organizations should continue to invest in automation technologies that deliver predictable deployments while mitigating the chances of human error.

With Windows Server 2012, Microsoft delivered File and Storage Services (which included Storage Spaces), which is predicated on the use of industry-standard storage that’s completely managed by server software. These storage technologies are designed to provide availability, resiliency, and performance that would normally be expected from high-end hardware. With System Center 2012 R2, VMM supports at-scale management of these storage technologies – for instance, bare-metal provisioning of scale-out Windows File Server clusters, discovery of physical disks, and creation of virtualized storage pools.

To reduce time, effort and downtime required to upgrade from Windows Server 2012, Windows Server 2012 R2 is slated to offer the ability to automatically upgrade Hyper-V clusters (based on Windows Server 2012) to Windows Server 2012 R2 using System Center. The VMM component has a cross-version migration capability that enables Hyper-V Live Migration of workloads from Windows Server 2012 hosts to Windows Server 2012 R2 hosts. Microsoft is also enabling faster deployments of System Center by providing service templates and runbooks for multiple components such as Service Manager, Data Protection Manager, and Operations Manager.

SCVMM also simplifies cross-datacenter disaster recovery of VM-based infrastructure services by providing the private cloud abstraction layer in the source and destination datacenters. This is enabled by System Center working in conjunction with Hyper-V Replica (for VM replication) and Windows Azure Hyper-V Recovery Manager (for automated recovery orchestration). Without this capability, we would be looking at alternatives like expensive SAN-based replication.

Finally, the Orchestrator component of System Center 2012 R2 continues to enable general purpose datacenter automation thereby driving consistency and predictability in provisioning processes like server deployment, patching, and upgrades.

Multitenant cloud infrastructure

As cloud computing adoption increases, large enterprises and hosters are looking to take their datacenter infrastructure to the next level of scale and efficiency, with requirements such as multi-tenancy, bring-your-own-IP flexibility, chargeback, and infrastructure standardization. Many enterprises are also exploring showback and chargeback solutions to incentivize the right infrastructure consumption behaviours by their internal customers.

With System Center 2012, Microsoft enabled multi-hypervisor private clouds for enterprise IT to deliver infrastructure as a pool of automated resources and carve out datacenter capacity for use by their LOB counterparts. Building on that, System Center 2012 SP1 delivered support for multitenant environments (for service providers and large enterprises) through support for virtual networks and the ability to aggregate multiple instances of System Center infrastructure with the Service Provider Foundation (SPF) API.

Building on this strong foundation, System Center 2012 R2 strengthens Microsoft’s software-defined networking solution by enabling provisioning of multitenant edge gateways to bridge physical and virtual datacenters – this will enable flexible workload mobility in hybrid cloud computing models. System Center 2012 R2 enables chargeback for multitenant environments with granular infrastructure metering combined with the ability to do analytics on business and operational metrics. Customers can also take advantage of Cloud Cruiser (an ISV that is part of the Microsoft partner alliance) cost analytics for a more fully featured chargeback solution.

Extend System Center to provision Windows Azure infrastructure

System Center 2012 R2 provides a unified tool to provision and manage virtual machines into on-premises and Windows Azure environments, including easy workload portability without a need for format conversion. The App Controller component of System Center 2012 R2 enables migration of on-premises Hyper-V VMs into Windows Azure Virtual Machines. Once in Windows Azure, the Virtual Machine can be managed (including operations like start, stop) through the App Controller user interface.

The Orchestrator component of System Center 2012 R2 provides a Windows Azure Integration Pack for at-scale provisioning and management of Windows Azure Virtual Machines and Windows Azure Storage in an automated manner.

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

What’s new in System Center 2012 R2 – Virtual Machine Manager?

During the last TechEd North America, Microsoft showed off what will be the new System Center 2012 R2. The upgrade follows the impressive number of new features in Windows Server 2012 R2, as well as improvements to existing capabilities from Windows Server 2012.

Here are some of the new and improved features related to System Center 2012 R2 – Virtual Machine Manager (SCVMM):

Infrastructure improvements

  • Guest and host support for Windows 2012 R2
  • Auto-task resume after VMM server failover
  • Expanded scope for update management
  • Updated management packs:
    • Better integration with chargeback and reporting
    • Additional dashboards

Networking improvements

  • Site-to-site networking
  • IP Address Management (IPAM) integration
  • Simplified guest IP management
  • Top of rack switch integration
  • Making forwarding extensions for Hyper-V extensible switch work with Hyper-V network virtualization (Cisco 1KV and NVGRE)

Storage improvements

  • Synthetic fibre channel support
  • Management of zones
  • Offloaded Data Transfer (ODX) support
  • Shared VHDX support
  • Provision scale-out file server cluster from bare metal
  • Integration with differencing disks

Services improvements

  • Run scripts on first machine on a tier
  • Shared VHDX across members of a tier
  • Service Setting for Service Topology
  • Service deployments work for VMs on Xen

VM and cloud improvements

  • Differencing disks
  • Live cloning
  • Online VHDX resize
  • Grant permissions to users for each cloud
  • Ability to inject files into VM prior to the first boot

In my opinion, one of the biggest pieces of news is the recommendation and best practice to run SCVMM in a VM on the same virtualization platform that SCVMM is managing. This changes a lot in your System Center design and infrastructure if you want to implement a highly available and resilient System Center environment.

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga