
Thursday, May 14, 2015

EMC VPLEX: Extending VMware Functionality Across Data Centers

VPLEX is a storage virtualization appliance. It sits between the storage arrays and hosts and virtualizes the presentation of the arrays, including non-EMC arrays; storage is then configured and presented to the hosts. VPLEX delivers data mobility and availability across arrays and sites, enabling mission-critical applications to remain up and running through a variety of planned and unplanned downtime scenarios. It permits painless, nondisruptive data movement, taking technologies like VMware and other clusters that were built assuming a single storage instance and enabling them to function across arrays and across distance.


Key VPLEX use cases include:

  • Continuous operations – VPLEX enables active/active data centers with zero downtime
  • Migration/tech refresh – VPLEX provides accelerated and nondisruptive migrations and technology refresh
  • Oracle RAC functionality – VPLEX extends Oracle Real Application Clusters (RAC) and other clusters over distance
  • VMware functionality – VPLEX extends VMware functionality across distance while enhancing availability
  • MetroPoint Topology – VPLEX with EMC RecoverPoint delivers a 3-site continuous protection and operational recovery solution
The EMC VPLEX family includes three models:



EMC VPLEX Local:

EMC VPLEX Local delivers availability and data mobility across arrays. VPLEX is a continuous availability and data mobility platform that enables mission-critical applications to remain up and running during a variety of planned and unplanned downtime scenarios.


EMC VPLEX Metro:

EMC VPLEX Metro delivers availability and data mobility across sites. VPLEX Metro with AccessAnywhere enables active-active, block-level access to data between two sites within synchronous distances. Host application stability needs to be considered; depending on the application, it is recommended that round-trip latency for Metro be ≤ 5 ms. The combination of virtual storage with VPLEX Metro and virtual servers allows transparent movement of VMs and storage across longer distances and improves utilization across heterogeneous arrays and multiple sites.

EMC VPLEX Geo:

EMC VPLEX Geo delivers availability and data mobility across sites at asynchronous distances. VPLEX Geo with AccessAnywhere enables active-active, block-level access to data between two sites within asynchronous distances. Geo improves the cost efficiency of resources and power. It provides the same distributed-device flexibility as Metro but extends the distance up to 50 ms of network latency.
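To make the latency guidance concrete, here is a minimal Python sketch that classifies a measured inter-site round-trip time against the figures mentioned above (roughly ≤ 5 ms for Metro, up to 50 ms for Geo). The function and thresholds are illustrative only and are not part of any VPLEX tooling.

```python
# Illustrative only: classify an inter-site link against the latency
# guidance above (~<= 5 ms RTT for VPLEX Metro, up to ~50 ms for VPLEX Geo).
# The thresholds and function are not part of any EMC tool.

METRO_MAX_RTT_MS = 5.0   # synchronous distances
GEO_MAX_RTT_MS = 50.0    # asynchronous distances

def vplex_option_for(rtt_ms: float) -> str:
    """Return which VPLEX deployment the measured round-trip time fits."""
    if rtt_ms <= METRO_MAX_RTT_MS:
        return "VPLEX Metro (synchronous, active-active)"
    if rtt_ms <= GEO_MAX_RTT_MS:
        return "VPLEX Geo (asynchronous, active-active)"
    return "Beyond Geo guidance - consider an asynchronous replication solution"

if __name__ == "__main__":
    for rtt in (2.0, 12.0, 80.0):
        print(f"{rtt:5.1f} ms RTT -> {vplex_option_for(rtt)}")
```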

Tuesday, April 21, 2015

Software Defined Storage

SDS is a class of storage solutions that can be used with commodity storage media and compute hardware, where the storage media and compute hardware have no special intelligence embedded in them. All the intelligence for data management and access is provided by a software layer. The solution may provide some or all of the features of modern enterprise storage systems, such as scale-up and scale-out architecture, reliability and fault tolerance, high availability, unified storage management and provisioning, geographically distributed data center awareness and handling, disaster recovery, QoS, resource pooling, and integration with existing storage infrastructure. It may provide some or all data access methods: file, block, and object.
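As a rough illustration of the idea that all the intelligence lives in a software layer, the toy sketch below pools a few commodity block devices (modelled as plain byte arrays) and adds one data-management feature, simple mirroring, entirely in software. It is a deliberately simplified, hypothetical model, not any vendor's implementation.

```python
# Toy model of an SDS layer: the commodity devices have no intelligence
# (they are just byte arrays); pooling and mirroring happen in software.

class CommodityDevice:
    def __init__(self, blocks: int, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(blocks)]

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data.ljust(self.block_size, b"\x00")

    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

class SoftwareStorageLayer:
    """Provides pooling and mirroring on top of dumb devices."""
    def __init__(self, devices, mirrors: int = 2):
        self.devices = devices
        self.mirrors = min(mirrors, len(devices))

    def write(self, lba: int, data: bytes) -> None:
        # Place each mirror copy on a different device (simple round-robin).
        for i in range(self.mirrors):
            self.devices[(lba + i) % len(self.devices)].write(lba, data)

    def read(self, lba: int) -> bytes:
        return self.devices[lba % len(self.devices)].read(lba)

if __name__ == "__main__":
    pool = SoftwareStorageLayer([CommodityDevice(16) for _ in range(3)])
    pool.write(5, b"hello sds")
    print(pool.read(5).rstrip(b"\x00"))   # b'hello sds'
```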

A generic data flow in a SDS solution is explained in the figure below:



VMware defines the Software-defined Storage Architecture as follows:

SDS is a new approach to storage that enables a fundamentally more efficient operational model. We can accomplish this by:
  • Virtualizing the underlying hardware through the Virtual Data Plane
  • Automating storage operations across heterogeneous tiers through the Policy-Driven Control Plane

Virtual Data Plane


In the VMware SDS model, the data plane, responsible for storing data and applying data services (snapshots, replication, caching, and more), is virtualized by abstracting physical hardware resources and aggregating them into logical pools of capacity (virtual datastores) that can be flexibly consumed and managed. By making the virtual disk the fundamental unit of management for all storage operations in the virtual datastores, exact combinations of resources and data services can be configured and controlled independently for each VM.

The VMware implementation of the virtual data plane is delivered through:
  • Virtual SAN – for x86 hyperconverged storage
  • vSphere Virtual Volumes – for external storage (SAN/NAS)
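
The notion that the virtual disk becomes the unit of management can be pictured with a short Python sketch: capacity from physical devices is aggregated into a virtual datastore, and each VM's virtual disk carries its own combination of data services. The class and service names here are invented for illustration and do not reflect the Virtual SAN or Virtual Volumes object models.

```python
# Conceptual sketch: aggregate physical capacity into a virtual datastore
# and attach data services per virtual disk (per VM), not per array or LUN.
# Names are illustrative; this is not the VSAN/VVols object model.

from dataclasses import dataclass, field

@dataclass
class PhysicalDevice:
    name: str
    capacity_gb: int

@dataclass
class VirtualDisk:
    vm_name: str
    size_gb: int
    services: dict = field(default_factory=dict)  # e.g. {"replication": 2}

class VirtualDatastore:
    def __init__(self, devices):
        self.capacity_gb = sum(d.capacity_gb for d in devices)
        self.allocated_gb = 0
        self.disks = []

    def provision(self, vm_name, size_gb, **services) -> VirtualDisk:
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("virtual datastore out of capacity")
        disk = VirtualDisk(vm_name, size_gb, services)
        self.disks.append(disk)
        self.allocated_gb += size_gb
        return disk

if __name__ == "__main__":
    vds = VirtualDatastore([PhysicalDevice("ssd-1", 800),
                            PhysicalDevice("ssd-2", 800)])
    # Each VM gets exactly the services it needs, independently of the others.
    vds.provision("web-01", 100, snapshots=True, caching="write-back")
    vds.provision("db-01", 400, replication=2, snapshots=True)
    for d in vds.disks:
        print(d.vm_name, d.size_gb, d.services)
```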

Policy-Driven Control Plane

In the VMware SDS model, the control plane acts as the bridge between applications and infrastructure, providing standardized management and automation across different tiers of storage. Through SDS, storage classes of service become logical entities controlled entirely by software and interpreted through policies. Policy-driven automation simplifies provisioning at scale, enables dynamic control over individual service levels for each VM and ensures compliance throughout the lifecycle of the application. 

The policy-driven control plane is programmable via public APIs, which are used to control policies through scripting and cloud automation tools and in turn enable self-service consumption of storage for application tenants.

The VMware implementation of the policy-driven control plane is delivered through:

  • Storage Policy-Based Management – provides management over external storage (SAN/NAS) through vSphere Virtual Volumes and over x86 storage through Virtual SAN.
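
A rough way to picture policy-driven control is that each VM carries a storage policy and the control plane selects a datastore whose advertised capabilities satisfy it. The sketch below only mimics that matching idea; it is hypothetical and is not the Storage Policy-Based Management API.

```python
# Hypothetical sketch of policy-driven placement: match a VM's storage
# policy against datastore capabilities. Not the real SPBM API.

datastores = {
    "gold-vsan":  {"replication": 2, "flash": True,  "snapshots": True},
    "silver-nfs": {"replication": 1, "flash": False, "snapshots": True},
}

def pick_datastore(policy):
    """Return the first datastore whose capabilities meet every policy rule."""
    def satisfies(caps, key, want):
        have = caps.get(key)
        if isinstance(want, bool) or not isinstance(want, int):
            return have == want                      # exact-match rule
        return isinstance(have, int) and have >= want  # numeric rule: "at least"

    for name, caps in datastores.items():
        if all(satisfies(caps, k, v) for k, v in policy.items()):
            return name
    return None  # non-compliant: no datastore satisfies the policy

if __name__ == "__main__":
    print(pick_datastore({"replication": 2, "flash": True}))      # gold-vsan
    print(pick_datastore({"replication": 1, "snapshots": True}))  # gold-vsan (first match)
    print(pick_datastore({"replication": 3}))                     # None
```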

Nutanix, another player in the software-defined storage field, follows a similar approach, but the controller here is a separate VM running on top of the hypervisor, and the approach requires Nutanix hardware.


You can read more on software-defined storage in this ebook written by Scott Lowe.


Coho Data, which is based in Sunnyvale, California, uses an SDN-enabled data stream switch to connect the VMs to storage implemented as MicroArray nodes containing PCIe flash and hard drives.

Data Hypervisor software on the MicroArray virtualizes the storage hardware to create a high-performance, bare-metal object store that scales to support different application needs without static storage tiers.

Coho Data Architecture: http://www.cohodata.com/coho-scale-out-storage-architecture





Thursday, April 16, 2015

Advantages of Distributed vSwitch

The dvSwitch feature, which is available in the Enterprise Plus edition and above, includes all the capabilities of the standard vSwitch plus the following additional capabilities:

1. Bidirectional Virtual Machine Rate Limiting (Traffic Shaping): The standard vSwitch can perform traffic shaping on outbound traffic only; the dvSwitch can also perform traffic shaping on inbound traffic. This is useful when bandwidth limits need to be imposed on individual virtual machines.

2. Centralized vCenter Administration and Provisioning: dvSwitches are administered and provisioned from within vCenter, meaning there is a single configuration to manage rather than individual vSwitches on each host.

3. Cisco Nexus 1000V Virtual Switch: Third-party distributed switches such as the Cisco Nexus 1000V can be used, which introduces features like ACLs, port security, and more. Moreover, it gives network administrators an environment they already understand how to use.

4. Dynamic Adjustment of Load-Based NIC Teaming: The dvSwitch regularly checks the load on each NIC in a team. If one NIC is overloaded, a port-to-NIC reassignment occurs to attempt to balance the load, keeping the load across the teamed NICs balanced (see the first sketch after item 15).

5. Enhanced Security and monitoring for vMotion traffic: Virtual machine networking state, including counters and port statistics, is tracked as virtual machines are migrated with vMotion from host to host in a dvSwitch. This provides a more consistent view of the virtual machine’s network interfaces, regardless of the VM’s location or migration history, and simplifies the troubleshooting and network monitoring for virtual machines.

6. IEEE 802.1p tagging: IEEE 802.1p tagging is a standard used to provide quality of service (QoS) at the media access control (MAC) level. This capability can be used to guarantee I/O resources and is applied to outbound network traffic.

7. LLDP: Link Layer Discovery Protocol (LLDP) is a standards-based (IEEE 802.1AB), vendor-neutral discovery protocol. It is used to discover information about neighboring network devices.

8. NetFlow: NetFlow, available in vSphere 5 and above, allows monitoring of traffic flows. NetFlow data helps with capacity planning and ensures that I/O resources are properly used in the virtual infrastructure.

9. Network I/O Control: Network I/O Control (NIOC) allows the creation of resource pools containing network bandwidth. Administrators can create new resource pools to associate with port groups and specify 802.1p tags, allowing different virtual machines to be placed in different resource pools. This allows a subset of virtual machines to be given a higher or lower share of bandwidth than the others.

10. Port Mirroring: Port mirroring is when a network switch sends a copy of the network packets seen on a port (or an entire VLAN) to a network monitoring device connected to another switch port. This is also known as Switched Port Analyzer (SPAN) on Cisco switches. Port mirroring is used for monitoring and troubleshooting.

11. Private VLAN Support: A private VLAN (PVLAN) is a nested VLAN, that is, a VLAN located within a VLAN. It is used to provide isolation between computers on the same subnet. The outer VLAN is the primary VLAN, whereas the nested VLANs are secondary.

There are three types of PVLAN ports (see the second sketch after item 15):

  • Promiscuous: Can communicate with all ports including isolated and community ports.
  • Isolated: Can communicate only with promiscuous ports.
  • Community: Can communicate with ports in the same community (secondary PVLAN) and with promiscuous ports.
12. Management Network Rollback and Recovery: This feature protects the management network on the dvSwitch. It works by detecting configuration changes to the management network and rolling back changes that would leave hosts unable to reach vCenter.

13. Network Health Check: This feature helps vSphere administrators quickly identify configuration errors in the network. It monitors VLAN, MTU, and network adapter teaming settings at regular intervals. If these checks fail, a warning is displayed in the vSphere Web Client.

14. Link Aggregation Control Protocol (LACP): LACP is a standards-based link aggregation protocol used to group physical network adapters into a single logical link. The dynamic implementation included in the dvSwitch allows verification of correct setup and features automatic configuration, negotiation, and renegotiation after detected link failures.

15. Traffic filtering and marking: This feature is used for filtering and priority tagging of network traffic to virtual machines, VMkernel adapters, or physical adapters. It can be used to protect these connections from security attacks, to filter out unwanted traffic, or to establish QoS.
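
Two of the items above lend themselves to small conceptual sketches. The first models the load-based NIC teaming behaviour from item 4: measure per-NIC load and move a port off an overloaded NIC. The threshold and data structures are invented for the example; this is not the actual dvSwitch algorithm.

```python
# Sketch 1: load-based NIC teaming (item 4). If a NIC in the team exceeds
# a utilization threshold, move one port to the least-loaded NIC.
# Threshold and structures are arbitrary; this is not the dvSwitch algorithm.

THRESHOLD = 0.75  # fraction of link capacity

def rebalance(team):
    """team: {nic_name: {"capacity_mbps": int, "ports": {port: mbps}}}"""
    def load(nic):
        return sum(nic["ports"].values()) / nic["capacity_mbps"]

    for name, nic in team.items():
        if load(nic) > THRESHOLD and len(nic["ports"]) > 1:
            # Move the busiest port to the least-loaded NIC in the team.
            port, mbps = max(nic["ports"].items(), key=lambda kv: kv[1])
            target = min(team.values(), key=load)
            if target is not nic:
                del nic["ports"][port]
                target["ports"][port] = mbps
                print(f"moved {port} ({mbps} Mbps) from {name}")

team = {
    "vmnic0": {"capacity_mbps": 1000, "ports": {"vm-a": 600, "vm-b": 300}},
    "vmnic1": {"capacity_mbps": 1000, "ports": {"vm-c": 100}},
}
rebalance(team)   # moves vm-a (600 Mbps) from vmnic0 to vmnic1
```

The second captures the private VLAN port types from item 11 as a small reachability check whose rule table mirrors the three bullet points above.

```python
# Sketch 2: PVLAN reachability (item 11). Encodes the three port types:
# promiscuous talks to everyone; isolated only to promiscuous;
# community to promiscuous and to its own community.

def can_communicate(a, b):
    """a, b: ("promiscuous" | "isolated" | "community", community_id or None)"""
    type_a, comm_a = a
    type_b, comm_b = b
    if "promiscuous" in (type_a, type_b):
        return True
    if type_a == type_b == "community":
        return comm_a == comm_b
    return False  # isolated-to-isolated and isolated-to-community are blocked

print(can_communicate(("isolated", None), ("promiscuous", None)))   # True
print(can_communicate(("isolated", None), ("isolated", None)))      # False
print(can_communicate(("community", 10), ("community", 10)))        # True
print(can_communicate(("community", 10), ("community", 20)))        # False
```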

Wednesday, April 15, 2015

VMware vMotion Migration Process

vMotion is used to move a powered-on virtual machine from one host to another.

VMware vMotion can be used to:

  • Improve overall hardware utilization
  • Allow continued virtual machine operation while accommodating scheduled hardware downtime
  • Allow vSphere Distributed Resource Scheduler (DRS) to balance virtual machines between hosts
How vMotion Migration works:

vMotion migration is achieved by moving the virtual machine's memory state from one host to another across the vMotion network, a private, non-routed, gigabit-or-faster network connection between the two hosts involved in the migration.
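
The memory move is commonly implemented as an iterative pre-copy: copy all memory pages, then repeatedly re-copy the pages the still-running VM has dirtied, until the remaining set is small enough for a brief cut-over. The sketch below is a simplified model of that idea, not VMware's implementation; the page counts, dirtying rate, and cut-over threshold are invented.

```python
# Simplified model of iterative memory pre-copy during a live migration.
# Numbers (pages, dirty rate, cut-over threshold) are invented for
# illustration; this is not VMware's vMotion implementation.

import random

def live_migrate(total_pages=100_000, dirty_per_copied=0.1,
                 cutover_pages=500, max_rounds=20, seed=1):
    random.seed(seed)
    to_copy = set(range(total_pages))            # round 1: copy all memory
    for round_no in range(1, max_rounds + 1):
        copied = len(to_copy)
        # While copying, the running VM dirties pages; fewer pages to copy
        # means a shorter round and therefore fewer newly dirtied pages.
        n_dirty = int(copied * dirty_per_copied)
        to_copy = set(random.sample(range(total_pages), n_dirty))
        print(f"round {round_no}: copied {copied:>6} pages, {n_dirty:>5} re-dirtied")
        if len(to_copy) <= cutover_pages:
            break
    print(f"cut-over: briefly pause the VM, copy the final {len(to_copy)} pages, "
          f"resume it on the destination host")

if __name__ == "__main__":
    live_migrate()
```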

However, vMotion can be performed only if certain requirements are met (a conceptual pre-check sketch appears at the end of this post):

1. The hosts between which the vMotion is performed must have access to shared storage.
2. The participating hosts must have identically configured networks (the same port groups/network labels available).

In addition to that, requirements for VM are:

1. A virtual machine must not have a connection to a virtual device (such as a CD-ROM or floppy drive) with a local image mounted.
2. A virtual machine must not have a connection to an internal-only switch (a vSwitch with zero uplinks).
3. A virtual machine must not have CPU affinity configured.

Moreover, source and destination hosts must have:

1. Visibility to all storage (Fibre Channel, iSCSI, or NAS) used by the virtual machine.
2. At least a Gigabit Ethernet network:
  • Four concurrent vMotion migrations on a 1 Gbps network
  • Eight concurrent vMotion migrations on a 10 Gbps network
3. Access to the same physical network.
4. Compatible CPUs.
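
To tie the requirements together, here is a hypothetical pre-check that walks through the conditions listed above for a virtual machine, a source host, and a destination host. The field names are invented, and this is not how vCenter actually validates a migration.

```python
# Hypothetical vMotion pre-check covering the requirements listed above.
# Field names are invented; this is not vCenter's validation logic.

def vmotion_precheck(vm, src, dst):
    errors = []
    # Host requirements
    if not vm["datastores"] <= (src["datastores"] & dst["datastores"]):
        errors.append("both hosts must see all storage used by the VM")
    if not (src["networks"] >= vm["networks"] and dst["networks"] >= vm["networks"]):
        errors.append("both hosts must provide the VM's networks/port groups")
    if src["cpu_family"] != dst["cpu_family"]:
        errors.append("hosts must have compatible CPUs (or use EVC)")
    if min(src["vmotion_nic_gbps"], dst["vmotion_nic_gbps"]) < 1:
        errors.append("vMotion network must be at least Gigabit Ethernet")
    # VM requirements
    if vm["local_image_mounted"]:
        errors.append("disconnect virtual devices backed by local images")
    if vm["on_internal_only_vswitch"]:
        errors.append("VM must not be on a vSwitch with zero uplinks")
    if vm["cpu_affinity"]:
        errors.append("remove CPU affinity from the VM")
    return errors

vm  = {"datastores": {"ds-shared"}, "networks": {"prod"},
       "local_image_mounted": False, "on_internal_only_vswitch": False,
       "cpu_affinity": False}
src = {"datastores": {"ds-shared", "ds-local-1"}, "networks": {"prod", "mgmt"},
       "cpu_family": "intel-sandybridge", "vmotion_nic_gbps": 10}
dst = {"datastores": {"ds-shared"}, "networks": {"prod", "mgmt"},
       "cpu_family": "intel-sandybridge", "vmotion_nic_gbps": 10}

problems = vmotion_precheck(vm, src, dst)
print("ready to migrate" if not problems else problems)
```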