
Thursday, May 14, 2015

EMC VPLEX: Extending VMware Functionality Across Data Centers

VPLEX is a storage virtualization appliance. It sits between storage arrays and hosts and virtualizes the presentation of the arrays, including non-EMC arrays; storage is then configured and presented to the host. VPLEX delivers data mobility and availability across arrays and sites, enabling mission-critical applications to remain up and running through a variety of planned and unplanned downtime scenarios. It permits painless, nondisruptive data movement, taking technologies such as VMware and other clusters that were built assuming a single storage instance and enabling them to function across arrays and across distance.


Key VPLEX use cases include:

  • Continuous operations – VPLEX enables active/active data centers with zero downtime
  • Migration/tech refresh – VPLEX provides accelerated and nondisruptive migrations and technology refresh
  • Oracle RAC functionality – VPLEX extends Oracle Real Application Clusters (RAC) and other clusters over distance
  • VMware functionality – VPLEX extends VMware functionality across distance while enhancing availability
  • MetroPoint Topology – VPLEX with EMC RecoverPoint delivers a 3-site continuous protection and operational recovery solution
The EMC VPLEX family includes three models:



EMC VPLEX Local:

EMC VPLEX Local delivers availability and data mobility across arrays. VPLEX is a continuous availability and data mobility platform that enables mission-critical applications to remain up and running during a variety of planned and unplanned downtime scenarios.


EMC VPLEX Metro:

EMC VPLEX Metro delivers availability and data mobility across sites. VPLEX Metro with AccessAnywhere enables active-active, block-level access to data between two sites within synchronous distances. Host application stability must be considered: depending on the application, it is recommended that round-trip latency for Metro be <= 5 ms. The combination of virtual storage with VPLEX Metro and virtual servers allows transparent movement of VMs and storage across longer distances and improves utilization across heterogeneous arrays and multiple sites.

EMC VPLEX Geo:

EMC VPLEX Geo delivers availability and data mobility across sites. VPLEX Geo with AccessAnywhere enables active-active, block-level access to data between two sites within asynchronous distances. Geo improves the cost efficiency of resources and power. It provides the same distributed device flexibility as Metro but extends the distance up to 50 ms of network latency.
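To make the distinction between the two distance-stretched models concrete, here is a minimal sketch that maps a measured inter-site round-trip latency onto the guideline figures above (roughly <= 5 ms for Metro's synchronous distances, up to 50 ms for Geo's asynchronous distances). The function name and exact cutoffs are illustrative, not an EMC-defined API, and real sizing would depend on the application.

```python
# Hypothetical helper: suggest a VPLEX model from inter-site RTT in ms,
# using the guideline latency figures from the text above.
# Thresholds are illustrative only; real limits are application-dependent.

def suggest_vplex_model(rtt_ms: float) -> str:
    """Return a suggested VPLEX model for a given inter-site RTT in ms."""
    if rtt_ms <= 5:
        return "Metro"   # synchronous distances, active-active
    elif rtt_ms <= 50:
        return "Geo"     # asynchronous distances
    else:
        return "out of supported range"

print(suggest_vplex_model(3))    # Metro
print(suggest_vplex_model(30))   # Geo
```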

Monday, May 11, 2015

Overlay Networks

The idea of an "overlay network" is that some form of encapsulation is used to decouple a network service from the underlying infrastructure. Per-service state is kept at the edge of the network, and the underlying physical infrastructure of the core has little or no visibility into the actual services offered. This layering approach enables the core network to scale and evolve independently of the offered services.

The best example of this is the Internet itself: the Internet is an overlay network on top of a solid optical infrastructure. The underlying infrastructure is called the "underlay network." The majority of paths in the Internet are now formed over a DWDM infrastructure that creates a virtual topology between routers and uses several forms of switching to interconnect them. Likewise, MPLS L2/L3 VPNs are essentially an overlay network of services on top of an MPLS transport network. A label edge router (LER) encapsulates every packet arriving from an enterprise site with two labels: a VPN label that identifies the enterprise context, and a transport label that determines how the packet is forwarded through the core MPLS network. In this sense, it is a double overlay.
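The two-label "double overlay" above can be sketched in a few lines. This is a toy model, not a packet library: the `Packet` class and label values are invented for illustration. The point is that the LER pushes an inner VPN label and an outer transport label, and the core forwards on the outer label alone.

```python
# Toy model of the MPLS double overlay described above: an LER pushes
# a VPN label (inner) and a transport label (outer); core LSRs only
# ever look at the outer label. All classes and values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: str
    labels: list = field(default_factory=list)  # top of stack = end of list

def ler_encapsulate(pkt: Packet, vpn_label: int, transport_label: int) -> Packet:
    """Push the inner VPN label, then the outer transport label."""
    pkt.labels.append(vpn_label)        # identifies the enterprise/VPN context
    pkt.labels.append(transport_label)  # tells the core how to forward
    return pkt

def core_forward(pkt: Packet) -> int:
    """A core router forwards on the outer transport label only."""
    return pkt.labels[-1]

pkt = ler_encapsulate(Packet("customer data"), vpn_label=100, transport_label=2001)
print(core_forward(pkt))  # 2001 -- the core never inspects the VPN label
```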

One of the main advantages of overlays is that they provide the ability to rapidly and incrementally deploy new functions through edge-centric innovations. New solutions or applications can be organically added to the existing underlay infrastructure by adding intelligence to the edge nodes of the network.  This is why overlays often emerge as solutions to address the requirements of specific applications over an existing basic infrastructure, where either the required functions are missing in the underlay network, or the cost of total infrastructure upgrade is prohibitive from an economic standpoint.

Saturday, May 9, 2015

Software Defined Storage Solution from Coho Data

For a long time, networking was defined by distributed protocols such as BGP, OSPF, MPLS, and STP. Each network device in the topology would run these protocols, and collectively they made the Internet work, accomplishing the miraculous job of connecting the plethora of devices that make up the Internet. However, the effort required to configure, troubleshoot, and maintain these devices was enormous, as was the cost of upgrading them every few years. Collectively, these costs pushed the networking industry to come up with a solution to these problems.

SDN was introduced a few years back. The concept of separating the brain from the device was a radical idea that spread very fast across the networking industry. SDN introduced centralized control: the whole network can now be controlled from a single point. This centralized controller evaluates the entire topology and pushes instructions down to individual devices, making sure each device works as efficiently as possible. The SDN controller is also able to single-handedly track resource utilization and respond to failures, minimizing downtime.
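A minimal sketch of that centralized model: the controller holds the whole topology, computes a path with its global view, and derives per-switch forwarding rules to push down. The topology, switch names, and rule format are all invented for illustration; a real controller speaks a protocol like OpenFlow to the devices.

```python
# Toy SDN controller logic: global topology -> path -> per-device rules.
# Switch names and the rule format are hypothetical.

from collections import deque

# Hypothetical topology: switch -> list of neighbors
topology = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
            "s3": ["s1", "s4"], "s4": ["s2", "s3"]}

def shortest_path(topo, src, dst):
    """BFS shortest path -- possible because the controller sees the whole graph."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in topo[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                q.append(nbr)
    return None

def push_rules(path):
    """Turn a path into per-switch next-hop rules, as a controller would push down."""
    return {a: b for a, b in zip(path, path[1:])}

path = shortest_path(topology, "s1", "s4")
print(path)              # ['s1', 's2', 's4']
print(push_rules(path))  # {'s1': 's2', 's2': 's4'}
```

The contrast with the distributed protocols above is that no switch runs BGP or OSPF here; the path computation happens once, centrally.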

SDN simplified networking to a great extent. Storage, however, which is complementary to networking, was still implemented the same old way at that time. Coho Data, based in Sunnyvale, California, took on the effort of redefining storage using the concepts of software-defined networking. It has introduced a control-centric architecture to storage.

Here's a graphical representation of the storage controller:


SDSC (Software Defined Storage Controller) is the central decision-making engine that runs within the Coho cluster. It evaluates the system and makes decisions regarding two specific points of control: data placement and connectivity. At any point, the SDSC can respond to change either by moving client connections or by migrating data. These two knobs turn out to be remarkably powerful tools for making the system perform well.
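The first of those two knobs, moving client connections, can be sketched as a simple rebalancing loop: when a node's load crosses a threshold, the controller reconnects a client to the coolest node (and could likewise migrate that client's data, the second knob). Everything here is hypothetical: the class, the load accounting, and the threshold are illustrative stand-ins for Coho's real heuristics.

```python
# Hypothetical sketch of the SDSC's "move client connections" knob.
# Node names, per-connection load cost, and threshold are invented.

class StorageController:
    def __init__(self, nodes):
        self.load = {n: 0.0 for n in nodes}   # fractional load per node
        self.clients = {}                     # client -> node it connects to

    def connect(self, client, node):
        self.clients[client] = node
        self.load[node] += 0.2                # assume each connection adds 0.2 load

    def rebalance(self, threshold=0.5):
        """If a node exceeds the threshold, move one connection to the coolest node."""
        actions = []
        for node, load in list(self.load.items()):
            if load > threshold:
                target = min(self.load, key=self.load.get)
                victim = next(c for c, n in self.clients.items() if n == node)
                self.clients[victim] = target      # knob 1: move the connection
                self.load[node] -= 0.2
                self.load[target] += 0.2
                actions.append((victim, node, target))
                # knob 2 (not shown): migrate the victim's data to `target`
        return actions

ctrl = StorageController(["micro1", "micro2"])
for c in ["c1", "c2", "c3"]:
    ctrl.connect(c, "micro1")                 # micro1 becomes hot
print(ctrl.rebalance())                       # one connection moves to micro2
```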

The strong aspect of this solution is its modular nature. Not only are the storage devices completely new and innovative, but innovation has also gone into the switching fabric, facilitating the migration of data. The solution ensures that performance does not degrade as storage capacity scales.

Tiering in Coho Architecture:



Coho’s microarrays are directly responsible for implementing automatic tiering of the data stored on them. Tiering happens in response to workload characteristics, but a simple characterization is that as the PCIe flash device fills up, the coldest data is written out to the lower tier. This is illustrated in the diagram below.

All new data writes go to NVMe flash. Effectively, this top tier of flash has the ability to act as an enormous write buffer, with the potential to absorb burst writes that are literally terabytes in size. Data in this top tier is stored sparsely at a variable block size.

As data in the top layer of flash ages and that layer fills, Coho’s operating environment (called Coast) actively migrates cold data to the lower tiers within the microarray. The policy for this demotion is device-specific: on our hybrid (HDD-backed) DataStore nodes, data is consolidated into linear 512K regions and written out as large chunks.  On repeated access, or when analysis tells us that access is predictive of future re-access, disk-based data is “promoted,” or copied back into flash so that additional reads to the chunk are served faster.
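That demote/promote cycle can be modeled compactly: new writes land in a bounded flash tier; when flash fills, the coldest blocks spill to the HDD tier; a block re-read from disk is copied back into flash. The sketch below uses plain LRU recency as the "coldness" signal, which stands in for Coho's workload analysis; capacities and names are invented for illustration.

```python
# Simplified model of the tiering described above: NVMe flash as a
# bounded hot tier, HDD as the cold tier, demotion on pressure and
# promotion on re-access. LRU stands in for Coho's real heuristics.

from collections import OrderedDict

class TieredStore:
    def __init__(self, flash_capacity):
        self.flash = OrderedDict()   # LRU order: oldest (coldest) first
        self.hdd = {}
        self.cap = flash_capacity

    def write(self, block, data):
        self.flash[block] = data
        self.flash.move_to_end(block)          # newest write = hottest
        while len(self.flash) > self.cap:
            cold, cold_data = self.flash.popitem(last=False)
            self.hdd[cold] = cold_data         # demote coldest block to disk

    def read(self, block):
        if block in self.flash:
            self.flash.move_to_end(block)      # flash hit refreshes recency
            return self.flash[block]
        data = self.hdd.pop(block)             # miss: serve from disk...
        self.write(block, data)                # ...and promote back to flash
        return data

store = TieredStore(flash_capacity=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)
print(list(store.flash))   # ['b', 'c'] -- 'a' was demoted to HDD
print(store.read("a"))     # 1, and 'a' is promoted back to flash
print("b" in store.hdd)    # True -- 'b' was demoted to make room
```

Note how promotion reuses the write path, so a promoted block can itself push the new coldest block down, mirroring the pressure-driven behavior the text describes.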

Source: http://www.cohodata.com/blog/2015/03/18/software-defined-storage/