Companies across the globe are rapidly undergoing a digital transformation covering every aspect of their respective organizations, especially the enterprise. This macro process has pushed IT leaders to migrate applications and data to the cloud as well as to software-define their own on-premises infrastructures. This is being accomplished by incorporating the technology triad of software-defined compute (SDC), software-defined networking (SDN) and software-defined storage (SDS). All three of these technologies are based on two core principles: abstracting management and intelligence away from the underlying hardware, and replacing proprietary appliances with low-cost commodity hardware.
According to IBM, 90% of the data in the world today has been created in the last two years. That amounts to more than 2.5 quintillion bytes of data a day. With many data centers now handling petabytes of data on a daily basis, traditional storage implementations are straining to keep up with soaring capacity demands. To contend in the highly competitive global economy in which we operate, IT managers must create a data infrastructure that gives users the ability to access data, process it and bring it to value as quickly as possible.
Before virtualization, enterprises relied on direct-attached storage (DAS), which consisted of one or more RAID sets in which virtual volumes were provisioned. Every server had its own dedicated storage, which in turn resulted in isolated data silos throughout the enterprise. Server virtualization carried on the tradition of utilizing DAS until IT managers began to realize that the potential benefits of virtualization went beyond the initial attraction of cost savings. The only way to truly obtain the levels of redundancy, high availability and scalability that virtualization made possible was to provide shared network storage to the VMs.
The process of centralizing all storage, however, greatly magnified the single-point-of-failure risk: a storage outage now affected the entire environment rather than a single server. To avoid the pitfalls of a disruptive storage outage, vendors offered external arrays that were extremely fault tolerant and robust, and the SAN soon became the storage system of choice. It achieved enterprise levels of redundancy through RAID 50 storage volumes that spanned multiple arrays, all driven by intelligent, purpose-built hardware. These highly robust, specialized devices are also very expensive and require a great deal of training. Every SAN runs a custom proprietary operating system that has to be updated and patched regularly. That proprietary nature locks you in to a particular vendor, increasing opportunity costs and forcing complicated data migrations down the road. Because of its complex, expensive hardware, the SAN proved just as inflexible as the bare-metal servers that virtualization had sought to replace in the first place.
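To make the RAID 50 redundancy trade-off concrete, here is a minimal sketch in Python. The disk counts and sizes are our own illustrative assumptions, not figures from any particular SAN: a RAID 50 volume stripes (RAID 0) across several RAID 5 groups, and each group gives up one disk's worth of capacity to parity.

```python
# Rough RAID 50 capacity and fault-tolerance arithmetic (illustrative sketch).
# A RAID 50 volume is a RAID 0 stripe across multiple RAID 5 groups;
# each RAID 5 group dedicates one disk's worth of capacity to parity.

def raid50_usable_tb(groups: int, disks_per_group: int, disk_tb: float) -> float:
    """Usable capacity: every RAID 5 group loses one disk to parity."""
    if disks_per_group < 3:
        raise ValueError("RAID 5 needs at least 3 disks per group")
    return groups * (disks_per_group - 1) * disk_tb

# Hypothetical example: 4 groups of 8 x 4 TB disks.
groups, disks_per_group, disk_tb = 4, 8, 4.0

raw = groups * disks_per_group * disk_tb
usable = raid50_usable_tb(groups, disks_per_group, disk_tb)

print(f"Raw capacity:    {raw:.0f} TB")      # 128 TB
print(f"Usable capacity: {usable:.0f} TB")   # 112 TB
# Fault tolerance: the volume survives one disk failure per RAID 5 group
# (up to 4 simultaneous failures here), but two failures within the same
# group destroy the entire striped volume.
```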
SDS, by contrast, utilizes commodity hardware and can be implemented on any x86 server and any server-attached storage device. This offers IT managers incredible flexibility of choice and avoids vendor lock-in, along with numerous other benefits, one of the most important being unified management.
SDS allows admins to manage all of the data under their control through a single interface, including data that resides in multiple data centers across the world if need be. IT can therefore achieve data locality by placing data as close as possible to the users who need it. With data distributed this way, enterprises also require unprecedented levels of availability and load balancing to ensure that users are never disconnected from the information and applications they depend on. Like SDS, modern load balancing solutions are no longer about expensive, complicated, proprietary hardware. Instead, they are hosted in the cloud, offering scalability on a global scale. Cloud-based load balancing ensures that users are directed to the nearest data location and, in the event of a service disruption, are redirected to an alternative site in fluid fashion (see the sketches below). Whether it is networking, storage or load balancing, the days of proprietary hardware are coming to an end.
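As a rough illustration of the abstraction SDS provides, the following Python sketch pools heterogeneous server-attached disks from several commodity x86 nodes into one logical pool from which virtual volumes are provisioned. All class and function names here are hypothetical, not taken from any real SDS product:

```python
# Illustrative sketch of the SDS idea: aggregate commodity, server-attached
# disks into one logical pool and provision virtual volumes from it.
# All names here are hypothetical; real SDS platforms differ in detail.

from dataclasses import dataclass, field

@dataclass
class Disk:
    node: str      # which x86 server the disk is attached to
    size_gb: int

@dataclass
class StoragePool:
    disks: list[Disk] = field(default_factory=list)
    allocated_gb: int = 0

    def add_node_disks(self, node: str, sizes_gb: list[int]) -> None:
        """Absorb any server-attached disks, regardless of vendor or size."""
        self.disks.extend(Disk(node, s) for s in sizes_gb)

    @property
    def capacity_gb(self) -> int:
        return sum(d.size_gb for d in self.disks)

    def provision_volume(self, size_gb: int) -> str:
        """Carve a virtual volume out of the pool, wherever space exists."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted; add another commodity node")
        self.allocated_gb += size_gb
        return f"vol-{self.allocated_gb}-{size_gb}GB"

pool = StoragePool()
pool.add_node_disks("server-a", [4000, 4000])   # mixed commodity disks
pool.add_node_disks("server-b", [8000])
print(pool.capacity_gb, "GB pooled")            # 16000 GB pooled
print(pool.provision_volume(2000))              # a hardware-agnostic volume
```

The point is that the intelligence lives entirely in software: scaling out means adding another commodity node to the pool rather than buying a bigger proprietary array.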
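In the same spirit, here is a hedged sketch of the cloud load-balancing behavior described above: direct each user to the nearest healthy site, and fail over to the next-nearest site when a health check fails. The site list and distance function are invented for illustration; production balancers use latency probes and real health checks.

```python
# Illustrative sketch of geo-aware load balancing with failover:
# route each user to the nearest healthy site; if that site's health
# check fails, fall through to the next-nearest one.

import math

SITES = {
    "us-east": (40.7, -74.0),   # hypothetical site coordinates
    "eu-west": (51.5, -0.1),
    "ap-south": (19.1, 72.9),
}

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Crude planar distance; real balancers measure network latency."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route(user_loc: tuple[float, float], healthy: dict[str, bool]) -> str:
    """Nearest healthy site wins; unhealthy sites are skipped fluidly."""
    candidates = sorted(SITES, key=lambda s: distance(user_loc, SITES[s]))
    for site in candidates:
        if healthy.get(site, False):
            return site
    raise RuntimeError("no healthy site available")

# A London user normally lands on eu-west...
print(route((51.0, 0.0), {"us-east": True, "eu-west": True, "ap-south": True}))
# ...but is redirected to us-east when eu-west fails its health check.
print(route((51.0, 0.0), {"us-east": True, "eu-west": False, "ap-south": True}))
```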