By Eric Burgener
November 26, 2012 07:45 AM EST
Legacy storage architectures do not perform very efficiently in virtual computing environments. The highly random, write-intensive I/O patterns generated by virtual hosts drive storage costs up as enterprises either add spindles or look to newer storage technologies like solid state disk (SSD) to address the IOPS shortfall.
SSD costs are coming down, but they are still significantly higher than spinning disk costs. When enterprises do consider SSD, how it is used and where it is placed in the virtual infrastructure can make a big difference in how much enterprises have to spend to meet their performance requirements. It can also impose certain operational limitations that may or may not be issues in specific environments.
Some of the key considerations that need to be taken into account are SSD placement (in the host or in the SAN), high availability and failover requirements, caching vs logging architectures, and the value of preserving existing investments vs "rip and replace" purchases of storage hardware designed specifically for virtual environments.
There are two basic locations to place SSD, each of which offers its own pros and cons. Host-based SSD will generally offer the lowest storage latencies, particularly if the SSD is located on PCIe cards. In non-clustered environments where it is clear that IOPS and storage latencies are the key performance problems, these types of devices can be very valuable. In most cases, they will remove storage as the performance problem.
But don't necessarily expect these devices to deliver their rated IOPS directly to your applications in your environment. Once storage is removed as the bottleneck, system performance will be determined by whatever the next bottleneck in the system is. That could be CPU, memory, the operating system, or any number of other potential issues. This phenomenon is referred to as Amdahl's Law.
What you probably care about are application IOPS. Test the devices you're considering in your environment before purchase, so you know exactly what level of performance gain they will provide. Then you can make a more informed decision about whether or not you can cost-justify them for your workloads. Paying for performance you can't use is like buying a Ferrari for use on America's interstate system - you may never get out of second gear.
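To see why, here is a minimal sketch of Amdahl's Law using purely hypothetical numbers: if storage accounts for 60% of application time and a new SSD makes that portion 10x faster, the application as a whole only gets a little over 2x faster.

```python
# A minimal sketch of Amdahl's Law with hypothetical numbers: even a large
# storage speedup yields a modest application-level gain once another
# component (CPU, memory, OS) becomes the bottleneck.

def amdahl_speedup(storage_fraction, storage_speedup):
    """Overall speedup when only the storage portion of the workload is accelerated."""
    return 1.0 / ((1.0 - storage_fraction) + storage_fraction / storage_speedup)

# Assumption: the application spends 60% of its time waiting on storage,
# and the new SSD makes that portion 10x faster.
overall = amdahl_speedup(storage_fraction=0.60, storage_speedup=10.0)
print(f"Application-level speedup: {overall:.2f}x")   # ~2.2x, not 10x
```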
Raw SSD technology generally can provide blazingly fast read performance. Write performance, however, varies depending on whether you are writing randomly or sequentially. The raw technical specs on many SSD devices indicate that sequential write performance may be half that of read performance, and random write performance may be half again as slow. Write latencies may also not be deterministic because of how SSD devices manage the space they are writing to. Many SSD vendors are wrapping software and other infrastructure around their SSD devices to address some of these issues. If you're looking at SSD, look to the software it's packaged with to make sure the SSD capacity you're buying can be used most efficiently.
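As a rough illustration, the relationship described above plays out like this (the figures below are hypothetical and not taken from any specific device):

```python
# Hypothetical rated figures illustrating the spec relationship described
# above: sequential writes at roughly half of read performance, random
# writes roughly half again as slow. Not taken from any specific device.
rated_read_iops = 100_000
sequential_write_iops = rated_read_iops // 2        # ~half of read performance
random_write_iops = sequential_write_iops // 2      # half again as slow

print(f"Read: {rated_read_iops}, sequential write: {sequential_write_iops}, "
      f"random write: {random_write_iops}")
```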
Host-based SSD introduces failover limitations. If you have implemented a product like VMware HA in your environment to automatically recover failed nodes, any data sitting in a host-based SSD device that has not been written through to shared storage will not be available on recovery. This can lead to data loss on recovery - something that may or may not be an issue in your environment. Even though SSD is non-volatile storage, if the node it is sitting in is down, you can't get to it. You can get to it after that node is recovered, but the issue here is whether or not you can automatically fail over and have access to it.
Because of this issue, most host-based SSD products implement what is called a "write-through" cache, which means they don't acknowledge writes at SSD latencies; they actually write them through to shared disk and then send the write acknowledgement back from there. Anything on shared disk can potentially be recovered by any other node in the cluster, ensuring that no committed data is unavailable on failover. But this means you won't get any write performance improvement from SSD, just better read performance.
What does your workload look like in terms of read vs write percentages? Most virtual environments are very write intensive, much more so than they ever were in physical environments, and virtual desktop infrastructure (VDI) environments can be as much as 90% writes when operating in steady state mode. If write performance is your problem, host-based SSD with a write-through cache may not help very much in the big picture.
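A quick back-of-the-envelope sketch (hypothetical latencies, not benchmark data) shows how little headroom a read-only cache has against a 90% write workload:

```python
# A hedged sketch with hypothetical latencies: estimate the average I/O
# service time before and after adding a read-only (write-through) cache
# when the workload is 90% writes, as in a steady-state VDI environment.

def avg_io_time(write_fraction, read_ms, write_ms):
    """Weighted average I/O service time for a given read/write mix."""
    return write_fraction * write_ms + (1.0 - write_fraction) * read_ms

# Assumed (illustrative) latencies: 5 ms on spinning disk, 0.5 ms for a cached read.
before = avg_io_time(write_fraction=0.90, read_ms=5.0, write_ms=5.0)
after = avg_io_time(write_fraction=0.90, read_ms=0.5, write_ms=5.0)  # writes still go to disk
print(f"Average I/O time: {before:.2f} ms -> {after:.2f} ms "
      f"({before / after:.2f}x improvement)")   # only ~1.1x despite 10x faster reads
```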
SAN-based SSD, on the other hand, can support failover without data loss, and if implemented with a write-back cache can provide write performance speedups as well. But many implementations available for use with SAN arrays are really only designed to speed up reads. Check carefully as you consider SSD to understand how it is implemented, and how well that maps to the actual performance requirements in your environment.
Caching vs Logging Architectures
Most SSD, wherever it is implemented, is used as a cache. Sizing guidelines for caches start with the cache as a percentage of the back-end storage it is front-ending. Generally the cache needs to be somewhere between 3% and 6% of the back-end storage, so larger data store capacities require larger caches. For example, 20TB of back-end data might require 1TB of SSD cache (5%).
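Applying the guideline is simple arithmetic; a small sketch using the figures above:

```python
# Applying the 3%-6% cache sizing guideline to a given back-end capacity;
# the 20 TB example at 5% works out to 1 TB of SSD cache.

def cache_size_tb(backend_tb, cache_pct):
    """SSD cache capacity as a percentage of the back-end data it fronts."""
    return backend_tb * cache_pct / 100.0

backend_tb = 20.0
for pct in (3, 5, 6):
    print(f"{backend_tb:.0f} TB back end at {pct}% -> {cache_size_tb(backend_tb, pct):.1f} TB cache")
```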
Caches generally speed up only reads, but if you are working with a write-back cache, the cache will have to be split between SSD capacity used to speed up reads and SSD capacity used to speed up writes. Everything else being equal in terms of performance requirements, write-back caches will have to be larger than write-through caches, but they will provide more balanced performance gains (across both reads and writes).
Logging architectures, by definition, speed up writes, making them a good fit for write-intensive workloads like those found in virtual computing environments. Logs provide write performance gains by taking the very random workload, removing the randomness from it by writing it sequentially to a log, acknowledging the writes from there, and then asynchronously de-staging them to a shared storage pool. This means that the same SSD device used as a log rather than as a cache will be faster, assuming some randomness in the workload. The write performance the guest VMs see is the performance of the log device operating in sequential write mode almost all the time, which can result in write performance improvements of up to 10x (relative to that same device operating in the random mode it would normally be operating in). And a log provides write performance improvements for all writes from all VMs all the time. (What's also interesting is that if you are getting 10x the IOPS from your current spinning disk, given Amdahl's Law, you may not even need to purchase SSD to remove storage as the performance bottleneck.)
Logs are very small (10GB or so) and are dedicated to a host, while the shared storage pool is accessible to all nodes in a cluster and primarily handles read requests. In a 20 node cluster with 20TB of shared data, you would need 200GB for the logs (10GB x 20 hosts) vs the 1TB you would need if SSD was used as a cache. Logs are much more efficient than caches for write performance improvements, resulting in lower costs.
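Using the figures above, the capacity difference is easy to quantify (a sketch assuming the 10GB-per-host log and the 5% cache guideline):

```python
# Comparing SSD capacity for a logging architecture (a small per-host log)
# against a 5% cache for the same cluster, using the figures above.

nodes = 20
log_gb_per_host = 10          # ~10 GB log dedicated to each host
backend_tb = 20.0             # shared data accessible to all nodes
cache_pct = 5                 # mid-point of the 3%-6% cache guideline

log_total_gb = nodes * log_gb_per_host
cache_total_gb = backend_tb * 1000 * cache_pct / 100

print(f"Logs:  {log_total_gb} GB of SSD ({log_gb_per_host} GB x {nodes} hosts)")
print(f"Cache: {cache_total_gb:.0f} GB of SSD ({cache_pct}% of {backend_tb:.0f} TB)")
print(f"Logs use {100 * (1 - log_total_gb / cache_total_gb):.0f}% less SSD capacity")
```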
If logs are located on SAN-based SSD, you not only get the write performance improvements, but this design fully supports node failover without data loss, a very nice differentiator from write-through cache implementations.
But what about read performance? This is where caches excel, and a write log doesn't seem to address that. That's true, which is why it's important to combine a logging architecture with storage tiering. Any SSD capacity not used by the logs can be configured into a fast tier 0, which will provide the read performance improvements for any data residing in that tier. The bottom line here is that you can get better overall storage performance improvements from a "log + tiering" design than you can from a cache design while using 50% - 90% less high-performance device (in this case, SSD) capacity. In our example above, if you buy a 256GB SAN-based SSD device and use it in a 20-node cluster, you'll get SSD sequential write performance for every write all the time, and have 56GB left over to put into a tier 0. Compare that to buying 1TB+ of cache capacity at SSD prices.
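For the 256GB device example, the split works out as follows (a sketch assuming the 10GB-per-host logs described earlier):

```python
# Splitting one 256 GB SAN-based SSD device between per-host logs and a
# read tier 0, per the example above (assumed 10 GB log per host).

device_gb = 256
nodes = 20
log_gb_per_host = 10

tier0_gb = device_gb - nodes * log_gb_per_host
print(f"Logs consume {nodes * log_gb_per_host} GB; {tier0_gb} GB remains for tier 0 reads")
```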
With single image management technology like linked clones or other similar implementations, you can lock your VM templates into this tier, and very efficiently gain read performance improvements against the shared blocks in those templates for all child VMs all the time. Single image management technology can help make the use of SSD capacity more efficient in either a cache or a log architecture, so don't overlook it as long as it is implemented in a way that does not impinge upon your storage performance.
Purpose-Built Storage Hardware
There are some interesting new array designs that leverage SSD, sometimes in combination with some of the other technologies mentioned above (log architectures, storage tiering, single image management, spinning disk). Because they are designed specifically with the storage performance issues of virtual environments in mind, there is no doubt that these arrays can outperform legacy arrays. But for most enterprises, that may not be the operative question.
It's rare that an enterprise doesn't already have a sizable investment in storage. Many of these existing arrays support SSD, which can be deployed in a SAN-based cache or fast tier. It's much easier, and potentially much less disruptive and less expensive, if existing storage investments can be leveraged to address the storage performance issues in virtual environments. It's also less risky, since most of the hot new "virtual computing-aware" arrays and appliances are built by startups, not proven vendors. If there are pure software-based options that support heterogeneous storage hardware and can address the storage issues common in virtual computing environments, allowing you to take advantage of SSD capacity that fits into your current arrays, this could be a simpler, more cost-effective, and less risky option than buying from a storage startup. But only, of course, if it adequately resolves your performance problem.
If there's one point you should take away from this article, it's that just blindly throwing SSD at a storage performance problem in virtual computing environments is not going to be a very efficient or cost-effective way to address your particular issues. Consider how much more performance you need, whether you need it on reads, writes, or both, whether you need to failover without data loss, and whether preserving existing storage hardware investments is important to you. SSD is a great technology, but your best value from it will come when you deploy it most efficiently.