By Chris Evans
November 14, 2012 08:30 AM EST
In the first wave of solid-state storage arrays, we saw commodity-style SSDs (solid state drives) being added to traditional storage arrays. This approach provided an incremental performance benefit over spinning hard drives; however, the back-end technology in these arrays was developed up to 20 years ago and was focused purely on extracting performance from the slowest part of the infrastructure – the hard drive. Of course SSDs are an order of magnitude faster than HDDs, so you can pretty much guarantee that SSDs in traditional arrays result in underused resources at a premium price.
Wave 2 of SSD arrays saw the development of custom hardware, though most vendors continued to use commodity SSDs. These designs fully exploited solid-state capabilities, with architectures built to deliver the full performance of the drives. They removed unnecessary or bottlenecking features (such as cache) and provided much more back-end scalability. Within the wave 2 group, Nimbus Data have chosen a hybrid approach and developed their own solid state drives. This gives them more control over the management functionality of the SSDs and consequently more control over performance and availability.
Notably, some startup vendors have taken a slightly different approach. Violin Memory have chosen from day one to use custom NAND memory cards called VIMMs (Violin Intelligent Memory Modules). This technology removes the need for NAND to emulate a hard drive and for traffic between the processor/memory and the persistent memory (i.e. the NAND) to cross a hard drive interface such as SAS using the SCSI protocol. Whilst it could be debated that the savings from removing the disk drive protocol are marginal, the use of NAND that doesn't emulate hard drives is about much more than that. SSD controllers implement many features to extend the life of the drive itself, including wear levelling and garbage collection – features that can have a direct impact on device performance. Custom NAND components can, for instance, allow wear levelling to be performed across the entire array, or individual cell failures to be managed more efficiently.
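As a rough illustration of the idea (a sketch of the general technique, not any vendor's actual firmware), array-wide wear levelling amounts to a controller that tracks erase counts across every module and always hands out the least-worn free block anywhere in the array, rather than levelling only within one drive:

```python
# Illustrative sketch of array-wide wear levelling. All names here are
# hypothetical; real controllers add garbage collection, hot/cold data
# separation and bad-block management on top of this basic idea.

class EraseBlock:
    def __init__(self, module_id, block_id):
        self.module_id = module_id
        self.block_id = block_id
        self.erase_count = 0


class ArrayWearLeveller:
    """Allocates the least-worn free erase block across ALL modules,
    keeping wear even array-wide instead of per-drive."""

    def __init__(self, modules, blocks_per_module):
        self.free_blocks = [EraseBlock(m, b)
                            for m in range(modules)
                            for b in range(blocks_per_module)]

    def allocate(self):
        # Pick the block with the lowest erase count anywhere in the array
        block = min(self.free_blocks, key=lambda blk: blk.erase_count)
        self.free_blocks.remove(block)
        return block

    def release(self, block):
        # The block is erased before reuse; record the wear and free it
        block.erase_count += 1
        self.free_blocks.append(block)
```

Because allocation always chooses the globally least-worn block, repeated allocate/release cycles spread erases evenly over every module, which is exactly what a per-SSD controller cannot do on its own.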
Building bespoke NAND components isn't cheap. Violin have chosen to invest in technology that they believe gives their hardware an advantage – no dependency on SSD manufacturers. The ability to build advanced functionality into their persistent memory means availability can be increased: components don't need to be swapped out as frequently, since failing components can still be partially used.
At this point we should give a call out to Texas Memory Systems, recently acquired by IBM. They have also used custom NAND components; their RamSan-820 uses 500GB flash modules built with eMLC memory.
I believe the third wave will see many more vendors moving away from the SSD form factor and building bespoke NAND components as Violin have done. Currently Violin and TMS have a head start: they've done the hard work and built the foundation of their platforms. Their future innovations will probably revolve around bigger and faster devices and replacing NAND with whatever the next generation of persistent memory turns out to be.
Last week, HDS announced their approach to all-flash devices: a new custom-built Flash Module Drive (FMD) that can be added to the VSP platform. This provides 1.6TB or 3.2TB (the higher capacity due March 2013) of storage per module, which can then be stacked into an 8U shelf of 48 FMDs – up to a total of 600TB of flash in a single VSP. Each FMD is like a traditional SSD in height and width, but is much deeper. It appears to the VSP as a traditional SSD.
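Some back-of-envelope arithmetic puts these figures in context (assuming the larger 3.2TB module; the shelf count is my inference, not an HDS-stated configuration):

```python
# Sanity-checking the announced FMD capacities (illustrative arithmetic only)

fmd_capacity_tb = 3.2            # larger FMD module, due March 2013
fmds_per_shelf = 48              # modules per 8U shelf

shelf_tb = fmd_capacity_tb * fmds_per_shelf   # capacity of one full shelf
shelves_for_600tb = 600 / shelf_tb            # shelves implied by 600TB/VSP

print(f"One shelf: {shelf_tb:.1f}TB")         # 153.6TB per 8U shelf
print(f"Shelves for 600TB: {shelves_for_600tb:.2f}")
```

A single fully populated shelf works out to 153.6TB, so the quoted 600TB per VSP implies roughly four such shelves of FMDs.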
The FMD chassis is separate from the existing disk chassis deployed in the VSP, so FMDs can't be deployed alongside hard drives. Although this seems like a negative, the flash modules are served by higher-specification back-end directors (to fully utilise the flash performance), which, in addition to their physical size, explains why they aren't mixed together.
Creating a discrete flash module provides Hitachi with a number of benefits compared to individual MLC SSDs, including:
- Higher performance on mixed workloads
- Inbuilt compression using the onboard custom chips
- Improved ECC error correction using onboard code and hardware
- Lower power consumption per TB from higher memory density
- > 1,000,000 IOPS in a single array
The new FMDs can also be used with HDT (dynamic tiering) to cater for mixed sub-LUN workloads and of course Hitachi’s upgraded microcode is already optimised to work with flash devices.
The Architect’s View
Solid-state storage continues to evolve. NAND flash is fast but has its foibles, which can be overcome with dedicated NAND modules. Today, only four vendors have moved to dedicated solid-state components, while the others continue to use commodity SSDs. At scale, performance and availability, viewed in terms of consistency, become much more important. Many vendors today are producing high-performance devices, but how well will they scale going forward, and how resilient will they be? As the market matures, these differences will be the dividing line between survival and failure.
Disclaimer: I recently attended the Hitachi Bloggers’ and Influencers’ Days 2012. My flights and accommodation were covered by Hitachi during the trip, however there is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time when attending the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.
- Optimising Storage Architectures for SSD
- Hitachi Accelerated Flash Storage Ignites Flash From Enlightenment to Productivity
Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.