By Peter Velikin
February 16, 2013 10:00 AM EST
Hey there; the IT Dog back with some color commentary on our VDI experience. When I first heard the name, I thought VDI was the latest model Volkswagen diesel, but as the guy in the suit explained in the last blog, VDI is Virtual Desktop Infrastructure. Now that you know what VDI stands for, I am sure you know all there is to know about it, right? Well, I wish it were that easy for me.
Where Do I Begin?
I guess I’ll start from the beginning. At my company, my IT guys were at their wits’ end trying to manage the 1,000 or so desktops, spread out over four different locations, with various flavors of Windows and who knows how many different versions of applications and other personal stuff. I kept getting the ‘we need more staff to manage this mess’ line from them. A lot of these problems had to do with acquisitions and an expanding business, which is all good.
Cue the VDI sales guy with the big Mercedes: “I’ve got just the thing to solve your problems: VDI.” Let’s see… take control of the company computing assets all from the back room. What a great idea! No more tech support phone calls, no more sending staff out to offices to get chewed out because some website they were on loaded some garbage onto their machine and now it runs slower than a weenie dog in a foot of snow. All this ‘problem solving’ was going to cost a bundle, however, and I was the guy who had to sell management on it. With my tail on the line, we bit the bone and put the system in.
I Wish My Problems Were “Virtual”
We were at the bleeding edge of the VDI wave, so we expected some start-up and implementation issues. Our vendor helped us specify and design a system to meet the performance requirements, within the budget we had sold management on. We installed racks of new servers, more spinning disks than at the Frisbee Dog World Championships, power, cooling, wires, wires and more wires. We had it all going on. I spent a month going around selling all the end users on this, saying their lives were going to be better: no more sitting on hold waiting for support, the latest and greatest applications, easy access anytime, anywhere, with the potential to support any device in the future, yadda, yadda, yadda. We went live a few months later and began observing performance.
“Houston, We Have A Problem”
It did not take long to find out about some of the issues facing VDI installations today. The first problem we had to deal with was simply getting all the users up and running every morning. I learned about the dreaded “boot storm.” Of course, I had no idea what a boot storm was until we started this project. But there it was: we had a boot storm problem. When lots of users try to start up their machines at the same time, it puts a tremendous load on the VDI hardware and network, and all users suffer from poor service and slow startups. I have to admit, it happened to me also, and as you know, being a dog, my life is too short to be waiting around for things like that.
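For the numerically inclined, here is a back-of-the-envelope sketch of why a boot storm hurts so much. Every figure here (per-VM boot IOPS, steady-state IOPS, per-disk IOPS) is an illustrative assumption, not a measurement from our system:

```python
# Hypothetical sizing figures. These are illustrative assumptions only.
BOOT_IOPS_PER_VM = 50         # a VM can demand ~10x its steady-state IO while booting
STEADY_IOPS_PER_VM = 5        # typical office steady-state load per user
TOTAL_USERS = 1000
CONCURRENT_BOOTS = 400        # the morning rush

# Demand the storage array sees in each scenario.
storm_demand = CONCURRENT_BOOTS * BOOT_IOPS_PER_VM   # during the morning boot storm
steady_demand = TOTAL_USERS * STEADY_IOPS_PER_VM     # during a normal working hour

print(f"steady-state demand: {steady_demand} IOPS")
print(f"boot-storm demand:   {storm_demand} IOPS")
print(f"the storm asks for {storm_demand / steady_demand:.0f}x the steady-state load")
```

With numbers like these, an array sized for the average day is suddenly asked for four times what it can deliver, and every user's boot crawls until the queue drains.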
It turns out we had designed a system for a typical day in the office for our 1,000+ users. What we did not do was design a system that would be responsive during a “100-year” type of event, like loading 400 user images at the same time. Basically, our system was 90% perfect, but the last 10% was really causing problems for the company. The feedback I was getting was pretty tough to take. I felt like I had just pooped on the carpet.
Getting to 100%
As you may remember from your Econ 101 class, there is a bell curve distribution for just about everything, and IT system usage fits that model pretty well. I went back to our vendor to discuss what it would take to get that last 10% of performance to manage the “100 year boot storm event” (really, it was every day), and it turns out this is a very common problem with VDI. The main challenge is that the 90% system is limited in the number of IO operations per second (IOPS) it can handle: if you build your VDI for the average ‘steady state’ IOPS usage, you can do it cost-effectively, but then performance is inadequate during the usage storms. One solution to the IOPS problem is to scale up by adding more disks to size IOPS for the peak usage. The problem with that is your system then costs twice the total price of all 1,000 PCs, and all that extra IOPS capacity you just bought sits idle most of the time. Since I put my tail on the line for this system, my bosses promptly cut it off, and I was on the hook to fix this problem with limited resources.
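Here is a rough sketch of the “size for the peak” math. The per-disk IOPS rating and the per-disk cost are illustrative assumptions, not our vendor's actual quote:

```python
import math

# Illustrative assumptions only; real arrays, RAID overhead, and pricing vary widely.
STEADY_IOPS = 5_000      # the "90% system": average daily demand for 1,000 users
PEAK_IOPS = 20_000       # the boot-storm demand
IOPS_PER_DISK = 150      # rough figure for one 10k RPM spinning disk
COST_PER_DISK = 400      # assumed street price in USD

def disks_needed(iops: int) -> int:
    """Spindle count required to serve a given IOPS demand."""
    return math.ceil(iops / IOPS_PER_DISK)

steady_disks = disks_needed(STEADY_IOPS)
peak_disks = disks_needed(PEAK_IOPS)
extra_cost = (peak_disks - steady_disks) * COST_PER_DISK

# How much of the peak-sized array does nothing during a normal hour?
idle_fraction = 1 - STEADY_IOPS / (peak_disks * IOPS_PER_DISK)

print(f"disks for steady state: {steady_disks}")
print(f"disks for the peak:     {peak_disks}")
print(f"extra spend for peak:   ${extra_cost:,}")
print(f"idle capacity off-peak: {idle_fraction:.0%}")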
Stop the Spinning
We needed to get creative to solve the IOPS problem. The solution we came up with was to buy a limited quantity of SSDs and use SSD caching software to reduce the IOPS workload on the spinning disks. This made sense: things like the PC image, which all users need to access when they start their systems, can easily be stored in the SSD cache, and serving it from SSD increases effective IOPS without adding more spinning disks. Once the boot storm passed every morning, the SSD caching software would recognize that and automatically start caching other heavily accessed data, so system performance would be improved all day. We solved our immediate problem and were able to focus on other VDI-related management issues.
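A toy sketch of why the shared boot image caches so well: this is a minimal LRU read-cache model, not how any real SSD caching product is implemented, and the VM and block counts are made up for illustration.

```python
from collections import OrderedDict

class SSDCacheSketch:
    """Toy LRU read cache. Hits stand in for SSD reads, misses for spindle reads."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_id: int) -> None:
        if block_id in self.blocks:
            self.hits += 1                       # served from SSD, no spindle IO
            self.blocks.move_to_end(block_id)    # mark as recently used
        else:
            self.misses += 1                     # goes to spinning disk, then cached
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least-recently-used block

# 400 VMs booting from the SAME golden image read an identical block sequence.
cache = SSDCacheSketch(capacity_blocks=1000)
for vm in range(400):
    for block in range(500):                     # hypothetical boot-image block count
        cache.read(block)

hit_rate = cache.hits / (cache.hits + cache.misses)
print(f"spindle reads: {cache.misses}, SSD reads: {cache.hits}, hit rate: {hit_rate:.2%}")
```

Only the first VM's reads ever touch the spinning disks; every later boot is absorbed by the cache, which is why a small amount of SSD goes such a long way against a boot storm.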
Tell me about your experience rolling out VDI.