
Hyper convergence is not meant for complex, performance-sensitive environments

Hyper Converged Infrastructure, a Future Death Trap

In the late 1990s, storage and networking were separated from compute for a reason. Both need specialized processing, and it makes little sense for every general-purpose server to do that work; it is better handled by a dedicated group of specialized devices. The most critical element in the entire data center infrastructure is the data, and it is safer to keep that data on purpose-built devices with the required level of redundancy than to spread it across the whole data center. Hyper convergence emerged for a noble cause: easing deployment in small branch-office scenarios, since a traditional SAN is always complex to set up and operate. The real problems start when we attempt to replicate this layout in a large-scale environment with transactional workloads. Three predominant issues can hit hyper-converged deployments hard, and together they can spell a death trap. Sophisticated IT houses know these problems and stay away from hyper convergence, but others can fall prey to the hype cycle.

Performance Nightmares
Everybody jumped onto virtualization well before the full virtualization stack was ready across compute, network and storage, and many struggled to isolate their problems among these three components. The more perceptive realized that storage-level IO contention was the root cause of most of their performance issues and began looking for a new class of storage products that guarantee performance at the volume and VM level. Now imagine the magnitude of complexity when all three components are collapsed together as hyper convergence and, in a loaded environment, each IO must touch multiple general-purpose servers to complete a single application-level transaction. Many of these issues simply do not surface while the infrastructure is lightly loaded.
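To see why waiting on multiple servers per IO matters, here is a minimal back-of-envelope sketch, not from the original article and with all latency figures assumed. It uses a standard result: if each of N replica nodes finishes in an i.i.d. exponential time with the same mean, the expected time until the slowest one finishes is the mean times the N-th harmonic number, so acknowledgements gated on every replica pay for the tail.

    # Back-of-envelope sketch (all numbers assumed, not from the article):
    # when a write acknowledgement must wait for N replica nodes, the
    # caller pays for the *slowest* one. For i.i.d. exponential service
    # times, the expected maximum of N draws is mean * H_N.

    def expected_slowest_us(mean_us: float, replicas: int) -> float:
        """Expected latency of the slowest of `replicas` exponential services."""
        harmonic = sum(1.0 / k for k in range(1, replicas + 1))
        return mean_us * harmonic

    SAN_MEAN_US = 550   # assumed: one hop to a dedicated array
    HCI_MEAN_US = 600   # assumed: hop + write + CPU contention on a shared host

    print(f"dedicated array (1 target): {expected_slowest_us(SAN_MEAN_US, 1):.0f} us")
    for n in (2, 3, 4):
        print(f"HCI, {n}-way replication:   {expected_slowest_us(HCI_MEAN_US, n):.0f} us")
    # Even with similar per-node means, waiting on three replicas roughly
    # doubles the expected per-IO latency -- and this is before queueing.

Under these assumptions, three-way replication turns a 550-microsecond IO into an 1,100-microsecond one, purely from waiting on the slowest peer.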

To make things worse, during an economic downturn, which is arguably overdue, data doesn't stop growing but the IT budget does. IT houses tend to load their existing hardware infrastructure to the maximum during such times. Just as the performance issues of this misfit architecture surface, further cost cutting kicks in and reduces IT headcount. Isn't that exactly the death trap that CIOs of cloud providers and enterprises need to avoid?
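A standard queueing result makes the "load it to the maximum" danger concrete. This sketch is my illustration rather than the author's, and the 1 ms service time is an assumption; it applies the M/M/1 mean response time formula, service time divided by (1 - utilization).

    # Illustrative M/M/1 queueing sketch: mean response time for a single
    # service center is S / (1 - U), where S is the mean service time and
    # U the utilization. The 1 ms service time below is an assumption.

    SERVICE_TIME_MS = 1.0  # assumed mean storage service time per IO

    def mm1_response_ms(utilization: float) -> float:
        """Mean response time of an M/M/1 queue at the given utilization."""
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return SERVICE_TIME_MS / (1.0 - utilization)

    for u in (0.50, 0.70, 0.90, 0.95, 0.99):
        print(f"utilization {u:.0%}: mean response {mm1_response_ms(u):.0f} ms")
    # 50% -> 2 ms, 90% -> 10 ms, 99% -> 100 ms: squeezing out the last
    # bit of capacity multiplies latency, it doesn't merely add to it.

The nonlinearity is the point: a system that looks healthy at 70% utilization can fall off a cliff when a budget freeze pushes it to 95%.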

Hardware Refresh
It is common for storage vendors to replace only the storage head as part of the refresh cycle while the data stays intact in separate shelves. That refresh becomes far more complex when the data is distributed as internal disks across every device in the data center. Refresh cycles for compute and storage also differ: disks stay in service longer than typical servers. In the hyper-converged case we have to replace everything at once, which consumes a tremendous amount of IT staff hours, and it is worse still if it falls in the middle of an economic crisis.
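As a rough illustration, with cycle lengths that are assumptions rather than figures from the article, coupling disk lifetime to the server refresh cadence strands a sizable fraction of the disks' useful life:

    # Rough illustration with assumed refresh cycles: if disks are forced
    # to retire on the server cadence, the remaining disk life is wasted.

    SERVER_CYCLE_YEARS = 3.0  # assumed typical server refresh cycle
    DISK_CYCLE_YEARS = 5.0    # assumed useful life of the disks

    def stranded_disk_life(server_years: float, disk_years: float) -> float:
        """Fraction of disk life discarded when disks retire with the server."""
        return max(0.0, disk_years - server_years) / disk_years

    print(f"{stranded_disk_life(SERVER_CYCLE_YEARS, DISK_CYCLE_YEARS):.0%} "
          "of each disk's useful life is discarded")  # -> 40%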

Storage Expansion
If there is a need for storage expansion, the customer ends up buying expensive servers in a hyper-converged environment. Some web-scale companies are already facing this problem and are moving storage back out of the server.
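A hypothetical cost sketch shows the coupling. Every price and capacity below is an assumption of mine for illustration, not a figure from the article; the comparison is growing capacity by adding full HCI nodes versus adding disk shelves to an existing array.

    # Hypothetical cost sketch (all prices and capacities assumed):
    # growing capacity by adding full HCI nodes versus adding shelves.

    HCI_NODE_COST = 30_000  # assumed: server + licenses + local disks
    HCI_NODE_TB = 20        # assumed usable TB per HCI node
    SHELF_COST = 12_000     # assumed: disk expansion shelf
    SHELF_TB = 60           # assumed usable TB per shelf

    extra_tb = 120  # suppose the business only needs 120 TB more storage
    hci_nodes = -(-extra_tb // HCI_NODE_TB)  # ceiling division
    shelves = -(-extra_tb // SHELF_TB)

    print(f"HCI route:   {hci_nodes} nodes,   ${hci_nodes * HCI_NODE_COST:,}")
    print(f"Shelf route: {shelves} shelves, ${shelves * SHELF_COST:,}")
    # With these assumptions the HCI route costs several times more and
    # also buys CPU and RAM the workload never asked for -- exactly the
    # coupling the author is objecting to.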

At the outset, hyper convergence looks like an attractive option that seemingly provides a lot of flexibility. In reality it comes with many limitations and curtails the flexibility to grow resources independently of one another. In addition, a performance nightmare is bound to hit once the system gets loaded.

More Stories By Felix Xavier

Recognized as one of the top 250 MSP thought and entrepreneurial leaders globally by MSPmentor, Felix Xavier has more than 15 years of development and technology management experience. With the right blend of expertise in both networking and storage technologies, he co-founded CloudByte.

Felix has built many high-energy technology teams, re-architected products and developed core features from scratch. Most recently, Felix helped NetApp gain leadership position in storage array-based data protection by driving innovations around its product suite. He has filed numerous patents with the US patent office around core storage technologies.

Prior to this, Felix worked at Juniper, Novell and IBM, where he handled networking technologies, including LAN, WAN and security protocols and Intrusion Prevention Systems (IPS). Felix holds master's degrees in technology and business administration.
