What Independence in the Cloud Looks Like

Freedom in the cloud has some costs associated with it, though different costs than freedom overall...

Happy July 4th, everybody! In celebration of Independence Day, Gathering Clouds thought we would explore what it takes to gain vendor freedom in the cloud.

Vendor lock-in is a much-discussed issue in the cloud. But to a large degree, the choices your company makes determine how much lock-in you actually live with, or feel, on a day-to-day basis.

There's a strategic imperative to stay agile in your cloud vendor relationships. Lock-in can be detrimental, especially when you reach the point where you need to transition out of a particular cloud due to scale or other requirements.

Perhaps your company has reached the scale where buying hardware and depreciating the cost is a smarter decision than continuing to rent infrastructure through AWS. The hardware you buy might not work well alongside AWS, and the skills your internal team has developed might not (and probably won't) translate.
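
To make the buy-versus-rent math concrete, here's a quick back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption of ours, not a figure from AWS or from this post; plug in your own spend, purchase price, depreciation schedule, and operating overhead.

```python
# Back-of-the-envelope buy-vs-rent comparison.
# All numbers below are illustrative assumptions, not real pricing.

monthly_cloud_cost = 40_000      # assumed monthly spend renting equivalent capacity
hardware_purchase = 900_000      # assumed up-front cost of buying the hardware
depreciation_months = 36         # assumed straight-line depreciation period
monthly_ops_overhead = 12_000    # assumed colo space, power, and staff per month

owned_monthly_cost = hardware_purchase / depreciation_months + monthly_ops_overhead

print(f"Cloud rental:   ${monthly_cloud_cost:,.0f}/month")
print(f"Owned hardware: ${owned_monthly_cost:,.0f}/month over {depreciation_months} months")

if owned_monthly_cost < monthly_cloud_cost:
    savings = monthly_cloud_cost - owned_monthly_cost
    print(f"At these assumed numbers, owning saves roughly ${savings:,.0f}/month.")
else:
    print("At these assumed numbers, renting remains cheaper.")
```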

There are features in AWS that act as points of lock-in, and it's hard to get around them. Services like Elastic Beanstalk, Elastic MapReduce, S3, and many more are amazing tools, but they don't have easily paralleled equivalents elsewhere. Transitioning out of these services is difficult, especially when AWS makes it so easy to access all of them. Outside of AWS, there are three competing private cloud platforms: OpenStack, CloudStack, and VMware. Among MSPs, VMware holds the lion's share of the market.
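
To see how that lock-in shows up in practice, here's a minimal sketch of application code written directly against S3 using boto3, the AWS SDK for Python. The bucket and object names are hypothetical placeholders of ours; the point is that every call targets an AWS-specific API, so moving off the platform means rewriting this layer (or finding a service that deliberately emulates it).

```python
# A minimal sketch of code written directly against an AWS-specific service.
# Bucket and object names are hypothetical; credentials are read from the
# environment as boto3 normally does.
import boto3

s3 = boto3.client("s3")

# Uploading and reading objects through the S3 API ties this code to AWS,
# or to storage services that deliberately emulate the S3 API.
s3.upload_file("report.csv", "example-analytics-bucket", "reports/report.csv")

obj = s3.get_object(Bucket="example-analytics-bucket", Key="reports/report.csv")
print(obj["Body"].read()[:100])
```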

The truth is that some lock-in is chosen by the company looking to adopt cloud. The choice might be influenced by current technology: if your company has been running on VMware for a decade, switching over to OpenStack might present more of a problem than taking the path of least resistance. Some of that decision, though, is also driven by a desire to access certain tool sets.

However, there are ways to take a step back from the cloud platform itself. Tools that abstract the cloud layer, such as Puppet or Chef, let you become more cloud-agnostic by focusing on automation and configuration management to run the infrastructure. This allows a company to operate above the cloud layer, gaining a degree of independence that relying on the native tools of any single platform wouldn't deliver.
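
Puppet and Chef have their own configuration languages, so as a stand-in illustration of the same idea in Python, here's a minimal sketch using Apache Libcloud, a cloud-abstraction library we're choosing for illustration (it isn't mentioned in the original post). Credentials, endpoints, and the image/size selection are hypothetical placeholders; the point is that the provisioning calls stay the same whichever provider sits underneath.

```python
# Minimal sketch of provisioning through a cloud-abstraction library (Apache Libcloud).
# Provider credentials and endpoints below are hypothetical placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

def get_compute_driver(provider_name: str):
    """Return a compute driver without hard-coding a single vendor's SDK."""
    if provider_name == "aws":
        return get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
    if provider_name == "openstack":
        return get_driver(Provider.OPENSTACK)(
            "user", "password",
            ex_force_auth_url="https://keystone.example.com:5000",
            ex_force_auth_version="3.x_password",
        )
    raise ValueError(f"Unknown provider: {provider_name}")

driver = get_compute_driver("aws")

# The same calls work regardless of which driver was returned above.
image = driver.list_images()[0]
size = driver.list_sizes()[0]
node = driver.create_node(name="app-server-1", image=image, size=size)
print(node.name, node.state)
```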

Another approach is to be cloud-centric but vendor-agnostic. This approach allows you to lock into a product choice, say VMware, while being able to run across any number of MSPs that use VMware. You can take the same approach with OpenStack or CloudStack, though each platform has a smaller install base than VMware.

Gaining independence in the cloud is not a straightforward process. There are, however, different paths by which a company can gain a degree of autonomy even when choosing one platform over another.

Thoughts? Agree/disagree? Let us know on Twitter @CloudGathering.

By Jake Gardner
