
Understanding Serverless Cloud and Clear
By Martijn van Dongen

Serverless is considered the successor to containers. And while it’s heavily promoted as the next great thing, it’s not the best fit for every use case. Understanding the pitfalls and disadvantages of serverless will make it much easier to identify use cases that are a good fit. This post offers some technology perspectives on the maturity of serverless today.

First, note how we use the word serverless here. Serverless combines "Function as a Service" (FaaS) and "Platform as a Service" (PaaS): the categories of cloud services where you never see or manage the servers. For example, RDS and Elastic Beanstalk are "servers managed by AWS," where the server is still part of the picture. DynamoDB and S3, by contrast, are simply a NoSQL database and a storage service behind an API, with no visible servers at all. Not seeing the servers means there is no provisioning, hardening, or maintenance involved; hence they are server "less." A serverless platform is driven by "events": the action behind a button on a website, the backend processing of a mobile app, or the moment a picture is uploaded to the cloud and the storage service triggers a function.
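To make the event model concrete, here is a minimal sketch of what such a function might look like, written in Python for AWS Lambda and assuming an S3 "object created" trigger; the processing step is purely illustrative.

```python
# Minimal sketch of an event-driven function (Python on AWS Lambda),
# assuming it is wired to an S3 "object created" event.
# The processing step is illustrative only.

def handler(event, context):
    # S3 delivers one or more records per invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # e.g., generate a thumbnail, extract metadata, update an index...
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "ok"}
```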

Performance
All services involved in a serverless architecture can scale virtually infinitely. When something triggers a function, say, 1,000 times in one second, all executions run in parallel and, in practice, complete about as quickly as a single one. In the old container world, you would have to provision and tune enough container instances to absorb that burst of requests. Sounds like serverless wins this performance challenge, right? Not always. Sometimes the container that hosts your function is not running yet and has to start first. This "cold start" adds overhead to the total execution time, which is undesirable if you want your users (or "things") to get a consistently fast response. To get fully predictable response times, you still have to provision a container platform, which leaves you to wonder whether it is worth the cost, not just for running the containers, but also for the related investments in time, complexity, and risk.
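One common way to soften cold starts, a workaround rather than something this post prescribes, is to do the expensive initialization outside the handler so that warm invocations can reuse it:

```python
import time

# Heavy setup (SDK clients, connection pools, loading reference data) lives
# outside the handler: it runs once during a cold start and is reused for
# every invocation while the container stays warm.
CONTAINER_STARTED = time.time()
# db_pool = create_connection_pool()   # illustrative placeholder

def handler(event, context):
    # Warm invocations skip the setup above and only pay for this body.
    return {"container_age_seconds": round(time.time() - CONTAINER_STARTED, 3)}
```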

Cost Predictability
With container platforms or servers, you are billed per running hour or, in exceptional cases, per minute or second. Even with a very predictable, steady workload you might reach around 70% utilization, which still leaves a lot of waste, and you always need to over-provision to absorb sudden spikes in traffic. You could push utilization higher to lower costs, but that also raises the risk of falling over during a spike. With serverless, in contrast, you pay per execution, billed in 100-millisecond increments, which is far more granular and gets you close to 100% utilization. That makes serverless a great choice for unpredictable, spiky traffic, because you pay only for what you actually use.
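A quick back-of-the-envelope comparison illustrates the difference; the prices below are round-number assumptions for the sake of the example, not an actual price list.

```python
import math

# Illustrative comparison of an always-on container vs. per-100 ms billing.
# All prices are assumed, round numbers, not taken from any real price list.

requests_per_month = 1_000_000
avg_duration_ms = 120                 # each request is billed in 100 ms increments
price_per_100ms = 0.000000208         # assumed rate for a small function
container_price_per_hour = 0.05       # assumed rate for one small, always-on instance
hours_per_month = 730

billed_units = math.ceil(avg_duration_ms / 100) * requests_per_month
serverless_cost = billed_units * price_per_100ms
container_cost = container_price_per_hour * hours_per_month

print(f"serverless: ~${serverless_cost:.2f}/month")   # pay only while code runs
print(f"container : ~${container_cost:.2f}/month")    # pay for idle capacity too
```

With a steady, high-volume workload the comparison tips back toward containers, which is exactly the trade-off described above.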

Security
You would expect cloud services to be fully secure. Unfortunately, this is not yet the case for functions. With most cloud services, the "attack surface" is limited and can therefore be fully protected. With serverless, however, the isolation layer is thin, the surface is broad, and your code runs on shared servers with less protection than, for instance, EC2 or DynamoDB. For that reason, data such as credit card details is not permitted in functions. That does not mean serverless is insecure, but it does mean it cannot yet pass a strict, mandatory audit. Given the high expectations for serverless, security will likely improve, so it is good to gain experience with it now so you are ready for the future.

Start with backend systems that hold less sensitive data: gaming progress, shopping lists, analytics, and so on. Or process grocery orders, but outsource the payment to an external provider. Like a credit card number, each of these is a piece of data in its own right, but the consequences of a leak differ: if data in memory leaks to other users of the same underlying server, an exposed credit card number can be exploited, while an identifier like "id: 3h7L8r bought tomatoes" cannot.
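A hedged sketch of that pattern: the function never sees a card number, only an opaque order identifier and a payment token issued by the external provider. All names below are hypothetical.

```python
# Sketch of keeping card data out of the function: the client talks to the
# payment provider directly, and the function only ever receives an opaque
# order identifier plus a provider-issued token. All names are hypothetical.

def handler(event, context):
    order_id = event["order_id"]            # e.g. "3h7L8r" -- meaningless if leaked
    payment_token = event["payment_token"]  # issued by the provider, not a card number

    # charge_with_provider(payment_token)   # hypothetical call to the payment provider
    # mark_order_paid(order_id)             # hypothetical persistence step

    return {"order_id": order_id, "status": "payment delegated"}
```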

Reliability
Closely related to security is the availability of your services. A relatively "slow" service that never goes down is generally better than a fast service that is sometimes unavailable. In a typical disaster recovery setup, all on-premises servers are replicated to the cloud, which adds a lot of complexity. In most cases it is better to switch off on-premises altogether and go all-in on cloud. If you are not ready for that step, you can also use serverless as a failover platform to keep particular functionality highly available: not everything, of course, but the parts that are mission critical, or that can buffer requests in temporary storage and process them in a batch after recovery. It is less costly and very reliable.
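As a rough illustration of that "temporary storage, batch after recovery" idea, the sketch below simply queues each incoming request (via SQS, as an assumed choice) so a batch job can replay the backlog once the primary system is back; the queue URL is a placeholder.

```python
import json
import boto3

# Rough sketch of the "store now, process after recovery" failover idea:
# while the primary system is down, each request is persisted to a queue;
# a batch job drains and replays the backlog after recovery.
# SQS is an assumed choice here, and the queue URL is a placeholder.

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/failover-buffer"

def handler(event, context):
    # Persist the raw request so nothing is lost during the outage.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"status": "accepted", "note": "queued for processing after recovery"}
```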

Cloud and Clear
Until recently, it was quite tricky to launch and update a live function. More and more frameworks, such as the Serverless Framework (serverless.com) and AWS SAM, are solving the main issues. Combined with automated CI/CD, it is easy to deploy and test your serverless platform in a secured environment, so that deployments to production succeed reliably and without downtime. With CloudFormation or Terraform you "develop" the cloud-native services and configure the functions; with programming languages like Node.js, Python, Java, or C# you develop the functions themselves. Even logging and monitoring have matured considerably over the last few months. Together, the source gives you a "cloud and clear" overview of what is under the hood of your serverless application: how it is provisioned, built, deployed, tested, and monitored, and how it runs.

Conclusion
AWS started in 2014 with the launch of Lambda, and although this post is mainly about AWS, Google and Microsoft are investing heavily in their function offerings and in the serverless approach as well. Over the last couple of months they have shown very promising offerings and demos. The world is not ready to go all-in on serverless, but we are already seeing growing interest from developers and startups, who are building secure, reliable, high-performing, and cost-effective solutions while easily mitigating the issues mentioned earlier. Expect to wake up one day and find that serverless is fully secured, delivers reliable (pre-warmed) performance, and has been adopted by many of your competitors. So be prepared and start investing in this technology today.

This blog was originally published by Xebia as "Understanding Serverless Cloud and Clear."


