Cloud: Commoditizing End Users

It's not just commoditization of business functions (SaaS) or IT infrastructure (IaaS) - it's the users, too

Prioritization. It's something that's built into nearly every technology, particularly that which services network traffic. Rate shaping. Queuing. Coloring bits.

We do a lot of interesting gyrations with technology to ensure that some user traffic and requests are more equal than others.
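To make the "coloring bits" part concrete, here's a minimal sketch (assuming a Linux host and Python) of marking a connection's traffic with a DSCP value so that QoS-aware switches and routers upstream can treat it preferentially. The endpoint and the choice of the EF class are purely illustrative.

```python
import socket

# "Coloring bits": mark outbound traffic with a DSCP value so that
# upstream switches and routers can queue it preferentially.
# DSCP 46 (Expedited Forwarding) is a common choice for latency-sensitive
# traffic; the TOS byte carries the DSCP in its upper six bits.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# From here on, every packet this socket sends carries the EF marking,
# and any QoS-aware hop in the data path can honor it.
sock.connect(("mail.example.com", 443))  # hypothetical endpoint
```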

Today we still do the same thing, but it's done in different ways. Software as a Service charges a premium for "extra" API calls, for example, and if you want access to premium content there's sure to be a paywall in front of it.

But that's at the service level. It's not the same as prioritizing individual users - affording specific users privileges of some kind based either on their position (no, the CEO can't have his e-mail delayed - never apply bandwidth-limiting policies to him) or on their customer status (they're a "gold" customer, so make sure their requests go to the fastest application instance).
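A hedged sketch of that "gold customer" scenario - the pools, addresses, and tier lookup are all invented for illustration, not any particular vendor's API - but the point is that the routing decision keys off who the user is:

```python
# Illustrative per-user prioritization at a load-balancing tier.
FAST_POOL = ["10.0.1.10", "10.0.1.11"]      # low-latency application instances
STANDARD_POOL = ["10.0.2.10", "10.0.2.11"]  # everyone else

CUSTOMER_TIERS = {"ceo@example.com": "gold", "me@example.com": "standard"}

def choose_pool(user: str) -> list[str]:
    """Route 'gold' users to the fastest instances; all others share the rest."""
    if CUSTOMER_TIERS.get(user) == "gold":
        return FAST_POOL
    return STANDARD_POOL
```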

These kinds of customer privileges have always existed and in some industries remain a staple reward or requirement for operations.

Cloud, however, commoditizes users, affording operations no way to distinguish between traffic from the CEO and traffic from, well, me.

IT'S THE NETWORK

That's because the mechanisms by which traffic and requests are prioritized exist in the network; in the data path. By the time the request gets to the Exchange server, it's already too late. The Exchange server doesn't know that three upstream switches and routers have queued the packets comprising the CEO's request, causing a slight but noticeable delay. It is the infrastructure - the network - that necessarily provides this service. Prioritization of traffic through a series of tubes interconnected by what are essentially processing centers has to occur at those processing centers, before the traffic arrives at its destination.
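As a toy illustration of what those processing centers are doing (a sketch, not any real device's implementation), each hop is effectively a priority queue: higher-priority traffic is dequeued, and therefore forwarded, first.

```python
import heapq
import itertools

_arrival = itertools.count()

class HopQueue:
    """Toy model of a QoS queue at one hop in the data path."""

    def __init__(self):
        self._heap = []

    def enqueue(self, packet, priority):
        # Lower number = higher priority (e.g. 0 for the CEO's mail, 7 for bulk).
        heapq.heappush(self._heap, (priority, next(_arrival), packet))

    def dequeue(self):
        priority, _, packet = heapq.heappop(self._heap)
        return packet

q = HopQueue()
q.enqueue("bulk backup chunk", priority=7)
q.enqueue("CEO e-mail request", priority=0)
assert q.dequeue() == "CEO e-mail request"  # forwarded first, ahead of the backlog
```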

The effect is commoditization of users. Every user is the same, every request - equal. There is no special treatment for anyone, period. Part of this is due to the relinquishment of control over the network inherent in a cloud-based environment; part of it is due to the failure of that same network to pass on awareness of the user and the context in which such requests are made.

The inability to deploy policies designed to give preference to some requests over others - for whatever reason the business thinks it may be necessary - means users are commoditized. They become a sequence number, nothing more, nothing less.

For many applications and business models this may be a non-issue. But for industries and organizations that monetize in part (or have monetized in the past) based on the ability to offer "better or faster" service on an individual basis, moving to cloud will have a significant impact and may require changes not only to operations but to the business itself.

Some capability to differentiate levels of service on a per-user basis may return as cloud providers offer more mature services, but the level of differentiation and prioritization IT has known in the data center will never completely return in the cloud.

Organizations that may be impacted by this commoditization - in the form of frustrated users or churning customers - will need to consider other ways to decommoditize their users.
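One such way - sketched here with an invented header name and tier values, purely as an illustration - is to move the preference decision up into the application layer, where the organization still controls the code even if it no longer controls the network:

```python
from queue import PriorityQueue

# Sketch of pushing user differentiation up into the application layer.
# The header name and tier values are illustrative assumptions.
work = PriorityQueue()
TIER_PRIORITY = {"gold": 0, "silver": 1, "standard": 2}

def accept_request(headers: dict, payload: str) -> None:
    """Tag each request with its customer tier and queue it accordingly."""
    tier = headers.get("X-Customer-Tier", "standard")
    work.put((TIER_PRIORITY.get(tier, 2), payload))

accept_request({"X-Customer-Tier": "gold"}, "render dashboard")
accept_request({}, "render dashboard")
priority, payload = work.get()  # the gold request comes off the queue first
```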

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
