By Jason Bloomberg
December 5, 2013 08:45 AM EST
After a dozen years and almost 300 ZapFlashes, I have reached my penultimate issue for ZapThink. I’ll be moving on to new adventures in 2014, but not before assembling just a bit more wisdom for the tens of thousands of followers of this little newsletter.
The next and final ZapFlash later this month will of necessity be our annual retrospective and predictions for the upcoming year. But for this issue, I have decided to return to a theme from 2002, the first year of the ZapFlash: idées fortes, or powerful ideas. The original Web Services Idées Fortes article from September of that year called out asynchrony, loose coupling, and coarse granularity as the idées fortes of Web Services – and those three core architectural principles remain foundational to today’s Cloudified distributed computing world.
But in the intervening decade, ZapThink has covered immense swaths of territory. We’ve followed SOA up one hill and down the other side. We’ve focused the architect’s eye on the Cloud. We’ve hammered out the notion of enterprise as complex system, exhibiting the emergent property of business agility. We’ve done our best to tie all the various threads of change facing enterprise IT into our ZapThink 2020 vision.
Today, then, we can survey the ground we’ve covered and distill new idées fortes for the 2010s – the core architectural threads that have tied together ZapThink’s thinking over the last twelve years.
Idée Forte #1: Cloud Friendliness/REST-based SOA
As enterprises struggle with middleware-centric, Web Services-based SOA, they eventually move to next-generation, RESTful approaches. SOA still calls for an intermediary, but now it’s stateless and exposes functionality via RESTful URIs. The REST-based SOA intermediary is policy-driven and provides a loosely coupled Business Service abstraction, but because it is stateless, it requires a RESTful approach to state that separates application state from resource state.
As it happens, this approach to state enables the intermediary to run in a Cloud instance in such a way that allows for automated recovery from failure, an essential Cloud best practice. It’s also possible for the elasticity of the Cloud to provide horizontal scalability and decentralization to the SOA intermediary. In other words, by solving some of the issues with traditional ESBs, we get Cloud friendliness as a side benefit – or vice versa, depending on how you approach the problem. But however you look at this idée forte, the end result supports Integration-as-a-Service: not the “middleware in the Cloud” Cloudwashing that some vendors are promoting, but the Cloud friendly Integration-as-a-Service that is well on its way to becoming the crux of the enterprise integration story.
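To make the statelessness point concrete, here is a minimal sketch in Python of such an intermediary’s request handler. The routing table, service names, and URLs are hypothetical, and the third-party requests library stands in for whatever HTTP client a real intermediary would use. The point is that all application state arrives with the request itself, so any identical instance of this function, on any Cloud node, can serve any call – which is what makes automated recovery and horizontal scaling possible.

```python
# A minimal sketch of a stateless REST-based SOA intermediary. The routing
# table and backend URL are hypothetical; "requests" is a third-party HTTP
# client (pip install requests).
import requests

BACKENDS = {"orders": "http://orders.internal/api"}  # hypothetical policy metadata

def handle(method: str, service: str, path: str, params: dict) -> dict:
    """Forward one call to the backing Business Service and return its reply.

    No sessions, no sticky routing: the client carries the application
    state in the request, while the backend owns the resource state.
    """
    url = f"{BACKENDS[service]}/{path}"
    response = requests.request(method, url, params=params, timeout=5)
    response.raise_for_status()
    return response.json()
```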
Cloud friendliness, however, is more than REST-based SOA and Integration-as-a-Service. You must also have a Cloud friendly approach to data. Since Cloud environments must tolerate network partitions, the CAP theorem forces us to trade consistency against availability. Typically we want our Cloud environments to be basically available, and hence we must live with eventual consistency. But there’s more to this story: data immutability. If we entirely forgo UPDATEs and DELETEs, then we no longer have to worry about writes blocking reads, or vice versa, improving performance for highly distributed, eventually consistent databases like the ones we want to run in the Cloud. Data immutability is also the key to real-time Big Data analytics, thus opening the doors to a range of entirely new possibilities.
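Here is a minimal sketch of the immutability idea, assuming an append-only log in place of UPDATE and DELETE. The record structure and the tombstone convention are illustrative, not any particular database’s API.

```python
# A minimal append-only sketch: writers only ever append, and readers derive
# current state by folding the log, so neither blocks the other.
import time

LOG = []  # stand-in for an append-only, distributed log

def put(key, value):
    LOG.append({"key": key, "value": value, "ts": time.time()})

def delete(key):
    put(key, None)  # a tombstone record, not a destructive DELETE

def current(key):
    """Fold the log to the most recent value for the key."""
    latest = None
    for record in LOG:
        if record["key"] == key:
            latest = record["value"]
    return latest

put("order-17", {"status": "placed"})
put("order-17", {"status": "shipped"})
print(current("order-17"))  # {'status': 'shipped'}
```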
Idée Forte #2: HATEOAS and Next Generation APIs
Hypermedia as the Engine of Application State (HATEOAS) is the most important RESTful constraint, because it drives Hypermedia-Oriented Architecture (HOA) and the separation of application and resource state. Not only does HATEOAS open up new, Cloud-friendly approaches to Business Process Management (BPM), but it also redefines the notion of an Application Programming Interface (API).
REST’s uniform interface, of course, dramatically simplifies the notion of an API. Instead of Web Services’ custom operations, which cause no end of trouble, REST provides the same verbs for interacting with any resource: GET, POST, PUT, and DELETE if we’re using HTTP. But HATEOAS takes this API simplification one step further, because it calls for hypermedia-based discovery.
Start with an initial URI, which we logically call a bookmark. From there, any arbitrary piece of software serving as the client should be able to query the current URI (either for the bookmark or for a subsequent hyperlink) for instructions on what to do next. All retrieved representations should be fully self-descriptive (another REST constraint), thus providing all the necessary metadata to the client to facilitate the ongoing interaction. Straightforward in its way, but for developers used to traditionally static APIs, this HOA approach is a mind-bender.
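Here is a minimal sketch of such a client, assuming a hypothetical API whose representations embed their links in a HAL-style "links" section. Only the bookmark URI is hard-coded; every subsequent step is discovered at runtime.

```python
# A minimal hypermedia-discovery sketch. The API, its URIs, and the "links"
# structure are hypothetical (loosely HAL-style); the client knows only the
# bookmark and the link relations it cares about.
import requests

BOOKMARK = "https://api.example.com/"  # the one URI the client knows a priori

def follow(rel: str, uri: str = BOOKMARK) -> dict:
    """Fetch a representation, then follow the named link relation.

    Each representation is self-descriptive: it carries the links that tell
    the client what it may do next, so the server can evolve its URI space
    without breaking the client.
    """
    doc = requests.get(uri, headers={"Accept": "application/json"}).json()
    next_uri = doc["links"][rel]["href"]
    return requests.get(next_uri).json()

orders = follow("orders")  # discovered at runtime, never hard-coded
```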
One of the most important benefits of hypermedia-based discovery is how it deals with changing functionality. Fundamentally, we have the freedom to update or version resources without having to change the URIs we use to access them. In fact, we should never change the URIs – they should be as immutable as our data. Combine immutable URIs with the extreme late binding in the REST-based SOA intermediary, and we’ve built a shockingly flexible approach to solving diverse, dynamic integration challenges. In other words, we’ve taken the loose coupling idée forte to the next level.
Idée Forte #3: Declarative Programming Driving Next-Generation Governance
The declarative programming model separates logic from control flow. SQL describes database query logic, while the inner workings of the database are independent of individual queries. HTML describes the structure of Web pages, but the code inside the browser can render any HTML file. Extend this separation to the functionality of our enterprise technology infrastructure: policy-driven behavior where we represent the policies as metadata.
Of all the Web Services standards, in fact, the ones that drive this metadata representation of policies are perhaps the most powerful: WS-Policy, WS-SecurityPolicy, and a handful of others. But regardless of whether you’re using these XML-based standards, JSON-based policy representations, or some other format, representing policies as metadata is the first step in supporting policy change. Change a policy, change a behavior – theoretically for any element of our IT infrastructure.
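As a minimal sketch of the principle, assume a hypothetical JSON policy document with made-up field names. The enforcement code below never changes; editing the metadata changes the runtime behavior.

```python
# A minimal policy-as-metadata sketch. The policy document, its fields, and
# the enforcement rules are hypothetical; the point is that behavior changes
# when the JSON changes, with no change to the code.
import json

POLICY = json.loads("""
{
  "require_tls": true,
  "max_message_kb": 256,
  "allowed_roles": ["partner", "internal"]
}
""")

def enforce(message: dict, policy: dict) -> None:
    """Reject any message that violates the declared policy."""
    if policy["require_tls"] and not message.get("tls"):
        raise PermissionError("policy requires TLS")
    if len(json.dumps(message)) > policy["max_message_kb"] * 1024:
        raise ValueError("message exceeds policy size limit")
    if message.get("role") not in policy["allowed_roles"]:
        raise PermissionError("role not permitted by policy")

enforce({"tls": True, "role": "partner", "body": "hello"}, POLICY)
```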
While first-generation SOA focused on simple message routing and security policies, the declarative model has come into its own in the Cloud. When we say the Cloud operational environment is fully automated, what we mean is that the consumers of the Cloud may control every aspect of the Cloud declaratively – either via user interfaces or through APIs. Hard-code capabilities that change infrequently. Shift all dynamic behavior to the policy metadata.
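The same declarative pattern applies to provisioning. In this purely illustrative sketch, which reflects no particular vendor’s API, the consumer declares desired capacity as metadata and a reconciler computes the actions needed to converge on it.

```python
# An illustrative declarative-provisioning sketch: declared state is metadata,
# and the reconciler derives the control flow from it.
def reconcile(actual: dict, desired: dict) -> list:
    """Compute the provisioning actions needed to reach the declared state."""
    actions = []
    for tier, spec in desired.items():
        have = actual.get(tier, {}).get("instances", 0)
        want = spec["instances"]
        if want > have:
            actions.append(f"launch {want - have} x {spec['size']} for {tier}")
        elif want < have:
            actions.append(f"terminate {have - want} instance(s) in {tier}")
    return actions

desired = {"web_tier": {"instances": 4, "size": "small"}}
print(reconcile({"web_tier": {"instances": 2}}, desired))
# ['launch 2 x small for web_tier']
```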
Policies, of course, are the technical foundation of governance, which ZapThink defines as creating, communicating, and enforcing the policies that are important to the organization. And while many policies constrain and drive human behavior, the full spectrum of policies in the enterprise runs from business-centric to technology-centric. As our technology improves, we’re better able to automate the communication and enforcement of policies, leading to a dynamic, technology-enabled approach to governance we call next-generation governance.
Idée Forte #4: Ubiquitous Cloud
Many people mischaracterize Cloud Computing as next-generation managed hosting: essentially, automated data centers on steroids. But in fact, there’s nothing in the definition of the Cloud that requires Cloud resources to be in data centers. On the contrary, if we free the Cloud from the data center, we can extend it to the entire Internet of Things (IoT).
This notion of the Ubiquitous Cloud is far more powerful than simply an extension of a buzzword, or it wouldn’t be an idée forte. What the Cloud brings to the IoT is the notion of user-driven automated provisioning of virtual resources. The IoT no longer consists of static endpoints. Instead, we have a way of provisioning, managing, and deprovisioning those endpoints, which can potentially help us solve the knottiest of IoT problems: security.
We’ll never find our way out of the IoT security morass, however, until we take control of our identities – and by “we,” I mean the users of technology to whom those identities belong. After all, our identities don’t belong to the credit reporting bureaus, NSAs, and Facebooks of the world. They belong to us. We must develop the technology – and the will – to treat identity itself as a provisionable Cloud resource.
Idée Forte #5: Human-Driven Semantics
ZapThink likens today’s semantics challenges to the pot of gold at the end of the rainbow: seemingly within our grasp, and yet always just out of reach. True, there has been much progress with standards like the Resource Description Framework (RDF) and innumerable semantic models for every industry and governmental domain on the planet. But we still don’t have a clue how to build technology that seamlessly automates semantic interoperability beyond simple, specific situations.
The problem, of course, is that semantics is context dependent. Human meaning is inherently vague and ambiguous, and those vagaries never resolve themselves until we focus our communication on a particular context. To address these challenges, standards efforts have sought to strip all vagueness and ambiguity out of language in order to give our computers a basis for communication. But in so doing, we lose the subtleties of human meaning so important to the way people communicate.
The only way we can achieve real progress with semantics is to reverse our approach. Instead of trying to shoehorn human communication into the world of ones and zeroes that characterizes computer interactions, we must embrace a human-driven approach to semantics. Give up on championing impossibly rigid data models. Instead, allow for separate data domains that are loosely coupled to one another. Instead of requiring strongly typed schemas at runtime, allow for loose typing that encourages flexibility in the representations of data. Underspecify at design time whenever possible. Embrace differently structured data. After all, we don’t serve our data. Our data serve us.
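As a minimal sketch of what loose typing looks like in practice, consider a “tolerant reader” that extracts what it understands from differently structured records rather than rejecting whatever fails a rigid schema. The record shapes and field names are hypothetical.

```python
# A minimal "tolerant reader" sketch: accept several shapes of customer
# record without a strongly typed schema. All field names are hypothetical.
def read_customer(record: dict) -> dict:
    """Pull a name from whichever fields happen to be present, and preserve
    everything else untouched for downstream consumers."""
    name = (record.get("name")
            or record.get("full_name")
            or " ".join(filter(None, (record.get("first"), record.get("last"))))
            or "unknown")
    known = {"name", "full_name", "first", "last"}
    return {"name": name,
            "extras": {k: v for k, v in record.items() if k not in known}}

print(read_customer({"full_name": "Ada Lovelace", "tier": "gold"}))
# {'name': 'Ada Lovelace', 'extras': {'tier': 'gold'}}
print(read_customer({"first": "Grace", "last": "Hopper"}))
# {'name': 'Grace Hopper', 'extras': {}}
```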
The ZapThink Take
As with all powerful ideas, these idées fortes are works in progress. If anything, they bring up more questions than answers. Yes, we’ve made substantial progress so far this century, and hopefully ZapThink has been a part of that success. But there’s much more work to do.
As with all good endings, this one is actually more of a beginning. These idées fortes are starting points to ongoing discussions and ongoing research. At the center of ZapThink 2020 is the principle of continuous transformation. You ain’t seen nothin’ yet!