Notes from the Field: Inside a Real World Large-Scale Cloud Deployment

I thought I’d share some important generalities about this type of effort

I’ve been granted an incredible opportunity. Over the past three and a half months I have led a real-world, large-scale delivery of a cloud solution. The final solution will be delivered to the customer as Software-as-a-Service (SaaS) via an on-premise managed service. While I have developed SaaS/PaaS (Platform-as-a-Service) solutions in the past, I was fortunate enough to build those on public cloud infrastructures. This has been a rare glimpse into the “making of the sausage”: orchestrating everything from delivery of the hardware into data centers in four countries to testing and integration with the customer environment.

All I can say about this opportunity is that the phrase “it takes a village” applies well. I thought I’d share some important generalities about this type of effort. It’s important to note that this is a Global 100 company with data centers around the globe. Regardless of what the public cloud providers are telling the world, this application is not appropriate for public cloud deployment due to the volume of data traversing the network, the amount of storage required, the types of storage required (e.g., Write-Once-Read-Many), the level of integration with internal environments, and the requirements for failover.

The following are some observations about deploying cloud solutions at this scale:

  • Data Centers. As part of IT-as-a-Service (ITaaS) we talk a lot about convergence, software-defined data centers and general consolidation. All of this has major implications for simplifying management and lowering the total cost of ownership and operations of the data centers. However, we should not forget that it still takes a considerable amount of planning and effort to bring new infrastructure into an existing data center. Most critically, the data center is a living entity that doesn’t stop because work is going on, which means much of this effort occurs after hours and in maintenance windows. This particular data center freezes all changes from mid-December to mid-January to ensure that its customers will not have interrupted service during a peak period that includes major holidays and end-of-year reporting, which had a significant impact on our ability to meet certain end-of-year deliverables. On-site surveys were critical to planning the placement of the equipment (four racks in total) on the floor, both to minimize cabling effort and to ensure our equipment faced the right direction for the hot/cold aisles. Additionally, realize that in this type of business, every country may have different rules for accessing, operating in, and racking your equipment.
  • Infrastructure. At the end of the day, we can do more with the hardware infrastructure architectures now available. While we leverage virtualization to take advantage of the greater compute power, it does not alleviate the need to plan a large-scale virtual environment that must span countries. Sometimes it’s the smallest details that are the most difficult to work out, for example, how to manage an on-premise environment, such as this one, as a service. The difficulty here is that the network, power, cooling, etc. are provided by the customer, which requires considerable effort to negotiate shared operating procedures while still attempting to commit to specific service levels. Many of today’s largest businesses do not operate their internal IT organizations with the same penalties for failure to meet a service level agreement (SLA) as they would apply to an external service provider. Hence, service providers that must rely on this foundation face many challenges and hurdles in ensuring their own service levels.
  • Security. Your solution may be reviewed by the internal security team to ensure it is compliant with current security procedures and policies. Since this is most often not the team that procured or built the solution, you should not expect that they will be able to warn you about all the intricacies of deploying a solution for the business. The best advice here is to engage the security team early and often, not only once you have completed your design. In US Federal IT, deployment usually requires that those implementing the system obtain an Authority to Operate (ATO). Quite often, medium- and large-sized businesses have a similar procedure; it’s just not spelled out as explicitly. Hence, these audits and tests can introduce unexpected delays and expenses due to the need to modify the solution.
  • Software. Any piece of software can be tested and operated under a modest set of assumptions. When that same software must be deployed as part of a service with requirements to meet certain performance metrics, as well as certain recovery metrics in the case of an outage, it can fall flat on its face. Hence, the long pole in the tent for building out a cloud solution at this scale is testing for disaster recovery and scalability. In addition to requiring time to complete, this often requires a complementary environment for disaster recovery and failover testing, which can be a significant additional cost to the project. I will also note that in a complex environment software license management can become very cumbersome. I recommend starting the license catalog early and ensuring that it is maintained throughout the project (a minimal catalog sketch follows this list).
  • Data Flow. A complex cloud-based solution that integrates with existing internal systems operating on different networks across multiple countries will have to cross multiple firewalls and routers, and run along paths of varying bandwidth carrying varying levels of traffic. Hence, production operation and remote management can be affected by multiple factors, both during planning and during operation. No matter how much testing is done in a lab, the answer seemingly comes down to, “we’ll just have to see how it performs in production” (a sketch of the kind of cross-site probe this implies follows this list). So, perhaps, a better title for this bullet might be “Stuff You’re Going To Learn Only After You Start The Engine.” Your team will most likely have a mix of personalities. Some will be okay with this, having learned from similar projects in their past; others will not be able to get past this point and will continually raise objections. Shoot the naysayer! Okay, not really, but seriously, adopt this mandate and make sure everyone on the team understands it.
  • Documentation. I cannot say enough about documenting early and often. Once the train has started, it’s infinitely more difficult to catch up. Start with good, highly reviewed requirements. Review them with the customer. Call the Architecture Review Board (ARB) to order and have it review and sign off. This is a complex environment with a lot of interdependencies; it’s not going to be simple to change one link without affecting many others. The more changes you can avoid, the more smoothly the process of getting the system into production will go.
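
As an illustration of the license-catalog point above, here is a minimal sketch of how such a catalog might be kept as a living artifact from day one. This is purely illustrative: the product names, field names, and expiry check are my assumptions, not details of the project described here.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class LicenseEntry:
    """One row in a hypothetical software license catalog."""
    product: str
    vendor: str
    entitled_instances: int
    deployed_instances: int
    expires: date
    site: str  # country or data center where the software is deployed


def audit(catalog: list[LicenseEntry], warn_days: int = 90) -> list[str]:
    """Flag over-deployed or soon-to-expire entries so the catalog stays current."""
    findings = []
    for entry in catalog:
        if entry.deployed_instances > entry.entitled_instances:
            findings.append(f"{entry.product} ({entry.site}): "
                            f"{entry.deployed_instances} deployed vs. {entry.entitled_instances} entitled")
        if entry.expires <= date.today() + timedelta(days=warn_days):
            findings.append(f"{entry.product} ({entry.site}): expires {entry.expires}")
    return findings


if __name__ == "__main__":
    # Hypothetical entries, for illustration only.
    catalog = [
        LicenseEntry("HypervisorX", "VendorA", 64, 70, date(2015, 12, 31), "US data center"),
        LicenseEntry("BackupSuite", "VendorB", 4, 4, date(2014, 6, 30), "DE data center"),
    ]
    for finding in audit(catalog):
        print(finding)
```

Even a sketch this small forces the questions that matter: what is entitled, what is actually deployed, where it sits, and when it lapses.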

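On the data-flow point, the honest answer really is to measure once you start the engine. The sketch below shows the kind of cross-site reachability and latency probe that tends to get written; the endpoint names and ports are placeholders of my own, not the actual customer environment.

```python
import socket
import time

# Placeholder endpoints -- in practice these would be the integration points
# sitting behind each firewall in each country.
ENDPOINTS = [
    ("app-gateway.us.internal.example", 443),
    ("app-gateway.de.internal.example", 443),
    ("storage-head.jp.internal.example", 8443),
]


def probe(host: str, port: int, attempts: int = 5, timeout: float = 3.0) -> dict:
    """Measure TCP connect round-trip times to one endpoint."""
    samples = []
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                samples.append((time.monotonic() - start) * 1000.0)  # milliseconds
        except OSError:
            samples.append(None)  # record failures alongside successes
    ok = [s for s in samples if s is not None]
    return {
        "endpoint": f"{host}:{port}",
        "reachable": f"{len(ok)}/{attempts}",
        "avg_ms": round(sum(ok) / len(ok), 1) if ok else None,
        "worst_ms": round(max(ok), 1) if ok else None,
    }


if __name__ == "__main__":
    for host, port in ENDPOINTS:
        print(probe(host, port))
```

Run from each site against each dependency, numbers like these turn “we’ll just have to see how it performs in production” from a shrug into a baseline you can compare against when something degrades.
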
Most important, and I cannot stress this enough, is building a team environment to accomplish the mission. Transforming a concept into a production-ready operational system requires a large number of people working cooperatively to address the hurdles. The solution as designed on paper will hardly ever perfectly match what is deployed in the field, for the reasons stated above. This project relies heavily on a Program Management Organization with representatives from engineering, managed services, field services, product and executive leadership to stay on track. Developing a sense of team within this group is critical to providing the appropriate leadership to the project as a whole. Subsequently, we also formed the Architecture Review Board (ARB), composed of key technical individuals for each aspect of the solution, to address and find solutions for major technical issues that emerged throughout the project. In this way we ensured the responses were holistic in nature: not just focused on the specific problem, but also providing alternatives that would work within the scope of the entire project.

More Stories By JP Morgenthal

JP is a Principal Solutions Architect in EMC’s Cloud and Virtual Data Center Services Ranger group, where he focuses on cloud advisory services, IT transformation, and application portfolio rationalization. He has been a vocal advocate for cloud computing for many years, even serving as a formal Cloud Evangelist at a previous company. He is well known in the IT world as an expert in IT strategy and cloud computing. JP has more than 25 years of expertise in enterprise technology. He is a prolific blogger and author on the topics of integration, software development and cloud computing, and has written or co-authored four books, including his most recent publication, “Cloud Computing: Assessing the Risks.” He serves as the Lead Cloud Computing editor for InfoQ.
