Disaster Recovery Ascends to the Cloud | Part 2: Deployment Considerations

Realizing an economical alternative to traditional DR

As mentioned in Part I of this series, cloud technology has introduced a viable alternative to the practice of creating secondary sites for disaster recovery (DR), promising to save IT organizations hundreds of thousands or even millions of dollars in infrastructure and maintenance. While the cost reduction associated with replacing dedicated DR infrastructure is intuitive, the ability of cloud solutions to meet the recovery time and recovery point objectives (RTOs and RPOs) dictated by the business is often less well understood.

Part I suggested that the two key considerations in recovering IT operations from a disaster are (1) regaining access to data and (2) regaining access to applications. Today’s cloud-integrated storage and cloud storage gateways can push backups or live data sets to the cloud easily and securely, addressing the first element of a cloud DR solution. With this in mind, let’s examine two strategies for application recovery using cloud-based DR:

Strategy 1: Data copies in-cloud, application recovery off-cloud
One of the simpler approaches to cloud-based DR stores data copies in the cloud and allows external, off-cloud access by applications in the case of a primary site outage. With data in the cloud accessible from nearly anywhere, applications may be recovered at a secondary site if they cannot be recovered at the primary site.

The advantage of this approach is the elimination of dedicated secondary storage infrastructure for DR. The disadvantage is the requirement for a secondary site for application recovery.
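
Conceptually, the restore side of this strategy amounts to pulling the cloud copies back down at whichever site ends up hosting the recovered applications. The sketch below is a minimal illustration, assuming the backups were pushed to Amazon S3 (any object store with a comparable API would work); the bucket, object key, and local path are hypothetical placeholders.

```python
# Minimal sketch: pull a backup object from cloud storage down to the
# off-cloud recovery site. Bucket, key, and path names are hypothetical.
import boto3

s3 = boto3.client("s3")

def restore_backup(bucket: str, key: str, local_path: str) -> None:
    """Download one backup object to the recovery site."""
    s3.download_file(bucket, key, local_path)
    print(f"Restored s3://{bucket}/{key} to {local_path}")

if __name__ == "__main__":
    restore_backup("dr-backups", "db/orders-full.bak", "/restore/orders-full.bak")
```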

An improvement to this approach involves leveraging a hosting provider as the application recovery site, where new application servers can be provisioned on-demand in case of a disaster. Using a hosted recovery site can be considerably faster than restoring and rebuilding the original application environment and more economical than maintaining a dedicated secondary site. However, recovery times may be impacted by the time it takes for the hosting provider to provision new servers.
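
To make that provisioning delay concrete, the hedged sketch below launches a single recovery server on demand and measures how long it takes to reach a running state, assuming the hosting provider exposes an AWS EC2-style API; the image ID and instance type are hypothetical.

```python
# Minimal sketch: provision one recovery server on demand and time how long
# it takes to come up. Assumes an EC2-style API; the image ID is hypothetical.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_and_time(image_id: str, instance_type: str = "m3.large") -> str:
    start = time.time()
    resp = ec2.run_instances(ImageId=image_id, InstanceType=instance_type,
                             MinCount=1, MaxCount=1)
    instance_id = resp["Instances"][0]["InstanceId"]
    # The wait for a running state is part of the recovery time objective.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    print(f"{instance_id} running after {time.time() - start:.0f}s")
    return instance_id

if __name__ == "__main__":
    provision_and_time("ami-recovery-app")  # hypothetical image at the provider
```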

[Figure: Application recovery off-cloud versus in-cloud]

Strategy 2: Data copies in-cloud, application recovery in-cloud
A more complete approach to cloud-based DR enables both data and application recovery in the cloud, with no need for a secondary site for either applications or storage. Cloud compute-as-a-service represents an attractive environment for recovering applications by rapidly spinning up new virtual servers.

When using a cloud storage gateway to replicate data to the cloud, consider cloud gateways with the ability to run in the cloud. Cloud servers can then attach to the gateway to facilitate application recovery.
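
How the recovery servers attach depends on the protocols the gateway exposes; iSCSI and NFS are common. As one hedged illustration, the sketch below logs a cloud server into an iSCSI target presented by an in-cloud gateway using the standard iscsiadm utility; the portal address and target IQN are hypothetical.

```python
# Minimal sketch: attach a cloud recovery server to a volume presented by an
# in-cloud storage gateway over iSCSI. Portal and IQN values are hypothetical.
import subprocess

GATEWAY_PORTAL = "10.0.1.15:3260"                      # gateway's iSCSI portal
TARGET_IQN = "iqn.2014-11.com.example:recovery-vol1"   # recovered volume

def attach_recovery_volume() -> None:
    # Discover the targets the gateway advertises.
    subprocess.check_call(["iscsiadm", "-m", "discovery",
                           "-t", "sendtargets", "-p", GATEWAY_PORTAL])
    # Log in so the volume appears to the server as a local block device.
    subprocess.check_call(["iscsiadm", "-m", "node", "-T", TARGET_IQN,
                           "-p", GATEWAY_PORTAL, "--login"])

if __name__ == "__main__":
    attach_recovery_volume()
```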

The process of application recovery may involve activating servers and applications via a cloud provider’s catalog. Although this is much faster than provisioning new physical hardware, it can still be time-consuming, particularly when attempting to recover tens or hundreds of servers.
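
The sketch below illustrates the shape of that work: even with an API-driven catalog, every server must be launched and booted, and those minutes multiply across a large estate. It assumes an EC2-style API; the role-to-image catalog is purely illustrative.

```python
# Minimal sketch: activate a fleet of recovery servers from catalog images.
# Assumes an EC2-style API; the role-to-image mapping is purely illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

RECOVERY_CATALOG = {            # hypothetical catalog entries
    "web": "ami-web01",
    "app": "ami-app01",
    "db":  "ami-db01",
}

def recover_fleet(catalog: dict) -> list:
    instance_ids = []
    for role, image_id in catalog.items():
        print(f"launching {role} server from {image_id}")
        resp = ec2.run_instances(ImageId=image_id, InstanceType="m3.large",
                                 MinCount=1, MaxCount=1)
        instance_ids.append(resp["Instances"][0]["InstanceId"])
    # Launch and boot times add up; with tens or hundreds of images, this loop
    # plus application start-up and validation dominates the overall recovery time.
    return instance_ids
```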

Alternatively, virtual machines that resided on-premise can be reinstantiated in the cloud, much like failing over virtual machines between hypervisors. This is possible when the same hypervisor runs on-premise and in the cloud. However, while moving virtual machine (VM) images between like hypervisors is generally straightforward, many cloud providers may not offer sufficient administrative privilege in their virtual compute environments or may not run a hypervisor compatible with the on-premise environment.

To get around these limitations and incompatibilities, an emerging option involves importing on-premise VMs into the cloud via conversion scripts and tools. An important consideration is ensuring that these conversion scripts and tools operate bidirectionally, meaning they allow a way to eventually export VMs back to the on-premise environment.
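
As a concrete, hedged example of such a conversion step, the sketch below wraps the widely used qemu-img utility to convert a VMware VMDK into a format a cloud environment might accept, and back again to preserve a failback path; the file names and formats are illustrative, and real import tooling typically layers validation and other steps on top of the raw format conversion.

```python
# Minimal sketch: convert a VM disk image between hypervisor formats with
# qemu-img, in both directions to preserve a failback path. Paths and formats
# are illustrative; real import tooling adds more than this single step.
import subprocess

def convert_image(src: str, dst: str, src_fmt: str, dst_fmt: str) -> None:
    """Convert a disk image from src_fmt to dst_fmt."""
    subprocess.check_call(["qemu-img", "convert",
                           "-f", src_fmt, "-O", dst_fmt, src, dst])

if __name__ == "__main__":
    # Import: on-premise VMware image into a cloud-friendly format.
    convert_image("appserver.vmdk", "appserver.qcow2", "vmdk", "qcow2")
    # Export (failback): convert the recovered image back for the on-premise hypervisor.
    convert_image("appserver.qcow2", "appserver.vmdk", "qcow2", "vmdk")
```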

The keys to success are testing and working with a partner you trust
While there are a variety of ways to deploy DR in the cloud, each comes with its own subtleties and tradeoffs. Not surprisingly, the devil is often in the details.

Keep in mind that an important aspect of any DR strategy is conducting regular testing and validation. Additionally, working with technology partners who understand the advantages and tradeoffs of DR in the cloud can be particularly helpful.

Like any major IT undertaking, DR in the cloud requires significant planning — but the payoff can be substantial if reducing disaster recovery costs and improving availability are important to your business.


More Stories By Nicos Vekiarides

Nicos Vekiarides is the Chief Executive Officer & Co-Founder of TwinStrata. He has spent over 20 years in enterprise data storage, both as a business manager and as an entrepreneur and founder in startup companies.

Prior to TwinStrata, he served as VP of Product Strategy and Technology at Incipient, Inc., where he helped deliver the industry's first storage virtualization solution embedded in a switch. Prior to Incipient, he was General Manager of the storage virtualization business at Hewlett-Packard. Vekiarides came to HP with the acquisition of StorageApps where he was the founding VP of Engineering. At StorageApps, he built a team that brought to market the industry's first storage virtualization appliance. Prior to StorageApps, he spent a number of years in the data storage industry working at Sun Microsystems and Encore Computer. At Encore, he architected and delivered Encore Computer's SP data replication products that were a key factor in the acquisition of Encore's storage division by Sun Microsystems.
