Hadoop Quickstart: Create and Better Manage Hadoop Clusters on Rackspace

Use Whirr to automate standup of your distributed cluster on Rackspace

We have previously provided a Quickstart guide to standing up Rackspace cloud servers (and have one for Amazon servers as well). These are very low-cost ways of building reliable, production-ready capabilities for enterprise use (commercial and government). And Bryan Halfpap has provided a Quickstart guide which shows you how to build a Hadoop cluster (leveraging Cloudera's CDH3). Using Bryan's guide you can have a Hadoop cluster up and running in under 20 minutes.

With this post we would like to provide some additional tips that flow from those posts. We will show you how to build clusters even faster using Whirr, another tool in common community use.

What is Whirr? Apache Whirr is a set of libraries for running cloud services. Here is more from http://whirr.apache.org/

Whirr provides:

  • A cloud-neutral way to run services. You don’t have to worry about the idiosyncrasies of each provider.
  • A common service API. The details of provisioning are particular to the service.
  • Smart defaults for services. You can get a properly configured system running quickly, while still being able to override settings as needed.

And the great news is you can use Whirr as a command line tool for deploying clusters.
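
To give you a feel for the workflow before we dive in: the whole cluster lifecycle is driven from a single properties file. The subcommands below are the ones used in this post (list-cluster is a handy extra for checking what is currently running):

whirr launch-cluster --config hadoop.properties    # stand up the cluster
whirr list-cluster --config hadoop.properties      # show the running instances
whirr destroy-cluster --config hadoop.properties   # tear it all down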

If you follow the tips below you can use Whirr to quickly stand up distributed clusters. Our assumption in this guide is that you have stood up Red Hat servers using our Rackspace tutorial, but if that is not the case you should be able to easily modify the tips below to suit your situation.

SSH into your Rackspace server from a terminal window:

sudo ssh root@<your-server-ip>

After logging in, it is always a good idea to make sure you have the latest packages. In Red Hat, type:

sudo yum upgrade

Now it is time to install Whirr. This is easy since you are running Red Hat, which uses yum, a package management application that makes software installation simple. Type:

yum install whirr

Your installation will be complete in under a minute.
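
As a quick sanity check that the install worked, you can ask Whirr to report its version (the exact string you see will depend on the release yum installed):

whirr version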

You will now need to generate a keypair for use with Whirr. This will let Whirr communicate securely with the cluster without needing passwords. To do that, enter the following command:

ssh-keygen -t rsa -P ''

You will see:

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):

Just hit “enter”.

You will see something like:

Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c6:31:f7:f5:97:e4:8c:b3:2a:f4:0d:a0:93:e4:c1:06 root@<your-hostname>
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|        E        |
|  o o .  ..      |
|   * = .  .=..   |
|  + S .  .o +o   |
|   * . .  o .    |
|    o .  o.      |
|     .  ...      |
|      ..         |
+-----------------+

Now you must define your Whirr cluster. You do that by creating a properties file; for simplicity, name it hadoop.properties. You will need your Rackspace username and API key to fill out the Whirr properties file. Your API key is found on your account page under "API Access".

You can create the properties file many ways. Here is how to do it in nano:

nano hadoop.properties

Now enter the following info in that file, substituting your login and API key info for what you see below:

whirr.cluster-name=myhadoopcluster
whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+hadoop-tasktracker
whirr.provider=cloudservers-us
whirr.identity=enteryourloginidfromrackspace
whirr.credential=[youuseyourownapi]
whirr.private-key-file=/root/.ssh/id_rsa
whirr.public-key-file=/root/.ssh/id_rsa.pub
whirr.cluster-user=newusers
whirr.hadoop-install-function=install_cdh_hadoop
whirr.hadoop-configure-function=configure_cdh_hadoop
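
One note on the whirr.instance-templates syntax before launching: it is a comma-separated list of "<count> <role>+<role>" groups, so scaling out is just a matter of changing the counts. As a hypothetical example (not part of this walkthrough), a one-master, three-worker cluster would look like:

whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,3 hadoop-datanode+hadoop-tasktracker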

Now to launch a cluster, type:

$ whirr launch-cluster --config hadoop.properties

This will take a few moments to run. As it runs you should see messages like:

Bootstrapping cluster
Configuring template
Starting 1 node(s) with roles [hadoop-datanode]
Configuring template
Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]

As things are started up, servers are being built automatically. Keep watching your e-mail: you will receive notices as each server is stood up. Remember, this is costing you money, so when you finish using your clusters you will want to terminate them. You can do that through Whirr (shown at the end of this post) or by simply deleting the servers from your Rackspace control panel.

Note the info being provided in the terminal window about the instances being stood up. As you skim it you will notice a couple of URLs that give you a web UI into the namenode and jobtracker. For example, mine are:

Namenode web UI available at http://50.56.211.206:50070

Jobtracker web UI available at http://50.56.211.206:50030

You will also see that a site file was created for you at:

/root/.whirr/myhadoopcluster/hadoop-site.xml
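
If you are curious, you can inspect the generated file before wiring it in:

cat ~/.whirr/myhadoopcluster/hadoop-site.xml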

You need to update your local Hadoop configuration to use this file. Type the following commands:

cp -r /etc/hadoop-0.20/conf.empty /etc/hadoop-0.20/conf.whirr
rm -f /etc/hadoop-0.20/conf.whirr/*-site.xml
cp ~/.whirr/myhadoopcluster/hadoop-site.xml /etc/hadoop-0.20/conf.whirr
alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.whirr 50
alternatives --display hadoop-0.20-conf

A proxy script was created for you at:

/root/.whirr/myhadoopcluster/hadoop-proxy.sh

You should now start that proxy. It is there for security reasons: all traffic from the network where your client is running is proxied through the master node of the cluster using an SSH tunnel. Run the following command to launch the script, and leave it running in its own terminal window while you use the cluster:

~/.whirr/myhadoopcluster/hadoop-proxy.sh

If that doesn't run, make sure the script is executable:

chmod +x ~/.whirr/myhadoopcluster/hadoop-proxy.sh

Then try again.
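
If you are wondering what the proxy script actually does: the file Whirr generates is typically just a long-running SSH SOCKS tunnel to the cluster's master node. Here is a minimal sketch of its contents, using the IP, user, and key path from this walkthrough for illustration (your generated file will differ in its details):

#!/bin/bash
# Sketch of a Whirr-generated hadoop-proxy.sh: opens a SOCKS proxy on
# local port 6666 and tunnels it over SSH to the cluster's master node.
# The process stays in the foreground for as long as the tunnel is up.
echo 'Running proxy to Hadoop cluster at 50.56.211.206. Use Ctrl-c to quit.'
ssh -i /root/.ssh/id_rsa \
  -o ConnectTimeout=10 \
  -o ServerAliveInterval=60 \
  -o StrictHostKeyChecking=no \
  -N -D 6666 newusers@50.56.211.206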

With the above, you are now able to use your Hadoop cluster.

Prove that by browsing HDFS:

hadoop fs -ls /

Now it is time to run a MapReduce job! We are going to use one of the example programs provided in the Hadoop installation, found in the file hadoop-examples-*.jar. First, let's review the list of options available from the program. See them by entering:

hadoop jar $HADOOP_HOME/hadoop-examples-*.jar

You will see:

An example program must be given as the first argument.
Valid program names are:
aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
dbcount: An example job that count the pageview counts from a database.
grep: A map/reduce program that counts the matches of a regex in the input.
join: A job that effects a join over sorted, equally partitioned datasets
multifilewc: A job that counts words from several files.
pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
pi: A map/reduce program that estimates Pi using monte-carlo method.
randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
randomwriter: A map/reduce program that writes 10GB of random data per node.
secondarysort: An example defining a secondary sort to the reduce.
sleep: A job that sleeps at each map and reduce task.
sort: A map/reduce program that sorts the data written by the random writer.
sudoku: A sudoku solver.
teragen: Generate data for the terasort
terasort: Run the terasort
teravalidate: Checking results of terasort
wordcount: A map/reduce program that counts the words in the input files.

So let's put this info to use. We will make a directory, put some data in it, and run the wordcount program:

$ export HADOOP_HOME=/usr/lib/hadoop
$ hadoop fs -mkdir input
$ hadoop fs -put $HADOOP_HOME/CHANGES.txt input
$ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount input output
$ hadoop fs -cat output/part-* | head
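
One gotcha worth knowing before you experiment further: Hadoop will refuse to write into an output directory that already exists, so running the same job twice fails with an error. Remove the output directory first (using the old 0.20-era syntax that ships with CDH3) and run it again:

$ hadoop fs -rmr output
$ hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount input output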

Now you are off and running.

You now have a platform capable of scaling to very large jobs. And it runs CDH3, the most reliable, capable distribution of Hadoop and related technologies. Let the fun begin!

But one final note: think about the lifecycle of your system. At some point you will need to spin it down and turn it off. To destroy the cluster gracefully using Whirr, enter this command:

whirr destroy-cluster --config hadoop.properties

Using the information above you can create and better manage Hadoop clusters on Rackspace very easily. This is how we create our CDH clusters. In future posts we will show you how to get data prepared for analysis and how to run some queries. We will also provide tips on how to use Cloudera's free management tools and how to upgrade to Cloudera Enterprise when you are ready.

 


More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder and partner at Cognitio Corp and publisher of CTOvision.com.
