Tuesday, 25 February 2014

Installing Hadoop on a single node: CentOS 5, CentOS 6, Fedora 18, RHEL 5, RHEL 6, Ubuntu

CentOS 5, CentOS 6, Fedora 18, RHEL5, RHEL6

  1. Make sure to grab the repo file, replacing the bracketed list with the tag matching your distribution (centos5, centos6, fedora17, or fedora18):
    wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/bigtop/bigtop-0.6.0/repos/[centos5|centos6|fedora17|fedora18]/bigtop.repo
    
  2. Browse through the artifacts
    yum search mahout
    
  3. Install the full Hadoop stack (or parts of it)
    sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\* hbase\* hive\* hue\*
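The three steps above can be strung together. The `bigtop_repo_url` helper below is a hypothetical convenience for building the repo-file URL from a platform tag — it is not part of Bigtop, just a sketch:

```shell
# Hypothetical helper (not part of Bigtop): build the bigtop.repo URL
# for a given platform tag (centos5, centos6, fedora17, or fedora18).
bigtop_repo_url() {
  echo "http://www.apache.org/dist/bigtop/bigtop-0.6.0/repos/$1/bigtop.repo"
}

# On CentOS 6 the full flow would then be:
#   wget -O /etc/yum.repos.d/bigtop.repo "$(bigtop_repo_url centos6)"
#   yum search mahout
#   sudo yum install hadoop\* hive\*
bigtop_repo_url centos6
```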
    

SLES 11, OpenSUSE

  1. Make sure to grab the repo file (run as root, replacing the bracketed list with sles11 or opensuse12):
    wget http://www.apache.org/dist/bigtop/bigtop-0.6.0/repos/[sles11|opensuse12]/bigtop.repo
    mv bigtop.repo /etc/zypp/repos.d/bigtop.repo
    
  2. Refresh zypper so it starts looking at the newly added repo
    zypper refresh
    
  3. Browse through the artifacts
    zypper search mahout
    
  4. Install the full Hadoop stack (or parts of it)
    zypper install hadoop\* flume\* mahout\* oozie\* whirr\* hive\* hue\*
    

Ubuntu (64 bit, lucid, precise, quantal)

  1. Install the Apache Bigtop GPG key
    wget -O- http://archive.apache.org/dist/bigtop/bigtop-0.6.0/repos/GPG-KEY-bigtop | sudo apt-key add -
    
  2. Make sure to grab the repo file:
    sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/bigtop/bigtop-0.6.0/repos/`lsb_release --codename --short`/bigtop.list
    
  3. Update the apt cache
    sudo apt-get update
    
  4. Browse through the artifacts
    apt-cache search mahout
    
  5. Install bigtop-utils
    sudo apt-get install bigtop-utils
    
  6. Make sure that you also have the latest JDK installed on your system. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution. If your JDK is installed in a non-standard location, add the line below to the /etc/default/bigtop-utils file
    export JAVA_HOME=XXXX
    
  7. Install the full Hadoop stack (or parts of it)
    sudo apt-get install hadoop\* flume\* mahout\* oozie\* whirr\* hive\* hue\*
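Since only lucid, precise, and quantal are supported here, it can be worth guarding the repo-file step against an unsupported release. The `supported_codename` function below is a hypothetical addition, not part of Bigtop:

```shell
# Hypothetical guard (not part of Bigtop): check that the running Ubuntu
# release codename is one the 0.6.0 repo layout supports.
supported_codename() {
  case "$1" in
    lucid|precise|quantal) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage, wrapping steps 2-3 above:
#   codename=$(lsb_release --codename --short)
#   supported_codename "$codename" || { echo "unsupported: $codename" >&2; exit 1; }
#   sudo wget -O /etc/apt/sources.list.d/bigtop.list \
#     "http://www.apache.org/dist/bigtop/bigtop-0.6.0/repos/${codename}/bigtop.list"
#   sudo apt-get update
supported_codename precise && echo ok
```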
    

Running Hadoop

After installing Hadoop packages onto your Linux box, make sure that:
  1. You have the latest JDK installed on your system. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution (e.g. some Debian-based distributions ship a JDK as part of their extended package set). If your JDK is installed in a non-standard location, add the line below to the /etc/default/bigtop-utils file
    export JAVA_HOME=XXXX
    
  2. Format the namenode
    sudo /etc/init.d/hadoop-hdfs-namenode init
    
  3. Start the necessary Hadoop services. E.g. for a pseudo-distributed Hadoop installation you can simply do:
    for i in hadoop-hdfs-namenode hadoop-hdfs-datanode ; do sudo service $i start ; done
    (i.e. sudo service hadoop-hdfs-namenode start, then sudo service hadoop-hdfs-datanode start)
    
  4. Make sure to create the sub-directory structure in HDFS before starting the YARN daemons:
    sudo /usr/lib/hadoop/libexec/init-hdfs.sh
    
  5. Now start YARN daemons:
    sudo service hadoop-yarn-resourcemanager start
    sudo service hadoop-yarn-nodemanager start
    
  6. Enjoy your cluster
    hadoop fs -ls -R /
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples*.jar pi 10 1000
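For the JAVA_HOME line mentioned in step 1, a common trick is to derive the JDK root from the `java` binary on the PATH. The `detect_java_home` helper below is a hypothetical sketch, not part of Bigtop, and the example path is illustrative:

```shell
# Hypothetical helper (not part of Bigtop): given the resolved path of the
# java binary, strip the trailing /bin/java to recover the JDK root, which
# is what /etc/default/bigtop-utils expects in JAVA_HOME.
detect_java_home() {
  java_bin="$1"            # normally: $(readlink -f "$(which java)")
  echo "${java_bin%/bin/java}"
}

# Usage (appends the export line as root):
#   echo "export JAVA_HOME=$(detect_java_home "$(readlink -f "$(which java)")")" \
#     | sudo tee -a /etc/default/bigtop-utils
detect_java_home /usr/lib/jvm/java-6-openjdk-amd64/bin/java
# prints /usr/lib/jvm/java-6-openjdk-amd64
```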
