Compiling and Installing Hadoop 2.2.0 on CentOS: A Detailed Guide

Build environment: CentOS 6.4, 64-bit

1. Install the JDK

This guide uses a 64-bit machine, so download the matching 64-bit JDK from http://www.oracle.com/technetwork/cn/java/javase/downloads/jdk7-downloads-1880260-zhs.html. Pick the appropriate JDK version, extract it, and then configure the environment variables:

vi /etc/profile

Note: some people prefer to set this in the current user's profile; here it is configured globally.

export JAVA_HOME=/opt/jdk1.7
export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile

Test whether the JDK installed correctly: java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

2. Pre-build preparation (Maven)

Download Maven from an official mirror. You could build it from source, but the prebuilt binary is sufficient here:
wget http://mirror.bit.edu.cn/apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.zip
After extracting, configure the environment variables in /etc/profile the same way:
export MAVEN_HOME=/opt/maven3.1.1
export PATH=$PATH:$MAVEN_HOME/bin
Verify the configuration: mvn -version

Apache Maven 3.1.1 (0728685237757ffbf44136acec0402957f723d9a; 2013-09-17 23:22:22+0800)
Maven home: /opt/maven3.1.1
Java version: 1.7.0_45, vendor: Oracle Corporation
Java home: /opt/jdk1.7/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"

3. Compile Hadoop

This is where you will run into all sorts of headaches.

First, download the Hadoop source from an official mirror:

wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
If you are on a 32-bit machine, you can download the official prebuilt package directly; the prebuilt package will not run on a 64-bit machine. Since Maven's overseas servers may be unreachable, first give Maven a domestic (Chinese) mirror: in the Maven directory, edit conf/settings.xml and add the following inside <mirrors></mirrors>, leaving the existing entries alone:

<mirror>
<id>nexus-osc</id>
<mirrorOf>*</mirrorOf>
<name>Nexusosc</name>
<url>http://maven.oschina.net/content/groups/public/</url>
</mirror>

Likewise, add a new profile inside <profiles></profiles>:
<profile>
<id>jdk-1.7</id>
<activation>
<jdk>1.7</jdk>
</activation>
<repositories>
<repository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
Run a clean build:
cd hadoop-2.2.0-src
mvn clean install -DskipTests

You will hit an error:
[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.2.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not return a version -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-common

Building Hadoop 2.2.0 requires protoc 2.5.0, so you also need to download protobuf from https://code.google.com/p/protobuf/downloads/list; make sure you grab version 2.5.0. Before building and installing protoc, install a few dependencies: gcc, gcc-c++, and make (skip any that are already installed):

yum install gcc
yum install gcc-c++
yum install make

Install protoc:
tar -xvf protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0
./configure --prefix=/opt/protoc/
make && make install
After installing, configure the environment variables for protoc; the process is the same as above.
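A minimal sketch of the /etc/profile entries, assuming the --prefix=/opt/protoc/ used above:

export PATH=$PATH:/opt/protoc/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/protoc/lib

After source /etc/profile, protoc --version should print libprotoc 2.5.0.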
Run the clean build again:

cd hadoop-2.2.0-src
mvn clean install -DskipTests


The next build fails with a new error: [ERROR] class file for org.mortbay.component.AbstractLifeCycle not found

This one is a known bug. Per the official notes at https://issues.apache.org/jira/browse/HADOOP-10110, add the following to hadoop-common-project/hadoop-auth/pom.xml:

<dependency>
<groupId>org.mortbay.jetty</groupId>
<artifactId>jetty-util</artifactId>
<scope>test</scope>
</dependency>

Building again, you hit another error: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-common:

That one is caused by a missing zlib development package (zlib1g-dev installed via apt-get on Debian/Ubuntu; on CentOS the equivalent is zlib-devel via yum). But don't rush back into the build yet or you'll just hit more errors: you also need the cmake, openssl-devel, and ncurses-devel dependencies:

yum install zlib-devel
yum install cmake
yum install openssl-devel
yum install ncurses-devel

OK, now the build can proceed:
mvn package -Pdist,native -DskipTests -Dtar

Now you can take out your phone and play a game for a while; it's a long wait!
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [3.709s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [2.229s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [5.270s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.388s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [3.485s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [8.655s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [7.782s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [5.731s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:52.476s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [9.935s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.110s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:58.347s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [26.915s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [17.002s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [5.292s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.073s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.335s]
[INFO] hadoop-yarn-api ................................... SUCCESS [54.478s]
[INFO] hadoop-yarn-common ................................ SUCCESS [39.215s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.241s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [15.601s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [21.566s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [4.754s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [20.625s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.755s]
[INFO] hadoop-yarn-client ................................ SUCCESS [6.748s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.155s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [4.661s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.160s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [36.090s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [2.753s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.151s]
[INFO] hadoop-yarn-project ............................... SUCCESS [4.771s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [24.870s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [3.812s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [15.759s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [6.831s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [8.126s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [2.320s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [9.596s]
[INFO] hadoop-mapreduce .................................. SUCCESS [3.905s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [7.118s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [11.651s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [2.671s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [10.038s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [6.062s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [4.104s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [4.210s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [9.419s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [2.306s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.037s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [21.579s]
[INFO] Apache Hadoop Client .............................. SUCCESS [7.299s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [7.347s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11:53.144s
[INFO] Finished at: Fri Nov 22 16:58:32 CST 2013
[INFO] Final Memory: 70M/239M
[INFO] ------------------------------------------------------------------------

Once you see the output above, the build is done.
The build output is at: hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0

[root@localhost bin]# ./hadoop version
Hadoop 2.2.0
Subversion Unknown -r Unknown
Compiled by root on 2013-11-22T08:47Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This command was run using /data/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar

You can see the Hadoop version from this output.
[root@localhost hadoop-2.2.0]# file lib//native/*
lib//native/libhadoop.a: current ar archive
lib//native/libhadooppipes.a: current ar archive
lib//native/libhadoop.so: symbolic link to `libhadoop.so.1.0.0'
lib//native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
lib//native/libhadooputils.a: current ar archive
lib//native/libhdfs.a: current ar archive
lib//native/libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
lib//native/libhdfs.so.0.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped

Note the "ELF 64-bit LSB shared object, x86-64" parts: if you download the official prebuilt package, these read 32-bit instead.

Hadoop compiled successfully; now we can deploy the cluster.

4. Cluster deployment preparation

You need two or more machines. Change the hostnames, set up passwordless SSH, disable the firewall, and so on.

4.1 Create a new user

useradd hadoop
su hadoop
Note that some of the operations below require root privileges.
4.2 Change the hostname


vi /etc/sysconfig/network

Set HOSTNAME=master in that file, then apply it to the current session:

hostname master

Log out and back in, and the prompt becomes:

[root@master ~]#

The hostname is now master; the change has taken effect.
4.3 Modify hosts

vi /etc/hosts
Add each host's IP and hostname:

192.168.10.10 master
192.168.10.11 slave1
4.4 Passwordless SSH

Check which SSH packages are installed:

[root@localhost data]# rpm -qa|grep ssh
libssh2-1.4.2-1.el6.x86_64
openssh-5.3p1-84.1.el6.x86_64
openssh-server-5.3p1-84.1.el6.x86_64
openssh-clients is missing:
yum install openssh-clients
Now configure passwordless login:

[hadoop@master ~]$ cd /home/hadoop/
[hadoop@master ~]$ ssh-keygen -t rsa
Just press Enter at every prompt.
[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ cp id_rsa.pub authorized_keys
[hadoop@master .ssh]$ chmod 600 authorized_keys
Copy authorized_keys to every machine that should accept passwordless logins:
[hadoop@master .ssh]$ scp authorized_keys hadoop@192.168.10.11:/home/hadoop/.ssh/
Remember to do this as the hadoop user, or you will get permission errors. Normally passwordless login works at this point, but it may still prompt for a password; after some digging, this turned out to be a CentOS 6.4 quirk (see "Notes on CentOS passwordless SSH failures").
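The usual culprit is sshd's strict permission checking; a hedged fix, run on each target machine (assuming the default home layout):

chmod 700 /home/hadoop
chmod 700 /home/hadoop/.ssh
chmod 600 /home/hadoop/.ssh/authorized_keys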
[hadoop@master .ssh]$ ssh slave1
Last login: Mon Nov 25 14:49:25 2013 from master
[hadoop@slave1 ~]$
The prompt now shows slave1, so it worked.

5. Cluster configuration

Before configuring, create three directories under the home directory to hold the NameNode metadata, DataNode blocks, and temporary data:
[hadoop@master ~]$mkdir -p dfs/name
[hadoop@master ~]$mkdir -p dfs/data
[hadoop@master ~]$mkdir -p temp
Move the distribution built earlier into the hadoop user's directory, minding directory ownership.
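A sketch of the move, assuming the source tree lives under /data as in the hadoop version output above (adjust paths to your layout):

mv /data/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0 /home/hadoop/
chown -R hadoop:hadoop /home/hadoop/hadoop-2.2.0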
Now on to the configuration files.

5.1 hadoop-env.sh

Find JAVA_HOME and change it to the actual JDK path.
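For example, with the JDK installed in section 1:

export JAVA_HOME=/opt/jdk1.7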

5.2 yarn-env.sh

Same as 5.1: point JAVA_HOME at the real JDK path.

5.3 slaves

List all the slave nodes in this file, one per line.
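With the two-machine setup above, the slaves file contains just:

slave1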

5.4 core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/temp</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
Note that fs.defaultFS is the new property name in 2.2.0, replacing the old fs.default.name.

5.5 hdfs-site.xml
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
New: dfs.namenode.name.dir replaces the old dfs.name.dir, and dfs.datanode.data.dir replaces the old dfs.data.dir.
dfs.replication sets the number of replicas per data block. With rack awareness, Hadoop places the default 3 replicas as two on one rack and one on another, choosing the nearest block to read by network distance; cross-rack reads are rare unless a whole rack goes down.

5.6 mapred-site.xml

Here you need to copy mapred-site.xml.template to a new name:
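For example, from the configuration directory (Hadoop 2.2.0 keeps its config files under etc/hadoop in the distribution):

cp mapred-site.xml.template mapred-site.xml

Then add: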
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>

The new compute framework does away with a physical JobTracker, so there is no mapreduce.jobtracker.address to set; instead you name a framework, here yarn. (Note: Hadoop 2.2 also supports third-party compute frameworks, but they are not covered here.)
Once everything is configured, copy all of $HADOOP_HOME, including the hadoop directory, to the other nodes.
5.7 yarn-site.xml

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>

That's the basic configuration done; copy everything to the other slave nodes.
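A sketch of that copy, assuming the passwordless SSH set up in section 4.4:

scp -r /home/hadoop/hadoop-2.2.0 hadoop@slave1:/home/hadoop/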
6. Start Hadoop

You can set Hadoop environment variables here as well; a sketch follows.
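A hedged sketch of the /etc/profile entries, mirroring the JDK and Maven sections (assuming the install path used below):

export HADOOP_HOME=/home/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin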

6.1 Format the NameNode

[hadoop@master hadoop]$ cd /home/hadoop/hadoop-2.2.0/bin/
[hadoop@master bin]$ ./hdfs namenode -format
6.2 Start HDFS
[hadoop@master bin]$ cd ../sbin/
[hadoop@master sbin]$ ./start-dfs.sh
At this point, running jps on master should show the NameNode and SecondaryNameNode processes, and jps on the slaves should show DataNode.
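On master, the jps output looks something like this (PIDs will differ):

3521 NameNode
3701 SecondaryNameNode
3856 Jps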

6.3 Start YARN
[hadoop@master sbin]$ ./start-yarn.sh

master should now have a ResourceManager process, and the slaves should have NodeManager processes.

Check the cluster status: ./bin/hdfs dfsadmin -report

Check file block composition: ./bin/hdfs fsck / -files -blocks

View HDFS: http://192.168.10.10:50070

View the ResourceManager (it runs on master): http://192.168.10.10:8088

7. Things to watch out for during installation

7.1 Mind the versions, and whether the machine is 32-bit or 64-bit.

7.2 Install the dependency packages.

7.3 Watch for stray spaces in the config files, especially when copying from elsewhere.

7.4 Disable the firewall on all nodes.

If you see exceptions like "no route to host", it's almost always a firewall that wasn't turned off.

Remember to switch to the root account before disabling it:
(1) Permanent, survives reboots:

Enable: chkconfig iptables on

Disable: chkconfig iptables off

(2) Immediate, lost after reboot:

Enable: service iptables start

Disable: service iptables stop
7.5 DataNode shuts down right after starting

This is usually because the NameNode's and DataNode's clusterIDs don't match; see "Fixing DataNodes that shut down right after starting in a Hadoop cluster". Other odd exceptions you'll just have to Google.
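A hedged way to check, given the dfs directories configured in section 5: compare the clusterID lines of the two VERSION files (the data one lives on the DataNode).

cat /home/hadoop/dfs/name/current/VERSION
cat /home/hadoop/dfs/data/current/VERSION

If they differ, the common fix is to stop HDFS, clear the DataNode's dfs/data directory, and start it again.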
8. Run the test examples
[hadoop@master bin]$ ./yarn jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar randomwriter /home/hadoop/dfs/input/
Note: don't use -jar here, or you'll get: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/ProgramDriver
[hadoop@master bin]$ ./yarn jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /home/hadoop/dfs/input/ /home/hadoop/dfs/output/
Before running wordcount, create two files under the input directory:
$ mkdir -p dfs/input
$ echo 'hello,world' >> dfs/input/file1.in
$ echo 'hello, ruby' >> dfs/input/file2.in

./bin/hadoop fs -mkdir -p /home/hadoop/dfs/input
./bin/hadoop fs -put /home/hadoop/dfs/input /home/hadoop/test/test_wordcount/in

View the wordcount results:
$ bin/hadoop fs -cat /home/hadoop/test/test_wordcount/out/*
hadoop 1
hello 1
ruby