Hadoop Learning, Day 2

Operating Hadoop via APIs

The Shell interface

# Common shell commands for operating HDFS
  -mkdir            create a directory on HDFS       hadoop fs -mkdir /data
  -ls               list a directory                 hadoop fs -ls /
  -ls -R            list a directory and all of its subdirectories recursively
  -put              upload a file                    hadoop fs -put data.txt /data/input
  -moveFromLocal    upload a file and delete the local copy (like cut: Ctrl+X)
  -copyFromLocal    upload a file, same as -put
  -copyToLocal      download a file                  hadoop fs -copyToLocal /data/input/data.txt
  -get              download a file                  hadoop fs -get /data/input/data.txt
  -rm               delete a file                    hadoop fs -rm /data/input/data.txt
  -getmerge         merge all files in a directory, then download the result (example below)
  -cp               copy a file:                     hadoop fs -cp /data/input/data.txt /data/input/data01.txt
  -mv               move a file:                     hadoop fs -mv /data/input/data.txt /data/input/data02.txt
  -count            count items under a path, showing the number of directories, number of files, and total file size (example below)
  -text, -cat       print a file's contents          hadoop fs -cat /data/input/data.txt
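
The options above that lack an example work the same way; for instance, -count and -getmerge (merged.txt is an assumed name for the merged local copy):

  hadoop fs -count /data/input
  hadoop fs -getmerge /data/input ./merged.txt

-count prints the directory count, file count, and total size in bytes ahead of the path; -getmerge concatenates every file under /data/input into the single local file merged.txt.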

The Java interface
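
Each of the shell commands above has a Java counterpart in the org.apache.hadoop.fs.FileSystem API. The sketch below mirrors -mkdir, -put, and -ls; the class name HdfsDemo and the NameNode address hdfs://localhost:9000 are assumptions, so substitute your own cluster's address:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDemo {
   public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // assumption: the NameNode listens on hdfs://localhost:9000; adjust to your cluster
      conf.set("fs.defaultFS", "hdfs://localhost:9000");
      FileSystem fs = FileSystem.get(conf);
      // equivalent of: hadoop fs -mkdir /data/input
      fs.mkdirs(new Path("/data/input"));
      // equivalent of: hadoop fs -put data.txt /data/input
      fs.copyFromLocalFile(new Path("data.txt"), new Path("/data/input/data.txt"));
      // equivalent of: hadoop fs -ls /data/input
      for (FileStatus status : fs.listStatus(new Path("/data/input"))) {
         System.out.println(status.getPath());
      }
      fs.close();
   }
}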

MapReduce

WordCount

The WordCount program is a demo Java program that ships with Hadoop; it counts the number of occurrences of each word in a text file.

hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar

Running the WordCount program

vim word.txt
I Love China
I Love Beijing
hdfs dfs -mkdir /huatec
hdfs dfs -put word.txt /huatec
cd /data/module/hadoop-2.7.3/share/hadoop/mapreduce/
hadoop jar hadoop-mapreduce-examples-2.7.3.jar  wordcount /huatec/word.txt /huatec/output
hdfs dfs -ls /huatec/output
hdfs dfs -cat  /huatec/output/part-r-00000
Beijing    1
China    1
I    2
Love    2

Developing the WordCount program

Writing the WordCount program [Java]

(1) Import the required jar packages into IDEA (IDEA supports importing whole folders)
(2) Write the code: the Mapper function, the Reducer function, and the main entry point
(3) Package the code into a jar
(4) Debug the program

Jar packages:
hadoop-2.7.3/share/hadoop/common
hadoop-2.7.3/share/hadoop/mapreduce
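
If you build with Maven rather than importing these folders by hand, a single dependency pulls in the same classes; this is a sketch, with the version pinned to match the cluster's Hadoop 2.7.3:

<dependency>
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-client</artifactId>
   <version>2.7.3</version>
</dependency>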

WordCountMapper code

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
   @Override
   protected void map(LongWritable key, Text value, Context context)
         throws IOException, InterruptedException {
      /*
       * key: the input key (the byte offset of the line)
       * value: one line of input, e.g. "I Love Beijing"
       * context: the Map context used to emit output
       */
      String data = value.toString();
      // split the line into words on single spaces
      String[] words = data.split(" ");
      // emit (word, 1) for every word
      for (String w : words) {
         context.write(new Text(w), new LongWritable(1));
      }
   }
}

WordCountReducer code

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable>{
   @Override
   protected void reduce(Text k3, Iterable<LongWritable> v3, Context context) throws IOException, InterruptedException {
      // v3 is the collection of values for this key; each element is one v2 emitted by the map phase
      long total = 0;
      for (LongWritable l : v3) {
         total = total + l.get();
      }
      // emit (word, total count)
      context.write(k3, new LongWritable(total));
   }
}

The main entry point

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountMain {

   public static void main(String[] args) throws Exception {
      // a job = map + reduce
      Configuration conf = new Configuration();
      // create the Job instance
      Job job = Job.getInstance(conf);
      // the class that marks the job's entry point
      job.setJarByClass(WordCountMain.class);
      // the job's mapper and its output key/value types
      job.setMapperClass(WordCountMapper.class);
      job.setMapOutputKeyClass(Text.class);
      job.setMapOutputValueClass(LongWritable.class);
      // the job's reducer and its output key/value types
      job.setReducerClass(WordCountReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(LongWritable.class);
      // the job's input and output paths
      FileInputFormat.setInputPaths(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));       
      // submit the job, wait for it to finish, and exit nonzero on failure
      System.exit(job.waitForCompletion(true) ? 0 : 1);
   }
}
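
One refinement not shown above: word counting is associative and commutative, so the Reducer class can double as a combiner, pre-summing counts on the map side and shrinking the data shuffled to the reducers. A single extra line in WordCountMain (anywhere before the job is submitted) enables it:

      job.setCombinerClass(WordCountReducer.class);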

Package the code into a jar and upload it to the pseudo-distributed environment.
Debugging the program (MapReduce refuses to write to an output directory that already exists, so first remove the /huatec/output directory left over from the earlier run):

hdfs dfs -rm -r /huatec/output
hadoop jar IDEA.jar /huatec/word.txt /huatec/output
hdfs dfs -ls /huatec/output
hdfs dfs -cat  /huatec/output/part-r-00000
    1
Beijing    1
China    1
I    2
Love    2
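
The unnamed key with count 1 is an artifact of the split: a blank line or a doubled space in word.txt produces an empty token, which the Mapper counts like any other word. A one-line guard in the map loop (a suggested addition, not part of the original code) filters such tokens out:

         if (w.isEmpty()) continue; // skip empty tokens produced by extra whitespace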
