This post shares a walkthrough of the hadoop-reduce source. Most readers are probably not yet very familiar with it, so it is offered here for reference; hopefully you will come away from it having learned something. Let's take a look.
A Map's output is distributed to the Reducers by the partition step; once a Reducer has finished its reduce operation, the results are written out through an OutputFormat.
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. ...
 */
package org.apache.hadoop.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.RawComparator;
import org.apache.hadoop.mapred.RawKeyValueIterator;

/**
 * Reduces a set of intermediate values which share a key to a smaller set of
 * values.
 */
public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

  public class Context
      extends ReduceContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
    public Context(Configuration conf, TaskAttemptID taskid,
                   RawKeyValueIterator input,
                   Counter inputKeyCounter,
                   Counter inputValueCounter,
                   RecordWriter<KEYOUT, VALUEOUT> output,
                   OutputCommitter committer,
                   StatusReporter reporter,
                   RawComparator<KEYIN> comparator,
                   Class<KEYIN> keyClass,
                   Class<VALUEIN> valueClass
                   ) throws IOException, InterruptedException {
      super(conf, taskid, input, inputKeyCounter, inputValueCounter,
            output, committer, reporter, comparator, keyClass, valueClass);
    }
  }

  /**
   * Called once at the start of the task.
   */
  protected void setup(Context context
                       ) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * This method is called once for each key. Most applications will define
   * their reduce class by overriding this method. The default implementation
   * is an identity function.
   */
  @SuppressWarnings("unchecked")
  protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context
                        ) throws IOException, InterruptedException {
    for (VALUEIN value : values) {
      context.write((KEYOUT) key, (VALUEOUT) value);
    }
  }

  /**
   * Called once at the end of the task.
   */
  protected void cleanup(Context context
                         ) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Advanced application writers can use the
   * {@link #run(org.apache.hadoop.mapreduce.Reducer.Context)} method to
   * control how the reduce task works.
   */
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKey()) {
      reduce(context.getCurrentKey(), context.getValues(), context);
    }
    cleanup(context);
  }
}
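To make this lifecycle concrete, here is an illustrative user-defined Reducer; the class name, the threshold logic, and the "threshold" configuration key are all hypothetical, but run will invoke setup, reduce, and cleanup in exactly the order shown in the source above.

// Hypothetical Reducer: sums values per key and emits only keys whose
// total reaches a threshold read from the Configuration in setup().
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ThresholdReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
  private long threshold;

  @Override
  protected void setup(Context context) {
    // Called once by run() before any reduce() call.
    threshold = context.getConfiguration().getLong("threshold", 0L);
  }

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context context)
      throws IOException, InterruptedException {
    // Called once per key, with an iterator over all values for that key.
    long sum = 0;
    for (LongWritable v : values) {
      sum += v.get();
    }
    if (sum >= threshold) {
      context.write(key, new LongWritable(sum));
    }
  }
}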
A Mapper's output may first be sent to an optional Combiner for merging. The Combiner has no base class of its own in the system; instead, Reducer serves as the Combiner's base class. The two expose identical functionality and differ only in where, and in what context, they are used.
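As a sketch of how this reuse looks in practice, consider a hypothetical word-count-style driver fragment (mapper setup and input/output paths omitted): the IntSumReducer analyzed later in this article can be registered both as the combiner and as the reducer.

// Hypothetical driver fragment: one Reducer class plugged in at both stages.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class CombinerDriver {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count");
    job.setJarByClass(CombinerDriver.class);
    // Merges partial counts on the map side, before the shuffle...
    job.setCombinerClass(IntSumReducer.class);
    // ...and merges the partitioned, sorted map output on the reduce side.
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
  }
}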
The final <key, value> pairs produced by a Mapper need to be sent to the Reducers to be merged; during the merge, pairs with the same key are routed to the same Reducer. Which key goes to which Reducer is decided by the Partitioner.
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. ...
 */
package org.apache.hadoop.mapreduce;

/**
 * Partitions the key space.
 */
public abstract class Partitioner<KEY, VALUE> {

  /**
   * Get the partition number for a given key (hence record) given the total
   * number of partitions i.e. number of reduce-tasks for the job.
   *
   * Typically a hash function on all or a subset of the key.
   *
   * @param key the key to be partitioned.
   * @param value the entry value.
   * @param numPartitions the total number of partitions.
   * @return the partition number for the key.
   */
  public abstract int getPartition(KEY key, VALUE value, int numPartitions);
}

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. ...
 */
package org.apache.hadoop.mapreduce.lib.partition;

import org.apache.hadoop.mapreduce.Partitioner;

/** Partition keys by their {@link Object#hashCode()}. */
public class HashPartitioner<K, V> extends Partitioner<K, V> {

  /** Use {@link Object#hashCode()} to partition. */
  public int getPartition(K key, V value, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}
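As an illustrative sketch (the class name and the "user:action" key layout are hypothetical), a custom Partitioner can route records by just part of the key, so that, for example, all records for one user land on the same Reducer:

// Hypothetical Partitioner: partition Text keys of the form "user:action"
// by the user portion only.
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class UserPartitioner<V> extends Partitioner<Text, V> {
  @Override
  public int getPartition(Text key, V value, int numPartitions) {
    String user = key.toString().split(":", 2)[0];
    // Mask off the sign bit, as HashPartitioner does, so the index is non-negative.
    return (user.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}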
Reducer is the base class for all user-defined Reducer classes. Like Mapper, it has setup, reduce, cleanup, and run methods, where setup and cleanup carry the same meaning as in Mapper. reduce is where the Mapper results are actually merged: its input is a key, an iterator over all the values belonging to that key, and the Reducer's context. The system defines two very simple Reducers, IntSumReducer and LongSumReducer, which sum int and long values respectively.
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. ...
 */
package org.apache.hadoop.mapreduce.lib.reduce;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer<Key> extends Reducer<Key, IntWritable, Key, IntWritable> {
  private IntWritable result = new IntWritable();

  public void reduce(Key key, Iterable<IntWritable> values,
                     Context context) throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}
The Reduce results are emitted to a file through Reducer.Context's write method. Mirroring the input side, Hadoop introduces OutputFormat for output. OutputFormat relies on two helper interfaces, RecordWriter and OutputCommitter, to handle the output. RecordWriter provides a write method for emitting <key, value> pairs and a close method for closing the output.
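For illustration, a minimal RecordWriter might look like the following sketch; the class name and tab-separated format are hypothetical, while write and close are the two operations the abstract class actually declares.

// Hypothetical minimal RecordWriter that prints records as "key<TAB>value" lines.
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class TabRecordWriter<K, V> extends RecordWriter<K, V> {
  private final DataOutputStream out;

  public TabRecordWriter(DataOutputStream out) {
    this.out = out;
  }

  @Override
  public void write(K key, V value) throws IOException {
    // Emit one <key, value> pair per line.
    out.writeBytes(key.toString() + "\t" + value.toString() + "\n");
  }

  @Override
  public void close(TaskAttemptContext context) throws IOException {
    out.close();
  }
}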
OutputFormat and RecordWriter are the counterparts of InputFormat and RecordReader. The system provides the empty output NullOutputFormat (it outputs nothing at all; its NullOutputFormat.RecordWriter is only illustrative and is not defined in the system), LazyOutputFormat (not in the class diagram; not analyzed here), FilterOutputFormat (not analyzed), and, built on the file-based FileOutputFormat, the SequenceFileOutputFormat and TextOutputFormat outputs.
The file-based output FileOutputFormat is driven by several configuration items working together: mapred.output.compress (whether to compress the output), mapred.output.compression.codec (the compression codec), mapred.output.dir (the output path), and mapred.work.output.dir (the working output path). FileOutputFormat also relies on FileOutputCommitter, which provides temporary-file management tied to the Job and its Tasks. For example, FileOutputCommitter's setupJob creates a temporary directory named _temporary under the output path, and cleanupJob deletes it.
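For illustration, these items can be set directly on the job's Configuration; the gzip codec and the path below are placeholder values.

// Sketch of configuring FileOutputFormat's knobs (placeholder values).
import org.apache.hadoop.conf.Configuration;

public class OutputConf {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("mapred.output.compress", true);       // compress the output
    conf.set("mapred.output.compression.codec",
             "org.apache.hadoop.io.compress.GzipCodec");   // codec to use
    conf.set("mapred.output.dir", "/user/hadoop/out");     // final output path
    // mapred.work.output.dir is normally managed by the framework:
    // FileOutputCommitter points it at the _temporary directory.
  }
}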
The SequenceFileOutputFormat and TextOutputFormat outputs correspond to the SequenceFileInputFormat and TextInputFormat inputs, respectively.
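A minimal driver fragment wiring one of these into a job might look as follows (the path is a placeholder):

// Hypothetical driver fragment selecting TextOutputFormat for a job's output.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class OutputFormatDriver {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "text output");
    // TextOutputFormat writes each record as a tab-separated key/value line.
    job.setOutputFormatClass(TextOutputFormat.class);
    // FileOutputFormat stores this path in mapred.output.dir.
    FileOutputFormat.setOutputPath(job, new Path("/user/hadoop/out"));
  }
}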
That concludes this walkthrough of hadoop-reduce. Thanks for reading! Hopefully you now have a clearer picture of how Reducer, Partitioner, and OutputFormat fit together, and the material shared here proves useful.