Flume architecture diagram
Single-node Flume configuration
Starting Flume (flume-1.4.0)
bin/flume-ng agent --conf ./conf -f conf/flume-conf.properties -Dflume.root.logger=DEBUG,console -n agent
-n specifies the name of the agent defined in the configuration file
agent.sources = r1
agent.sinks = s1
agent.channels = c1

agent.sources.r1.channels = c1
agent.sinks.s1.channel = c1

# Describe/configure the source
agent.sources.r1.type = exec
agent.sources.r1.command = tail -F /home/flume/loginfo

# Use a channel which buffers events in memory
agent.channels.c1.type = memory
# capacity is measured in events
agent.channels.c1.capacity = 1000
agent.channels.c1.transactionCapacity = 100

agent.sinks.s1.type = logger
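As a quick sanity check (a sketch, assuming the agent above is running and /home/flume/loginfo exists and is writable), append a line to the tailed file and watch the Flume console:

echo "hello flume" >> /home/flume/loginfo

The logger sink should print the event on the console, roughly in the form Event: { headers:{} body: ... hello flume }.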
Flume configuration for flume-1.4.0 + kafka-0.7.2 + HDFS
agent.sources = r1
agent.sinks = s_kafka s_hdfs
agent.channels = c_kafka c_hdfs

agent.sources.r1.channels = c_kafka c_hdfs
agent.sources.r1.type = exec
# the command below tails a log file
agent.sources.r1.command = tail -F /home/flume/loginfo

agent.channels.c_kafka.type = memory
agent.channels.c_hdfs.type = memory

agent.sinks.s_kafka.type = com.sink.FirstkafkaSink
agent.sinks.s_kafka.channel = c_kafka
# Kafka connects through ZooKeeper and writes data to the broker
agent.sinks.s_kafka.zkconnect = localhost:2181
agent.sinks.s_kafka.topic = test
agent.sinks.s_kafka.serializer.class = kafka.serializer.StringEncoder
# broker address as configured in Kafka's server.properties
agent.sinks.s_kafka.metadata.broker.list = localhost:9092
agent.sinks.s_kafka.custom.encoding = UTF-8

agent.sinks.s_hdfs.type = hdfs
agent.sinks.s_hdfs.channel = c_hdfs
# HDFS default port is 8020; this cluster listens on 9000
agent.sinks.s_hdfs.hdfs.path = hdfs://localhost:9000/root/source
agent.sinks.s_hdfs.hdfs.filePrefix = events-
agent.sinks.s_hdfs.hdfs.fileType = DataStream
agent.sinks.s_hdfs.hdfs.writeFormat = Text
# roll a new file once this many events have been written (size- and time-based rolling disabled)
agent.sinks.s_hdfs.hdfs.rollCount = 30
agent.sinks.s_hdfs.hdfs.rollSize = 0
agent.sinks.s_hdfs.hdfs.rollInterval = 0
agent.sinks.s_hdfs.hdfs.useLocalTimeStamp = true
agent.sinks.s_hdfs.hdfs.idleTimeout = 51
agent.sinks.s_hdfs.hdfs.threadsPoolSize = 2
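Note that com.sink.FirstkafkaSink refers to a custom sink class, so its jar (together with the Kafka client libraries) has to be on the Flume classpath before the agent is started with the same flume-ng command as above. To verify both sinks, the commands below are a sketch assuming a local Kafka 0.7.2 and Hadoop installation; the topic name and HDFS path are taken from the configuration above:

# consume the test topic (run from the Kafka installation directory)
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test

# list and inspect the files written by the HDFS sink
hadoop fs -ls /root/source
hadoop fs -cat /root/source/events-*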