For the Kafka + ZooKeeper setup, see the tutorial article (url).
Flume installation:
1. Download address
2. Installation - see the sketch below
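A minimal command-line sketch of the download and unpack steps (the Apache archive URL and the /opt install directory are assumptions; adjust to your environment):
$ cd /opt
$ wget https://archive.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz
$ tar xzf apache-flume-1.8.0-bin.tar.gz
$ cd apache-flume-1.8.0-bin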
Create the database and table (see the sketch below).
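A minimal sketch, inferring the database name (chenhuachao) and the columns of the test table (id, name) from the connection URL and custom.query in the config below; the column types are assumptions:
$ mysql -uroot -p <<'SQL'
CREATE DATABASE IF NOT EXISTS chenhuachao DEFAULT CHARSET utf8;
CREATE TABLE IF NOT EXISTS chenhuachao.test (
  id   INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(100)
);
SQL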
3. Configuration: create conf/mysql-flume.conf
[root@node191 apache-flume-1.8.0-bin]# cat conf/mysql-flume.conf
a1.channels = ch-1
a1.sources = src-1
a1.sinks = k1
###########sql source#################
# For each one of the sources, the type is defined
a1.sources.src-1.type = org.keedio.flume.source.SQLSource
a1.sources.src-1.hibernate.connection.url = jdbc:mysql://192.168.3.191:3306/chenhuachao
# Hibernate Database connection properties
a1.sources.src-1.hibernate.connection.user = root
a1.sources.src-1.hibernate.connection.password = abcd@1234
a1.sources.src-1.hibernate.connection.autocommit = true
a1.sources.src-1.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
a1.sources.src-1.hibernate.connection.driver_class = com.mysql.jdbc.Driver
a1.sources.src-1.run.query.delay=5000
a1.sources.src-1.status.file.path = /opt/apache-flume-1.8.0-bin
a1.sources.src-1.status.file.name = sqlSource.status
# Custom query
a1.sources.src-1.start.from = 0
a1.sources.src-1.custom.query = select `id`, `name` from test
a1.sources.src-1.batch.size = 1000
a1.sources.src-1.max.rows = 1000
a1.sources.src-1.hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider
a1.sources.src-1.hibernate.c3p0.min_size=1
a1.sources.src-1.hibernate.c3p0.max_size=10
################################################################
a1.channels.ch-1.type = memory
a1.channels.ch-1.capacity = 10000
a1.channels.ch-1.transactionCapacity = 10000
a1.channels.ch-1.byteCapacityBufferPercentage = 20
a1.channels.ch-1.byteCapacity = 800000
################################################################
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = TestTopic
a1.sinks.k1.brokerList = 192.168.3.191:9092,192.168.3.193:9092,192.168.3.194:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = ch-1
a1.sources.src-1.channels=ch-1
4. Add the MySQL JDBC driver to Flume's lib directory
$ wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.35.tar.gz
$ tar xzf mysql-connector-java-5.1.35.tar.gz
$ cp mysql-connector-java-5.1.35-bin.jar lib/
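The source class org.keedio.flume.source.SQLSource referenced in the config comes from the keedio flume-ng-sql-source plugin, whose jar also has to be in Flume's lib directory. A sketch of building it from source (the produced jar's version number will vary):
$ git clone https://github.com/keedio/flume-ng-sql-source.git
$ cd flume-ng-sql-source
$ mvn clean package -DskipTests
$ cp target/flume-ng-sql-source-*.jar /opt/apache-flume-1.8.0-bin/lib/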
5. Create the Kafka topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic TestTopic
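Optionally verify the topic exists by listing topics against the same ZooKeeper:
bin/kafka-topics.sh --list --zookeeper localhost:2181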
6. Start the Flume agent
./bin/flume-ng agent -n a1 -c conf -f conf/mysql-flume.conf -Dflume.root.logger=INFO,console
7. Insert new rows into the table and check that the data appears in the topic (a quick check is sketched below).
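A quick end-to-end check, inserting a row with the mysql client and watching the topic with Kafka's console consumer (the --bootstrap-server address is one of the brokers from the sink config; credentials and paths are assumptions):
# Insert a test row; the SQL source picks it up within run.query.delay (5000 ms)
$ mysql -uroot -p -e "INSERT INTO chenhuachao.test (name) VALUES ('hello-flume');"
# Consume from the topic to confirm the row arrived
bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.191:9092 --topic TestTopic --from-beginning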