Lu Chunli's work notes — who says programmers can't have a little artistic flair?
Kafka's main shell scripts are:
[hadoop@nnode kafka0.8.2.1]$ ll
total 80
-rwxr-xr-x 1 hadoop hadoop  943 2015-02-27 kafka-console-consumer.sh
-rwxr-xr-x 1 hadoop hadoop  942 2015-02-27 kafka-console-producer.sh
-rwxr-xr-x 1 hadoop hadoop  870 2015-02-27 kafka-consumer-offset-checker.sh
-rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-consumer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop  860 2015-02-27 kafka-mirror-maker.sh
-rwxr-xr-x 1 hadoop hadoop  884 2015-02-27 kafka-preferred-replica-election.sh
-rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-producer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-reassign-partitions.sh
-rwxr-xr-x 1 hadoop hadoop  866 2015-02-27 kafka-replay-log-producer.sh
-rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-replica-verification.sh
-rwxr-xr-x 1 hadoop hadoop 4185 2015-02-27 kafka-run-class.sh
-rwxr-xr-x 1 hadoop hadoop 1333 2015-02-27 kafka-server-start.sh
-rwxr-xr-x 1 hadoop hadoop  891 2015-02-27 kafka-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop  868 2015-02-27 kafka-simple-consumer-shell.sh
-rwxr-xr-x 1 hadoop hadoop  861 2015-02-27 kafka-topics.sh
drwxr-xr-x 2 hadoop hadoop 4096 2015-02-27 windows
-rwxr-xr-x 1 hadoop hadoop 1370 2015-02-27 zookeeper-server-start.sh
-rwxr-xr-x 1 hadoop hadoop  875 2015-02-27 zookeeper-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop  968 2015-02-27 zookeeper-shell.sh
[hadoop@nnode kafka0.8.2.1]$
Note: Kafka also ships .bat scripts for running on Windows, located in the bin/windows directory.
ZooKeeper scripts
Every Kafka component depends on ZooKeeper, so a ZooKeeper environment must be in place before using Kafka. You can either configure a ZooKeeper cluster, or use the ZooKeeper scripts bundled with Kafka to start a single standalone-mode ZooKeeper node.
# Start the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-start.sh
USAGE: bin/zookeeper-server-start.sh zookeeper.properties
# The config file is config/zookeeper.properties; its main setting is
# ZooKeeper's local storage path (dataDir)
# Internally this calls:
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain $@

# Stop the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-stop.sh
# Internally this calls:
ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT

# ZooKeeper shell
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh
USAGE: bin/zookeeper-shell.sh zookeeper_host:port[/path] [args...]
# Internally this calls:
exec $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server "$@"

# The ZooKeeper shell is used to inspect ZooKeeper's node data
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh nnode:2181,dnode1:2181,dnode2:2181/
Connecting to nnode:2181,dnode1:2181,dnode2:2181/
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
ls /
[hbase, hadoop-ha, admin, zookeeper, consumers, config, zk-book, brokers, controller_epoch]
Note: in these scripts, $@ expands to the full list of arguments, and $# is the number of arguments passed to the script.
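A minimal standalone sketch of those two special parameters (the `set --` line simulates a script invoked with three arguments; the argument values are made up for illustration):

```shell
#!/bin/sh
# Demo of "$@" (all arguments) and $# (argument count).
# 'set --' replaces the positional parameters, simulating a script
# invoked as: ./demo.sh alpha beta gamma
set -- alpha beta gamma

echo "argument count: $#"      # prints: argument count: 3
for arg in "$@"; do            # "$@" expands each argument as its own word
    echo "arg: $arg"
done
```

Quoting "$@" (as kafka-run-class.sh does for the ZooKeeper shell) keeps arguments containing spaces intact, whereas an unquoted $@ would re-split them.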
Starting and stopping Kafka
# Start the Kafka server
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-start.sh
USAGE: bin/kafka-server-start.sh [-daemon] server.properties
# Internally this calls:
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka $@

# kafka-run-class.sh (output omitted)
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-run-class.sh

# Stop the Kafka server
[hadoop@nnode kafka0.8.2.1]$ kafka-server-stop.sh
# Internally this calls:
ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
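The stop script's ps-grep-awk-kill pipeline can be exercised without a running broker. The sketch below applies the same pattern to a throwaway `sleep` process that it starts itself (everything here is illustrative, not part of the Kafka scripts):

```shell
#!/bin/sh
# Sketch of the PID-lookup pattern used by kafka-server-stop.sh,
# with 'sleep 300' standing in for the 'kafka\.Kafka' pattern.
sleep 300 &
expected_pid=$!

# Same pipeline shape as the stop script; 'grep -v grep' drops the
# grep process itself from the candidate list.
pids=$(ps ax | grep 'sleep 300' | grep -v grep | awk '{print $1}')

# The real script pipes all matching PIDs to 'xargs kill'; here we just
# verify the lookup found our process, then terminate only that one.
match=0
for p in $pids; do
    [ "$p" = "$expected_pid" ] && match=1
done
kill -TERM "$expected_pid"     # SIGTERM allows a clean shutdown
echo "found=$match"
```

Note the stop scripts send SIGTERM (Kafka) or SIGINT (ZooKeeper) rather than SIGKILL, giving the process a chance to shut down cleanly.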
Note: on startup, Kafka reads its configuration from config/server.properties. The three core settings for starting a Kafka server are:
broker.id : the broker's unique identifier, a non-negative integer (the last octet of the host's IP is a common choice)
port : the port the server listens on for client connections (default 9092)
zookeeper.connect : the ZooKeeper connection string, in the form hostname1:port1[,hostname2:port2,hostname3:port3]

# Optional
log.dirs : where Kafka stores its data (default /tmp/kafka-logs), as a comma-separated list of one or more directories.
           When a new partition is created, it is placed in whichever directory currently holds the fewest partitions.
num.partitions : the number of partitions per topic (default 1); can also be specified when creating a topic

# For the remaining settings, see http://kafka.apache.org/documentation.html#configuration
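Put together, a minimal config/server.properties covering the settings above might look like this (the concrete values are illustrative, not taken from the article's cluster):

```properties
# Hedged sketch of a minimal server.properties for Kafka 0.8.x
broker.id=0
port=9092
zookeeper.connect=nnode:2181,dnode1:2181,dnode2:2181
# Optional overrides of the defaults
log.dirs=/tmp/kafka-logs
num.partitions=1
```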
Kafka messages
# Message producer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-producer.sh
Read data from standard input and publish it to Kafka.   # reads data from the console
Option                 Description
------                 -----------
--broker-list          REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
--topic                REQUIRED: The topic id to produce messages to.
# These two parameters are required; run the command with no arguments to see the other, optional parameters

# Message consumer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                 Description
------                 -----------
--zookeeper            REQUIRED: The connection string for the zookeeper connection, in the form host:port.
                       (Multiple URLS can be given to allow fail-over.)
--topic                The topic id to consume on.
--from-beginning       If the consumer does not already have an established offset to consume from,
                       start with the earliest message present in the log rather than the latest message.
# Only the zookeeper parameter is required; the rest are optional, see the help output for details

# Inspect topic information
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
Option                 Description
------                 -----------
--zookeeper            REQUIRED: The connection string for the zookeeper connection, in the form host:port.
                       (Multiple URLS can be given to allow fail-over.)
--create               Create a new topic.
--delete               Delete a topic
--alter                Alter the configuration for the topic.
--list                 List all available topics.
--describe             List details for the given topics.
--topic                The topic to be create, alter or describe. Can also accept a regular expression
                       except for --create option.
--help                 Print usage information.
# Only the zookeeper parameter is required; the rest are optional, see the help output for details
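Chained together, the three tools give a quick end-to-end smoke test. This is a hedged sketch: it assumes the Kafka 0.8.2.1 bin/ scripts, a running broker, and a running ZooKeeper ensemble; the host:port values and topic name are illustrative, and the guard makes it a no-op outside a Kafka installation.

```shell
#!/bin/sh
# Hedged end-to-end sketch: create a topic, produce one message, consume it.
ZK=nnode:2181,dnode1:2181,dnode2:2181   # illustrative ZooKeeper ensemble
BROKER=nnode:9092                       # illustrative broker address
TOPIC=demo-topic                        # hypothetical topic name

# Guard: outside a Kafka installation these scripts do not exist,
# so the sketch exits quietly instead of failing.
if [ -x bin/kafka-topics.sh ]; then
    bin/kafka-topics.sh --zookeeper "$ZK" --create --topic "$TOPIC" \
        --partitions 1 --replication-factor 1
    echo "hello kafka" | bin/kafka-console-producer.sh \
        --broker-list "$BROKER" --topic "$TOPIC"
    bin/kafka-console-consumer.sh --zookeeper "$ZK" --topic "$TOPIC" \
        --from-beginning --max-messages 1
fi
```

Note the asymmetry the help output shows for this Kafka version: the producer talks to brokers directly (--broker-list), while the console consumer and kafka-topics.sh go through ZooKeeper (--zookeeper).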
The remaining scripts are omitted for now.