How should the ZkCoordinator in storm-kafka be understood? This article walks through the class in detail, in the hope of giving readers who want to understand it a simpler, clearer path.
Tracing the workflow of ZkCoordinator
package com.mixbox.storm.kafka;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.mixbox.storm.kafka.trident.GlobalPartitionInformation;

import java.util.*;

import static com.mixbox.storm.kafka.KafkaUtils.taskId;

/**
 * ZkCoordinator -- the coordinator.
 *
 * @author Yin Shuai
 */
public class ZkCoordinator implements PartitionCoordinator {
    public static final Logger LOG = LoggerFactory.getLogger(ZkCoordinator.class);

    SpoutConfig _spoutConfig;
    int _taskIndex;
    int _totalTasks;
    String _topologyInstanceId;

    // Each partition maps to its own partition manager
    Map<Partition, PartitionManager> _managers = new HashMap<Partition, PartitionManager>();

    // Cached list of the managers owned by this task
    List<PartitionManager> _cachedList;

    // Time of the last refresh
    Long _lastRefreshTime = null;

    // Refresh interval, in milliseconds
    int _refreshFreqMs;

    // Dynamic partition connections
    DynamicPartitionConnections _connections;

    // Dynamic brokers reader
    DynamicBrokersReader _reader;

    ZkState _state;
    Map _stormConf;

    /**
     * @param connections        dynamic partition connections
     * @param stormConf          the Storm configuration
     * @param spoutConfig        the Storm spout configuration
     * @param state              the ZkState connection
     * @param taskIndex          index of this task
     * @param totalTasks         total number of tasks
     * @param topologyInstanceId instance id of the topology
     */
    public ZkCoordinator(DynamicPartitionConnections connections, Map stormConf,
            SpoutConfig spoutConfig, ZkState state, int taskIndex, int totalTasks,
            String topologyInstanceId) {
        this(connections, stormConf, spoutConfig, state, taskIndex, totalTasks,
                topologyInstanceId, buildReader(stormConf, spoutConfig));
    }

    public ZkCoordinator(DynamicPartitionConnections connections, Map stormConf,
            SpoutConfig spoutConfig, ZkState state, int taskIndex, int totalTasks,
            String topologyInstanceId, DynamicBrokersReader reader) {
        _spoutConfig = spoutConfig;
        _connections = connections;
        _taskIndex = taskIndex;
        _totalTasks = totalTasks;
        _topologyInstanceId = topologyInstanceId;
        _stormConf = stormConf;
        _state = state;
        ZkHosts brokerConf = (ZkHosts) spoutConfig.hosts;
        _refreshFreqMs = brokerConf.refreshFreqSecs * 1000;
        _reader = reader;
    }

    private static DynamicBrokersReader buildReader(Map stormConf, SpoutConfig spoutConfig) {
        ZkHosts hosts = (ZkHosts) spoutConfig.hosts;
        return new DynamicBrokersReader(stormConf, hosts.brokerZkStr,
                hosts.brokerZkPath, spoutConfig.topic);
    }

    @Override
    public List<PartitionManager> getMyManagedPartitions() {
        if (_lastRefreshTime == null
                || (System.currentTimeMillis() - _lastRefreshTime) > _refreshFreqMs) {
            refresh();
            _lastRefreshTime = System.currentTimeMillis();
        }
        return _cachedList;
    }

    /**
     * The refresh logic.
     */
    void refresh() {
        try {
            LOG.info(taskId(_taskIndex, _totalTasks)
                    + "Refreshing partition manager connections");

            // Fetch information about all partitions
            GlobalPartitionInformation brokerInfo = _reader.getBrokerInfo();

            // Compute the partitions assigned to this task
            List<Partition> mine = KafkaUtils.calculatePartitionsForTask(brokerInfo,
                    _totalTasks, _taskIndex);

            // Partitions currently managed by this task
            Set<Partition> curr = _managers.keySet();

            // Partitions in the new assignment that are not yet managed
            Set<Partition> newPartitions = new HashSet<Partition>(mine);
            newPartitions.removeAll(curr);

            // Partitions currently managed but no longer assigned
            Set<Partition> deletedPartitions = new HashSet<Partition>(curr);
            deletedPartitions.removeAll(mine);

            LOG.info(taskId(_taskIndex, _totalTasks)
                    + "Deleted partition managers: " + deletedPartitions.toString());

            for (Partition id : deletedPartitions) {
                PartitionManager man = _managers.remove(id);
                man.close();
            }

            LOG.info(taskId(_taskIndex, _totalTasks)
                    + "New partition managers: " + newPartitions.toString());

            for (Partition id : newPartitions) {
                PartitionManager man = new PartitionManager(_connections,
                        _topologyInstanceId, _state, _stormConf, _spoutConfig, id);
                _managers.put(id, man);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        _cachedList = new ArrayList<PartitionManager>(_managers.values());
        LOG.info(taskId(_taskIndex, _totalTasks) + "Finished refreshing");
    }

    @Override
    public PartitionManager getManager(Partition partition) {
        return _managers.get(partition);
    }
}
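Two ideas in the class above are worth isolating. The first is the throttled refresh in getMyManagedPartitions(): a full refresh only happens when more than _refreshFreqMs has elapsed since the last one. The following is a minimal, self-contained sketch of that pattern (the class name ThrottledRefresh and the maybeRefresh method are illustrative, not part of storm-kafka; the real refresh() work is stood in for by a counter):

```java
// Sketch of the throttled-refresh pattern used in getMyManagedPartitions():
// refresh only when more than refreshFreqMs has elapsed since the last refresh.
public class ThrottledRefresh {
    private Long lastRefreshTime = null;  // null means "never refreshed yet"
    private final int refreshFreqMs;
    private int refreshCount = 0;         // stands in for the real refresh() work

    public ThrottledRefresh(int refreshFreqMs) {
        this.refreshFreqMs = refreshFreqMs;
    }

    // Returns true when a refresh was actually performed.
    public boolean maybeRefresh(long nowMs) {
        if (lastRefreshTime == null || nowMs - lastRefreshTime > refreshFreqMs) {
            refreshCount++;
            lastRefreshTime = nowMs;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ThrottledRefresh t = new ThrottledRefresh(60_000);
        System.out.println(t.maybeRefresh(0));       // true  (first call always refreshes)
        System.out.println(t.maybeRefresh(30_000));  // false (still within the interval)
        System.out.println(t.maybeRefresh(61_000));  // true  (interval has elapsed)
    }
}
```

Between refreshes, callers simply get the cached _cachedList, so the ZooKeeper/broker metadata is not re-read on every tuple emission.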
1. First, ZkCoordinator implements the PartitionCoordinator interface:
package com.mixbox.storm.kafka;

import java.util.List;

/**
 * @author Yin Shuai
 */
public interface PartitionCoordinator {

    /**
     * Returns the list of partition managers owned by this task.
     */
    List<PartitionManager> getMyManagedPartitions();

    /**
     * Looks up the manager for the given partition.
     */
    PartitionManager getManager(Partition partition);
}
The first method returns all the PartitionManager instances that this task owns.

The second method looks up the partition manager for one specific Partition.
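The second idea worth isolating is the set arithmetic inside refresh() that decides which managers to open and which to close. Below is a minimal, self-contained sketch using plain Integer ids in place of storm-kafka's Partition objects (the variable names mirror refresh(); everything else is illustrative):

```java
import java.util.*;

public class PartitionDiff {
    // Mirrors the set arithmetic in ZkCoordinator.refresh():
    // "mine" is the freshly computed assignment for this task,
    // "curr" is the set of partitions currently managed.
    public static void main(String[] args) {
        Set<Integer> curr = new HashSet<>(Arrays.asList(0, 1, 2));
        List<Integer> mine = Arrays.asList(1, 2, 3);

        // Newly assigned partitions: in "mine" but not yet managed.
        Set<Integer> newPartitions = new HashSet<>(mine);
        newPartitions.removeAll(curr);

        // Stale partitions: managed, but no longer in the assignment.
        Set<Integer> deletedPartitions = new HashSet<>(curr);
        deletedPartitions.removeAll(mine);

        // Prints: new=[3] deleted=[0]
        System.out.println("new=" + newPartitions
                + " deleted=" + deletedPartitions);
    }
}
```

In refresh() itself, each id in deletedPartitions has its PartitionManager removed and closed, while each id in newPartitions gets a freshly constructed PartitionManager; partitions in both sets are left untouched, so their consumption state survives the refresh.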
關(guān)于 Storm-kafka中如何理解ZkCoordinator的過程問題的解答就分享到這里了,希望以上內(nèi)容可以對大家有一定的幫助,如果你還有很多疑惑沒有解開,可以關(guān)注創(chuàng)新互聯(lián)行業(yè)資訊頻道了解更多相關(guān)知識。