This article explains how to reset consumer group offsets in Kafka 2.7, something many people run into in practice. Each reset strategy is shown both with the kafka-consumer-groups.sh tooling and with the Java consumer API, so read on and follow along with the examples.
First, check the consumption progress before resetting the offsets:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --describe
As the output shows, the Lag of every partition is 0, which means all messages have been consumed. Now reset the offsets with the Earliest strategy so that, after the reset, all of the messages can be consumed again:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --to-earliest --execute
Checking the consumption progress again, you can see that the consumer is now able to re-consume all of these messages. To reset only specific partitions, list them with the topic:partition syntax:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:1,2 --to-earliest --execute
The same Earliest reset can also be performed through the consumer API with seekToBeginning. The Java snippets in this article all rely on the following imports:
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
final String topic = "mytopic";
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0); // join the group so that partitions are assigned before seeking
    Collection<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
            .map(partitionInfo -> new TopicPartition(topic, partitionInfo.partition()))
            .collect(Collectors.toList());
    consumer.seekToBeginning(partitions);
    // position() forces the lazily recorded seek to be resolved for every partition
    consumer.partitionsFor(topic).forEach(i -> consumer.position(new TopicPartition(topic, i.partition())));
}
Note that seekToBeginning, seekToEnd, and the other seek methods are evaluated lazily: the new offset is only resolved when position() (or the next poll()) is called, which is why the snippets above and below call position() after seeking.
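To double-check that the reset has taken effect, you can read back each partition's position. This is a small sketch, assuming the consumer and partitions variables from the snippet above are still in scope:
// Sketch: assumes `consumer` and `partitions` from the previous snippet are still in scope
for (TopicPartition tp : partitions) {
    // position() returns the offset the next poll() would fetch from
    System.out.printf("partition %d -> position %d%n", tp.partition(), consumer.position(tp));
}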
To reset only specific partitions through the API, build the TopicPartition list yourself:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
final String topic = "mytopic";
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    List<TopicPartition> partitions = new ArrayList<>();
    partitions.add(new TopicPartition(topic, 1));
    partitions.add(new TopicPartition(topic, 2));
    consumer.seekToBeginning(partitions);
    // resolve the seeks for the two partitions immediately
    consumer.position(new TopicPartition(topic, 1));
    consumer.position(new TopicPartition(topic, 2));
}
Next, the Latest strategy. First check the consumption progress before the reset. As the output shows, none of the existing messages have been consumed yet; now reset the offsets with the Latest strategy so that, after the reset, those messages are skipped rather than consumed:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --to-latest --execute
After the reset, the existing messages will no longer be consumed. Again, specific partitions can be targeted individually:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:1,2 --to-latest --execute
The API equivalent of the Latest strategy is seekToEnd:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    // move every partition of the topic to its log end offset
    consumer.seekToEnd(consumer.partitionsFor(topic).stream()
            .map(partitionInfo -> new TopicPartition(topic, partitionInfo.partition()))
            .collect(Collectors.toList()));
    // position() forces the lazy seekToEnd to take effect
    consumer.partitionsFor(topic).forEach(i -> consumer.position(new TopicPartition(topic, i.partition())));
}
And for specific partitions only:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    List<TopicPartition> partitions = new ArrayList<>();
    partitions.add(new TopicPartition(topic, 1));
    partitions.add(new TopicPartition(topic, 2));
    consumer.seekToEnd(partitions);
    // resolve the seeks immediately
    consumer.position(new TopicPartition(topic, 1));
    consumer.position(new TopicPartition(topic, 2));
}
Next, the Current strategy, which resets each partition back to its currently committed offset. I cannot think of a concrete use case for it at the moment, so it is only covered briefly here and may be expanded later. Reset the whole topic, or only specific partitions:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --to-current --execute
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:1,2 --to-current --execute
The API equivalent seeks each partition back to its committed offset:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    consumer.partitionsFor(topic).stream().map(info -> new TopicPartition(topic, info.partition())).forEach(tp -> {
        // committed() returns null if the group has never committed an offset for this partition
        long committedOffset = consumer.committed(tp).offset();
        consumer.seek(tp, committedOffset);
    });
}
Or only for specific partitions:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    TopicPartition tp1 = new TopicPartition(topic, 1);
    TopicPartition tp2 = new TopicPartition(topic, 2);
    consumer.seek(tp1, consumer.committed(tp1).offset());
    consumer.seek(tp2, consumer.committed(tp2).offset());
}
Next, resetting to a specified offset (--to-offset). Check the progress before the reset, then move every partition of the topic to offset 5:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --to-offset 5 --execute
In practice the committed offsets of different partitions usually differ, so setting every partition to the same value is rarely realistic; it is better to target a specific partition:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:2 --to-offset 11 --execute
The API equivalent simply seeks every partition to the target offset:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    consumer.partitionsFor(topic).forEach(pi -> {
        TopicPartition tp = new TopicPartition(topic, pi.partition());
        consumer.seek(tp, 5L); // absolute offset 5, as in the --to-offset 5 example
    });
}
Or a single partition:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    consumer.seek(new TopicPartition(topic, 2), 10L);
}
Next, the Shift-By-N strategy, which moves the committed offset forward or backward by N. Check the progress before the reset, then shift every partition of the topic back by 1, or only partition 2 back by 2:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --shift-by -1 --execute
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:2 --shift-by -2 --execute
The API equivalent reads each partition's committed offset and seeks relative to it:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    for (PartitionInfo info : consumer.partitionsFor(topic)) {
        TopicPartition tp = new TopicPartition(topic, info.partition());
        // shift back by 1, the counterpart of --shift-by -1
        consumer.seek(tp, consumer.committed(tp).offset() - 1L);
    }
}
Or a single partition:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    TopicPartition tp = new TopicPartition(topic, 2);
    // shift partition 2 back by 2, matching --shift-by -2 above (a positive delta would shift forward)
    consumer.seek(tp, consumer.committed(tp).offset() - 2L);
}
Sometimes resetting offsets to a point in time (the DateTime strategy) is a good option. Check the progress before the reset, then reset the whole topic, or just partition 2, to the offsets of the messages produced after the given datetime:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --to-datetime 2021-05-09T00:00:00.000 --execute
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:2 --to-datetime 2020-05-09T00:00:00.000 --execute
The API equivalent converts the target timestamp into offsets with offsetsForTimes:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    // target timestamp: 24 hours ago
    long ts = new Date().getTime() - 24 * 60 * 60 * 1000;
    Map<TopicPartition, Long> timeToSearch = consumer.partitionsFor(topic).stream()
            .map(pi -> new TopicPartition(topic, pi.partition()))
            .collect(Collectors.toMap(Function.identity(), tp -> ts));
    for (Map.Entry<TopicPartition, OffsetAndTimestamp> entry : consumer.offsetsForTimes(timeToSearch).entrySet()) {
        // a null value means no message at or after the timestamp; fall back to the committed offset
        consumer.seek(entry.getKey(), entry.getValue() == null ? consumer.committed(entry.getKey()).offset() : entry.getValue().offset());
    }
}
Or for a single partition, going back one year:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0);
    // 365 days ago; use a long literal so the multiplication does not overflow int
    long ts = new Date().getTime() - 365L * 24 * 60 * 60 * 1000;
    Map<TopicPartition, Long> timeToSearch = new HashMap<>();
    timeToSearch.put(new TopicPartition(topic, 2), ts);
    for (Map.Entry<TopicPartition, OffsetAndTimestamp> entry : consumer.offsetsForTimes(timeToSearch).entrySet()) {
        consumer.seek(entry.getKey(), entry.getValue() == null ? consumer.committed(entry.getKey()).offset() : entry.getValue().offset());
    }
}
Finally, the Duration strategy, which resets offsets to a point a given amount of time in the past. Check the progress before the reset. The duration argument uses the Java Duration format PnDTnHnMnS (P1DT0H0M0S, for example, means one day); the format itself is not covered in detail here.
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic --by-duration P1DT0H0M0S --execute
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.108:9092 --group mytopic-consumer-group --reset-offsets --topic mytopic:2 --by-duration P1DT0H0M0S --execute
The API usage is the same as for the DateTime strategy: convert the duration into a timestamp and pass it to offsetsForTimes, as sketched below.
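As a minimal sketch, assuming the same broker address, consumer group, and topic as the earlier snippets, the duration string can be converted into a lookback in milliseconds with java.time.Duration and then handled exactly like the DateTime case:
// Minimal sketch, assuming the same setup as the earlier snippets.
// Also requires: import java.time.Duration;
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.108:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "mytopic-consumer-group");
final String topic = "mytopic";
try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    consumer.poll(0); // trigger partition assignment
    // P1DT0H0M0S parses to one day; subtract it from "now" to get the target timestamp
    long ts = System.currentTimeMillis() - Duration.parse("P1DT0H0M0S").toMillis();
    Map<TopicPartition, Long> timeToSearch = consumer.partitionsFor(topic).stream()
            .map(pi -> new TopicPartition(topic, pi.partition()))
            .collect(Collectors.toMap(Function.identity(), tp -> ts));
    for (Map.Entry<TopicPartition, OffsetAndTimestamp> entry : consumer.offsetsForTimes(timeToSearch).entrySet()) {
        // a null value means no message at or after the timestamp; fall back to the committed offset
        consumer.seek(entry.getKey(),
                entry.getValue() == null ? consumer.committed(entry.getKey()).offset() : entry.getValue().offset());
    }
}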
That covers how consumer group offsets can be reset in Kafka 2.7. Thanks for reading, and I hope the examples above prove useful in practice.