1. Install kafkacat
Ubuntu
apt-get install kafkacat
CentOS
Install the dependency:
yum install librdkafka-devel
Download the source from GitHub, then build it on CentOS:
./configure <usual-configure-options>
make
sudo make install
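Before moving on, it is worth checking that the build works and that kafkacat can reach the broker. A minimal check, assuming the same localhost:9092 broker used in the steps below:
# list broker and topic metadata; the connect-offsets topic should appear in the output
kafkacat -b localhost:9092 -L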
2. Watch the data in the target topic
lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}]    {"incrementing":1005}
There is only one record in the Kafka topic connect-offsets.
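The key is the JSON array Kafka Connect uses to identify the source partition (connector name plus partition map), and the value is the stored source offset, here the last incrementing id 1005. If you prefer an explicit separator between key and value, the console consumer also accepts a key.separator property; a sketch using '#' so the output matches the kafkacat delimiter used in the next steps:
bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true --property key.separator='#'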
3. Dump the record from the topic
lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ kafkacat -b localhost:9092 -t connect-offsets -C -K# -o-1
% Reached end of topic connect-offsets [0] at offset 0
% Reached end of topic connect-offsets [5] at offset 0
% Reached end of topic connect-offsets [10] at offset 0
% Reached end of topic connect-offsets [20] at offset 0
% Reached end of topic connect-offsets [15] at offset 0
% Reached end of topic connect-offsets [9] at offset 0
% Reached end of topic connect-offsets [11] at offset 0
% Reached end of topic connect-offsets [4] at offset 0
% Reached end of topic connect-offsets [16] at offset 0
% Reached end of topic connect-offsets [17] at offset 0
% Reached end of topic connect-offsets [3] at offset 0
% Reached end of topic connect-offsets [24] at offset 0
% Reached end of topic connect-offsets [23] at offset 0
% Reached end of topic connect-offsets [13] at offset 0
% Reached end of topic connect-offsets [18] at offset 0
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}
% Reached end of topic connect-offsets [8] at offset 0
% Reached end of topic connect-offsets [2] at offset 0
% Reached end of topic connect-offsets [12] at offset 0
% Reached end of topic connect-offsets [19] at offset 0
% Reached end of topic connect-offsets [14] at offset 0
% Reached end of topic connect-offsets [1] at offset 0
% Reached end of topic connect-offsets [6] at offset 0
% Reached end of topic connect-offsets [7] at offset 0
% Reached end of topic connect-offsets [21] at offset 0
% Reached end of topic connect-offsets [22] at offset 1
the value:
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}
is what we want!
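In the dump above only partition 22 reports offset 1 while every other partition sits at offset 0, so the record lives in partition 22. To cut the noise from the empty partitions, a possible variant (same broker, partition number taken from the output above) is to read just that partition and exit at its end:
kafkacat -b localhost:9092 -t connect-offsets -C -K# -p 22 -o beginning -e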
4. Use the value obtained in step 3 as a template and send it to the topic again
lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ echo '["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1}' |
> kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#
Here, we modify the incrementing value from 1005 to 1.
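If the goal is to wipe the stored offset completely rather than rewind it, the same -Z flag can be used to produce a NULL (tombstone) value for the key: with -Z, an empty value after the '#' delimiter is sent as NULL, which the Connect worker treats as no committed offset for that source partition. A sketch, assuming the same key and broker:
echo '["jdbc_source_inventory_customers",{"query":"query"}]#' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#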
5. Watch the topic again
lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}]    {"incrementing":1005}
["jdbc_source_inventory_customers",{"query":"query"}]    {"incrementing":1}
We can see that there are now two records with the same key in the topic, and the newer one carries the reset offset.
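connect-offsets is a compacted topic, so only the latest value per key is kept once compaction runs, and a source task reads that latest value when it starts. For the connector to pick up the reset offset its task usually has to be restarted; a sketch against the Connect REST API, assuming a worker on localhost:8083 and a single task with id 0:
curl -X POST http://localhost:8083/connectors/jdbc_source_inventory_customers/tasks/0/restart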
Using kafkacat to reset Kafka Connect offsets
Source: https://www.cnblogs.com/lenmom/p/10898581.html