Consumer fails to consume on broker restart #173
Facing a similar issue to this one; the consumer hangs:
Was able to attach gdb when the consumer hangs; the tail of the backtrace from (gdb) bt:
stack_end=0x7fff7c4024d8) at libc-start.c:287
#16 0x0000000000578c4e in _start ()
The following logs are observed whenever I come across this issue; not sure if this is normal, still learning Kafka.
On consumer:
On broker restart:
@edenhill Can you please help me out here? Thanks
In your initial description you don't mention closing down the consumer, but in later comments you mention a hang on destroy and provide a backtrace from consumer_close. Can you clarify what is not working, the consumption or the closing?
Hi @edenhill, sorry if I was not clear earlier; I have been observing issues in both cases (consumption and closing):
Hi @edenhill, any update on this?
Do these log messages keep repeating after the broker has been restarted?
When restarting the brokers, do they recover the correct number of replicas to start being operational again? If you restart the client when this happens, does it recover? |
No logs after this point.
If you reproduce this issue again where it hangs during normal operation (not during close), can you do a …
Also, I am observing that after detaching from gdb the consumer starts working again.
The gdb backtraces look fine. |
After this log there should be coordinator queries showing up every 1000 ms; it is very weird that they are not appearing, and according to gdb nothing is dead-locked internally either. In this case your consumer is just sitting there, waiting for more messages, right? You have made no attempt to close the consumer at this point, right? Is this reproducible with the example consumer?
Yes, I am just polling with timeout=1 and no attempt was made to close the consumer. On trying multiple times I have observed the following backtrace; in this case the backtrace for thread 3 seems different from the previous backtraces, not sure if this is OK. I will try out the examples. Thanks
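For comparison, the polling described above (poll with a short timeout, no close) can be sketched as a minimal loop. This assumes the confluent-kafka `Consumer` interface of `subscribe()`/`poll()`/`close()`; the loop itself only relies on those methods, and the names `consume_loop`/`max_messages` are illustrative, not from the original report:

```python
def consume_loop(consumer, topics, max_messages=10):
    """Poll until max_messages proper messages are seen (sketch).

    'consumer' is assumed to expose subscribe()/poll()/close(),
    as the confluent-kafka Consumer does.
    """
    consumer.subscribe(topics)
    seen = []
    try:
        while len(seen) < max_messages:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue  # nothing arrived within the timeout
            if msg.error():
                continue  # error/event object, not a proper message
            seen.append(msg.value())
    finally:
        consumer.close()  # leave the consumer group cleanly
    return seen
```

With a real `Consumer` this would be constructed from a config dict (bootstrap.servers, group.id); the `timeout=1.0` mirrors the timeout=1 polling described in the comment above.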
Closing this as it's not reproducible with librdkafka 0.9.4 & confluent-kafka 0.9.4. Thanks @edenhill |
Hi @edenhill, is it normal that consumer.poll() returns a message object with topic=None? I have a condition to check for topics before consuming the message and have observed that on some very rare occasions the topic is None.
Proper messages will always have the topic set, but since the Message object also doubles as a consumer (error) event, it might be non-topic-specific consumer events you are seeing. See here:
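The distinction above suggests checking msg.error() before touching msg.topic(). A minimal sketch, assuming only the Message interface (`error()`/`topic()`); the `handle` name and return values are hypothetical, for illustration:

```python
def handle(msg):
    """Classify a poll() result before using its topic (sketch)."""
    if msg is None:
        return "no-message"   # poll() timed out, nothing to do
    if msg.error():
        return "event"        # error/event object: topic may be None
    return msg.topic()        # proper message: topic is always set
```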
Hi all, @edenhill
I am facing an issue where, after the broker is restarted, the consumer is not able to get the latest produced messages.
The following steps are what I am trying:
Also posting the logs for the broker & consumer
Can anyone help me with this? Thanks
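To narrow down what the client does around the restart, librdkafka's debug contexts can be enabled in the consumer configuration. A sketch; the broker address and group id are placeholders, not values from this report:

```python
# Consumer configuration with librdkafka debug logging enabled (sketch)
conf = {
    "bootstrap.servers": "localhost:9092",  # assumption: adjust to your cluster
    "group.id": "example-group",            # hypothetical group name
    "debug": "cgrp,broker,protocol",        # librdkafka debug contexts
}
# consumer = Consumer(conf)  # with confluent_kafka installed
```

The resulting debug output shows broker connection state and consumer-group (coordinator) activity, which is where a post-restart hang would typically be visible.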
Kafka Version -> kafka_2.11-0.10.0.0
librdkafka -> master
confluent-kafka-python -> confluent-kafka (0.9.4)
brokerlog.txt
consumer.txt