
Conversation

@nomisRev (Owner) commented on Jul 13, 2022

To properly stream records from Kafka, you need an event loop that drives the consumer.

There are several reasons why this is needed:

  • Commits can only occur during poll events
  • You need to poll continuously, or pause partitions, to guarantee proper back-pressure without expensive rebalancing/repartitioning (see the sketch below)
  • Committing offsets to Kafka must be optimised while keeping strong guarantees in the face of stream termination

Also see https://tuleism.github.io/blog/2021/parallel-backpressured-kafka-consumer/.
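
For intuition, here is a minimal sketch of such a loop against the plain Apache Kafka `KafkaConsumer` (not the code this PR adds; the real loop is modelled on reactor-kafka). The `enqueue`/`hasCapacity` helpers and the watermark values are hypothetical stand-ins for a real downstream buffer:

```kotlin
import java.time.Duration
import java.util.concurrent.ConcurrentLinkedQueue
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.consumer.OffsetAndMetadata
import org.apache.kafka.common.TopicPartition

// Hypothetical bounded per-partition buffers standing in for the downstream
// stream; the watermarks are illustrative, not values used by this PR.
val buffers = mutableMapOf<TopicPartition, ArrayDeque<ConsumerRecord<String, String>>>()

fun enqueue(tp: TopicPartition, recs: List<ConsumerRecord<String, String>>): Boolean {
    val buf = buffers.getOrPut(tp) { ArrayDeque() }
    buf.addAll(recs)
    return buf.size >= 1_000 // high-watermark: this partition should pause
}

fun hasCapacity(tp: TopicPartition): Boolean =
    (buffers[tp]?.size ?: 0) < 500 // low-watermark: safe to resume

fun pollLoop(consumer: KafkaConsumer<String, String>) {
    // KafkaConsumer is not thread-safe: commits requested from other threads
    // are queued here and executed on this loop, i.e. "during poll events".
    val pendingCommits = ConcurrentLinkedQueue<Map<TopicPartition, OffsetAndMetadata>>()
    val paused = mutableSetOf<TopicPartition>()

    while (true) {
        // 1. Drain queued commit requests on the polling thread.
        generateSequence { pendingCommits.poll() }.forEach { offsets ->
            consumer.commitAsync(offsets) { _, error -> error?.printStackTrace() }
        }

        // 2. Keep polling continuously so the broker keeps seeing progress.
        val records = consumer.poll(Duration.ofMillis(100))

        // 3. Back-pressure: pause saturated partitions instead of stopping
        //    poll(), which would eventually trigger an expensive rebalance.
        for (tp in records.partitions()) {
            if (enqueue(tp, records.records(tp))) {
                consumer.pause(listOf(tp))
                paused += tp
            }
        }

        // 4. Resume partitions whose downstream buffer has drained.
        val resumable = paused.filter(::hasCapacity)
        if (resumable.isNotEmpty()) {
            consumer.resume(resumable)
            paused -= resumable.toSet()
        }
    }
}
```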

This PR adds a custom event loop for facilitating this based on reactor-kafka.
Some work still needs to be done to support EXACTLY_ONCE and AT_MOST_ONCE delivery.
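
With such a loop in place, AT_LEAST_ONCE delivery amounts to acknowledging a record only after it has been processed, leaving the actual commit to the poll loop. Below is a usage sketch with illustrative type shapes modelled after reactor-kafka; the real types in the .receiver package may differ:

```kotlin
import kotlinx.coroutines.flow.Flow

// Illustrative shapes only; the real definitions live in the new .receiver package.
interface ReceiverOffset { suspend fun acknowledge() }
interface ReceiverRecord<K, V> { fun value(): V; val offset: ReceiverOffset }
interface KafkaReceiver<K, V> { fun receive(topic: String): Flow<ReceiverRecord<K, V>> }

// At-least-once: acknowledge only after processing succeeds, so a crash
// before acknowledge() causes redelivery rather than data loss.
suspend fun consume(receiver: KafkaReceiver<String, String>) {
    receiver.receive("example-topic").collect { record ->
        println(record.value())     // stand-in for real processing
        record.offset.acknowledge() // marks the offset for the next commit on the poll loop
    }
}
```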

Given that this work makes everything in the Consumer.kt file obsolete, I decided to move it to a .receiver package, following naming similar to reactor-kafka's. This is prone to change/improvement towards 1.0.

I tried to add as much documentation inside the code as possible. All feedback, suggestions, code reviews, and questions are welcome! 🙏

@nomisRev merged commit c8186c1 into main on Jul 24, 2022
@nomisRev deleted the wip-consumer-loop branch on Jul 24, 2022 at 17:26