@jonathanlehto jonathanlehto commented Oct 27, 2025

Description

Attempting to revitalize the HTTP sink updates. The branch ended up getting bigger than I hoped, but please provide any feedback you have!

Resolves <issue nr here>

PR Checklist

"failed requests: {}, throwing BatchHttpStatusCodeValidationFailedException from sink",
failedRequests
);
getFatalExceptionCons().accept(new BatchHttpStatusCodeValidationFailedException(
jonathanlehto (Author):

For context, the README reads as though an error should trigger a job failure, which is why I threw a fatal exception here.

@davidradl davidradl (Collaborator) left a comment

I would suggest that you raise an issue for each of the 3 things you would like to put in. It will be far easier to review smaller pieces.

  • The retries for the sink should be a very self-contained piece of code.
  • For the rate limiter - I see we have apache/flink#27134, which is not yet merged. Is there a generic way we can do this for sinks in the Flink code base?
  • FYI - we are porting changes over to https://github.com/apache/flink-connector-http and are hoping to release it in the near future.

You can configure HTTP status code handling for the HTTP sink table and enable automatic retries with delivery guarantees.

#### Retries and delivery guarantee
HTTP Sink supports automatic retries when `sink.delivery-guarantee` is set to `at-least-once`. Failed requests will be automatically retried based on the configured status codes.
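
For illustration, here is a minimal sketch of declaring a table that uses this option via Flink's Table API. Only `sink.delivery-guarantee` comes from the documentation above; the connector identifier (`http-sink`), URL, format, and schema are placeholder assumptions and may not match the connector's actual option names.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class HttpSinkRetryExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical HTTP sink table; option names other than
        // 'sink.delivery-guarantee' are placeholders for illustration.
        TableDescriptor httpSink = TableDescriptor.forConnector("http-sink")
                .schema(Schema.newBuilder()
                        .column("id", "BIGINT")
                        .column("payload", "STRING")
                        .build())
                .option("url", "http://localhost:8080/events") // placeholder endpoint
                .option("format", "json")
                // Proposed in this PR: failed requests are retried until delivered.
                .option("sink.delivery-guarantee", "at-least-once")
                .build();

        tEnv.createTemporaryTable("http_sink", httpSink);
        // An INSERT INTO http_sink ... statement would then write through the retrying sink.
    }
}
```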
@davidradl davidradl (Collaborator) commented Oct 30, 2025

`sink.delivery-guarantee` is interesting and looks like it is bringing this connector in line with others. When do we think we will get more than one request issued? I assume we would need more processing for exactly-once.

@jonathanlehto jonathanlehto (Author) commented Oct 30, 2025

I am lifting a big part of the code from an earlier PR, which was the source of using a delivery guarantee. However, for my personal use case, I do in fact need to guarantee at-least-once behavior 🙂. The current implementation has an indefinite number of retries. If we don't have the delivery guarantee, we'll need something like a number-of-attempts setting, imo. I'm fine with whatever though! I just really need retries!

@jonathanlehto jonathanlehto (Author) commented:

Exactly-once is pretty challenging given that we are using HTTP requests. Imo, it would require coordination with the receiving service to respond with some kind of ack in order to enforce it. I don't believe exactly-once can be achieved on just the client side with HTTP requests, if that makes sense. However, duplicate success message emissions should be really infrequent.

@jonathanlehto jonathanlehto commented Oct 30, 2025

> I would suggest that you raise an issue for each of the 3 things you would like to put in. It will be far easier to review smaller pieces.

That's great to hear this could make it into the Apache Foundation! I can certainly break up the retry behavior and the rate limiter. However, I'm not sure I can separate the delivery guarantee and the HTTP status code refactor. I don't think the current sink configuration makes much sense, and I would like to consolidate the status code parsing logic between the sink and polling clients. Let me know if you have thoughts there. I can certainly try to break the HTTP status code changes and delivery guarantee up if you still think it would be helpful!

You probably meant the config changes, the rate limiting, and the retry behavior 🤦. I can do that.
