

@zhijun42 zhijun42 commented Dec 31, 2025

Solves issues #18879 and #20716


This change refactors the client-side watch stream implementation to make sending watch requests (create / cancel / progress) fully asynchronous and safe from deadlock under the in-process client-server transport.

Instead of sending directly from the main watchGRPCStream.run loop, watch requests are now enqueued (in a non-blocking way) into the bounded buffered channels sendCreatec and sendBestEffortc, and handled by a dedicated sendLoop goroutine. I renamed the serveWatchClient function to recvLoop, so this recvLoop/sendLoop design now mirrors the server-side implementation.

The reason for having two channels for sending client requests is the following:

We have three types of watch requests: create, cancel, and progress. A create request has to be delivered to and processed by the server, and the user client blocks until it receives the created response. The cancel and progress requests, however, are best-effort. This distinction matters because, when the watch system is under high pressure, we need to keep the create requests in the sendCreatec channel while dropping the best-effort ones from the sendBestEffortc channel.
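To make this concrete, here is a minimal sketch of the enqueue side. The channel names sendCreatec and sendBestEffortc are the ones described above; the surrounding package, types, buffer size, and helper names are illustrative stand-ins rather than the exact code in this PR.

```go
package watch // illustrative layout, not the real client/v3 package

// watchRequest stands in for the client's internal request type.
type watchRequest struct{ /* fields elided */ }

// sendQueues holds the two bounded buffers described above.
type sendQueues struct {
	sendCreatec     chan *watchRequest // create requests: must not be lost
	sendBestEffortc chan *watchRequest // cancel/progress requests: droppable under pressure
}

func newSendQueues(size int) *sendQueues {
	return &sendQueues{
		sendCreatec:     make(chan *watchRequest, size),
		sendBestEffortc: make(chan *watchRequest, size),
	}
}

// enqueueBestEffort never blocks the run loop: if the buffer is full,
// the cancel/progress request is simply dropped.
func (q *sendQueues) enqueueBestEffort(r *watchRequest) bool {
	select {
	case q.sendBestEffortc <- r:
		return true
	default:
		return false // dropped under pressure; correctness is unaffected
	}
}
```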

At any moment, there is only one sendLoop goroutine running. When the connection is disrupted, recvLoop exits with an error and the main watchGRPCStream.run loop closes the sendc queue channel, waits for sendLoop to exit, and then spins up a new recvLoop/sendLoop pair, the same as in the server-side implementation.
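Continuing the sketch above (sendLoop, the stream interface, and the restart helper below are simplified stand-ins, not the exact identifiers in this PR):

```go
// watchSender abstracts the send side of the underlying gRPC watch stream.
type watchSender interface {
	Send(*watchRequest) error
}

// sendLoop is the single goroutine allowed to write to the stream. It drains
// both queues and exits when the create queue is closed or a send fails.
func (q *sendQueues) sendLoop(stream watchSender, donec chan<- struct{}) {
	defer close(donec)
	for {
		select {
		case r, ok := <-q.sendCreatec:
			if !ok {
				return // run loop closed the queue; it will start a new pair
			}
			if stream.Send(r) != nil {
				return
			}
		case r := <-q.sendBestEffortc:
			if stream.Send(r) != nil {
				return
			}
		}
	}
}

// restart sketches how the run loop reacts to a disrupted connection.
func restart(q *sendQueues, recvDonec, sendDonec <-chan struct{}) {
	<-recvDonec          // recvLoop exited with an error
	close(q.sendCreatec) // signal sendLoop to stop
	<-sendDonec          // so at most one sendLoop is ever running
	// ...the run loop then re-establishes the stream and spins up a fresh
	// recvLoop/sendLoop pair, mirroring the server-side implementation.
}
```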

I added an integration test TestV3WatchCancellationStorm to show that the system works fine even when there are a lot of watch cancel and create requests.
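For reference, the rough shape of such a test is sketched below; this is a simplified illustration against a clientv3.Client, not the actual TestV3WatchCancellationStorm code in this PR.

```go
package integration

import (
	"context"
	"fmt"
	"sync"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// cancellationStorm opens and immediately cancels n watches concurrently,
// flooding the shared watch stream with create and cancel requests.
func cancellationStorm(ctx context.Context, cli *clientv3.Client, n int) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			wctx, cancel := context.WithCancel(ctx)
			wch := cli.Watch(wctx, fmt.Sprintf("storm-%d", i))
			cancel() // cancel right away, producing a burst of cancel requests
			for range wch {
				// drain until the client closes the watch channel
			}
		}(i)
	}
	wg.Wait() // the test then verifies that new watches can still be created
}
```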

Once the previous PR #21064 that reproduces the deadlock issue is reviewed and merged, I'll rebase this one and update that reproduction test as well. I separated the PRs to make code review easier.

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: zhijun42
Once this PR has been reviewed and has the lgtm label, please assign spzala for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot

Hi @zhijun42. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@serathius
Member

/ok-to-test


serathius commented Jan 1, 2026

Haven't dug deep, but let me list the things I would look into when reviewing this code:

  • How is this different from the previous issue with separation of control vs data responses? (Requested watcher progress notifications are not synchronised with stream #15220)
  • How does dropping watch responses impact the client? I expect dropping create will deadlock the client.
  • Can we add a robustness test for this?
  • Could we refactor the code to make it easier to understand before we add complexity?
  • Is the proposed response architecture the only one that solves the issue? What are its downsides or alternatives?


codecov bot commented Jan 1, 2026

Codecov Report

❌ Patch coverage is 92.39130% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.66%. Comparing base (5e543d7) to head (1fe01bf).
⚠️ Report is 26 commits behind head on main.

Files with missing lines | Patch % | Lines
client/v3/watch.go | 92.39% | 6 Missing and 1 partial ⚠️

Additional details and impacted files

Files with missing lines | Coverage Δ
server/etcdserver/api/v3rpc/watch.go | 85.75% <ø> (+1.16%) ⬆️
client/v3/watch.go | 95.09% <92.39%> (+1.65%) ⬆️

... and 19 files with indirect coverage changes

@@            Coverage Diff             @@
##             main   #21065      +/-   ##
==========================================
+ Coverage   68.44%   68.66%   +0.22%     
==========================================
  Files         429      429              
  Lines       35203    35349     +146     
==========================================
+ Hits        24093    24271     +178     
+ Misses       9709     9687      -22     
+ Partials     1401     1391      -10     

Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5e543d7...1fe01bf. Read the comment docs.


@serathius
Member

/retest


zhijun42 commented Jan 6, 2026

/retest


zhijun42 commented Jan 6, 2026

@serathius Thanks for the comprehensive list!

How is this different from the previous issue with separation of control vs data responses?

In the previous progress issue, there was a race between watch events and control responses, and a wrong ordering of revisions could happen. That is fundamentally different from this issue, where a client sending watch requests much faster than the server can process them can cause a deadlock. For ease of reasoning, you can ignore the data-response channel in the server and just focus on how the control channel fills up and causes a deadlock in the client/server data exchange.
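As a toy illustration of the cyclic wait (this is not etcd code): two goroutines exchanging messages over bounded channels stall as soon as both buffers are full, because each side is blocked sending while the other side has stopped receiving. Running this trips Go's runtime deadlock detector.

```go
package main

func main() {
	reqc := make(chan int, 1)  // "control" requests: client -> server
	respc := make(chan int, 1) // responses: server -> client

	go func() { // "server": must publish a response before taking the next request
		for r := range reqc {
			respc <- r // blocks once respc is full and the client isn't reading
		}
	}()

	// "client": keeps sending requests faster than it drains responses
	for i := 0; i < 4; i++ {
		reqc <- i // blocks once reqc is full while the server is stuck on respc
	}
	for range respc { // never reached; the program deadlocks above
	}
}
```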

How does dropping watch responses impact the client? I expect dropping create will deadlock the client.

I assume you meant to say "watch requests" (I'm not dropping responses). You're right: I just realized that dropping create requests is incorrect and would leave the system in a broken state in which it can't serve create requests anymore.

I refactored my changes to use two separate send channels so that watch create requests are always kept. I also added an integration test TestV3WatchCancellationStorm to show that the system works fine even when there are a lot of watch cancel and create requests.

Can we add a robustness test for this?

I'm not sure what kind of robustness test would bring additional value beyond the integration test I just added.

Could we refactor the code to make it easier to understand before we add complexity?

Good point! I did have to spend many days reading through the codebase to understand the watch implementation. Unfortunately, I couldn't find an effective way to refactor it. That said, I did my best to:

  • Add comprehensive comments for both my new changes and the existing components.
  • Refactor the client-side main run() loop into recvLoop/sendLoop, mirroring the recvLoop/sendLoop implementation of serverWatchStream on the server side. This makes it clear how the client and server exchange watch requests/responses.

What are its downsides or alternatives?

Downsides:

  • More code paths, which are harder for new developers to fully grasp (although I did try to make everything readable).
  • New behavior: cancel/progress requests can be dropped under high pressure (without impacting the correctness of the system).

Alternatives:

  • No changes to the client side; simply increase the server-side ctrlStream buffer size. This makes the deadlock harder to trigger, but doesn't really solve the issue.
  • Enhance the current in-process, channel-based stream implementation with gRPC-like backpressure flow control. This is not a complete fix either: backpressure reduces the likelihood of a deadlock but doesn't guarantee that the watch client/server state machine can't form a cyclic wait. It also has a much larger blast radius and is harder to fully implement.

Is this the only solution?

  • I can't guarantee that, but this is the best solution I could come up with.

@k8s-ci-robot

@zhijun42: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-etcd-coverage-report
Commit: 1fe01bf
Required: true
Rerun command: /test pull-etcd-coverage-report

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


zhijun42 commented Jan 6, 2026

I also see that the test case TestCacheLaggingWatcher/pipeline_overflow has been failing on many other PRs. I don't think it's related to my changes here.
