
fix workerthread hang due to grpc client cancel connection #1854

Merged: 3 commits merged into master on Sep 13, 2022

Conversation

lxning (Collaborator) commented Sep 10, 2022

Description

Please read our CONTRIBUTING.md prior to creating your first pull request.

Please include a summary of the feature or issue being fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

The gRPC service throws an exception when it tries to send a response after the gRPC client has already disconnected. That exception makes the worker thread attempt a shutdown, but the shutdown then blocks on the gRPC server call lock.

Fixes #1815
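For context, here is a minimal sketch of the failure mode (class and type names are illustrative, not TorchServe's actual code): once the client cancels the call, an unguarded write to the response observer throws, and that exception is what pushes the worker thread into the blocked shutdown path.

```java
import io.grpc.stub.StreamObserver;

// Illustrative only: a response writer that does not check whether the
// gRPC client is still connected before sending.
final class UnguardedResponder<T> {
    void send(StreamObserver<T> responseObserver, T response) {
        // If the client has already cancelled the call, onNext() throws
        // io.grpc.StatusRuntimeException ("CANCELLED: call already cancelled").
        // Letting that exception escape triggers the worker thread shutdown
        // described above, which then blocks on the gRPC server call lock.
        responseObserver.onNext(response);
        responseObserver.onCompleted();
    }
}
```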

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

  • Test A
    Follow the reproduce steps:
  1. Start a gRPC client and send multiple inference requests.
  2. Stop the gRPC client.
  3. Start a gRPC client and send multiple inference requests again.

Result: workerthread 9000 stays alive.

2022-09-09T20:08:27,915 [WARN ] W-9000-BERTSeqClassification_torchscript_1.0 org.pytorch.serve.job.Job - grpc client call already cancelled
2022-09-09T20:08:27,915 [WARN ] W-9000-BERTSeqClassification_torchscript_1.0 org.pytorch.serve.job.Job - grpc client call already cancelled
....
2022-09-09T20:08:34,483 [INFO ] W-9000-BERTSeqClassification_torchscript_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 298

ts_log.log.zip

  • Test B
    Logs for Test B

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@lxning lxning added bug Something isn't working p0 high priority labels Sep 10, 2022
@lxning lxning added this to the v0.6.1 milestone Sep 10, 2022
@lxning lxning self-assigned this Sep 10, 2022
codecov bot commented Sep 10, 2022

Codecov Report

Merging #1854 (32fae9e) into master (e8b8a03) will not change coverage.
The diff coverage is n/a.

❗ Current head 32fae9e differs from pull request most recent head 82d99e2. Consider uploading reports for the commit 82d99e2 to get more accurate results

@@           Coverage Diff           @@
##           master    #1854   +/-   ##
=======================================
  Coverage   45.23%   45.23%           
=======================================
  Files          64       64           
  Lines        2602     2602           
  Branches       60       60           
=======================================
  Hits         1177     1177           
  Misses       1425     1425           


@msaroufim msaroufim self-requested a review September 10, 2022 22:46
hgong-snap commented Sep 11, 2022

Hi! Just to understand the fix: will the worker still restart (correctly, without hanging), or will it catch the exception so the worker stays alive? Asking because in a high-throughput production use case cancellations can be frequent, and frequent worker restarts might result in frequent 'no worker' errors.

lxning (Collaborator, Author) commented Sep 12, 2022


@hgong-snap This fix does not throw an exception (it only logs the warning "grpc client call already cancelled"), so the worker stays alive and can process incoming new requests. The philosophy behind it is that the worker is independent of the frontend gRPC handler (i.e. of client interruptions).
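A minimal sketch of that behavior, assuming the guard looks roughly like the following (names are illustrative; in the PR the warning is logged from org.pytorch.serve.job.Job, as seen in the test log above):

```java
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative handler: register a cancel handler up front and guard the
// response write, so a disconnected client only produces a warning instead
// of an exception that would take down the worker thread.
final class GuardedResponder<T> {
    private static final Logger logger = LoggerFactory.getLogger(GuardedResponder.class);

    void handle(T response, StreamObserver<T> responseObserver) {
        ServerCallStreamObserver<T> call =
                (ServerCallStreamObserver<T>) responseObserver;
        // Registering an onCancel handler also disables gRPC's default
        // behavior of throwing "CANCELLED: call already cancelled" on
        // later writes.
        call.setOnCancelHandler(
                () -> logger.warn("grpc client call already cancelled"));

        if (call.isCancelled()) {
            // The client went away: log and drop the response instead of
            // throwing, so the worker stays alive for the next request.
            logger.warn("grpc client call already cancelled");
            return;
        }
        call.onNext(response);
        call.onCompleted();
    }
}
```

This matches the two commit messages below ("check grpc client cancel connection" and "grpc setOnCancelHandler"): the worker never sees a gRPC-level failure just because the client hung up.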

MHatemAbdelhamid commented

Can someone review this? It resolves a blocking issue. @maaquib @msaroufim

msaroufim (Member) left a comment

Stamping because this is urgent, but wondering out loud if we should integrate a test case with an interrupt.

maaquib (Collaborator) left a comment

LGTM. Wondering if we should refactor this into a method in grpc utils to avoid code duplication.

@lxning lxning merged commit 9a9c2f9 into master Sep 13, 2022
jagadeeshi2i pushed a commit to jagadeeshi2i/serve that referenced this pull request Nov 1, 2022

* check grpc client cancel connection

* grpc setOnCancelHandler

Co-authored-by: Aaqib <maaquib@gmail.com>