pin the gpt-fast repo commit version #2966

Merged: 2 commits merged into master from issues/update_gpt_fast on Feb 27, 2024
Conversation

agunapal (Collaborator)

Description

Updates the GPT-Fast example documentation.

The gpt-fast repo commit version needs to be pinned so the example does not break when upstream changes.

Fixes #(issue)
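
The PR itself does not include the commit hash or the documentation diff, but pinning the example's gpt-fast checkout would typically look like the sketch below (the SHA is a placeholder, not the value used in this change):

git clone https://github.com/pytorch-labs/gpt-fast/
cd gpt-fast
# Check out a fixed, known-good commit instead of tracking main,
# so the TorchServe example keeps working when upstream changes.
git checkout <pinned-commit-sha>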

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the unit or integration tests that you ran to verify your changes, with a summary of the results and instructions so they can be reproduced.
Please also list any relevant details of your test configuration.

  • Test A
curl "http://localhost:8080/predictions/gpt_fast" -T request.json
2024-02-26T23:04:59,678 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - ts_inference_requests_total.Count:1.0|#model_name:gpt_fast,model_version:default|#hostname:ip-172-31-11-40,timestamp:1708988699
2024-02-26T23:04:59,679 [DEBUG] W-9000-gpt_fast_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1708988699679
2024-02-26T23:04:59,680 [INFO ] W-9000-gpt_fast_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1708988699680
2024-02-26T23:04:59,680 [INFO ] W-9000-gpt_fast_1.0-stdout MODEL_LOG - Backend received inference at: 1708988699
2024-02-26T23:04:59,711 [INFO ] W-9000-gpt_fast_1.0 ACCESS_LOG - /127.0.0.1:58004 "PUT /predictions/gpt_fast HTTP/1.1" 200 33
2024-02-26T23:04:59,712 [INFO ] W-9000-gpt_fast_1.0 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-11-40,timestamp:1708988699
 is Paris. It is located in the northern central part of the country and is known for its beautiful architecture, art museums, fashion, and historical landmarks such as the Eiffel Tower and the Louvre Museum.

2024-02-26T23:05:01,016 [INFO ] W-9000-gpt_fast_1.0-stdout MODEL_LOG - Num tokens = 50
Paris is
2024-02-26T23:05:01,016 [INFO ] W-9000-gpt_fast_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1336853.093|#model_name:gpt_fast,model_version:default|#hostname:ip-172-31-11-40,timestamp:1708988701
2024-02-26T23:05:01,016 [INFO ] W-9000-gpt_fast_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:80.835|#model_name:gpt_fast,model_version:default|#hostname:ip-172-31-11-40,timestamp:1708988701
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]HandlerTime.Milliseconds:1335.32|#ModelName:gpt_fast,Level:Model|#type:GAUGE|#hostname:ip-172-31-11-40,1708988701,b5c68212-74a0-402d-891a-4421e92f3b6a, pattern=[METRICS]
2024-02-26T23:05:01,017 [DEBUG] W-9000-gpt_fast_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 80835, Backend time ns: 1337548332
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0-stdout MODEL_METRICS - HandlerTime.ms:1335.32|#ModelName:gpt_fast,Level:Model|#hostname:ip-172-31-11-40,requestID:b5c68212-74a0-402d-891a-4421e92f3b6a,timestamp:1708988701
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:ip-172-31-11-40,timestamp:1708988701
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:1335.42|#ModelName:gpt_fast,Level:Model|#type:GAUGE|#hostname:ip-172-31-11-40,1708988701,b5c68212-74a0-402d-891a-4421e92f3b6a, pattern=[METRICS]
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1331
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:7.0|#Level:Host|#hostname:ip-172-31-11-40,timestamp:1708988701
2024-02-26T23:05:01,017 [INFO ] W-9000-gpt_fast_1.0-stdout MODEL_METRICS - PredictionTime.ms:1335.42|#ModelName:gpt_fast,Level:Model|#hostname:ip-172-31-11-40,requestID:b5c68212-74a0-402d-891a-4421e92f3b6a,timestamp:1708988701
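
The request.json payload used for this run is not included in the PR; the streamed completion ("... is Paris. It is located ...") is simply interleaved with the server log lines above. A plausible payload consistent with the logged 50-token generation, assuming hypothetical prompt/max_new_tokens fields in the handler, would be:

# Hypothetical request.json; the actual payload and field names used in the test are not shown in the PR.
cat > request.json <<'EOF'
{
  "prompt": "The capital of France",
  "max_new_tokens": 50
}
EOF
curl "http://localhost:8080/predictions/gpt_fast" -T request.json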

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

agunapal requested a review from mreso on February 26, 2024 at 23:10
mreso (Collaborator) left a comment:

LGTM

agunapal added this pull request to the merge queue on Feb 27, 2024
Merged via the queue into master with commit eaacf9d on Feb 27, 2024
15 checks passed
agunapal added this to the v0.10.0 milestone on Feb 27, 2024
agunapal deleted the issues/update_gpt_fast branch on February 28, 2024 at 00:02
muthuraj-i2i pushed a commit to muthuraj-i2i/serve that referenced this pull request Mar 1, 2024