Exchange Llama2 against Llama3 in HuggingFace_accelerate example #3108

Merged: 5 commits into master on Apr 24, 2024

Conversation

@mreso (Collaborator) commented Apr 24, 2024

Description

This PR replaces Llama2 with Llama3 in the HuggingFace_accelerate example.

Fixes #3107
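
At its core, the change points the example at the public Llama 3 checkpoint instead of the Llama 2 one. As a minimal sketch of the corresponding download step (the Download_model.py script name and its flags are assumptions here, not confirmed by this page; the two model ids are the public Hugging Face ids):

# Before: fetch the Llama 2 chat checkpoint (script name and flags assumed)
# python Download_model.py --model_name meta-llama/Llama-2-70b-chat-hf
# After: fetch the Llama 3 instruct checkpoint instead
python Download_model.py --model_name meta-llama/Meta-Llama-3-70B-Instruct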

Type of change


  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the unit or integration tests that you ran to verify your changes, along with a summary of the results. Provide instructions so the tests can be reproduced, and list any relevant details of your test configuration.

  • Follow README steps
curl -v "http://localhost:8080/predictions/llama-3-70b-instruct" -T sample_text.txt & curl -v "http://localhost:8080/predictions/llama-3-70b-instruct" -T sample_text.txt & curl -v "http://localhost:8080/predictions/llama-3-70b-instruct" -T sample_text2.txt
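
This fires three requests concurrently: the first two send the same prompt (sample_text.txt) in the background so they can be batched together, while the third sends a different prompt (sample_text2.txt). The batching is visible in the log below, where the backend receives both identical prompts in a single call.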

Output:

2024-04-24T04:30:40,752 [INFO ] W-9000-llama-3-70b-instruct_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1713933040752
2024-04-24T04:30:40,753 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_LOG - Backend received inference at: 1713933040
2024-04-24T04:30:40,754 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_LOG - Received texts: '['what is the recipe of mayonnaise?', 'what is the recipe of mayonnaise?']'
2024-04-24T04:30:40,759 [WARN ] W-9000-llama-3-70b-instruct_1.0-stderr MODEL_LOG - Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
2024-04-24T04:31:34,819 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_LOG - Generated text: ["what is the recipe of mayonnaise? Mayonnaise is a thick, creamy condiment made from a mixture of oil, egg yolks, vinegar or lemon juice, and seasonings. Here's a basic recipe for homemade mayonnaise:\n\n", "what is the recipe of mayonnaise? Mayonnaise is a thick, creamy condiment made from a mixture of oil, egg yolks, vinegar or lemon juice, and seasonings. Here's a basic recipe for homemade mayonnaise:\n\n"]
2024-04-24T04:31:34,819 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]HandlerTime.Milliseconds:54065.78|#ModelName:llama-3-70b-instruct,Level:Model|#type:GAUGE|#hostname:ip-172-31-3-46,1713933094,9f650527-4c07-4133-a522-da81a4345758,3702d8f1-6613-46bd-bd7d-3b3972626d83, pattern=[METRICS]
2024-04-24T04:31:34,820 [INFO ] W-9000-llama-3-70b-instruct_1.0 ACCESS_LOG - /127.0.0.1:32800 "PUT /predictions/llama-3-70b-instruct HTTP/1.1" 200 54072
2024-04-24T04:31:34,821 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,821 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5.4068576692E7|#model_name:llama-3-70b-instruct,model_version:default|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,821 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_METRICS - HandlerTime.ms:54065.78|#ModelName:llama-3-70b-instruct,Level:Model|#hostname:ip-172-31-3-46,requestID:9f650527-4c07-4133-a522-da81a4345758,3702d8f1-6613-46bd-bd7d-3b3972626d83,timestamp:1713933094
2024-04-24T04:31:34,821 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:178.891|#model_name:llama-3-70b-instruct,model_version:default|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,821 [DEBUG] W-9000-llama-3-70b-instruct_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 178891, Backend time ns: 54069722724
2024-04-24T04:31:34,821 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:54065.88|#ModelName:llama-3-70b-instruct,Level:Model|#type:GAUGE|#hostname:ip-172-31-3-46,1713933094,9f650527-4c07-4133-a522-da81a4345758,3702d8f1-6613-46bd-bd7d-3b3972626d83, pattern=[METRICS]
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_METRICS - PredictionTime.ms:54065.88|#ModelName:llama-3-70b-instruct,Level:Model|#hostname:ip-172-31-3-46,requestID:9f650527-4c07-4133-a522-da81a4345758,3702d8f1-6613-46bd-bd7d-3b3972626d83,timestamp:1713933094
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 ACCESS_LOG - /127.0.0.1:32810 "PUT /predictions/llama-3-70b-instruct HTTP/1.1" 200 54073
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:5.4070136298E7|#model_name:llama-3-70b-instruct,model_version:default|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:141.191|#model_name:llama-3-70b-instruct,model_version:default|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,822 [DEBUG] W-9000-llama-3-70b-instruct_1.0 org.pytorch.serve.job.RestJob - Waiting time ns: 141191, Backend time ns: 54070543212
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 54068
2024-04-24T04:31:34,822 [INFO ] W-9000-llama-3-70b-instruct_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:ip-172-31-3-46,timestamp:1713933094
2024-04-24T04:31:34,923 [DEBUG] W-9000-llama-3-70b-instruct_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1713933094923
2024-04-24T04:31:34,923 [INFO ] W-9000-llama-3-70b-instruct_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1713933094923
2024-04-24T04:31:34,924 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_LOG - Backend received inference at: 1713933094
2024-04-24T04:31:34,924 [INFO ] W-9000-llama-3-70b-instruct_1.0-stdout MODEL_LOG - Received texts: '['Hello\n']'
2024-04-24T04:31:34,925 [WARN ] W-9000-llama-3-70b-instruct_1.0-stderr MODEL_LOG - Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
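
In summary, the log shows the expected behavior after the swap: the two concurrent identical requests were batched into a single inference call (Received texts lists both prompts), both returned HTTP 200 with a handler time of roughly 54 seconds on the 70B model, and the third request ('Hello\n') was then processed as a separate batch.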

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@agunapal (Collaborator) left a comment:
LGTM

@mreso enabled auto-merge April 24, 2024 16:29
@mreso added this pull request to the merge queue Apr 24, 2024
Merged via the queue into master with commit 6ae146d Apr 24, 2024
11 of 12 checks passed