
Llama2 Chatbot on Mac #2618

Merged
merged 24 commits into pytorch:master on Sep 30, 2023

Conversation

@agunapal (Collaborator) commented Sep 27, 2023

Description

This is an example showing how to deploy a llama2 chat app using TorchServe on your laptop!

We use streamlit to create the app.

We use llama-cpp-python for inference in this example.

This example doesn't include a streaming response: though I was able to get the server to send the response, the client was receiving junk.

(Architecture diagram)

(Screenshots: Server, Client)
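Since the example is built on llama-cpp-python, here is a minimal, hedged sketch of how a completion runs with that library; the model path, prompt, and generation parameters below are illustrative placeholders, not the values used in this PR.

from llama_cpp import Llama

# Load a quantized llama2 chat model from disk (path is a placeholder).
llm = Llama(model_path="models/llama-2-7b-chat.Q4_0.gguf")

# Run a single, non-streaming completion.
output = llm("Q: What is TorchServe? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])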

Fixes #(issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

  • Test A
    Logs for Test A

  • Test B
    Logs for Test B

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

codecov bot commented Sep 27, 2023

Codecov Report

Merging #2618 (5412735) into master (6e5b8cd) will not change coverage.
The diff coverage is n/a.

❗ Current head 5412735 differs from pull request most recent head 6eb0d65. Consider uploading reports for the commit 6eb0d65 to get more accurate results.

@@           Coverage Diff           @@
##           master    #2618   +/-   ##
=======================================
  Coverage   71.34%   71.34%           
=======================================
  Files          85       85           
  Lines        3905     3905           
  Branches       58       58           
=======================================
  Hits         2786     2786           
  Misses       1115     1115           
  Partials        4        4           


@agunapal changed the title from "(WIP)Llama2 Chat app" to "Llama2 Chat app on Mac" Sep 28, 2023
@agunapal requested a review from mreso September 28, 2023 23:44
@agunapal changed the title from "Llama2 Chat app on Mac" to "Llama2 Chatbot on Mac" Sep 28, 2023
@msaroufim (Member) left a comment

Cool! A bunch of minor feedback, but otherwise this is looking good.



import os

def start_server():
    os.system("torchserve --start --model-store model_store --ncs")
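(For context: --ncs disables TorchServe's snapshot feature, and --model-store points at the directory containing the model archives.)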
@msaroufim (Member):

for snappier starts you can disable the compression in archiver
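For context, a hedged sketch of what that suggestion might look like: torch-model-archiver's --archive-format no-archive option skips zipping the archive, so the server doesn't pay an extraction cost at startup. The model name, handler, and config file below are placeholders, not this PR's actual values.

import os

# Hedged sketch; names and paths are placeholders.
os.system(
    "torch-model-archiver --model-name llamacpp --version 1.0 "
    "--handler custom_handler.py --config-file model-config.yaml "
    "--archive-format no-archive --export-path model_store"
)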

@agunapal (Collaborator, Author):

Hmm, I'm not sure I follow. Since the archiver only packages the path to the weights (via the YAML file) rather than the actual weights, compression wouldn't come into the picture here, right?

examples/LLM/llama2/chat_app/torchserve_server_app.py (outdated, resolved)
examples/LLM/llama2/chat_app/torchserve_server_app.py (outdated, resolved)
server_state_container = st.container()
server_state_container.subheader("Server status:")

if st.session_state.started:
@msaroufim (Member):

can the server fail after this point?
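One way to guard against that, building on the snippet above, would be to poll TorchServe's /ping health endpoint rather than relying on session state alone; a hedged sketch (the success message text is a placeholder):

import requests

def server_is_healthy() -> bool:
    # TorchServe serves a health check on the inference port (8080 by default).
    try:
        res = requests.get("http://localhost:8080/ping", timeout=2)
        return res.status_code == 200
    except requests.exceptions.RequestException:
        return False

if st.session_state.started and server_is_healthy():
    server_state_container.success("TorchServe is running", icon="✅")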

url = "http://localhost:8081/models/" + MODEL_NAME
res = requests.get(url)
if res.status_code != 200:
model_state_container.error("Error getting model status", icon="🚫")
@msaroufim (Member):

So hopefully the actual error logs are still available somewhere? We don't want to swallow the real error message for someone trying to debug this.

@agunapal (Collaborator, Author):

Yes, the terminal where you start the server shows the actual logs. But let me add a note about this in the README.
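For reference, a hedged sketch of also surfacing the management API's response body in the UI, so the failure isn't visible only in the server terminal (the message wording is an assumption):

res = requests.get("http://localhost:8081/models/" + MODEL_NAME)
if res.status_code != 200:
    # Include the status code and response body so the UI shows more
    # than a generic failure message.
    model_state_container.error(
        f"Error getting model status ({res.status_code}): {res.text}", icon="🚫"
    )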

@agunapal added this pull request to the merge queue Sep 30, 2023
Merged via the queue into pytorch:master with commit 253c205 Sep 30, 2023
12 of 13 checks passed