First, follow the steps in the Server section above to start a local server. Then, in another terminal, launch the interface. Running the following will open a tab in your browser:

    streamlit run torchchat/usages/browser.py

Use the "Max Response Tokens" slider to limit the maximum number of tokens the model generates for each response.
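For context, the browser UI is essentially a thin Streamlit front end over the local server's OpenAI-style chat endpoint. The sketch below shows the general shape of such a client; it is not the actual torchchat browser.py, and the base URL (http://127.0.0.1:5000) and the /v1/chat/completions path are assumptions to be adjusted to however your server was started.

```python
# Hypothetical minimal Streamlit chat client; not the actual torchchat browser.py.
# Assumes a local OpenAI-compatible server at http://127.0.0.1:5000 (adjust as needed).
import requests
import streamlit as st

BASE_URL = "http://127.0.0.1:5000"  # assumed address of the local server

# Slider analogous to "Max Response Tokens" in the real UI.
max_tokens = st.sidebar.slider("Max Response Tokens", min_value=16, max_value=2048, value=512)

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    # Send the full history to the (assumed) OpenAI-style endpoint.
    resp = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        json={"messages": st.session_state.messages, "max_tokens": max_tokens},
        timeout=120,
    )
    reply = resp.json()["choices"][0]["message"]["content"]
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```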
We log this message at the warning level to signal that caching may behave differently than it does under an existing runtime (i.e., when the script is launched with streamlit run app.py). Currently, we don't have config options to adjust Streamlit's internal logging from there. CC: @sfc-...
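As a stopgap, the standard library's logging module can still be used from the script itself to quiet these messages. The sketch below simply raises the level on the "streamlit" logger hierarchy; the specific logger names are assumptions rather than a documented configuration API, and whether this silences a given message depends on how Streamlit has configured its own loggers.

```python
# Quiet Streamlit's internal log output from within the app itself.
# Uses only the standard logging module; the logger names below are assumptions.
import logging

# Raise the level on the parent "streamlit" logger so warning-level messages
# (such as the caching notice) are suppressed; child loggers inherit this level
# unless they set their own.
logging.getLogger("streamlit").setLevel(logging.ERROR)

# A noisier submodule can also be targeted directly, for example:
logging.getLogger("streamlit.runtime.caching").setLevel(logging.ERROR)
```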
File "/Users/dalemcdiarmid/Library/Caches/pypoetry/virtualenvs/llama-index-xtW50Fas-py3.11/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script exec(code, module.__dict__) File "/opt/llama_index/docs/examples/apps/hacker_insights.py", lin...
1.5.0
tune_sklearn: Not installed
ray: Not installed
hyperopt: Not installed
optuna: Not installed
skopt: Not installed
mlflow: 2.3.1
gradio: Not installed
fastapi: Not installed
uvicorn: Not installed
m2cgen: Not installed
evidently: Not installed
fugue: 0.8.1
streamlit: Not installed
prophet...
    streamlit run torchchat.py -- browser llama3.1

Server

Note: This feature is still a work in progress and not all endpoints are working.

This mode provides a REST API that matches the OpenAI API spec for interacting with a model. The server follows the OpenAI API specification for chat completions...
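To illustrate what an OpenAI-spec client looks like against such a server, here is a small sketch using plain requests; the host/port, the model name, and the exact response shape are assumptions and should be checked against the server's own startup output and documentation.

```python
# Hypothetical client for an OpenAI-style chat completions endpoint.
# The base URL, model name, and response shape below are assumptions.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",  # assumed host/port/path
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Hello, world!"}],
        "max_tokens": 128,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```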