## Ollama Setup
- Go to Ollama's Linux download page and run the simple curl command they provide. The command should be `curl https://ollama.ai/install.sh | sh`.
- Now run the following commands in separate terminals to test out how it works!
  - In terminal 1 -> `ollama serve` to start up Ollama.
  - In terminal 2 -> `ollama run [model name]`, for example `ollama run llama2`.
    - The models can vary, as you can create your own model. You can also view Ollama's library of models.
  - If there are any issues running Ollama because of missing LLMs, run `ollama pull [model name]`, as it will pull the model if Ollama has it in their library.
  - This can also be done in WSL for Windows machines.
- You can now interact with the model you just ran (it might take a second to start up). If you would rather test the connection from code, see the TypeScript sketch after this list.
- Response time varies with processing power!
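
As a quick sanity check from code rather than the CLI, the minimal TypeScript sketch below sends one prompt to a locally running Ollama instance. It assumes Ollama's default port (`11434`), the `/api/generate` REST endpoint, Node 18+ (for the global `fetch`), and `llama2` as an example model name; none of these are taken from this project's source, so adjust them to your setup.

```ts
// Minimal sketch: verify that `ollama serve` is reachable and can answer a prompt.
// Assumes Ollama's default port (11434) and the /api/generate endpoint; swap in
// whichever model you actually pulled (llama2 is only an example).
const OLLAMA_URL = "http://127.0.0.1:11434";

async function testOllama(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false asks Ollama for a single JSON response instead of a token stream
    body: JSON.stringify({ model: "llama2", prompt: "Say hello!", stream: false }),
  });

  if (!res.ok) {
    throw new Error(`Ollama responded with HTTP ${res.status}`);
  }

  const data = (await res.json()) as { response: string };
  console.log(data.response);
}

testOllama().catch((err) => console.error("Could not reach Ollama:", err));
```

Run it (for example with `npx ts-node`, under a hypothetical file name of your choosing) while `ollama serve` is running in another terminal.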
## To Run Locally (without Docker)
- Run `npm install` to install the npm packages.
- Ensure that your .env file's `OLLAMA_IP` is `127.0.0.1` to work properly (see the sketch after this list for how that value might be read).
- Now, you can run the bot with `npm run client`, which will build and run the compiled TypeScript and run the setup for Ollama.
  - IMPORTANT: This must be run in the WSL/Linux instance to work properly! Using Command Prompt/PowerShell/Git Bash/etc. will not work on Windows (at least in my experience).
  - Refer to the resources on what Node version to use.
- Open up a separate terminal/shell (you will need WSL for this if on Windows) and run `ollama serve` to start up Ollama.
  - If you do not have a model, you will need to run `ollama pull [model name]` in a separate terminal to get it.
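
As a rough illustration of how the `OLLAMA_IP` value from `.env` might be consumed, here is a minimal TypeScript sketch. `OLLAMA_IP` is the variable mentioned above; the port `11434` (Ollama's default) and the use of the `dotenv` package are assumptions and may not match this bot's actual code.

```ts
// Minimal sketch: build the Ollama base URL from the project's .env file.
// OLLAMA_IP is the variable referenced in the setup steps above; the port is
// assumed to be Ollama's default (11434) -- adjust if your setup differs.
import "dotenv/config";

const OLLAMA_IP = process.env.OLLAMA_IP ?? "127.0.0.1";
const OLLAMA_PORT = 11434; // assumed: Ollama's default serve port
const baseUrl = `http://${OLLAMA_IP}:${OLLAMA_PORT}`;

console.log(`Prompts would be sent to ${baseUrl}/api/generate`);
```

Keeping the IP in `.env` is what lets the same code point at `127.0.0.1` when running locally and at a container address when running under Docker.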