## Ollama Setup
- Go to Ollama's Linux download page and run the curl command they provide: `curl https://ollama.ai/install.sh | sh`.
- Since Ollama runs as a systemd service, there is no need to run `ollama serve` unless you disable it. If you do disable it, or have an older `ollama` version, do the following (see the sketch after this list):
  - In terminal 1 -> `ollama serve` to start Ollama.
  - In terminal 2 -> `ollama run [model name]`, for example `ollama run llama2`.
    - The models can vary, as you can create your own model. You can also browse Ollama's library of models.
- Otherwise, if you have the latest `ollama`, you can just run `ollama run [model name]` rather than running commands in two terminals.
- If there are any issues running Ollama because of missing LLMs, run `ollama pull [model name]`, which will download the model if Ollama has it in its library.
  - This can also be done in WSL on Windows machines.
  - This will also stop being a problem once a planned feature allows pulling models via the Discord client. For now, they must be pulled manually.
- You can now interact with the model you just ran (it might take a second to start up).
  - Response time varies with processing power!
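Putting those steps together, a typical session on an older Ollama install looks roughly like the sketch below; `llama2` stands in for whatever model you choose:

```sh
# Install Ollama (Linux/WSL)
curl https://ollama.ai/install.sh | sh

# Terminal 1 - only needed if the systemd service is disabled
# or your ollama is an older version
ollama serve

# Terminal 2 - pull the model if you don't have it, then run it
ollama pull llama2
ollama run llama2
```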
## To Run Locally (without Docker)
- Run `npm install` to install the npm packages.
- Ensure that your `.env` file's `OLLAMA_IP` is `127.0.0.1` so the bot can reach Ollama (see the sketch after this list).
  - You only need your `CLIENT_TOKEN`, `GUILD_ID`, `MODEL`, `CLIENT_UID`, `OLLAMA_IP`, and `OLLAMA_PORT`.
  - The Ollama IP and port should work with their defaults. If not, set `OLLAMA_IP=127.0.0.1` and `OLLAMA_PORT=11434`.
- Now you can run the bot with `npm run client`, which builds and runs the compiled TypeScript and runs the setup for Ollama.
  - IMPORTANT: This must be run in the WSL/Linux instance to work properly! Using Command Prompt/PowerShell/Git Bash/etc. will not work on Windows (at least in my experience).
  - Refer to the resources for which Node version to use.
- If you are using WSL, open a separate terminal/shell to start the Ollama service. Again, if you are running an older Ollama, you must run `ollama serve` in that shell.
  - If you are on an actual Linux machine/VM, there is no need for another terminal (unless you have an older Ollama version).
- If you do not have a model, run `ollama pull [model name]` in a separate terminal to get it.
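For reference, a minimal `.env` using the variables above might look like this sketch; every value except the IP and port is a placeholder you must replace with your own:

```sh
# .env - placeholder values, replace with your own
CLIENT_TOKEN=your-discord-bot-token
CLIENT_UID=your-bot-user-id
GUILD_ID=your-discord-server-id
MODEL=llama2
OLLAMA_IP=127.0.0.1
OLLAMA_PORT=11434
```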
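And the local run itself, assuming a WSL/Linux shell and an older Ollama that needs `ollama serve`:

```sh
# Shell 1 (older Ollama only) - start the Ollama service
ollama serve

# Shell 2 - install dependencies, then build and start the bot
npm install
npm run client
```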