## Ollama Setup
* Go to Ollama's [Linux download page](https://ollama.ai/download/linux) and run the curl command they provide; it should be `curl https://ollama.ai/install.sh | sh`.
* Now run the following commands in separate terminals to test it out (see the combined sketch after this list)!
* In terminal 1 -> `ollama serve` to start up the Ollama server
* In terminal 2 -> `ollama run [model name]`, for example `ollama run llama2`
* The model can be any you like, including one you create yourself. You can also browse Ollama's [library](https://ollama.ai/library) of models.
* If Ollama complains about a missing LLM, run `ollama pull [model name]`; this downloads the model as long as it exists in Ollama's library.
* On Windows machines, this can all be done in [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
* You can now interact with the model you just ran (it might take a moment to start up).
* Response time varies with processing power!
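
Putting these steps together, a minimal end-to-end session might look like the sketch below (`llama2` is just the example model from above; substitute any model from the library):

```bash
# Install Ollama (Linux or WSL)
curl https://ollama.ai/install.sh | sh

# Terminal 1: start the Ollama server
ollama serve

# Terminal 2: pull the model if you don't already have it, then run it
ollama pull llama2
ollama run llama2
```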
## To Run Locally (without Docker)
* Run `npm install` to install the npm packages.
* Ensure that `OLLAMA_IP` in your [.env](../.env.sample) file is set to `127.0.0.1` so the bot can reach the local server (see the example .env sketch at the end of this section).
* Now you can start the bot with `npm run client`, which compiles the TypeScript, runs the resulting JavaScript, and performs the Ollama setup (see the terminal walkthrough at the end of this section).
* **IMPORTANT**: This must be run in a WSL/Linux instance to work properly! Using Command Prompt/PowerShell/Git Bash/etc. will not work on Windows (at least in my experience).
* Refer to the [resources](../README.md#resources) for which Node version to use.
* Open a separate terminal/shell (you will need WSL for this if on Windows) and run `ollama serve` to start up Ollama.
* If you do not have a model, you will need to run `ollama pull [model name]` in a separate terminal to get it.
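
For reference, the local-run portion of your .env might look like the sketch below. Only `OLLAMA_IP` is confirmed by this guide; the token line is a hypothetical placeholder, so check [.env.sample](../.env.sample) for the actual key names.

```bash
# .env (example values)
OLLAMA_IP=127.0.0.1   # must be the loopback address for local runs
# CLIENT_TOKEN=...    # hypothetical name for your Discord bot token; see .env.sample
```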
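
Putting this section together, a typical local run uses two terminals (both in WSL or Linux), roughly as follows:

```bash
# Terminal 1: start the Ollama server
ollama serve

# Terminal 2: install dependencies, make sure a model is available,
# then build and start the bot
npm install
ollama pull llama2   # only needed if the model isn't pulled yet; llama2 is just an example
npm run client
```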