
Discord Ollama Integration

Ollama is an AI model management tool that lets users install and run custom large language models locally. The goal of this project is to create a Discord bot that uses Ollama so you can chat with a model on Discord!

Ollama Setup

  • Go to Ollama's Linux download page and run the install command they provide: curl https://ollama.ai/install.sh | sh.
  • Now run the following commands in separate terminals to test it out!
    • In terminal 1 -> ollama serve to start Ollama
    • In terminal 2 -> ollama run [model name], for example ollama run llama2
      • Models vary since you can create your own. You can also browse Ollama's library of models.
    • This also works in WSL on Windows machines.
  • You can now interact with the model you just ran (it may take a moment to start up).
    • Response time varies with processing power!
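The two-terminal flow above can be sketched as a single helper function (an assumption-laden sketch: Ollama must already be installed, and llama2 is just an example model name):

```shell
# Sketch of the two-terminal flow from the steps above (Linux/WSL).
# Assumes Ollama is already installed; the default model name is an example.
run_ollama_chat() {
    local model="${1:-llama2}"
    ollama serve &                  # terminal 1: start the server in the background
    local serve_pid=$!
    sleep 2                         # give the server a moment to start
    ollama run "$model"             # terminal 2: interactive chat with the model
    kill "$serve_pid" 2>/dev/null   # stop the server when the chat exits
}
```

Running `ollama serve` in the background here stands in for the second terminal; in practice two separate terminals, as described above, are easier to watch.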

Project Setup

  • Clone this repo using git clone https://github.com/kevinthedang/discord-ollama.git, or just use GitHub Desktop to clone it.
  • You will need a .env file in the root of the project directory containing the bot's token. A .env.sample is provided as a reference for the required environment variables.
    • For example, CLIENT_TOKEN = [Bot Token]
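A minimal sketch of creating that .env file from the repo root (the token value below is a placeholder, not a real token):

```shell
# Create the .env in the project root (run from the repo directory).
# CLIENT_TOKEN is the variable referenced above; its value here is a placeholder.
if [ -f .env.sample ]; then
    cp .env.sample .env                               # start from the provided sample
else
    printf 'CLIENT_TOKEN=your-bot-token-here\n' > .env
fi
cat .env   # verify the variable is present before starting the bot
```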

To Run (with Docker)

  • Follow this guide to setup Docker
  • You will need a model in the container for this to work properly. On Docker Desktop, go to the Containers tab, select the ollama container, and choose Exec to open a root shell in the container. Then run ollama pull [model name] to get your model.
    • For Linux servers, you need another shell to pull the model. Alternatively, run docker-compose build && docker-compose up -d so the containers run in the background and you keep your shell; then run docker exec -it ollama bash to get into the container and run the same pull command as above.
  • There is no need to install any npm packages for this; just run npm run start to pull the containers and spin them up.
  • For cleaning up on Linux (or Windows), run the following commands:
    • docker-compose stop
    • docker-compose rm
    • docker ps to check if containers have been removed.
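The Docker commands above can be collected into two small helpers (a sketch, assuming the container is named ollama as described in the steps, and using llama2 as an example model):

```shell
# Sketch of the Docker flow described above.
# Assumes the running container is named "ollama"; llama2 is an example model.
pull_model() {
    docker exec -it ollama ollama pull "${1:-llama2}"   # pull a model inside the container
}

compose_cleanup() {
    docker-compose stop     # stop the containers
    docker-compose rm -f    # remove them without prompting
    docker ps               # confirm nothing is still running
}
```

Usage would be `pull_model llama2` after `docker-compose up -d`, and `compose_cleanup` when you are done.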

To Run Locally (without Docker)

  • Run npm install to install the npm packages.
  • Now you can run the bot by running npm run client, which builds and runs the compiled TypeScript and runs the setup for Ollama.
    • IMPORTANT: On Windows, this must be run in a WSL/Linux instance to work properly! Command Prompt/PowerShell/Git Bash/etc. will not work (at least in my experience).
    • Refer to the resources on what node version to use.
  • Open a separate terminal/shell (you will need WSL for this if on Windows) and run ollama serve to start Ollama.
    • If you do not have a model, run ollama pull [model name] in a separate terminal to get one.
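The local (non-Docker) startup above, sketched as one helper (assumptions: you are in the repo root on Linux/WSL, dependencies install cleanly, and llama2 is only an example model):

```shell
# Sketch of the local startup flow from the steps above (run from the repo root).
start_bot_locally() {
    npm install      # install the npm packages once
    npm run client   # build the TypeScript, start the bot, and run the Ollama setup
}
# In a separate terminal:  ollama serve
# If the model is missing:  ollama pull llama2   (llama2 is an example)
```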

Resources

Acknowledgement

discord-ollama © 2023 by Kevin Dang is licensed under CC BY-NC 4.0
