# Discord Ollama Integration

Ollama is an AI model management tool that allows users to install and run custom large language models locally. The goal of this project is to create a Discord bot that utilizes Ollama so you can chat with a local model on Discord!
## Ollama Setup
- Go to Ollama's Linux download page and run the simple curl command they provide: `curl https://ollama.ai/install.sh | sh`.
- Now run the following commands in separate terminals to test out how it works!
- You can now interact with the model you just ran (it might take a second to start up).
- Response time varies with processing power!
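The steps above can be sketched as two terminal sessions; the model name `llama2` below is just an illustrative example, substitute whichever model you want to use:

```shell
# Terminal 1: start the Ollama server
ollama serve

# Terminal 2: pull a model and start chatting with it
# ("llama2" is an example name, not a project requirement)
ollama run llama2
```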
## Project Setup
- Clone this repo using `git clone https://github.com/kevinthedang/discord-ollama.git` or just use GitHub Desktop to clone the repo.
- You will need a `.env` file in the root of the project directory with the bot's token. A `.env.sample` is provided for you as a reference for the required environment variables.
  - For example, `CLIENT_TOKEN = [Bot Token]`
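To illustrate how a `.env` line like the one above becomes a value the bot can read, here is a minimal, hypothetical parser sketch (the project itself may rely on a library such as dotenv instead):

```typescript
// Parse .env-style contents (KEY = value, one per line) into key/value pairs.
// Illustrative only; not the project's actual loader.
function parseEnv(contents: string): Record<string, string> {
  const vars: Record<string, string> = {}
  for (const line of contents.split('\n')) {
    const match = line.match(/^\s*([A-Z0-9_]+)\s*=\s*(.*)$/)
    if (match) vars[match[1]] = match[2].trim()
  }
  return vars
}

const env = parseEnv('CLIENT_TOKEN = abc123')
console.log(env['CLIENT_TOKEN']) // prints "abc123"
```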
## To Run (with Docker)
- Follow this guide to set up Docker.
  - If on Windows, download Docker Desktop to get the Docker engine.
- You will need a model in the container for this to work properly. On Docker Desktop, go to the `Containers` tab, select the `ollama` container, and select `Exec` to run as root in your container. Now, run `ollama pull [model name]` to get your model.
  - For Linux servers, you need another shell to pull the model; or, if you run `docker-compose build && docker-compose up -d`, it will run in the background and keep your shell. Run `docker exec -it ollama bash` to get into the container and run the same pull command above.
- There is no need to install any npm packages for this; you just need to run `npm run start` to pull the containers and spin them up.
- For cleaning up on Linux (or Windows), run the following commands:
  - `docker-compose stop`
  - `docker-compose rm`
  - `docker ps` to check that the containers have been removed.
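Putting the Linux-server flow above together in order, it looks roughly like this (the model name `llama2` is only an example):

```shell
# Build and start the containers in the background (keeps your shell free)
docker-compose build && docker-compose up -d

# Open a shell inside the running ollama container
docker exec -it ollama bash

# Inside the container: pull the model you want ("llama2" is an example)
ollama pull llama2
```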
## To Run Locally (without Docker)
- Run `npm install` to install the npm packages.
- Now, you can run the bot by running `npm run client`, which will build and run the compiled TypeScript and run the setup for Ollama.
  - IMPORTANT: This must be run in a WSL/Linux instance to work properly! Using Command Prompt/PowerShell/Git Bash/etc. will not work on Windows (at least in my experience).
  - Refer to the resources on what Node version to use.
- Open up a separate terminal/shell (you will need WSL for this if on Windows) and run `ollama serve` to start up Ollama.
  - If you do not have a model, you will need to run `ollama pull [model name]` in a separate terminal to get it.
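As a quick summary, the local (non-Docker) workflow above amounts to the following, run from WSL or Linux (again, `llama2` is just an example model name):

```shell
# Terminal 1: start the Ollama server
ollama serve

# Terminal 2: pull a model if you don't already have one
ollama pull llama2

# Terminal 3: install dependencies and start the bot
npm install
npm run client
```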
## Resources
- NodeJS
  - This project uses `v20.10.0+` (npm `10.2.5`). Consider using nvm for multiple NodeJS versions.
    - To run dev in `ts-node`, using `v18.18.2` is recommended. CAUTION: `v18.19.0` or `lts/hydrogen` will not run properly.
    - To run dev with `tsx`, you can use `v20.10.0` or earlier.
  - This project supports any NodeJS version above `16.x.x` to only allow ESModules.
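If you use nvm as suggested above, pinning the recommended version is a one-liner (assuming nvm is already installed in your shell):

```shell
# Install and switch to the Node version noted above
nvm install 20.10.0
nvm use 20.10.0
node --version   # should print v20.10.0
```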
- Ollama
- Ollama Docker Image
  - IMPORTANT: For Nvidia GPU setup, install the `nvidia-container-toolkit`/runtime, then configure it with Docker to utilize the Nvidia driver.
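Once the toolkit is configured, a GPU-enabled Ollama container can be started along these lines (this mirrors the run command documented for the official `ollama/ollama` image; adjust volume and port mappings to your setup):

```shell
# Run the Ollama image with all GPUs exposed
# (requires nvidia-container-toolkit to be installed and configured)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```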
- Discord Developer Portal
- Discord.js Docs
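To show how the pieces fit together, here is a hedged sketch of a bot that forwards Discord messages to a local Ollama server. This is not the project's actual implementation; it assumes discord.js v14, Ollama's REST endpoint `/api/generate` on its default port 11434, and `llama2` as a stand-in model name:

```typescript
import { Client, GatewayIntentBits } from 'discord.js'

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
})

client.on('messageCreate', async (message) => {
  if (message.author.bot) return // ignore other bots (and ourselves)

  // Forward the message to the local Ollama server and wait for a full reply
  const response = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    body: JSON.stringify({
      model: 'llama2', // example model name
      prompt: message.content,
      stream: false,
    }),
  })
  const data = (await response.json()) as { response: string }
  await message.reply(data.response)
})

// CLIENT_TOKEN comes from the .env file described in Project Setup
client.login(process.env.CLIENT_TOKEN)
```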
- Setting up Docker (Ubuntu 20.04)
## Acknowledgement
discord-ollama © 2023 by Kevin Dang is licensed under CC BY-NC 4.0