Guide and Documentation Overhaul (#79)
* Update: Local setup
* Update: docker setup changes
* Add: Discord App Creation Guide
* Update: readme changes
* Update: discord app guide link
README.md (21 changed lines)
@@ -20,8 +20,11 @@ The project aims to:
 * [x] Slash Commands Compatible
 * [x] Generated Token Length Handling for >2000
 * [x] Token Length Handling of any message size
-* [ ] External WebUI Integration
+* [ ] User vs. Server Preferences
+* [ ] Redis Caching
 * [x] Administrator Role Compatible
+* [ ] Multi-User Chat Generation (Multiple users chatting at the same time)
+* [ ] Automatic and Manual model pulling through the Discord client
 * [ ] Allow others to create their own models personalized for their own servers!
 * [ ] Documentation on creating your own LLM
 * [ ] Documentation on web scraping and cleaning
@@ -33,25 +36,21 @@ The project aims to:
 * Please refer to the docs for bot setup.
 * [Local Machine Setup](./docs/setup-local.md)
 * [Docker Setup for Servers and Local Machines](./docs/setup-docker.md)
+* Nvidia is recommended for now, but support for other GPUs is in development.
 * Local use is not recommended.
+* [Creating a Discord App](./docs/setup-discord-app.md)
-> [!NOTE]
-> These guides assume you already know how to setup a bot account for discord. Documentation will be added later.
 
 ## Resources
 * [NodeJS](https://nodejs.org/en)
-  * This project uses `v20.10.0+` (npm `10.2.5`). Consider using [nvm](https://github.com/nvm-sh/nvm) for multiple NodeJS versions.
+  * This project runs on `lts/hydrogen`.
-  * To run dev in `ts-node`, using `v18.18.2` is recommended.
+  * To run dev in `ts-node`/`nodemon`, using `v18.18.2` is recommended.
   * To run dev with `tsx`, you can use `v20.10.0` or earlier.
   * This project supports any NodeJS version above `16.x.x` to only allow ESModules.
-* [Ollama](https://ollama.ai/)
+* [Ollama](https://ollama.com/)
 * [Ollama Docker Image](https://hub.docker.com/r/ollama/ollama)
 
-> [!NOTE]
-> For Nvidia GPU setup, **install** `nvidia container toolkit/runtime` then **configure** it with Docker to utilize Nvidia driver.
 
 > [!CAUTION]
-> `v18.X.X` or `lts/hydrogen` will not run properly for `npm run dev-mon`.
+> `v18.X.X` or `lts/hydrogen` will not run properly for `npm run dev-mon`. It is recommended to just use `npm run dev-tsx` for development. The nodemon version will likely be removed in a future update.
 
 * [Discord Developer Portal](https://discord.com/developers/docs/intro)
 * [Discord.js Docs](https://discord.js.org/docs/packages/discord.js/main)
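The Node version guidance above can be followed with nvm. A sketch, assuming nvm is already installed (the `dev-tsx` script name comes from the caution note; nothing here is a repo-provided command):

```shell
# Sketch only: pin the Node versions mentioned above with nvm.
nvm install lts/hydrogen   # v18.x -- fine for ts-node/nodemon dev
nvm install 20.10.0        # needed for tsx dev (npm run dev-tsx)
nvm use 20.10.0
node --version             # confirm the active version before running the bot
```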
docs/setup-discord-app.md (new file, 47 lines)
@@ -0,0 +1,47 @@
## Discord App/Bot Setup

* Refer to the [Discord Developers](https://discord.com/build/app-developers) tab on their site.
* Click on **Getting Started** and it may prompt you to log in. Do that.
* You should see this upon logging in.



* Click on **Create App**; you should now be prompted to create an App with a name. If you are part of a team, you may choose to create it for your team or for yourself.



* Great! Now you should have your App created. It should bring you to a page like this.



* From here, you will need your App's token. Navigate to the **Bot** tab and click **Reset Token** to generate a new token to interact with your bot.
* The app shown in these screenshots will not exist, so using this token would be pointless; it appears here only for the guide.



* You will also need your App's **Client ID**. Navigate to **OAuth2** and copy your ID.



* That should be all of the environment variables needed from Discord; now we need to get this App onto your server.
* Navigate to **Installation** and copy the provided **Install Link** to add your App to your server.
* You should set the **Guild Install** permissions as you like; for this guide, we will allow admin privileges for now. Ensure the **bot** scope is added to do this.




* Notice that your App's **Client ID** is part of the **Install Link**.
* Paste this link in a web browser and you should see something like this.



* Click **Add to Server** and you should see this.



* Choose a server to add the App to, then click **Continue**, then **Authorize**. You should see this after that.



* Congratulations! You should now see your App on your server!


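The guide above yields the two Discord values the bot reads from its environment. A hypothetical `.env` fragment (the variable names come from the local-setup docs; every value below is a placeholder, not a real credential):

```shell
# Hypothetical .env fragment -- substitute the token and Client ID generated above.
CLIENT_TOKEN=paste-the-token-from-the-Bot-tab
CLIENT_UID=paste-the-Client-ID-from-OAuth2
GUILD_ID=the-id-of-the-server-you-invited-the-App-to
```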
docs/setup-docker.md
@@ -47,6 +47,7 @@ sudo systemctl restart docker
 * `DISCORD_IP = 172.18.0.3`
 * `SUBNET_ADDRESS = 172.18.0.0`
 * Don't understand any of this? Watch a networking video to understand subnetting.
+* You also need all of the environment variables shown in [`.env.sample`](../.env.sample).
 * You will need a model in the container for this to work properly. On Docker Desktop, go to the `Containers` tab, select the `ollama` container, and select `Exec` to run as root in your container. Now, run `ollama pull [model name]` to get your model.
 * For Linux servers, you need another shell to pull the model; or, if you run `docker compose build && docker compose up -d`, it will run in the background and keep your shell. Run `docker exec -it ollama bash` to get into the container and run the same pull command above.
 * Otherwise, there is no need to install any npm packages for this; you just need to run `npm run start` to pull the containers and spin them up.
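Unsure whether the `DISCORD_IP` you picked actually sits inside `SUBNET_ADDRESS`? A quick illustrative check (not part of the repo), assuming a /16 subnet so only the first two octets must match:

```shell
# Illustrative sanity check: does DISCORD_IP fall inside SUBNET_ADDRESS/16?
SUBNET_ADDRESS=172.18.0.0
DISCORD_IP=172.18.0.3

# For a /16, membership means the first two octets agree.
subnet_prefix=$(echo "$SUBNET_ADDRESS" | cut -d. -f1-2)
ip_prefix=$(echo "$DISCORD_IP" | cut -d. -f1-2)

if [ "$subnet_prefix" = "$ip_prefix" ]; then
  echo "ok: $DISCORD_IP is inside $SUBNET_ADDRESS/16"
else
  echo "mismatch: $DISCORD_IP is outside $SUBNET_ADDRESS/16"
fi
```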
@@ -54,6 +55,7 @@ sudo systemctl restart docker
 * `docker compose stop`
 * `docker compose rm`
 * `docker ps` to check if the containers have been removed.
+  * This may not work if the nvidia installation was done incorrectly. If this is the case, please use the [Manual "Clean-up"](#manual-run-with-docker) shown below.
 * You can also use `npm run clean` to clean up the containers and remove the network, to address a possible `Address already in use` problem.
 
 ## Manual Run (with Docker)
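The tear-down steps above, collected into one sequence. A sketch only; it assumes the compose stack from this repo is running and that Docker is available:

```shell
# Sketch of the clean-up sequence described above.
docker compose stop   # stop the bot and ollama containers
docker compose rm     # remove the stopped containers (prompts for confirmation)
docker ps             # verify nothing from the stack is still running

# Or, per the docs, the npm helper that also removes the network:
npm run clean
```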
docs/setup-local.md
@@ -1,19 +1,24 @@
 ## Ollama Setup
 * Go to Ollama's [Linux download page](https://ollama.ai/download/linux) and run the simple curl command they provide. The command should be `curl https://ollama.ai/install.sh | sh`.
-* Now run the following commands in separate terminals to test out how it works!
+* Since Ollama will run as a systemd service, there is no need to run `ollama serve` unless you disable it. If you do disable it, or have an older `ollama` version, do the following:
 * In terminal 1 -> `ollama serve` to set up ollama
 * In terminal 2 -> `ollama run [model name]`, for example `ollama run llama2`
 * The models can vary, as you can create your own model. You can also view ollama's [library](https://ollama.ai/library) of models.
-* If there are any issues running ollama because of missing LLMs, run `ollama pull [model name]`, as it will pull the model if Ollama has it in their library.
+* Otherwise, if you have the latest `ollama`, you can just run `ollama run [model name]` rather than running this in 2 terminals.
+* If there are any issues running ollama because of missing LLMs, run `ollama pull [model name]`, as it will pull the model if Ollama has it in their library.
 * This can also be done in [wsl](https://learn.microsoft.com/en-us/windows/wsl/install) for Windows machines.
+* This should also stop being a problem once a future feature that allows pulling models through the Discord client is added. For now, models must be pulled manually.
 * You can now interact with the model you just ran (it might take a second to start up).
 * Response time varies with processing power!
 
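For an older `ollama` without the systemd service, the two-terminal flow above looks like this. A sketch only; `llama2` is just the example model named in the docs:

```shell
# terminal 1: start the ollama server manually
# (not needed when the systemd service is running)
ollama serve

# terminal 2: fetch a model from the ollama library, then chat with it
ollama pull llama2
ollama run llama2
```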
 ## To Run Locally (without Docker)
 * Run `npm install` to install the npm packages.
 * Ensure that your [.env](../.env.sample) file's `OLLAMA_IP` is `127.0.0.1` to work properly.
+* You only need your `CLIENT_TOKEN`, `GUILD_ID`, `MODEL`, `CLIENT_UID`, `OLLAMA_IP`, and `OLLAMA_PORT`.
+* The Ollama IP and port should use their defaults by nature. If not, set `OLLAMA_IP = 127.0.0.1` and `OLLAMA_PORT = 11434`.
 * Now, you can run the bot by running `npm run client`, which will build and run the compiled TypeScript and run the setup for ollama.
 * **IMPORTANT**: This must be run in the wsl/Linux instance to work properly! Using Command Prompt/PowerShell/Git Bash/etc. will not work on Windows (at least in my experience).
 * Refer to the [resources](../README.md#resources) on what node version to use.
-* Open up a separate terminal/shell (you will need wsl for this if on windows) and run `ollama serve` to start up ollama.
+* If you are using wsl, open up a separate terminal/shell to start up the ollama service. Again, if you are running an older ollama, you must run `ollama serve` in that shell.
+* If you are on an actual Linux machine/VM, there is no need for another terminal (unless you have an older ollama version).
 * If you do not have a model, you will need to run `ollama pull [model name]` in a separate terminal to get it.
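Before `npm run client`, it can help to confirm the six variables listed above are actually set. An illustrative pre-flight check (not part of the repo; every value below is a placeholder):

```shell
# Placeholder values standing in for a real .env -- substitute your own.
export CLIENT_TOKEN=example-token CLIENT_UID=000000000000000000 \
       GUILD_ID=000000000000000000 MODEL=llama2 \
       OLLAMA_IP=127.0.0.1 OLLAMA_PORT=11434

# Collect the names of any variables that are unset or empty.
missing=""
for var in CLIENT_TOKEN GUILD_ID MODEL CLIENT_UID OLLAMA_IP OLLAMA_PORT; do
  eval "val=\${$var}"
  [ -n "$val" ] || missing="$missing $var"
done

if [ -z "$missing" ]; then
  echo "all required variables set"
else
  echo "missing:$missing"
fi
```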
New binary files:
* imgs/tutorial/bot-in-server.png (7.2 KiB)
* imgs/tutorial/client-id.png (142 KiB)
* imgs/tutorial/create-app.png (21 KiB)
* imgs/tutorial/created-app.png (139 KiB)
* imgs/tutorial/discord-dev.png (98 KiB)
* imgs/tutorial/invite.png (147 KiB)
* imgs/tutorial/scope.png (119 KiB)
* imgs/tutorial/server-invite-1.png (139 KiB)
* imgs/tutorial/server-invite-2-auth.png (196 KiB)
* imgs/tutorial/server-invite-3.png (134 KiB)
* imgs/tutorial/token.png (141 KiB)