From 7f1326f93ed1a25427ab9a7d9db022eb3ca0ba68 Mon Sep 17 00:00:00 2001
From: Kevin Dang <77701718+kevinthedang@users.noreply.github.com>
Date: Fri, 28 Jun 2024 21:45:38 -0700
Subject: [PATCH] Guide and Documentation Overhaul (#79)

* Update: Local setup
* Update: docker setup changes
* Add: Discord App Creation Guide
* Update: readme changes
* Update: discord app guide link
---
 README.md                              |  21 ++++++-----
 docs/setup-discord-app.md              |  47 +++++++++++++++++++++++++
 docs/setup-docker.md                   |   2 ++
 docs/setup-local.md                    |  11 ++++--
 imgs/tutorial/bot-in-server.png        | Bin 0 -> 7410 bytes
 imgs/tutorial/client-id.png            | Bin 0 -> 145085 bytes
 imgs/tutorial/create-app.png           | Bin 0 -> 21595 bytes
 imgs/tutorial/created-app.png          | Bin 0 -> 142405 bytes
 imgs/tutorial/discord-dev.png          | Bin 0 -> 100257 bytes
 imgs/tutorial/invite.png               | Bin 0 -> 150846 bytes
 imgs/tutorial/scope.png                | Bin 0 -> 121731 bytes
 imgs/tutorial/server-invite-1.png      | Bin 0 -> 142334 bytes
 imgs/tutorial/server-invite-2-auth.png | Bin 0 -> 200209 bytes
 imgs/tutorial/server-invite-3.png      | Bin 0 -> 137324 bytes
 imgs/tutorial/token.png                | Bin 0 -> 144326 bytes
 15 files changed, 67 insertions(+), 14 deletions(-)
 create mode 100644 docs/setup-discord-app.md
 create mode 100644 imgs/tutorial/bot-in-server.png
 create mode 100644 imgs/tutorial/client-id.png
 create mode 100644 imgs/tutorial/create-app.png
 create mode 100644 imgs/tutorial/created-app.png
 create mode 100644 imgs/tutorial/discord-dev.png
 create mode 100644 imgs/tutorial/invite.png
 create mode 100644 imgs/tutorial/scope.png
 create mode 100644 imgs/tutorial/server-invite-1.png
 create mode 100644 imgs/tutorial/server-invite-2-auth.png
 create mode 100644 imgs/tutorial/server-invite-3.png
 create mode 100644 imgs/tutorial/token.png

diff --git a/README.md b/README.md
index ee5a44c..7c77a4e 100644
--- a/README.md
+++ b/README.md
@@ -20,8 +20,11 @@ The project aims to:
 * [x] Slash Commands Compatible
 * [x] Generated Token Length Handling for >2000
 * [x] Token Length Handling of any message size
-* [ ] External WebUI Integration
+* [ ] User vs. Server Preferences
+* [ ] Redis Caching
 * [x] Administrator Role Compatible
+* [ ] Multi-User Chat Generation (multiple users chatting at the same time)
+* [ ] Automatic and manual model pulling through the Discord client
 * [ ] Allow others to create their own models personalized for their own servers!
   * [ ] Documentation on creating your own LLM
   * [ ] Documentation on web scraping and cleaning
@@ -33,25 +36,21 @@ The project aims to:
 * Please refer to the docs for bot setup.
   * [Local Machine Setup](./docs/setup-local.md)
   * [Docker Setup for Servers and Local Machines](./docs/setup-docker.md)
+    * Nvidia is recommended for now, but support for other GPUs should be in development.
     * Local use is not recommended.
-
-> [!NOTE]
-> These guides assume you already know how to setup a bot account for discord. Documentation will be added later.
+  * [Creating a Discord App](./docs/setup-discord-app.md)
 
 ## Resources
 * [NodeJS](https://nodejs.org/en)
-  * This project uses `v20.10.0+` (npm `10.2.5`). Consider using [nvm](https://github.com/nvm-sh/nvm) for multiple NodeJS versions.
-  * To run dev in `ts-node`, using `v18.18.2` is recommended.
+  * This project runs on `lts/hydrogen`.
+  * To run dev in `ts-node`/`nodemon`, using `v18.18.2` is recommended.
   * To run dev with `tsx`, you can use `v20.10.0` or earlier.
   * This project supports any NodeJS version above `16.x.x` to only allow ESModules.
-* [Ollama](https://ollama.ai/)
+* [Ollama](https://ollama.com/)
 * [Ollama Docker Image](https://hub.docker.com/r/ollama/ollama)
 
-> [!NOTE]
-> For Nvidia GPU setup, **install** `nvidia container toolkit/runtime` then **configure** it with Docker to utilize Nvidia driver.
-
 > [!CAUTION]
-> `v18.X.X` or `lts/hydrogen` will not run properly for `npm run dev-mon`.
+> `v18.X.X` or `lts/hydrogen` will not run properly for `npm run dev-mon`. It is recommended to just use `npm run dev-tsx` for development. The nodemon version will likely be removed in a future update.
 
 * [Discord Developer Portal](https://discord.com/developers/docs/intro)
 * [Discord.js Docs](https://discord.js.org/docs/packages/discord.js/main)
diff --git a/docs/setup-discord-app.md b/docs/setup-discord-app.md
new file mode 100644
index 0000000..481b88d
--- /dev/null
+++ b/docs/setup-discord-app.md
@@ -0,0 +1,47 @@
+## Discord App/Bot Setup
+* Refer to the [Discord Developers](https://discord.com/build/app-developers) tab on their site.
+* Click on **Getting Started** and it may prompt you to log in. Do that.
+* You should see this upon logging in.
+
+* Click on **Create App**; you should now be prompted to create an App with a name. If you are part of a team, you may choose to create it for your team or for yourself.
+
+* Great! Now you should have your App created. It should bring you to a page like this.
+
+* From here you will need your App's token: navigate to the **Bot** tab and click **Reset Token** to generate a new token to interact with your bot.
+* The app shown below will no longer exist, so the token in the screenshot is unusable; it is only shown for this guide.
+
+* You will also need your App's **Client ID**: navigate to **OAuth2** and copy your ID.
+
+* That should be all of the environment variables needed from Discord; now we need this app on your server.
+* Navigate to **Installation** and copy the provided **Install Link** to add your App to your server.
+* You should set the **Guild Install** permissions as you like; for this guide we will allow admin privileges for now. Ensure the **bot** scope is added to do this.
+
+* Notice that your App's **Client ID** is part of the **Install Link**.
+* Paste this link in a web browser and you should see something like this.
+
+* Click **Add to Server** and you should see this.
+
+* Choose a server to add the App to, then click **Continue**, then **Authorize**. You should see this after that.
+
+* Congratulations! You should now see your App on your server!
+
\ No newline at end of file
diff --git a/docs/setup-docker.md b/docs/setup-docker.md
index 83dd279..9aa79a4 100644
--- a/docs/setup-docker.md
+++ b/docs/setup-docker.md
@@ -47,6 +47,7 @@ sudo systemctl restart docker
   * `DISCORD_IP = 172.18.0.3`
   * `SUBNET_ADDRESS = 172.18.0.0`
     * Don't understand any of this? Watch a networking video to understand subnetting.
+  * You also need all environment variables shown in [`.env.sample`](../.env.sample)
 * You will need a model in the container for this to work properly; on Docker Desktop go to the `Containers` tab, select the `ollama` container, and select `Exec` to run as root on your container. Now, run `ollama pull [model name]` to get your model.
   * For Linux servers, you need another shell to pull the model, or if you run `docker compose build && docker compose up -d`, then it will run in the background to keep your shell. Run `docker exec -it ollama bash` to get into the container and run the same pull command above.
 * Otherwise, there is no need to install any npm packages for this; you just need to run `npm run start` to pull the containers and spin them up.
@@ -54,6 +55,7 @@ sudo systemctl restart docker
 * `docker compose stop`
 * `docker compose rm`
   * `docker ps` to check if containers have been removed.
+    * This may not work if the Nvidia installation was done incorrectly. If this is the case, please use the [Manual "Clean-up"](#manual-run-with-docker) shown below.
 * You can also use `npm run clean` to clean up the containers and remove the network to address a possible `Address already in use` problem.
 
 ## Manual Run (with Docker)
diff --git a/docs/setup-local.md b/docs/setup-local.md
index 9dede13..00a2fe6 100644
--- a/docs/setup-local.md
+++ b/docs/setup-local.md
@@ -1,19 +1,24 @@
 ## Ollama Setup
 * Go to Ollama's [Linux download page](https://ollama.ai/download/linux) and run the simple curl command they provide. The command should be `curl https://ollama.ai/install.sh | sh`.
-* Now the the following commands in separate terminals to test out how it works!
+* Since Ollama will run as a systemd service, there is no need to run `ollama serve` unless you disable it. If you do disable it or have an older `ollama` version, do the following:
   * In terminal 1 -> `ollama serve` to set up ollama
   * In terminal 2 -> `ollama run [model name]`, for example `ollama run llama2`
     * The models can vary as you can create your own model. You can also view ollama's [library](https://ollama.ai/library) of models.
-  * If there are any issues running ollama because of missing LLMs, run `ollama pull [model name]` as it will pull the model if Ollama has it in their library.
+* Otherwise, if you have the latest `ollama`, you can just run `ollama run [model name]` rather than running this in two terminals.
+* If there are any issues running ollama because of missing LLMs, run `ollama pull [model name]` as it will pull the model if Ollama has it in their library.
 * This can also be done in [wsl](https://learn.microsoft.com/en-us/windows/wsl/install) for Windows machines.
+  * This should also no longer be a problem once a future feature allows pulling models via the Discord client. For now, they must be pulled manually.
 * You can now interact with the model you just ran (it might take a second to start up).
   * Response time varies with processing power!
 
 ## To Run Locally (without Docker)
 * Run `npm install` to install the npm packages.
 * Ensure that your [.env](../.env.sample) file's `OLLAMA_IP` is `127.0.0.1` to work properly.
+  * You only need your `CLIENT_TOKEN`, `GUILD_ID`, `MODEL`, `CLIENT_UID`, `OLLAMA_IP`, and `OLLAMA_PORT`.
+  * The Ollama IP and port should just use their defaults by nature. If not, use `OLLAMA_IP = 127.0.0.1` and `OLLAMA_PORT = 11434`.
 * Now, you can run the bot by running `npm run client`, which will build and run the compiled TypeScript and run the setup for ollama.
   * **IMPORTANT**: This must be run in the wsl/Linux instance to work properly! Using Command Prompt/PowerShell/Git Bash/etc. will not work on Windows (at least in my experience).
 * Refer to the [resources](../README.md#resources) on what node version to use.
-* Open up a separate terminal/shell (you will need wsl for this if on windows) and run `ollama serve` to startup ollama.
+* If you are using wsl, open up a separate terminal/shell to start up the ollama service. Again, if you are running an older ollama, you must run `ollama serve` in that shell.
+  * If you are on an actual Linux machine/VM there is no need for another terminal (unless you have an older ollama version).
 * If you do not have a model, you will need to run `ollama pull [model name]` in a separate terminal to get it.
\ No newline at end of file
diff --git a/imgs/tutorial/bot-in-server.png b/imgs/tutorial/bot-in-server.png
new file mode 100644
index 0000000000000000000000000000000000000000..111305bfec1f3aac6ba5106f2514c6a6e15caede
GIT binary patch
literal 7410
[binary image data for the tutorial PNGs omitted]
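The local-setup changes in this patch say only `CLIENT_TOKEN`, `GUILD_ID`, `MODEL`, `CLIENT_UID`, `OLLAMA_IP`, and `OLLAMA_PORT` are needed in the `.env` file. A quick sanity check could be sketched as below; the `check_env` helper is not part of the project, and the `KEY = value` line format is an assumption based on the examples in the docs.

```shell
# check_env: report any variable required by docs/setup-local.md that is
# missing from an env file containing "KEY = value" (or "KEY=value") lines.
check_env() {
    env_file="$1"
    missing=0
    for var in CLIENT_TOKEN GUILD_ID MODEL CLIENT_UID OLLAMA_IP OLLAMA_PORT; do
        # match the key at line start, allowing optional spaces before "="
        if ! grep -q "^${var}[[:space:]]*=" "$env_file"; then
            echo "missing: $var"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "all required variables present"
}
```

Running `check_env .env` before `npm run client` would catch a forgotten variable early instead of failing at bot startup.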
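The Discord-app guide in this patch notes that the App's Client ID is part of the Install Link. For reference, Discord's standard OAuth2 authorize URL with the `bot` scope and the Administrator permission bit (`permissions=8`, matching the admin privileges the guide opts for) can be assembled like this; the `invite_url` helper is illustrative, not something the project ships.

```shell
# invite_url: assemble a bot Install Link from a Client ID, using the
# bot scope and the Administrator permission bit (8) chosen in the guide.
invite_url() {
    printf 'https://discord.com/oauth2/authorize?client_id=%s&scope=bot&permissions=8\n' "$1"
}
```

Pasting the resulting URL into a browser produces the same Add-to-Server flow shown in the guide's screenshots.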
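The README caution above warns that `v18.X.X` (`lts/hydrogen`) does not run properly for `npm run dev-mon`, while `tsx` works on `v20.10.0`. A tiny version-gate sketch for a dev script (parsing only; the `node_major` name is ours, and the v18 threshold mirrors the README):

```shell
# node_major: extract the major version from a `node -v` style string,
# so a script can warn before `npm run dev-mon` when running v18.x
# (lts/hydrogen), which the README flags as broken for nodemon.
node_major() {
    v="${1#v}"       # "v20.10.0" -> "20.10.0"
    echo "${v%%.*}"  # "20.10.0"  -> "20"
}
```

For example, a wrapper could check `[ "$(node_major "$(node -v)")" -ge 20 ]` and suggest `npm run dev-tsx` otherwise.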