Compare commits

...

10 Commits

Author SHA1 Message Date
Kevin Dang
89c19990fa slash commands integrated
* sample env and late version incr

* added slash command compatibility

* updated command name

* updated environment sample

* updated interaction comment
2024-01-31 10:28:02 -08:00
Kevin Dang
b94ff55449 formatting and contributing
* fixed some formatting

* contributing format

* simple style rules
2024-01-30 16:15:35 -08:00
Kevin Dang
9247463480 hardcoded and mentions
* added options to queries

* removed hard coded vals, added message options

* updated importing

* added check for message mentions

* fix missing botID

* updated token to uid

* added contributer

---------

Co-authored-by: JT2M0L3Y <jtsmoley@icloud.com>
2024-01-29 12:50:59 -08:00
Kevin Dang
97acae3d08 added embed msg and stream parser 2024-01-28 12:59:45 -08:00
Kevin Dang
aaf734b06c added ollamajs esm 2024-01-25 18:24:37 -08:00
Kevin Dang
78921ee571 added persistence in chat endpoint 2024-01-23 22:24:26 -08:00
Kevin Dang
f8956b0b50 bot can edit message response 2024-01-23 21:26:39 -08:00
Kevin Dang
70103c1f5a readme ollama setup 2024-01-22 23:24:32 -08:00
Kevin Dang
4bcaae8461 ollama responds to discord msgs 2023-12-25 18:59:07 -08:00
Kevin Dang
7dd9b8f90c readme updates 2023-12-22 21:53:49 -08:00
23 changed files with 996 additions and 190 deletions

14
.env.sample Normal file

@@ -0,0 +1,14 @@
# Discord token for the bot
CLIENT_TOKEN = INSERT_BOT_TOKEN
# id token of a discord server
GUILD_ID = INSERT_GUILD_ID
# Channel where the bot listens to messages
CHANNEL_ID = INSERT_CHANNEL_ID
# model for the bot to query from (i.e. llama2 [llama2:13b], mistral, ... )
MODEL = INSERT_MODEL_NAME
# discord bot user id for mentions
BOT_UID = INSERT_BOT_USER_ID

38
.github/CONTRIBUTING.md vendored Normal file

@@ -0,0 +1,38 @@
<!--
Author: Kevin Dang
Date: 1-30-2024
-->
## Run the Bot
* Refer to all sections below before running the bot.
* You should now have `Ollama` and `NodeJS` installed and have run `npm install`.
* You will also need a discord bot to run. Refer to the [developer portal](https://discord.com/developers/) to learn how to set one up and invite it to your server. If that does not help, look up a YouTube video like this [one](https://www.youtube.com/watch?v=KZ3tIGHU314&ab_channel=UnderCtrl).
* Now run `npm run start` to run the client and ollama at the same time (this must be done in wsl or a Linux distro).
## Set up (Development-side)
* Pull the repository using `https://github.com/kevinthedang/discord-ollama.git`.
* Refer to `Ollama Setup` in the readme to set up Ollama.
* This must be set up in a Linux environment or wsl2.
* Install NodeJS `v18.18.2`
* You can check out `Resources` and `To Run` in the readme for a bit of help.
* You can also reference [NodeJS Setup](#nodejs-setup)
* When you have the project pulled from github, open up a terminal and run `npm i` or `npm install` to get all of the packages for the project.
* Use some kind of terminal (`git bash` is good) to run the client. You can run Ollama by opening up wsl2 and typing `ollama serve`.
* Refer to `Ollama Setup` if there are any issues.
## Environment
* You will need two environment files:
* `.env`: for running the bot
* `CLIENT_TOKEN`: the token for the bot to log in
* `CHANNEL_ID`: the id of the channel you wish for the bot to listen in
* `MODEL`: the model you wish to use
* `BOT_UID`: the user id the bot goes by (the id of the discord user)
* `.env.dev.local`: also runs the bot, but with development variables
* Currently there are no differences between the two, but you may add environment variables as needed (a minimal validation sketch follows this file).
## NodeJS Setup
* Install [nvm](https://github.com/nvm-sh/nvm?tab=readme-ov-file#installing-and-updating) using `curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash`
* Ensure this in the profile of what shell you use (for `git bash` it would be `.bash_profile` found in your home directory)
* Ensure it has been installed correctly by running `nvm -v`
* Now, install `v18.18.2` by running `nvm install 18.18.2`
* Then run `nvm use 18.18.2 | nvm alias default 18.18.2` or you can run them separately if that does not work. This just sets the default NodeJS to `v18.18.2` when launching a shell.
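Closing out the contributing guide: the Environment section above lists the variables the bot needs before it will start. As a minimal sketch (the variable names come from `.env.sample`; the `assertEnv` helper is hypothetical and not part of this changeset), a startup check could look like:

```ts
import 'dotenv/config'

// Hypothetical startup check, not part of this changeset: fail fast if any
// of the variables listed in the Environment section are missing.
const required = ['CLIENT_TOKEN', 'GUILD_ID', 'CHANNEL_ID', 'MODEL', 'BOT_UID'] as const

function assertEnv(keys: readonly string[]): void {
    const missing = keys.filter(key => !process.env[key])
    if (missing.length > 0)
        throw new Error(`Missing environment variables: ${missing.join(', ')}`)
}

assertEnv(required)
```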

5
.github/style.md vendored Normal file

@@ -0,0 +1,5 @@
## Style Preferences
* Please just make sure that you are using a `Tab Default` of 4 for spacing
* You don't need semicolons at the end of everything.
* Comments for functions would be nice to help explain what they do and what the parameters are for.
* If there are any other issues, just refer to the [Google Style Guide](https://google.github.io/styleguide/tsguide.html)

22
.gitignore vendored

@@ -1,10 +1,17 @@
-# Credentials
-.env
-.dev.env
# Created by https://www.toptal.com/developers/gitignore/api/node
# Edit at https://www.toptal.com/developers/gitignore?templates=node
+# builds
+build/
+dist/
+# dotenv environment variable files
+.env
+.env.dev.local
+.env.test.local
+.env.production.local
+.env.local
### Node ###
# Logs
logs
@@ -80,13 +87,6 @@ web_modules/
# Yarn Integrity file
.yarn-integrity
-# dotenv environment variable files
-.env
-.env.development.local
-.env.test.local
-.env.production.local
-.env.local
# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache

README.md

@@ -1,17 +1,37 @@
-# Discord Ollama Integration [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC_BY--NC_4.0-blue.svg)](https://creativecommons.org/licenses/by-nc/4.0/) [![Release Badge](https://img.shields.io/github/v/release/kevinthedang/Space-Guardians?logo=github)](https://github.com/kevinthedang/discord-ollama/releases/latest)
+# Discord Ollama Integration [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC_BY--NC_4.0-darkgreen.svg)](https://creativecommons.org/licenses/by-nc/4.0/) [![Release Badge](https://img.shields.io/github/v/release/kevinthedang/discord-ollama?logo=github)](https://github.com/kevinthedang/discord-ollama/releases/latest)
Ollama is an AI model management tool that allows users to install and use custom large language models locally. The goal is to create a discord bot that will utilize Ollama and chat with it on a Discord!
+## Ollama Setup
+* Go to Ollama's [Linux download page](https://ollama.ai/download/linux) and run the simple curl command they provide. The command should be `curl https://ollama.ai/install.sh | sh`.
+* Now run the following commands in separate terminals to test out how it works!
+    * In terminal 1 -> `ollama serve` to set up ollama
+    * In terminal 2 -> `ollama run [model name]`, for example `ollama run llama2`
+        * The models can vary as you can create your own model. You can also view ollama's [library](https://ollama.ai/library) of models.
+    * This can also be done in [wsl](https://learn.microsoft.com/en-us/windows/wsl/install) for Windows machines.
+* You can now interact with the model you just ran (it might take a second to start up).
+    * Response time varies with processing power!
+## To Run
+* Clone this repo using `git clone https://github.com/kevinthedang/discord-ollama.git` or just use [GitHub Desktop](https://desktop.github.com/) to clone the repo.
+* Run `npm install` to install the npm packages.
+* You will need a `.env` file in the root of the project directory with the bot's token.
+    * For example, `CLIENT_TOKEN = [Bot Token]`
+* Now, you can run the bot by running `npm run start`, which will build and run the compiled typescript and run the setup for ollama.
+    * **IMPORTANT**: This must be run in the wsl/Linux instance to work properly! Using Command Prompt/Powershell/Git Bash/etc. will not work on Windows (at least in my experience).
+* Refer to the [resources](#resources) on what node version to use.
## Resources
* [NodeJS](https://nodejs.org/en)
    * This project uses `v20.10.0` (npm `10.2.5`). Consider using [nvm](https://github.com/nvm-sh/nvm) for multiple NodeJS versions.
-    * To run dev in `ts-node`, using `v18.x.x` is recommended.
+    * To run dev in `ts-node`, using `v18.18.2` is recommended. **CAUTION**: `v18.19.0` or `lts/hydrogen` will not run properly.
-    * To run dev with `tsx`, you can use `v20.10.0`.
+    * To run dev with `tsx`, you can use `v20.10.0` or earlier.
    * This project supports any NodeJS version above `16.x.x` to only allow ESModules.
* [Ollama](https://ollama.ai/)
-* [Docker Documentation](https://docs.docker.com/?_gl=1*nof6f8*_ga*MTQxNTc1MTYxOS4xNzAxNzI1ODAx*_ga_XJWPQMJYHQ*MTcwMjQxODUzOS4yLjEuMTcwMjQxOTgyMC41OS4wLjA.)
* [Discord Developer Portal](https://discord.com/developers/docs/intro)
+* [Discord.js Docs](https://discord.js.org/docs/packages/discord.js/main)
## Acknowledgement
* [Kevin Dang](https://github.com/kevinthedang)
+* [Jonathan Smoley](https://github.com/JT2M0L3Y)
[discord-ollama](https://github.com/kevinthedang/discord-ollama) © 2023 by [Kevin Dang](https://github.com/kevinthedang) is licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/?ref=chooser-v1)

686
package-lock.json generated

File diff suppressed because it is too large

package.json

@@ -1,21 +1,26 @@
{
    "name": "discord-ollama",
-    "version": "0.0.1",
+    "version": "0.1.4",
    "description": "Ollama Integration into discord",
-    "main": "dist/index.js",
+    "main": "build/index.js",
-    "exports": "./dist/index.js",
+    "exports": "./build/index.js",
    "scripts": {
        "dev-tsx": "tsx watch src/index.ts",
        "dev-mon": "nodemon --config nodemon.json src/index.ts",
        "build": "tsc",
        "prod": "node .",
-        "start": "npm run build && npm run prod"
+        "client": "npm run build && npm run prod",
+        "API": "ollama serve",
+        "start": "concurrently \"npm:API\" \"npm:client\""
    },
    "author": "Kevin Dang",
    "license": "ISC",
    "dependencies": {
+        "axios": "^1.6.2",
+        "concurrently": "^8.2.2",
        "discord.js": "^14.14.1",
-        "dotenv": "^16.3.1"
+        "dotenv": "^16.3.1",
+        "ollama": "^0.4.3"
    },
    "devDependencies": {
        "@types/node": "^20.10.5",

src/index.ts

@@ -1,10 +1,12 @@
-import { Client, GatewayIntentBits } from "discord.js";
+import { Client, GatewayIntentBits } from 'discord.js'
-import { registerEvents } from "./utils/events.js";
+import { registerEvents } from './utils/events.js'
-import Events from "./events/index.js";
+import Events from './events/index.js'
// Import keys/tokens
-import Keys from "./keys.js";
+import Keys from './keys.js'
+// initialize the client with the following permissions when logging in
const client = new Client({
    intents: [
        GatewayIntentBits.Guilds,
@@ -14,11 +16,25 @@ const client = new Client({
    ]
});
-registerEvents(client, Events)
+const messageHistory = [
+    {
+        role: 'system',
+        content: 'Your name is Ollama GU'
+    }
+]
+/**
+ * register events for bot to listen to in discord
+ * @param messageHistory message history for the llm
+ * @param Events events to register
+ * @param client the bot reference
+ * @param Keys tokens from .env files
+ */
+registerEvents(client, Events, messageHistory, Keys)
// Try to log in the client
-client.login(Keys.clientToken)
+await client.login(Keys.clientToken)
    .catch((error) => {
-        console.error('[Login Error]', error);
+        console.error('[Login Error]', error)
-        process.exit(1);
+        process.exit(1)
-    });
+    })

6
src/commands/index.ts Normal file

@@ -0,0 +1,6 @@
import { SlashCommand } from '../utils/commands.js'
import { ThreadCreate } from './threadCreate.js'
export default [
ThreadCreate
] as SlashCommand[]

src/commands/threadCreate.ts

@@ -0,0 +1,28 @@
import { ChannelType, Client, CommandInteraction, TextChannel } from 'discord.js'
import { SlashCommand } from '../utils/commands.js'
export const ThreadCreate: SlashCommand = {
name: 'thread',
description: 'creates a thread and mentions user',
// Query for server information
run: async (client: Client, interaction: CommandInteraction) => {
// fetch the channel
const channel = await client.channels.fetch(interaction.channelId)
if (!channel || channel.type !== ChannelType.GuildText) return
const thread = await (channel as TextChannel).threads.create({
name: `support-${Date.now()}`,
reason: `Support ticket ${Date.now()}`
})
// Send a message in the thread
thread.send(`**User:** ${interaction.user}`)
// user only reply
return interaction.reply({
content: 'I can help you in the Thread below.',
ephemeral: true
})
}
}

src/events/index.ts

@@ -1,7 +1,11 @@
import { Event } from '../utils/index.js'
+import interactionCreate from './interactionCreate.js'
+import messageCreate from './messageCreate.js'
import ready from './ready.js'
// Centralized export for all events
export default [
-    ready
+    ready,
+    messageCreate,
+    interactionCreate
] as Event[] // staticly is better ts practice, dynamic exporting is possible

src/events/interactionCreate.ts

@@ -0,0 +1,19 @@
import { event, Events } from '../utils/index.js'
import commands from '../commands/index.js'
/**
* Interaction creation listener for the client
* @param interaction the interaction received from the server
*/
export default event(Events.InteractionCreate, async ({ log, client }, interaction) => {
if (!interaction.isCommand() || !interaction.isChatInputCommand()) return
log(`Interaction called \'${interaction.commandName}\' from ${interaction.client.user.tag}.`)
// ensure command exists, otherwise kill event
const command = commands.find(command => command.name === interaction.commandName)
if (!command) return
// the command exists, execute it
command.run(client, interaction)
})

src/events/messageCreate.ts

@@ -0,0 +1,39 @@
import { embedMessage, event, Events } from '../utils/index.js'
/**
* Max Message length for free users is 2000 characters (bot or not).
* @param message the message received from the channel
*/
export default event(Events.MessageCreate, async ({ log, msgHist, tokens }, message) => {
log(`Message created \"${message.content}\" from ${message.author.tag}.`)
// Hard-coded channel to test output there only, in our case "ollama-endpoint"
if (message.channelId != tokens.channel) return
// Do not respond if bot talks in the chat
if (message.author.tag === message.client.user.tag) return
// Only respond if message mentions the bot
if (!message.mentions.has(tokens.clientUid)) return
// push user response
msgHist.push({
role: 'user',
content: message.content
})
// Try to query and send embed
const response = await embedMessage(message, tokens, msgHist)
// Try to query and send message
// log(normalMessage(message, tokens, msgHist))
// If something bad happened, remove user query and stop
if (response == undefined) { msgHist.pop(); return }
// successful query, save it as history
msgHist.push({
role: 'assistant',
content: response.message.content
})
})
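Note that the handler above appends one user entry and one assistant entry to `msgHist` per exchange and never trims it. A capped history is not part of this diff; the sketch below is a hypothetical helper showing one way to bound it while keeping the system prompt from `src/index.ts` at index 0:

```ts
// Hypothetical helper, not part of this changeset: keep the system prompt
// and only the most recent exchanges so the prompt stays bounded.
type ChatMessage = { role: string, content: string }

function trimHistory(msgHist: ChatMessage[], maxMessages = 20): void {
    // index 0 is assumed to be the system prompt set in src/index.ts
    while (msgHist.length > maxMessages)
        msgHist.splice(1, 1) // drop the oldest non-system message
}
```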

src/events/ready.ts

@@ -1,5 +1,17 @@
-import { event, Events } from '../utils/index.js'
+import { event, Events, registerCommands } from '../utils/index.js'
+import { ActivityType } from 'discord.js'
+import commands from '../commands/index.js'
+// Log when the bot successfully logs in and export it
export default event(Events.ClientReady, ({ log }, client) => {
-    return log(`Logged in as ${client.user.username}.`)
+    log(`Logged in as ${client.user.username}.`)
+    // Register the commands associated with the bot upon loggin in
+    registerCommands(client, commands)
+    // set status of the bot
+    client.user.setActivity({
+        name: 'Powered by Ollama',
+        type: ActivityType.Custom
+    })
})

src/keys.ts

@@ -1,7 +1,11 @@
-import { getEnvVar } from "./utils/env.js"
+import { getEnvVar } from './utils/env.js'
export const Keys = {
-    clientToken: getEnvVar('CLIENT_TOKEN')
+    clientToken: getEnvVar('CLIENT_TOKEN'),
+    channel: getEnvVar('CHANNEL_ID'),
+    model: getEnvVar('MODEL'),
+    clientUid: getEnvVar('CLIENT_UID'),
+    guildId: getEnvVar('GUILD_ID')
} as const // readonly keys
export default Keys

22
src/utils/commands.ts Normal file

@@ -0,0 +1,22 @@
import { CommandInteraction, ChatInputApplicationCommandData, Client } from 'discord.js'
/**
* interface for how slash commands should be run
*/
export interface SlashCommand extends ChatInputApplicationCommandData {
run: (client: Client, interaction: CommandInteraction) => void
}
/**
* register the command to discord for the channel
* @param client the bot reference
* @param commands commands to register to the bot
*/
export function registerCommands(client: Client, commands: SlashCommand[]): void {
// ensure the bot is online before registering
if (!client.application) return
// iterate through all commands and register them with the bot
for (const command of commands)
client.application.commands.create(command)
}
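For reference, any new command only needs a `name`, a `description`, and a `run` callback matching this interface. The `/ping` command below is a hypothetical example (only `/thread` exists in this changeset); it would live in `src/commands/` and be added to the array in `src/commands/index.ts` so `registerCommands` picks it up on ready:

```ts
import { Client, CommandInteraction } from 'discord.js'
import { SlashCommand } from '../utils/commands.js'

// Hypothetical example command, not part of this changeset.
export const Ping: SlashCommand = {
    name: 'ping',
    description: 'replies with pong',
    run: async (client: Client, interaction: CommandInteraction) => {
        // reply only visible to the user who invoked the command
        await interaction.reply({ content: 'pong', ephemeral: true })
    }
}
```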

src/utils/env.ts

@@ -1,8 +1,8 @@
-import { resolve } from "path"
+import { resolve } from 'path'
-import { config } from "dotenv"
+import { config } from 'dotenv'
// Find config - ONLY WORKS WITH NODEMON
-const envFile = process.env.NODE_ENV === "development" ? ".dev.env" : ".env"
+const envFile = process.env.NODE_ENV === 'development' ? '.env.dev.local' : '.env'
// resolve config file
const envFilePath = resolve(process.cwd(), envFile)
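The hunk above only changes which env file is resolved; `getEnvVar` itself is not shown in this compare. Judging by how `src/keys.ts` calls it, it presumably looks something like the sketch below (an assumption, not the file's actual contents):

```ts
import { resolve } from 'path'
import { config } from 'dotenv'

// Load the environment file the same way the diff above does.
const envFile = process.env.NODE_ENV === 'development' ? '.env.dev.local' : '.env'
config({ path: resolve(process.cwd(), envFile) })

// Assumed shape of the unshown helper: read a variable and fail fast if missing.
export function getEnvVar(name: string): string {
    const value = process.env[name]
    if (!value) throw new Error(`Environment variable ${name} is not set`)
    return value
}
```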

src/utils/events.ts

@@ -1,43 +1,58 @@
-import type { ClientEvents, Awaitable, Client } from 'discord.js';
+import type { ClientEvents, Awaitable, Client } from 'discord.js'
// Export events through here to reduce amount of imports
-export { Events } from 'discord.js';
+export { Events } from 'discord.js'
-export type LogMethod = (...args: unknown[]) => void;
+export type LogMethod = (...args: unknown[]) => void
-export type EventKeys = keyof ClientEvents; // only wants keys of ClientEvents object
+export type EventKeys = keyof ClientEvents // only wants keys of ClientEvents object
// Event properties
export interface EventProps {
-    client: Client;
+    client: Client
-    log: LogMethod;
+    log: LogMethod
+    msgHist: { role: string, content: string }[]
+    tokens: {
+        channel: string,
+        model: string,
+        clientUid: string
+    }
}
export type EventCallback<T extends EventKeys> = (
    props: EventProps,
    ...args: ClientEvents[T]
-) => Awaitable<unknown>; // Method can be synchronous or async, unknown so we can return anything
+) => Awaitable<unknown> // Method can be synchronous or async, unknown so we can return anything
// Event interface
export interface Event<T extends EventKeys = EventKeys> {
-    key: T;
+    key: T
-    callback: EventCallback<T>;
+    callback: EventCallback<T>
}
export function event<T extends EventKeys>(key: T, callback: EventCallback<T>): Event<T> {
-    return { key, callback };
+    return { key, callback }
}
-export function registerEvents(client: Client, events: Event[]): void {
+export function registerEvents(
+    client: Client,
+    events: Event[],
+    msgHist: { role: string, content: string }[],
+    tokens: {
+        channel: string,
+        model: string,
+        clientUid: string
+    }
+): void {
    for (const { key, callback } of events) {
        client.on(key, (...args) => {
            // Create a new log method for this event
-            const log = console.log.bind(console, `[Event: ${key}]`);
+            const log = console.log.bind(console, `[Event: ${key}]`)
            // Handle Errors, call callback, log errors as needed
            try {
-                callback({ client, log }, ...args);
+                callback({ client, log, msgHist, tokens }, ...args)
            } catch (error) {
-                log('[Uncaught Error]', error);
+                log('[Uncaught Error]', error)
            }
-        });
+        })
    }
}
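As a usage note, `event()` is the helper every file in `src/events/` uses to get typed access to the `ClientEvents` arguments. A minimal hypothetical listener (not one of the events added in this changeset) would look like:

```ts
import { event, Events } from '../utils/index.js'

// Hypothetical example listener, not part of this changeset: the generic
// parameter on event() types `guild` from ClientEvents['guildCreate'].
export default event(Events.GuildCreate, ({ log }, guild) => {
    log(`Joined guild ${guild.name} with ${guild.memberCount} members.`)
})
```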

src/utils/index.ts

@@ -1,3 +1,6 @@
// Centralized import index
-export * from './env.js';
+export * from './env.js'
-export * from './events.js';
+export * from './events.js'
+export * from './messageEmbed.js'
+export * from './messageNormal.js'
+export * from './commands.js'

66
src/utils/messageEmbed.ts Normal file

@@ -0,0 +1,66 @@
import { EmbedBuilder, Message } from 'discord.js'
import ollama, { ChatResponse } from 'ollama'
/**
* Method to send replies as an embedded message on discord
* @param message message sent by the user
* @param tokens tokens to run query
* @param msgHist message history between user and model
*/
export async function embedMessage(
message: Message,
tokens: {
channel: string,
model: string
},
msgHist: {
role: string,
content: string
}[]
) {
// bot response
let response: ChatResponse
// initial message to client
const botMessage = new EmbedBuilder()
.setTitle(`Responding to ${message.author.tag}`)
.setDescription('Generating Response . . .')
.setColor('#00FF00')
// send the message
const sentMessage = await message.channel.send({ embeds: [botMessage] })
try {
// Attempt to query model for message
response = await ollama.chat({
model: tokens.model,
messages: msgHist,
options: {
num_thread: 8, // remove if optimization needed further
mirostat: 1,
mirostat_tau: 2.0,
top_k: 70
},
stream: false
})
const newEmbed = new EmbedBuilder()
.setTitle(`Responding to ${message.author.tag}`)
.setDescription(response.message.content)
.setColor('#00FF00')
// edit the message
sentMessage.edit({ embeds: [newEmbed] })
} catch(error: any) {
const errorEmbed = new EmbedBuilder()
.setTitle(`Responding to ${message.author.tag}`)
.setDescription(error.error)
.setColor('#00FF00')
// send back error
sentMessage.edit({ embeds: [errorEmbed] })
}
// Hope there is a response! undefined otherwise
return response!!
}
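One caveat the file above does not handle: Discord caps an embed description at 4096 characters, so a sufficiently long model reply would be rejected by `setDescription`. A hypothetical guard, not part of this changeset, might look like:

```ts
// Hypothetical guard, not part of this changeset: Discord embed descriptions
// are limited to 4096 characters, so truncate anything longer.
const EMBED_DESCRIPTION_LIMIT = 4096

function fitEmbedDescription(content: string): string {
    if (content.length <= EMBED_DESCRIPTION_LIMIT) return content
    return content.slice(0, EMBED_DESCRIPTION_LIMIT - 3) + '...'
}
```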

src/utils/messageNormal.ts

@@ -0,0 +1,48 @@
import { Message } from 'discord.js'
import ollama, { ChatResponse } from 'ollama'
/**
* Method to send replies as normal text on discord like any other user
* @param message message sent by the user
* @param tokens tokens to run query
* @param msgHist message history between user and model
*/
export function normalMessage(
message: Message,
tokens: {
channel: string,
model: string
},
msgHist: {
role: string,
content: string
}[]
) {
// bot's response
let response: ChatResponse
message.reply('Generating Response . . .').then(async sentMessage => {
try {
// Attempt to query model for message
response = await ollama.chat({
model: tokens.model,
messages: msgHist,
options: {
num_thread: 8, // remove if optimization needed further
mirostat: 1,
mirostat_tau: 2.0,
top_k: 70
},
stream: false
})
// edit the 'generic' response to new message
sentMessage.edit(response.message.content)
} catch(error: any) {
sentMessage.edit(error.error)
}
})
// Hope there is a response, force client to believe
return response!!
}

23
src/utils/streamParse.ts Normal file

@@ -0,0 +1,23 @@
import { AxiosResponse } from 'axios'
/**
* When running a /api/chat stream, the output needs to be parsed into an array of objects
* @param stream Axios response from Ollama
*/
export function parseStream(stream: AxiosResponse<any, any>) {
// split string by newline
const keywordObjects: string[] = stream.data.trim().split('\n')
// parse string and load them into objects
const keywordsArray: {
model: string,
created_at: string,
message: {
role: string,
content: string
},
done: boolean
}[] = keywordObjects.map((keywordString) => JSON.parse(keywordString))
return keywordsArray
}
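For context, `parseStream` assumes the caller hit Ollama's streaming `/api/chat` endpoint with axios and received the newline-delimited JSON as one string. The usage below is a hedged sketch; the endpoint URL, payload, and `responseType` choice are assumptions, not taken from this diff:

```ts
import axios from 'axios'
import { parseStream } from './streamParse.js'

// Assumed usage, not part of this changeset: Ollama's streaming /api/chat
// returns one JSON object per line; parseStream splits and parses them.
const response = await axios.post('http://localhost:11434/api/chat', {
    model: 'llama2',
    messages: [{ role: 'user', content: 'Hello there!' }],
    stream: true
}, { responseType: 'text' })

const chunks = parseStream(response)
const fullReply = chunks.map(chunk => chunk.message.content).join('')
console.log(fullReply)
```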

tsconfig.json

@@ -14,7 +14,8 @@
        // We can import json files like JavaScript
        "resolveJsonModule": true,
        // Decompile .ts to .js into a folder named dist
-        "outDir": "dist"
+        "outDir": "build",
+        "rootDir": "src"
    },
    // environment for env vars
    "include": ["src/**/*"],