See it in action on my Twitch channel!
- Overview
- Prerequisites
- Package Installation/Environment Setup
- Creating a Twitch Chatbot
- Set Up .env File
- Running the Chatbot
- Automatic Responses
- Further Customisation
This repository is a step-by-step guide to setting up a Twitch chatbot with auto-authentication and token refreshing. The bot can also send inputs to Python scripts; the script currently attached translates incoming messages to English, and when the !chat
command is triggered, it sends the message to a locally run LLM server (powered by llama.cpp in my use case) for a response.
This repository was run and tested on a MacBook Air with an Apple M2 chip (ARM64/AArch64 architecture):
- macOS Sonoma 14.2.1
- 8 GB RAM
Other requirements:
- Twitch account, and secondary Twitch account for your chatbot
- Miniconda
- Homebrew (For MacOS/Linux)
- Node.js (v18.19.0)
Run
brew install node@18
node -v
to check your version; it should return a v18 release such as v18.19.1. If you get a response node: command not found
, run the command provided when you install node@18:
If you need to have node@18 first in your PATH, run: echo 'export PATH="/opt/homebrew/opt/node@18/bin:$PATH"' >> ~/.zshrc
- Twitch CLI (currently using version 1.1.22)
The link provided has instructions for Windows installation as well. To install it and check the version, run
brew install twitchdev/twitch/twitch-cli
twitch version
After cloning the repository, navigate to the root folder to proceed with Node package installation.
npm install
Proceed to create a Python environment using Miniconda. Adjust requirements.txt
as per your needs if you are using a different Python script.
conda create -n env_name python=3.10
conda activate env_name
pip install -r requirements.txt
Follow the guidelines to Registering Your App.
- Assign any name you wish; your chatbot's name will be the secondary account you created.
- For OAuth Redirect URLs, assign it to http://localhost:3000
- Category: Chat Bot
- Client Type: Confidential
- Make sure to take note of the Client ID and Client Secret
Proceed with configuring your Twitch CLI; you will be prompted to provide your Client ID and Client Secret.
Once that is done, log in to your browser with your secondary account, i.e. your chatbot's account.
Next on the agenda is to Get a User Access Token.
Specifying the scope here is crucial, as it determines your app's access to information. Here is a list of scopes that you may wish to use. For a chatbot, the required scopes are chat:read
and chat:edit
, which allow it to read from and write to the channel. So the command will be:
twitch token -u -s 'chat:read chat:edit'
A browser will open for you to authorize with your chatbot account, after which your User Access Token
and Refresh Token
will be printed in your terminal output. Do not share this information. Save it in the repository root folder as tokens.json
in the following format:
{
"access_token": "xxxxxxxxxxxxxxxxxxx",
"refresh_token": "xxxxxxxxxxxxxxxxxxx",
"scope": [
"chat:edit",
"chat:read"
],
"token_type": "bearer",
"expires_in": 14688
}
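If you ever need to inspect or update these tokens from a script, a minimal Python sketch (the file layout follows the format above; the helper names are hypothetical, and bot.js manages this file in practice):

```python
import json

# Minimal sketch: read and write tokens.json in the format shown above.
# Helper names are hypothetical; bot.js owns and rewrites this file
# whenever it refreshes the access token.

def load_tokens(path="tokens.json"):
    with open(path) as f:
        return json.load(f)

def save_tokens(tokens, path="tokens.json"):
    with open(path, "w") as f:
        json.dump(tokens, f, indent=2)
```

Treat tokens.json as bot-owned state: the bot overwrites it after every refresh.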
Create a file named .env
in your repository root folder. It should contain the following parameters in this format:
TWITCH_CHANNEL = "myTwitchChannel"
TWITCH_BOT_USERNAME = "ChatBotName"
BOT_LIST=bot1,bot2,bot3,ChatBotName
TWITCH_CLIENT_ID = "xxxxxxxxxxxxxxxxxxxx"
TWITCH_CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxx"
PYTHON_SCRIPT = "testing.py"
LLM_PORT = 8080
- TWITCH_CHANNEL: The name of your Twitch channel.
- TWITCH_BOT_USERNAME: The name of your chatbot/secondary channel.
- TWITCH_CLIENT_ID: Obtained when creating your chatbot.
- TWITCH_CLIENT_SECRET: Obtained when creating your chatbot.
- PYTHON_SCRIPT: File name of the Python script that will be acting on the input messages. Messages from a user in BOT_LIST, or chat commands e.g. "!hi user1234", will be ignored.
- BOT_LIST: List of excluded bots/users; the Python script will not act on messages from them. Make sure to include TWITCH_BOT_USERNAME. I simultaneously run the chatbot Nightbot (which I used before this project) and the extension Pokemon Community Game, so I don't want my Python script acting on their automated messages as well.
- LLM_PORT: localhost port which your LLM server has exposed.
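To make the filtering rule concrete, here is an illustrative Python sketch (the actual check is implemented in bot.js; the function name is hypothetical):

```python
# Sketch of the forwarding rule described above: messages from users in
# BOT_LIST and chat commands (messages starting with "!") are ignored;
# everything else is handed to PYTHON_SCRIPT.

def should_forward(user, message, bot_list):
    if user.lower() in (b.strip().lower() for b in bot_list):
        return False  # message is from an excluded bot/user
    if message.startswith("!"):
        return False  # chat commands like "!hi user1234" are ignored
    return True

bots = "bot1,bot2,bot3,ChatBotName".split(",")
should_forward("viewer42", "hello there", bots)     # True
should_forward("ChatBotName", "hello there", bots)  # False
should_forward("viewer42", "!hi user1234", bots)    # False
```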
From the repository root folder, start up the chatbot:
$ npm start
> meowdybuddy@1.0.0 start
> node bot.js
Connected to Twitch chat!
Your access token expires every few hours, so if it has expired when you start the bot, it will refresh the access token, update tokens.json
automatically, and then attempt to connect to Twitch again.
$ npm start
> meowdybuddy@1.0.0 start
> node bot.js
[17:25] error: Login authentication failed
Login authentication failed
Refreshing access token...
refresh token: abcdefgh
{
access_token: 'xxxxxxxxxxxxxxx',
expires_in: 14404,
refresh_token: 'abcdefgh',
scope: [ 'chat:edit', 'chat:read' ],
token_type: 'bearer'
}
Connected to Twitch chat!
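The refresh itself uses Twitch's standard OAuth flow: a POST to https://id.twitch.tv/oauth2/token with grant_type=refresh_token. A Python sketch of the request (bot.js performs the real refresh; this hypothetical helper only builds the request, so nothing is sent):

```python
# Builds the token-refresh request for Twitch's OAuth flow. It only
# constructs the URL and form fields; sending it (e.g. with
# requests.post(url, data=params)) is left to the caller.

def build_refresh_request(client_id, client_secret, refresh_token):
    url = "https://id.twitch.tv/oauth2/token"
    params = {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    }
    return url, params
```

The JSON response has the same shape as the terminal output above (access_token, refresh_token, expires_in, scope, token_type).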
Make sure your local LLM server is up and running, and you should be good to go. Prefix your message with "!chat " to talk to your LLM. Have fun!
The file bot_messages.json
contains a dictionary of messages that the chatbot will automatically respond to when they come from the channel owner or any of the users in BOT_LIST
. The key is the message it will look out for, while the value is the automatic response.
{
"Bot message goes here.":"Your intended response goes here."
}
For example, if the channel owner sends the message "Bot message goes here.", the chatbot will automatically reply "Your intended response goes here."
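In other words, the lookup is an exact dictionary match on the message text. A Python sketch of the same behaviour (bot.js does this in practice; the function name is hypothetical):

```python
# Sketch of the auto-response lookup: an exact-match dictionary lookup
# on the incoming message, returning None when no response is defined.

def auto_response(message, bot_messages):
    return bot_messages.get(message)

replies = {"Bot message goes here.": "Your intended response goes here."}
auto_response("Bot message goes here.", replies)  # → "Your intended response goes here."
auto_response("Unrelated chatter", replies)       # → None
```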
The current system prompt, found in the function send_post_request
in testing.py
, is as follows:
url = f"http://localhost:{port_number}/completion"
headers = {
"Content-Type": "application/json"
}
data = {
"prompt": f"{user}: {message}. {twitch_bot_username}:",
"stop": [f"{user}:",":"],
"system_prompt": {
"prompt": f"You are {twitch_bot_username}, a cheerful and helpful cat assistant \
who speaks like a cat. You occasionally pepper your conversation with cat sounds.",
"anti_prompt": f"{user}:",
"assistant_name": f"{twitch_bot_username}:"
}
}
I currently use Llama 2 7B Chat, quantized to 4 bits. Do adjust the prompt to cater to your specific needs. If you are running your model in the cloud, update the URL as well. This will be made configurable in future updates.
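For reference, the snippet above can be wrapped into a complete request roughly like this (a simplified sketch of what send_post_request in testing.py does; the helper name is hypothetical, and error handling is omitted):

```python
# Simplified sketch of building the /completion request shown above.
# Returning (url, data) keeps the network call separate; send it with
# e.g. requests.post(url, json=data) and read the "content" field of
# llama.cpp's JSON response.

def build_llm_request(user, message, twitch_bot_username, port_number=8080):
    url = f"http://localhost:{port_number}/completion"
    data = {
        "prompt": f"{user}: {message}. {twitch_bot_username}:",
        "stop": [f"{user}:", ":"],
        "system_prompt": {
            "prompt": f"You are {twitch_bot_username}, a cheerful and helpful cat "
                      f"assistant who speaks like a cat. You occasionally pepper "
                      f"your conversation with cat sounds.",
            "anti_prompt": f"{user}:",
            "assistant_name": f"{twitch_bot_username}:",
        },
    }
    return url, data
```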