ChatGPT Integration with Wildix PBX

This guide describes how to integrate ChatGPT with Wildix.

Created: February 2023

Updated: January 2024

Permalink: https://wildix.atlassian.net/wiki/x/AQBxC

Introduction

ChatGPT is a state-of-the-art language model developed by OpenAI, designed to generate human-like responses to natural language queries. It is built using deep learning techniques and trained on a massive amount of text data, which allows it to understand the nuances of human language and generate relevant and coherent responses.

One practical application of ChatGPT is providing real-time technical support. With ChatGPT, companies can set up chat interfaces that allow users to ask questions about their products or services. ChatGPT can then respond to those questions with helpful answers or provide guidance to users.

Integrating ChatGPT with chatbots can enhance the functionality and responsiveness of the bot. Chatbots may not always have the correct answer to every question or situation. By integrating ChatGPT with a chatbot, the chatbot can access the vast knowledge base of ChatGPT and provide more comprehensive and accurate responses to users.

ChatGPT can be integrated as:

  • Chatbot integrated in WebRTC Kite - you can use it as a personal assistant in Collaboration/ x-bees or as a Kite chatbot for external users 
  • Speech To Text (STT) integration 

Requirements

  • Min. WMS version: WMS 5.02.20210127
  • Minimum Business license or higher (assigned to the chatbot user)
  • Premium license for STT integration
  • Latest stable version of Node.js 

Installation

  1. Download the archive kite-xmpp-bot-master.zip
  2. Open the archive, navigate to /kite-xmpp-bot-master/app and open config.js with an editor of your choice 

    Replace the following values with your own:

    • domain: 'XXXXXX.wildixin.com' - domain name of the PBX
    • service: 'xmpps://XXXXXX.wildixin.com:443', - XMPP service URL, based on the domain name of the PBX 
    • username: 'XXXX', - Kitebot user extension number, do not change it
    • password: 'XXXXXXXXXXXX' - Kitebot user password
    • authorization: 'Bearer sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', - ChatGPT authorization token
    • organization: 'org-XXXXXXXXXXXXXXXXXXXXXXXXX', – ChatGPT organization ID
    • model: 'gpt-3.5-turbo', - this parameter specifies which GPT model to use for text generation. gpt-3.5-turbo points to the latest model version (e.g. gpt-3.5-turbo-0613). GPT-3.5 models can understand and generate natural language or code; gpt-3.5-turbo is the most capable and cost-effective model in the GPT-3.5 family and can return up to 4096 tokens
    • temperature: 0.1, - this parameter controls the "creativity" of the generated text by specifying how much randomness is introduced into the model's output; lower values produce more deterministic, focused replies
    • externalmaxtokens: 250, - response token limit for Kite users contacting the chatbot
    • internalmaxtokens: 500  - response token limit for internal users contacting the chatbot
    • systemcontent: 'Be a helpful assistant. Provide accurate but concise response.', - this parameter can be used as an instruction for the model. Define text style, tone, volume, formality, etc. here
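
    For reference, a filled-in config.js might look like the sketch below. The values are illustrative placeholders only, and the exact structure of the file shipped in the archive may differ slightly:

    // config.js - connection and OpenAI settings for the Kite chatbot (illustrative values)
    module.exports = {
        domain: 'mycompany.wildixin.com',                  // PBX domain name
        service: 'xmpps://mycompany.wildixin.com:443',     // XMPP service URL of the same PBX
        username: '99',                                    // Kitebot user extension number
        password: 'KitebotUserPassword',                   // Kitebot user password
        authorization: 'Bearer sk-XXXXXXXXXXXXXXXXXXXXXXXX', // ChatGPT authorization token
        organization: 'org-XXXXXXXXXXXXXXXXXXXXXXXXX',     // ChatGPT organization ID
        model: 'gpt-3.5-turbo',                            // GPT model used for text generation
        temperature: 0.1,                                  // lower values = more deterministic replies
        externalmaxtokens: 250,                            // response token limit for Kite users
        internalmaxtokens: 500,                            // response token limit for internal users
        systemcontent: 'Be a helpful assistant. Provide accurate but concise response.'
    };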

  3. Upload the archive to the /home/admin/ directory using WinSCP or any alternative SFTP client

  4. Connect to the web terminal. Log in as the super user via the su command, password wildix
  5. Install Node.js and unzip: 

    apt-get install nodejs unzip
  6. Unzip the archive: 

    unzip /home/admin/kite-xmpp-bot-master.zip 
  7. Copy the chatbot folder to /mnt/backups:

    cp -r ./kite-xmpp-bot-master /mnt/backups/
  8. Copy the chatbot.service.txt file to the systemd directory, reload systemd, enable the chatbot as a service so it runs in the background, then start the service: 

    cp /mnt/backups/kite-xmpp-bot-master/chatbot.service.txt /etc/systemd/system/chatbot.service
    systemctl daemon-reload
    systemctl enable chatbot.service
    systemctl start chatbot.service
  9. Verify that the chatbot is running, either by running the ps command:

    ps aux | grep node


    Or simply send the bot a message 

The service sends standard and error output to wms.log, so you can check /var/log/wms.log for debugging if needed.

STT Dialplan configuration example using ChatGPT and speech recognition

Use case: interact with ChatGPT via voice and have the response played back to the caller via TTS. 

How-to:

  1. Download the Dialplan dialplan_2024.01.29_11.24.09_testupgrade_6.05.20240119.1_2211000083f3.bkp
  2. Go to WMS Dialplan -> Dialplan rules 
  3. Click Import to add the downloaded Dialplan, then click Apply



Dialplan applications explained:

  1. Set STT & TTS language
  2. Set DIAL_OPTIONS -> g to continue Dialplan execution after the Speech to text application; this allows looping the Dialplan so that multiple questions can potentially be answered

  3. Set PROMPT_ENGINEERING -> Be a helpful assistant. Provide accurate but concise response. - this parameter can be used as an instruction for the model. Define text style, tone, volume, formality, etc. here

  4. Speech to text application that plays the voice prompt and captures the user’s response, converting it to text. For an in-depth guide to the STT Dialplan application, see Dialplan applications Admin Guide

  5. NoOp(${RECOGNITION_RESULT}) - a debug application that shows the result of speech recognition, can be safely removed

  6. Set(RECOGNITION_RESULT=${SHELL(echo "${RECOGNITION_RESULT}" | tr -d \'):0:-1}) - this re-sets the RECOGNITION_RESULT variable to its own value with any single quotes removed. The :0:-1 substring expression strips the trailing newline character

  7. Set(CONVERSATION=${CONVERSATION} \n${CALLERID(name)} ${CALLERID(num)}: ${RECOGNITION_RESULT}) - this creates a variable that stores the conversation between the user and ChatGPT

  8. Set(chatGPT=${SHELL(curl https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -d '{ "model": "gpt-3.5-turbo", "messages": [ {"role": "system", "content": "${PROMPT_ENGINEERING}"}, {"role": "user", "content": "${RECOGNITION_RESULT}"} ], "temperature": 0.8, "max_tokens": 500 }' | tr -d "'()"):0:-1}) - this sends an HTTP request to the OpenAI ChatGPT API to generate a response based on the RECOGNITION_RESULT and PROMPT_ENGINEERING variables. The resulting JSON string is assigned to the chatGPT variable. gpt-3.5-turbo is the model name; you can learn more about ChatGPT models by watching the dedicated Tech talk webinar or in the OpenAI documentation. Replace the sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX placeholder with your API token (keeping the Bearer prefix); a standalone way to test this request outside the Dialplan is shown after this list

  9. Set(result=${SHELL(echo "${chatGPT}" | grep -oP '(?<=content).*'):0:-1}) - this extracts the response text from the chatGPT variable by using grep to search for the text following the “content” field.

  10. Play sound -> ${result}. End of response. - this uses the text in the result variable to generate a text-to-speech message and play back the response from the ChatGPT API to the caller, appending “End of response” to mark the end of the reply; you can change this text to your preference

  11. Set(CONVERSATION=${CONVERSATION} \nChatGPT: ${result}) - this app adds the AI’s response to the CONVERSATION variable created earlier

  12. System(echo "${CONVERSATION}" | mutt -F /etc/companies.d/0/Muttrc -s "ChatGPT ASR interaction" test@test.com) - this app emails the conversation to an email address; replace test@test.com with the address you want the conversation sent to. This is optional and can be safely removed without any impact on the STT part of the Dialplan
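
To check your API token, model, and prompt before importing the Dialplan, you can send the same Chat Completions request that application 8 builds with curl from a small Node.js script and print the generated reply. The sketch below is only an illustration (the file name and sample prompt are made up, and Node.js 18 or newer is assumed for the built-in fetch):

    // test-chat.js - standalone check of the Chat Completions request used in application 8
    // Run with: node test-chat.js (requires Node.js 18 or newer for the built-in fetch)
    const API_KEY = 'sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';  // replace with your API token
    const PROMPT_ENGINEERING = 'Be a helpful assistant. Provide accurate but concise response.';
    const RECOGNITION_RESULT = 'What are your opening hours?';   // sample recognized text

    async function main() {
      const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${API_KEY}`
        },
        body: JSON.stringify({
          model: 'gpt-3.5-turbo',
          messages: [
            { role: 'system', content: PROMPT_ENGINEERING },
            { role: 'user', content: RECOGNITION_RESULT }
          ],
          temperature: 0.8,
          max_tokens: 500
        })
      });
      const data = await response.json();
      // The Dialplan extracts this field with grep; here it can be read directly from the JSON
      console.log(data.choices[0].message.content);
    }

    main().catch(console.error);

If the script prints a reply, the token and request parameters are valid and can be used in the Dialplan application.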


Example callweaver output for debugging purposes: