Customer Service chatbot
Scenario: We want to build a customer service chatbot that answers questions about our product and embed it in our website. The main data source we want the chatbot to leverage is our documentation.
At a high level, we need to create the following pipeline components:
A way to store our docs in a knowledge base (a vector database that supports semantic queries, which an LLM can leverage to answer user questions).
A way for the LLM to 1) receive query results from the knowledge base, 2) access stored conversation history, and 3) receive the user's question.
An LLM instructed to act as a chatbot that answers user questions based on content from the website (a sketch of how these pieces fit together follows this list).
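To see the whole flow in one place before building it in the UI, here is a minimal, self-contained sketch of the retrieve-then-answer loop the pipeline implements. The function bodies are placeholders standing in for the knowledge base and the OpenAI node, not VectorShift internals.

```python
# Sketch of the retrieve-then-answer loop; retrieval and the LLM call are
# placeholders, not VectorShift internals.

history: list[str] = []  # conversation history ("chat memory")

def retrieve_chunks(question: str) -> list[str]:
    # Placeholder for the knowledge base (vector database) query.
    return ["Pipelines are built from nodes connected by edges."]

def call_llm(system: str, prompt: str) -> str:
    # Placeholder for the OpenAI node; a real pipeline calls an LLM here.
    return "(an answer grounded in the retrieved context)"

def answer(question: str) -> str:
    context = "\n".join(retrieve_chunks(question))
    prompt = (f"Conversation History: {' | '.join(history)}\n"
              f"Context: {context}\n"
              f"Question: {question}")
    system = ("You are a chatbot specializing in answering a Question "
              "given Conversation History and Context.")
    response = call_llm(system, prompt)
    history.append(f"User: {question} / Bot: {response}")
    return response

print(answer("How do I create a pipeline?"))
```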
Knowledge bases can be created from the "storage" tab. Once created, you can leverage knowledge bases within a pipeline.
Once you are on the storage tab, click "New" >> "Create Knowledge Base". Give a name and description for the knowledge base.
Click the "+ Add Documents", for loader type, select "URL" (in this case, the documentation is URLs), and add in the links of URLs we want in the knowledge base. Repeat until all the URLs are added.
Note: the above can also be accomplished through the knowledge base reader node within the pipeline builder.
Click "Finish" (bottom right) to create the knowledge base.
Open the pipeline builder: click "New" >> "Create Pipeline" within the "Pipelines" tab.
We leverage an "input" node (under the "General" tab of the no-code builder) to allow for user inputs (questions about the documentation). The input node defaults to type "Text", which is what we want (questions are text).
Next, we leverage a knowledge base reader node (under the "knowledge base" tab). We then select the name of the knowledge base that we created in step 1. In this case, it is called "VectorShift Documentation".
We then connect the input node to the "query" edge of the knowledge base node. This way, the question from the user (the input) queries the knowledge base, which returns the chunks of data most relevant to the question. The LLM then uses these chunks to answer the user's question effectively (discussed next; see the retrieval sketch after the note below).
Note: you may also create a new knowledge base by clicking "Create New Knowledge Base" on the knowledge base reader node.
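To make "the most relevant chunks" concrete: vector databases typically embed the query and every stored chunk as vectors, then rank chunks by similarity. The sketch below illustrates the ranking step with hand-made embeddings and cosine similarity; the actual knowledge base uses its own embedding model and index.

```python
# Illustrative semantic retrieval: rank doc chunks by cosine similarity
# between the query embedding and each chunk embedding. The embeddings
# below are made up for the example.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

chunks = {
    "Pipelines are created from the Pipelines tab.": [0.9, 0.1, 0.2],
    "Knowledge bases live under the storage tab.":  [0.2, 0.8, 0.3],
}
query_embedding = [0.85, 0.15, 0.25]  # embedding of the user's question

# The best-matching chunk is what gets passed to the LLM as context.
best = max(chunks, key=lambda text: cosine(chunks[text], query_embedding))
print(best)
```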
We need to do three things: (1) leverage an LLM, (2) create a prompt that contains query results from the knowledge base, conversation history, and the question from the user, and (3) create the system prompt (the prompt that tells the LLM how to behave). Here's how to do this:
First, we use the OpenAI node (under the LLM tab). The OpenAI node has two fields: the "system" field and the "prompt" field. The "system" field allows you to tell the LLM how to behave. The "prompt" field allows you to input the prompt for the LLM. Within either, you can use double curly braces "{{}}" to create variables / additional edges. Whatever you place within the braces will automatically appear as an edge on the left-hand side of the node; the data connected to the named edges will "replace" the curly braces when the pipeline runs.
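To illustrate the substitution (the app handles this internally), the sketch below mimics the "{{...}}" behavior, assuming each placeholder name maps to the output of whichever node is connected to its edge:

```python
# Mimic of the {{variable}} substitution: each name in double curly braces
# is replaced by the output of the node connected to that edge.
import re

def render(template: str, inputs: dict[str, str]) -> str:
    return re.sub(r"\{\{(\w+)\}\}", lambda m: inputs[m.group(1)], template)

template = "Context: {{Context}}\nQuestion: {{Question}}"
print(render(template, {
    "Context": "Pipelines are built from nodes.",
    "Question": "What are pipelines made of?",
}))
```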
Next, we drag out a Chat Memory node (under the Chat tab in the no-code builder). This node allows the LLM to "remember" the previous conversation history. We create a variable within the prompt field called "{{history}}" and connect the Chat Memory node to the created edge. As a result, every time the pipeline runs, the previous conversation history is passed to the LLM through the prompt. You can click the gear on the Chat Memory node to adjust the size of the token window.
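To illustrate what the token window controls: only the most recent messages that fit inside the window are passed along. In the sketch below, token counts are approximated by word counts purely for readability; real tokenization works differently.

```python
# Token-window chat memory sketch: keep the newest messages that fit in the
# budget. Word counts stand in for token counts here (an approximation).
TOKEN_WINDOW = 12

def windowed_history(messages: list[str], window: int = TOKEN_WINDOW) -> list[str]:
    kept, used = [], 0
    for message in reversed(messages):  # walk from newest to oldest
        tokens = len(message.split())
        if used + tokens > window:
            break
        kept.append(message)
        used += tokens
    return list(reversed(kept))  # restore chronological order

messages = ["User: hi", "Bot: hello, how can I help?", "User: what is a pipeline?"]
print(windowed_history(messages))  # the oldest message no longer fits
```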
Prompt field within the OpenAI node: we need to pass the LLM the conversation history, the context provided by the knowledge base, and the user input. Thus, we label each piece of data (e.g., "Context") and then create a variable (e.g., "{{Context}}"). See below for reference. We then connect all the other nodes to the respective edges created on the LLM node (e.g., the "results" edge on the knowledge base node to the "Context" edge on the OpenAI node).
System field within the OpenAI node: here, we explain how the LLM should behave. The crux of the system prompt is "You are a chatbot specializing in answering a Question given Conversation History and Context". Note that we use the labels from the prompt within the system prompt (and maintain the same spelling/capitalization).
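Put together, the two fields might look like the following. The labels "History", "Context", and "Question" are illustrative; any labels work as long as the prompt field, the system field, and the connected edges all use the same spelling/capitalization.

```
System field:
You are a chatbot specializing in answering a Question given
Conversation History and Context.

Prompt field:
Conversation History: {{History}}
Context: {{Context}}
Question: {{Question}}
```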
Finally, we connect the "response" edge of the OpenAI node to an "output" node. The output node can be found under the "General" tab.
Ensure you save the pipeline by clicking "Save" in the top right. Here, you give the pipeline a name and description.
There are four ways to deploy this pipeline. You can:
Run within the pipeline builder
Run as a form
Generate an API call
Use this pipeline as the "backend" for a chatbot
Run within the pipeline builder
Click "Run" in the top right of the pipeline builder. In this case, you can directly ask a question in the input box and click "run" to run the pipeline.
Run as a form
Access the pipeline in a "form" type format. In the "Pipelines" tab, select the pipeline you just created (and saved) and click "Run". Ask questions directly in the input box.
Generate an API call
Click on the three dots of the pipeline and click "Generate API Call". You can find your API key under "Settings" (by clicking on your profile on the top right).
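As a rough sketch of what calling a deployed pipeline from your own code can look like: the exact URL, header names, and payload shape come from the generated snippet in the app, so everything below is a placeholder, not VectorShift's actual API.

```python
# Hypothetical pipeline call over HTTP. Replace the URL, header, and payload
# with the values from the "Generate API Call" snippet; these are placeholders.
import requests

API_KEY = "your-api-key"  # found under "Settings" (profile menu, top right)
PIPELINE_URL = "https://example.com/api/pipelines/run"  # placeholder URL

response = requests.post(
    PIPELINE_URL,
    headers={"Api-Key": API_KEY},
    json={"inputs": {"Question": "How do I create a knowledge base?"}},
)
print(response.json())
```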
Backend for a chatbot
Access the pipeline in chatbot format. In the "Chatbot" tab, click "Add". In the popup, in the "pipeline" field, select the name of the pipeline that you just created. After creating the chatbot, click "Run" to start the chatbot. The chatbot can also be accessed via API (click on your profile on the top right and click "Settings" to access the API key).
Finally, to publish your pipeline to the marketplace, go back to the "Pipelines" tab and find the pipeline you would like to publish. Then, click the three dots on the right-hand side and click "Publish".