Quick Start Guide
To get started, we will walk through building a basic pipeline on the VectorShift platform, touching on several parts of the platform along the way!
We will discuss how to build a pipeline that answers questions about a given website. In summary:
Part 1: Navigating to our pipeline builder
Part 2: Building a pipeline that can analyze a website
Part 3: Deploying the pipeline
After logging into VectorShift, you can open the pipeline builder by selecting "New" >> "Create Pipeline" within the "Pipelines" tab.
Within the pipeline builder,
You will use the "nodes" under the various tabs at the top of the page to build a pipeline (a workflow).
On the top right, you can click "Run" to run the pipeline within the pipeline builder to iterate on pipeline architecture.
After making an edit to a pipeline (e.g., adding a node), a "Save pipeline" button will appear on the top right. View saved pipelines in the "Pipelines" tab.
At a high level, we need the following functionality (a conceptual sketch in code follows this list):
Step 1: A way to input the URL of the website you want to analyze and embed its contents into a vector database. A vector database allows for semantic-based queries that return the most relevant pieces of information, which can then be used by an LLM to answer questions.
Step 2: A way 1) for the pipeline to receive a query from the user and 2) for the vector database to return relevant context based on that query.
Step 3: A large language model instructed to be an analyst that answers questions based on relevant context from the vector database.
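Before building this in the editor, it can help to see the flow in plain code. The sketch below is purely conceptual: every helper (load_url, search, llm) is a hypothetical, simplified stand-in for the VectorShift node it is named after, not actual platform code.

```python
# Conceptual sketch of the three steps above. Every helper is a hypothetical
# stand-in for a VectorShift node, not real platform code.

def load_url(url: str) -> list[str]:
    """Stand-in for the URL loader node: pretend we fetched and chunked the page."""
    return [
        f"Chunk of text from {url} about building pipelines.",
        f"Chunk of text from {url} about deploying chatbots.",
    ]

def search(chunks: list[str], query: str) -> str:
    """Stand-in for the Vector Query node: naive keyword overlap instead of
    real embedding-based semantic search."""
    query_words = set(query.lower().split())
    return max(chunks, key=lambda chunk: len(query_words & set(chunk.lower().split())))

def llm(system: str, prompt: str) -> str:
    """Stand-in for the LLM node."""
    return f"[{system}] Answering from: {prompt}"

# Step 1: ingest the website into the (stand-in) vector database.
chunks = load_url("https://vectorshift.ai")
# Step 2: retrieve the most relevant context for the user's question.
context = search(chunks, "How do I build pipelines?")
# Step 3: have the LLM answer as an analyst, using that context.
print(llm("You are an analyst that answers User Question based on Context.",
          f"User Question: How do I build pipelines?\nContext: {context}"))
```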
We need to feed the contents of a URL into a vector database and allow relevant information to be queried from it.
In the "General" tab, use a text node and paste in the URL you want to analyze. In this case, we are using Vectorshift.ai.
We connect the text node to a URL loader which reads data from a URL and transforms the data into a format that can be loaded into a vector database.
We connect the URL loader to a Vector Query node (the "documents" edge) which stores the contents of the website in a temporary Vector database (a database that allows for semantic / meaning-based search).
In the "General" tab, we drag out an input node. This gives the functionality for a user to ask a question / enter queries. We need to connect it both to the Vector Query node and the LLM. This is because:
Connecting the input node to the Vector Query node, and the output of the Vector Query node to the prompt edge of the LLM, allows the user's question to query the vector database for relevant information. That information is then passed into the prompt, and the LLM uses this context to answer the user's question.
We use the "Prompt" field within the LLM node (here, we are using OpenAI's GPT 3.5 Turbo - discussed later) to write the prompt. Within a field, every time you use double curly braces {{}}, the text within the double braces will automatically appear on the left-hand side of the node (an "edge"); the associated data connected to the named nodes will “replace
” the curly brackets when the pipeline runs. Here, we use two curly braces. One for the user question ({{user_question}}), which we label as "User Question" and is connected to the input node, and another for Context ({{Context}}), which we label as "Context" which is connected to the output of the Vector Query node.
Labeling simply means we call out "User Question" or "Context" right above the variable.
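The substitution behaves like simple string templating. As a rough illustration in plain Python (not VectorShift internals; the variable names mirror the ones above):

```python
# Rough illustration of how {{...}} variables are filled in at run time.
# Plain Python string substitution, not VectorShift internals.
prompt_template = (
    "User Question: {{user_question}}\n"
    "Context: {{Context}}\n"
    "Answer the question using only the context above."
)

def render(template: str, values: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with the data connected to its edge."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

print(render(prompt_template, {
    "user_question": "What does VectorShift do?",              # from the input node
    "Context": "VectorShift is a platform for AI pipelines.",  # from the Vector Query node
}))
```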
You may also use a text node as a prompt: place the same contents in a text node and connect the text node's output to the "Prompt" edge of the LLM node.
In the "LLMs" tab, you will find your LLM options. In this case, we are using OpenAI's GPT-3.5 Turbo.
We have already completed the "Prompt" field above. Now, we need to complete the "System" field, which tells the LLM how to behave. In this case, we write: "You are an analyst that answers User Question based on Context". Note that we use the same labels, "User Question" and "Context", matching the labels we used in the prompt.
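Conceptually, the System and Prompt fields map onto the standard system/user message structure that chat LLM APIs use. As a rough illustration with OpenAI's Python client (this is not how VectorShift calls the model internally):

```python
# Illustration of how the "System" and "Prompt" fields map onto the standard
# system/user message structure of a chat LLM API. Not VectorShift internals.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The "System" field tells the model how to behave.
        {"role": "system",
         "content": "You are an analyst that answers User Question based on Context."},
        # The "Prompt" field, with its {{...}} variables already filled in.
        {"role": "user",
         "content": "User Question: What does VectorShift do?\nContext: ..."},
    ],
)
print(response.choices[0].message.content)
```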
Next, we use an output node (from the "Home" tab) and connect it to the "response" edge of the OpenAI LLM node.
Finally, save the pipeline by clicking "Save" on the top right of the pipeline builder. Here, you can name the pipeline and add a short description. The pipeline will now appear in your "Pipelines" tab.
There are four ways to deploy this pipeline. You can:
Run within the pipeline builder
Run as a form
Generate an API call
Use this pipeline as the "backend" for a chatbot
Click " run pipeline" in the top right of the pipeline builder. In this case, you can directly ask a question in the "input_1" input box and click "run" to run the pipeline.
You can change the naming of the input variables through the input node.
On the "Pipelines" Tab, click "Run" on the pipeline. This will show a pop-up that will allow you to run the pipeline.
Generate an API Call
Click on the three dots on the right-hand side of a pipeline and click "Generate API Call". You can find your API key under "Settings" (by clicking on your profile on the top right).
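In rough terms, the generated call is an authenticated HTTP request that passes values for your pipeline's input nodes. The sketch below is a generic Python illustration: the endpoint URL, header name, and payload shape are placeholders, not VectorShift's actual API, so copy the real values from the "Generate API Call" dialog.

```python
# Hypothetical sketch of calling a deployed pipeline over HTTP. The endpoint,
# header name, and payload shape below are placeholders -- copy the real
# values from the "Generate API Call" dialog.
import requests

API_KEY = "your-vectorshift-api-key"                 # found under "Settings"
ENDPOINT = "https://api.example.com/pipelines/run"   # placeholder URL

response = requests.post(
    ENDPOINT,
    headers={"Api-Key": API_KEY},                    # placeholder header name
    json={"inputs": {"input_1": "What does this website offer?"}},  # key matches the input node's name
)
response.raise_for_status()
print(response.json())
```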
In the "Chatbots" tab, click "+Add" and follow the instructions to create a chatbot.
Click "Run" on the chatbot to run or you can generate a website or API call for your chatbot.