Large Language Models (LLMs) are AI models trained on large corpora of data that can generate text, images, videos, and more. In this section, we discuss LLMs that generate text.

Through careful prompting, LLMs can accomplish a wide variety of tasks. The VectorShift platform is LLM agnostic, meaning you can choose which model to use in your workflows (OpenAI, Anthropic, Google, Mistral, Llama, etc.). Choose the model and prompt best suited for your application.

The AI landscape changes quickly, and new models are regularly released by various research labs. The VectorShift team adds new LLMs as soon as they are released. Within pipelines, you can then swap between models and providers with ease.

How to use an LLM

To use an LLM, you must provide the following inputs:

  1. System Prompt: instructions for how the LLM should behave. Write them directly in the LLM node, or write them in a text box and connect it to the “System” input edge. If the Prompt references data sources, tell the model how to use them in the System prompt (e.g., “Answer the user question using the Context”).
  2. Prompt: the message sent to the model, typically the user input plus any context or data sources. Define variables using double curly braces (open the variable builder by typing {{ in any text field).
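
For example, a minimal setup might look like the following sketch (the node names input_0 and knowledge_base_0 are placeholders for whatever nodes exist in your pipeline):

    System: You are a helpful assistant. Answer the user question using only the provided Context. If the answer is not in the Context, respond with "I am unable to answer the question."

    Prompt:
    Context: {{knowledge_base_0.results}}
    Question: {{input_0.text}}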

LLM Settings

System and Prompt

Some models (e.g., OpenAI’s) are trained to take two inputs: a “system” prompt that contains instructions for the model to follow, and a “prompt” input that carries the data (e.g., the user message, context, and other data sources). Other models (e.g., Gemini) take a single prompt in which you place both the instructions and the data sources.
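
For a single-prompt model, the content from the earlier example can be merged into one prompt, instructions first and data sources after:

    Answer the user question using only the context below.
    Context: {{knowledge_base_0.results}}
    Question: {{input_0.text}}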

Token Limits

Each model has a maximum number of input and output tokens. To adjust the limit for a particular model, alter the max tokens parameter.

You cannot increase max tokens beyond the maximum supported by a particular model. This setting is found in the gear on the LLM node.
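
For example, using the rough rule of one token per 4 characters (see AI Model Costs below), setting max tokens to 500 caps each response at about 2,000 characters, even if the model itself supports a larger output window.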

Streaming

To stream output, click the gear on the LLM node and check “Stream Response”.

Citations

To display the sources the LLM uses, check “Show Sources” in the gear on the LLM node.

JSON Response

To have the model return structured JSON output rather than plain text, check the “Json output” box in the gear on the LLM node.

When using JSON mode, you can optionally provide a JSON Schema. This helps the LLM know which JSON fields to generate.

For example, to have the output JSON contain a “temperature” field that is an integer and a “unit” field that is either Celsius or Fahrenheit, define the schema as follows:

{
    "type": "object",
    "properties": {
        "temperature": {
            "type": "integer",
            "description": "temperature"
        },
        "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"],
              "description": "the temperature unit to use"
        }
    },
    "required": ["temperature", "unit"]
}
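
With this schema in place, a generated response might look like:

{
    "temperature": 22,
    "unit": "celsius"
}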

Temperature

Temperature controls the randomness of LLM generation. To get more diverse or creative generations, increase the temperature; to get more deterministic responses, decrease it. This setting is found in the gear on the LLM node.
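
For example, a temperature at or near 0 suits tasks like data extraction or classification where you want repeatable answers, while a higher temperature (e.g., 0.8) suits brainstorming and creative writing.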

Top P

The Top P parameter constrains which tokens the LLM considers at each generation step: the model samples only from the smallest set of tokens whose cumulative probability reaches p, so a Top P of 0.1, for example, limits sampling to the tokens making up the top 10% of probability mass. For more diverse responses, increase Top P towards its maximum value of 1.0. This setting is found in the gear on the LLM node.

AI Model Costs

Model usage is billed based on the number of tokens you use, counting both the tokens in the model input and the tokens generated in the model output. One token is roughly equal to 4 characters of English text.

| Provider | Model | Input cost per 1,000 tokens (USD) | Output cost per 1,000 tokens (USD) |
| --- | --- | --- | --- |
| OpenAI | gpt-4.5-preview | 0.075 | 0.15 |
| OpenAI | gpt-4 | 0.03 | 0.06 |
| OpenAI | gpt-4o | 0.0025 | 0.01 |
| OpenAI | gpt-4o-audio-preview | 0.0025 | 0.01 |
| OpenAI | gpt-4o-audio-preview-2024-10-01 | 0.0025 | 0.01 |
| OpenAI | gpt-4o-mini | 0.00015 | 0.0006 |
| OpenAI | gpt-4o-mini-2024-07-18 | 0.00015 | 0.0006 |
| OpenAI | o1-mini | 0.003 | 0.012 |
| OpenAI | o1-mini-2024-09-12 | 0.003 | 0.012 |
| OpenAI | o1-preview | 0.015 | 0.06 |
| OpenAI | o1-preview-2024-09-12 | 0.015 | 0.06 |
| OpenAI | o1 | 0.015 | 0.06 |
| OpenAI | o1-2024-12-17 | 0.015 | 0.06 |
| OpenAI | o3-mini | 0.0011 | 0.0044 |
| OpenAI | chatgpt-4o-latest | 0.005 | 0.015 |
| OpenAI | gpt-4o-2024-05-13 | 0.005 | 0.015 |
| OpenAI | gpt-4o-2024-08-06 | 0.0025 | 0.01 |
| OpenAI | gpt-4o-2024-11-20 | 0.0025 | 0.01 |
| OpenAI | gpt-4-turbo-preview | 0.01 | 0.03 |
| OpenAI | gpt-4-0314 | 0.03 | 0.06 |
| OpenAI | gpt-4-0613 | 0.03 | 0.06 |
| OpenAI | gpt-4-32k | 0.06 | 0.12 |
| OpenAI | gpt-4-32k-0314 | 0.06 | 0.12 |
| OpenAI | gpt-4-32k-0613 | 0.06 | 0.12 |
| OpenAI | gpt-4-turbo | 0.01 | 0.03 |
| OpenAI | gpt-4-turbo-2024-04-09 | 0.01 | 0.03 |
| OpenAI | gpt-4-1106-preview | 0.01 | 0.03 |
| OpenAI | gpt-4-0125-preview | 0.01 | 0.03 |
| OpenAI | gpt-4-1106-vision-preview | 0.01 | 0.03 |
| OpenAI | gpt-3.5-turbo | 0.0015 | 0.002 |
| Anthropic | claude-2 | 0.008 | 0.024 |
| Anthropic | claude-2.1 | 0.008 | 0.024 |
| Anthropic | claude-3-haiku-20240307 | 0.00025 | 0.00125 |
| Anthropic | claude-3-5-haiku-20241022 | 0.001 | 0.005 |
| Anthropic | claude-3-opus-20240229 | 0.015 | 0.075 |
| Anthropic | claude-3-sonnet-20240229 | 0.003 | 0.015 |
| Anthropic | claude-3-5-sonnet-20241022 | 0.003 | 0.015 |
| Anthropic | claude-3-7-sonnet-20250219 | 0.003 | 0.015 |
| Cohere | command-r | 0.00015 | 0.0006 |
| Cohere | command-r-08-2024 | 0.00015 | 0.0006 |
| Cohere | command-light | 0.0003 | 0.0006 |
| Cohere | command-r-plus | 0.0025 | 0.01 |
| Cohere | command-r-plus-08-2024 | 0.0025 | 0.01 |
| Cohere | command-nightly | 0.001 | 0.002 |
| Cohere | command | 0.001 | 0.002 |
| Perplexity | llama-3.1-sonar-huge-128k-online | 0.005 | 0.005 |
| Perplexity | llama-3.1-sonar-large-128k-online | 0.001 | 0.001 |
| Perplexity | llama-3.1-sonar-small-128k-online | 0.0002 | 0.0002 |
| Google | text-bison | 0.00025 | — |
| Google | text-bison@001 | 0.00025 | — |
| Google | text-bison@002 | 0.00025 | — |
| Google | text-bison32k | 0.000125 | 0.000125 |
| Google | text-bison32k@002 | 0.000125 | 0.000125 |
| Google | text-unicorn | 0.01 | 0.028 |
| Google | text-unicorn@001 | 0.01 | 0.028 |
| Google | chat-bison | 0.000125 | 0.000125 |
| Google | chat-bison@001 | 0.000125 | 0.000125 |
| Google | chat-bison@002 | 0.000125 | 0.000125 |
| Google | chat-bison-32k | 0.000125 | 0.000125 |
| Google | chat-bison-32k@002 | 0.000125 | 0.000125 |
| Google | code-bison | 0.000125 | 0.000125 |
| Google | code-bison@001 | 0.000125 | 0.000125 |
| Google | code-bison@002 | 0.000125 | 0.000125 |
| Google | code-bison32k | 0.000125 | 0.000125 |
| Google | code-bison-32k@002 | 0.000125 | 0.000125 |
| Google | code-gecko@001 | 0.000125 | 0.000125 |
| Google | code-gecko@002 | 0.000125 | 0.000125 |
| Google | code-gecko | 0.000125 | 0.000125 |
| Google | code-gecko-latest | 0.000125 | 0.000125 |
| Google | codechat-bison@latest | 0.000125 | 0.000125 |
| Google | codechat-bison | 0.000125 | 0.000125 |
| Google | codechat-bison@001 | 0.000125 | 0.000125 |
| Google | codechat-bison@002 | 0.000125 | 0.000125 |
| Google | codechat-bison-32k | 0.000125 | 0.000125 |
| Google | codechat-bison-32k@002 | 0.000125 | 0.000125 |
| Google | gemini-pro | 0.0005 | 0.0015 |
| Google | gemini-1.0-pro | 0.0005 | 0.0015 |
| Google | gemini-1.0-pro-001 | 0.0005 | 0.0015 |
| Google | gemini-1.5-pro | 0.00125 | 0.005 |
| Google | gemini-1.5-pro-002 | 0.00125 | 0.005 |
| Google | gemini-1.5-pro-001 | 0.00125 | 0.005 |
| Google | gemini-1.5-flash | 0.000075 | 0.0003 |
| Google | gemini-1.5-flash-exp-0827 | 0.000004688 | 0.0000046875 |
| Google | gemini-1.5-flash-002 | 0.000075 | 0.0003 |
| Google | gemini-1.5-flash-001 | 0.000075 | 0.0003 |
| Google | gemini-1.5-flash-preview-0514 | 0.000075 | 0.0000046875 |
| Google | gemini-2.0-flash-exp | 0 | 0 |
| Google | gemini-2.0-flash-thinking-exp-01-21 | 0 | 0 |
| Google | gemini-2.0-flash-thinking-exp | 0 | 0 |
| Google | gemini-2.0-flash-lite-preview-02-05 | 0.000075 | 0.0003 |
| Google | gemini-2.0-flash-001 | 0.00015 | 0.0006 |
| Amazon Bedrock | amazon.titan-text-lite-v1 | 0.0003 | 0.0004 |
| Amazon Bedrock | amazon.titan-text-express-v1 | 0.0013 | 0.0017 |
| Amazon Bedrock | amazon.nova-micro-v1:0 | 0.000035 | 0.00014 |
| Amazon Bedrock | amazon.nova-lite-v1:0 | 0.00006 | 0.00024 |
| Amazon Bedrock | amazon.nova-pro-v1:0 | 0.0008 | 0.0032 |
| Amazon Bedrock | meta.llama3-8b-instruct-v1:0 | 0.0003 | 0.0006 |
| Amazon Bedrock | us.meta.llama3-1-8b-instruct-v1:0 | 0.0003 | 0.0006 |
| Amazon Bedrock | us.meta.llama3-1-70b-instruct-v1:0 | 0.0015 | 0.002 |
| Amazon Bedrock | us.meta.llama3-2-1b-instruct-v1:0 | 0.0004 | 0.0008 |
| Amazon Bedrock | us.meta.llama3-2-3b-instruct-v1:0 | 0.0005 | 0.001 |
| Amazon Bedrock | us.meta.llama3-2-11b-instruct-v1:0 | 0.001 | 0.0015 |
| Amazon Bedrock | us.meta.llama3-2-90b-instruct-v1:0 | 0.002 | 0.0025 |
| Azure OpenAI | gpt-3.5-turbo | 0.0015 | 0.002 |
| XAI | grok-2 | 0.002 | 0.01 |
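
As a worked example using the gpt-4o rates above: a 4,000-character prompt is roughly 1,000 input tokens (about $0.0025), and a 2,000-character response is roughly 500 output tokens (about $0.005), for a total of roughly $0.0075 for the run.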

Azure LLM

Select your model using the Model dropdown. Available options typically include gpt-3.5-turbo or any other model available through Azure OpenAI.

To connect with your own model:

To get started, you’ll need to have access to an Azure OpenAI resource and a deployed model (for example, gpt-3.5-turbo).

The Endpoint field is where you enter the base URL of your Azure OpenAI resource. It usually looks like https://your-resource-name.openai.azure.com. Make sure you do not include a trailing slash.

In the Deployment ID field, enter the name of the deployment you’ve configured in your Azure OpenAI service. This must exactly match the ID used in Azure.
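
For reference, Azure OpenAI routes requests to a URL of the form https://your-resource-name.openai.azure.com/openai/deployments/your-deployment-id/chat/completions, which is why a trailing slash in the endpoint or a mismatched deployment ID will cause requests to fail.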

If you’re using a personal API key, check the Use Personal API Key option. Then, paste your Azure API key into the API Key field. This key is available in the Azure portal under your resource’s Keys and Endpoint section. Note that this field is required when using a personal key, and you should not share this key with anyone you do not trust.

The Finetuned Model ID field is optional. You only need to fill it out if you’re working with a fine-tuned version of the model.

A common configuration might look like this: the model is set to gpt-3.5-turbo, the endpoint is https://my-openai-resource.openai.azure.com, and the deployment ID is gpt-35-turbo. The prompt could be something like Answer the following: {{input_0.text}}.

Some common issues to watch for include leaving the API Key field blank, using the wrong deployment ID, or including a trailing slash in the endpoint URL. Always double-check that all values exactly match your Azure setup.

For more information, refer to the official Azure OpenAI documentation at [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference).

Custom LLMs

Want to connect to a specialized model provider or a locally hosted LLM? Use the custom LLM node.

We have support for sending requests to models that are compatible with the OpenAI Chat API format. You can use models from your own accounts with LLM providers such as TogetherAI and Replicate. The custom LLM node requires the following parameters:

  • model
  • api_key
  • base_url

For example, using TogetherAI as the model provider, the base_url is https://api.together.xyz, the API key is the key from your TogetherAI account, and the model is any of the models available from the provider.
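
A full configuration might look like the following sketch (the model name is illustrative; substitute any chat model available on your TogetherAI account):

    model: mistralai/Mixtral-8x7B-Instruct-v0.1
    api_key: <your TogetherAI API key>
    base_url: https://api.together.xyz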

Local Models

Models hosted locally on your computer are good for prototyping, experimenting with new models, and saving costs. You can access your local models by setting up a connection to a locally running LLM server.

Make sure to find a secure way to forward your locally running server’s port to the internet.

LM Studio

Follow the instructions to start a local LM Studio server. The standard API key for LM Studio is lm-studio.
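
A typical custom LLM node configuration for LM Studio might look like this sketch (assuming LM Studio's default server address of localhost:1234; check the server tab in LM Studio for your exact values):

    model: <the model identifier shown in LM Studio>
    api_key: lm-studio
    base_url: http://localhost:1234/v1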

Ollama

Start a local LLM server using the Ollama CLI.
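
For example, assuming Ollama's defaults (the server listens on port 11434 and exposes an OpenAI-compatible API under /v1), you can pull a model with ollama pull llama3 and start the server with ollama serve. The custom LLM node configuration would then look like this sketch (the model name is illustrative, and the API key can be any placeholder value since Ollama does not require one by default):

    model: llama3
    api_key: ollama
    base_url: http://localhost:11434/v1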

Prompt Engineering Guidelines

Be as specific as possible: if the output should be one sentence, or if it should be written in the first person, include those instructions in the text block connected to the system prompt. Within the system prompt, you can also mention things like the following (a complete example appears after this list):

  • The tone you want the model to use (e.g., Respond in a professional manner).
  • Data sources and how the model should use them (e.g., Use datasource X when the question is related to sales; use datasource Y when the question is related to customer support).
  • Specific information related to your company / situation that the model can reference (e.g., calendly link)
  • Specific text that you want the model to output in certain situations (e.g., if you are unable to answer the question, respond with “I am unable to answer the question”).
  • What type of reasoning to use (think through step by step how you would actually perform the task yourself, then encode those steps into the system prompt).
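
Putting these guidelines together, a system prompt for a support assistant might look like the following (the company name, datasource names, and link are placeholders):

    You are a customer support assistant for Acme Inc. Respond in a professional manner, in at most three sentences.
    Use datasource Sales when the question is related to sales; use datasource Support when the question is related to customer support.
    If the user asks to book a meeting, share this link: https://calendly.com/acme/intro-call
    Think through the question step by step before answering.
    If you are unable to answer the question, respond with "I am unable to answer the question."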

FAQs

What is the best model for my tasks?

The best model depends on your use case as well as your cost and latency constraints. Make sure to evaluate the model's performance on your task.

What should I do if I keep running into token limit errors?

Try reducing the amount of text passed to the LLM. You can use semantic search to feed only the most relevant input to the language model. See the Knowledge Base documentation.

Why won’t the LLM follow my instructions?

Many language model applications require iterative development to find an effective prompt. Try stating your instructions in clearer language. Additionally, you can try using a more powerful model.

For more questions about developing applications with LLMs, check out the resource below or drop a question in our Discord server.