
Welcome to VectorShift

VectorShift's platform allows you to build, design, prototype, and deploy generative AI workflows and automations across two interfaces: a no-code builder and a code SDK. Example use cases:
  • Chatbots: Build a fully functional chatbot and embed it into your website. The chatbot can respond to user queries and answer questions based on a knowledge base (e.g., product documentation, support articles).
  • Automations: Automate workflows end to end: schedule workflows to run at set intervals or let triggers (e.g., an incoming email, a Slack message, a Typeform submission) kick off a workflow run, and have the VectorShift platform integrate directly with your other software tools (e.g., send an email, create a Notion document, add data to an Airtable base).
  • Document Search: Summarize and answer questions about documents, videos, audio files, and websites instantly.
  • Content Creation: Create marketing copy, personalized outbound emails, call summaries, proposals, and graphics at scale and in a predetermined format and style.
  • Analyst: Replicate the logical thinking of an analyst to generate synthesized output.
At the core of the VectorShift platform is the pipeline. Pipelines are chains of logic that can be used to automate tasks. Each no-code block is called a "node". Nodes have "edges", which are the points of connection between nodes (e.g., the input node below has one output edge: "message"). Additionally, within text nodes and LLM nodes (the "Prompt" and "System" fields), you can create variables by using double curly braces {{}}. The text within the double braces automatically appears on the left-hand side of the node (and becomes an additional input "edge"); when the pipeline runs, the data connected to that edge "replaces" the curly braces (a conceptual sketch of this substitution follows below).
VectorShift pipeline example - simple chatbot
Pipelines can either be used to automate tasks themselves (e.g., writing a blog article / automating a report) or they can be leveraged as the "backend" of a chatbot or hook into an automation (e.g., trigger this pipeline to run when I receive an email).
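To make the double-curly-brace behavior concrete, here is a minimal sketch in Python of how such a template could be resolved at run time. It is only an illustration of the substitution concept, not VectorShift's actual implementation or SDK, and the variable names are hypothetical.

```python
import re

def render_template(template: str, inputs: dict) -> str:
    """Replace {{variable}} placeholders with the data connected to each input edge.

    A conceptual sketch only: VectorShift performs this substitution for you when
    the pipeline runs; this is not the platform's implementation or SDK.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        # Leave unknown variables untouched so missing edges are easy to spot.
        return str(inputs.get(name, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

# Example "Prompt" field with two variables; each would appear as an input edge
# on the left-hand side of the node in the pipeline builder.
prompt = "Answer the question: {{message}}\nUse this context: {{context}}"
print(render_template(prompt, {"message": "What is a pipeline?",
                               "context": "Pipelines are chains of logic."}))
```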
Our site is structured into the following tabs:
  1. Pipelines: where you can find pipelines you have built. Clicking the "New" button and then "Create Pipeline" opens the pipeline builder, our drag-and-drop tool for building generative AI workflows.
  2. Marketplace: a library of pre-built pipelines. Contribute to the marketplace by sharing your pipelines with other users! Within the marketplace, you can search for pipelines and "import" them into your workspace.
  3. Storage: on this tab, you can (1) create VectorStores (vector databases that contain your data; vector stores enable semantic queries within a pipeline) or (2) upload files to be used directly within pipelines. Toggle between the two functionalities with the tabs on the top left of the page.
  4. Integrations: allows you to integrate the VectorShift platform with your other software tools and platforms. Create an integration by clicking "New". When creating the integration, you select what data VectorShift is allowed to access and what actions the platform may perform (e.g., write a Notion file). You can then use the created integrations within the pipeline builder.
  5. Automations: schedule workflows to run at predetermined intervals or to run based on triggers.
  6. Chatbots: turn pipelines you have built into chatbots. Test the chatbots in our interface before deploying them to your end users via an iFrame or an API (a sketch of a hypothetical API call follows this list).
  7. Transformations:
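As an illustration of calling a deployed chatbot programmatically, the snippet below shows a hypothetical request using Python's requests library. The endpoint URL, authentication header, and payload fields are placeholders, not VectorShift's documented API; refer to the Chatbots section for the actual request format.

```python
import requests

# Hypothetical call to a deployed chatbot; the URL, header, and payload shape are
# placeholders for illustration only, not VectorShift's actual API.
response = requests.post(
    "https://<vectorshift-api-host>/chatbots/<chatbot-id>/run",  # placeholder endpoint
    headers={"Authorization": "Bearer <your-api-key>"},          # placeholder API key
    json={"input": "What file types can I upload?"},             # placeholder payload
)
print(response.json())
```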
Platform overview
At its core, a pipeline will have the following components:
  1. Input / Output Node: information (e.g., written text, video, audio, files) provided in the input node is used as an "input" to connecting nodes. The output node represents the result of the pipeline.
  2. Large Language Model (LLM): an LLM is a trained deep-learning model that can generate output in a human-like fashion. You can connect other nodes (e.g., the input node) to the LLM to serve as inputs into the model and to instruct the model on what to do.
  3. VectorDB: vector databases are a type of database that stores unstructured data as vector embeddings (i.e., vector representations of the data) to facilitate retrieval (i.e., allow users to query for relevant data). This enables semantic queries of your data (a conceptual sketch follows this list).
  4. Data Loaders: nodes that load data so it can be embedded into a vector database. We currently offer the following data loaders: files, URL (data scraping), PDF, Wikipedia, YouTube video, and Arxiv. Additionally, you can use the integrations / integrations node to give pipelines access to your data (e.g., Notion documents, Google Drive files, CRM data).
  5. Integrations: allows you to leverage your data, wherever it sits!
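To illustrate what storing data as vector embeddings enables, here is a minimal Python sketch of similarity-based retrieval. The document vectors are hand-picked toy values standing in for a real embedding model, and VectorShift's VectorStores handle embedding, storage, and querying for you, so this only shows the underlying idea.

```python
import numpy as np

# Toy embeddings: in practice an embedding model maps each text to a high-dimensional
# vector so that semantically similar texts end up close together. These 3-dimensional
# vectors are hand-picked purely to illustrate the retrieval step.
documents = {
    "How to reset your password":      np.array([0.9, 0.1, 0.0]),
    "Refund policy for annual plans":  np.array([0.0, 0.9, 0.2]),
    "Supported file types for upload": np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vector: np.ndarray, top_k: int = 1):
    # Rank stored documents by similarity between their embedding and the query embedding.
    ranked = sorted(documents, key=lambda d: cosine(documents[d], query_vector), reverse=True)
    return ranked[:top_k]

# A query such as "I forgot my login credentials" would be embedded near the password
# document by a real model; here we pass that query vector in directly.
print(search(np.array([0.85, 0.15, 0.05])))
```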