Chat File Reader Node
Allows for document uploads within chatbots
This node allows documents to be uploaded in chat conversations. Documents uploaded during a chat are vectorized and stored in a temporary vector database. During a run, the node supplies connected nodes (most often an LLM node) with the chunks of the uploaded document that are most relevant to the user's query.
Effectively, it lets a user chat with a document that is uploaded during the conversation.
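To illustrate the pattern at work (this is a minimal, self-contained sketch, not the platform's actual implementation), the code below indexes document chunks in an in-memory store and returns the chunks most similar to a query. The bag-of-words embedding and the TempVectorStore class are stand-ins for the real embedding model and temporary vector database.

```python
# Sketch of the upload -> vectorize -> retrieve pattern this node implements.
# The "embedding" here is a toy bag-of-words vector; the real node uses a
# proper embedding model and a temporary vector database.
from collections import Counter
import math


def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stands in for a real embedding model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


class TempVectorStore:
    """Holds (chunk, vector) pairs for the lifetime of one chat conversation."""

    def __init__(self):
        self.rows = []

    def add_document(self, chunks):
        for chunk in chunks:
            self.rows.append((chunk, embed(chunk)))

    def query(self, question: str, max_chunks: int = 10):
        """Return the chunks most similar to the question ("Max Chunks per Query")."""
        q = embed(question)
        ranked = sorted(self.rows, key=lambda row: cosine(q, row[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:max_chunks]]


# Index an uploaded document, then retrieve chunks relevant to a chat message.
store = TempVectorStore()
store.add_document(["Refunds are issued within 14 days.", "Shipping takes 3-5 business days."])
print(store.query("How long do refunds take?", max_chunks=1))
```

In the actual node, chunking, embedding, and retrieval happen automatically; the only knobs you control are the parameters described below.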
Node Inputs
No inputs for this node.
Node Parameters
In the gear:
- Max Chunks per Query: Sets the maximum number of chunks to retrieve for each query. The default value is 10.
  - Type: Integer
- Chunk Size: The number of tokens per chunk (1 token ≈ 4 characters). The value ranges from 0 to 4096. The default value is 1000.
  - Type: Integer
- Chunk Overlap: The number of tokens of overlap between consecutive chunks. The default is 0 tokens. Increase the chunk overlap if you are concerned that chunking eliminates essential data (e.g., if a chunk boundary cuts through the middle of a word); see the chunking sketch after this list.
  - Type: Integer
- Processing Model: The model used to process uploaded files. The available options are Default (Basic OCR), Llama Parse, and Textract.
  - Use Default if your files contain primarily text.
  - Use Llama Parse or Textract if your files contain complex tables, images, diagrams, etc. (note: additional costs apply).
  - Type: Dropdown
- Retrieval Unit: Return the most relevant Chunks (text content) or Documents (document metadata). The default option is Chunks.
  - Type: Dropdown
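To make Chunk Size and Chunk Overlap concrete, here is a rough sketch of fixed-size chunking with overlap, using the 1 token ≈ 4 characters approximation noted above. The node's actual tokenizer and splitting logic will differ; the function and the character-based math are illustrative assumptions only.

```python
# Illustration of Chunk Size / Chunk Overlap using the rough rule 1 token ~= 4 characters.
# The platform's real tokenizer and splitter will differ; this only shows the mechanics.
CHARS_PER_TOKEN = 4


def chunk_text(text: str, chunk_size_tokens: int = 1000, overlap_tokens: int = 0):
    """Split text into windows of chunk_size_tokens, where each window starts
    overlap_tokens before the previous window ended."""
    chunk_chars = chunk_size_tokens * CHARS_PER_TOKEN
    step_chars = (chunk_size_tokens - overlap_tokens) * CHARS_PER_TOKEN
    if step_chars <= 0:
        raise ValueError("overlap must be smaller than chunk size")
    return [text[i:i + chunk_chars] for i in range(0, len(text), step_chars)]


# With chunk_size=1000 tokens (~4000 chars) and overlap=200 tokens (~800 chars),
# consecutive chunks share about 800 characters, so text cut at one chunk
# boundary still appears whole at the start of the next chunk.
doc = "x" * 10_000
chunks = chunk_text(doc, chunk_size_tokens=1000, overlap_tokens=200)
print(len(chunks), [len(c) for c in chunks])
```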
Node Outputs
- Documents: Relevant chunks from the uploaded document based on the search query.
  - Type: List<Text>
  - Example usage: {{chat_file_reader_0.documents}}
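For example, a connected LLM node's prompt can interpolate this output directly; a prompt line such as "Answer the question using only the following context: {{chat_file_reader_0.documents}}" grounds the model's reply in the retrieved chunks. (The surrounding prompt wording is illustrative; only {{chat_file_reader_0.documents}} comes from this node.)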