
Node Inputs
No inputs for this node.

Node Parameters
In the gear:
- Max Chunks per Query: Sets the maximum number of chunks to retrieve for each query. The default value is 10.
  - Type: Integer
- Chunk Size: The number of tokens per chunk (1 token ≈ 4 characters). The value ranges from 0 to 4096. The default value is 1000.
  - Type: Integer
- Chunk Overlap: The number of tokens of overlap between consecutive chunks. The default value is 0 tokens. Increase the chunk overlap if you are concerned that chunking eliminates essential data (e.g., if a chunk boundary falls in the middle of a word). The value ranges from 0 to 399. The sketch after this list shows how chunk size and overlap interact.
  - Type: Integer
- Processing Model: The model used to process uploaded files. The available models are Default (Basic OCR), Llama Parse, and Textract.
  - Use Default if your files contain primarily text.
  - If your files contain complex tables, images, diagrams, etc., use Llama Parse or Textract (note: additional costs apply).
  - Type: Dropdown
- Retrieval Unit: Return the most relevant Chunks (text content) or Documents (document metadata). The default option is Chunks.
  - Type: Dropdown
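
The sketch below is a minimal Python illustration of how the chunking parameters interact: fixed-size token chunks, with consecutive chunks sharing an overlap. It is not the platform's actual implementation; the whitespace "tokenizer", function names, and the final relevance step are assumptions for illustration only.

```python
# Minimal sketch of fixed-size chunking with overlap.
# Assumption: whitespace-split words stand in for real tokenizer tokens.

def chunk_tokens(tokens, chunk_size=1000, chunk_overlap=0):
    """Split a token list into chunks of `chunk_size` tokens,
    where consecutive chunks share `chunk_overlap` tokens."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far each new chunk advances
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]

# With chunk_size=1000 and chunk_overlap=100, chunk 0 covers tokens
# 0-999, chunk 1 covers tokens 900-1899, and so on.
tokens = "some long document text ...".split()
chunks = chunk_tokens(tokens, chunk_size=1000, chunk_overlap=100)

# Max Chunks per Query caps how many of the most relevant chunks are
# returned per query (default 10). Taking the first N is only a
# stand-in here for the platform's relevance ranking.
max_chunks_per_query = 10
retrieved = chunks[:max_chunks_per_query]
```

With overlap at its default of 0, chunks are disjoint, so any sentence that straddles a chunk boundary is split across two chunks; a small overlap keeps that boundary content intact in at least one chunk.
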
Node Outputs
- Documents: Relevant chunks from the uploaded document based on the search query.
  - Type: List<Text>
  - Example usage: {{chat_file_reader_0.documents}}

Example
The example below is a pipeline that answers questions about a file uploaded through the chat interface.
- Input Node: The user query.
- Chat File Reader Node: Enables users to upload a file within the chat interface.
- LLM Node: Answers the user query (from the Input Node) based on relevant information from the uploaded file.
  - Prompt: {{input_0.text}} and {{chat_file_reader_0.documents}}
- Output Node: Displays the LLM's response.
  - Output: {{openai_0.response}}
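
To make the data flow concrete, the sketch below mimics what happens when this pipeline runs: the `{{...}}` variables in the LLM node's prompt are replaced with the Input Node's text and the Chat File Reader's retrieved chunks. The rendering function and the sample values are hypothetical; the platform performs this substitution internally.

```python
import re

# Hypothetical stand-ins for the pipeline's node outputs.
node_outputs = {
    "input_0.text": "What is the refund policy?",
    "chat_file_reader_0.documents": [
        "Refunds are issued within 30 days of purchase...",
        "Items must be returned in their original packaging...",
    ],
}

def render_prompt(template, outputs):
    """Replace each {{variable}} with the matching node output."""
    def substitute(match):
        value = outputs[match.group(1).strip()]
        # Lists of chunks (List<Text>) are joined into one text block.
        if isinstance(value, list):
            return "\n\n".join(value)
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

# The LLM node's prompt from the example pipeline.
prompt = render_prompt(
    "{{input_0.text}} and {{chat_file_reader_0.documents}}",
    node_outputs,
)
print(prompt)  # user question followed by the retrieved chunks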

