LLM Processor
Category: Ai Ml
Standards: HIPAA (with BAA) · Data anonymization
Process data using Large Language Models (GPT-4, Claude, etc.)
What this node does
- Multi-model support: choose among models such as GPT-4o and Claude 3.5 Sonnet via the model parameter
- Prompt templates: reusable system prompts set via systemPrompt
- Structured output: JSON, CSV, FHIR bundle, or HL7, selected via outputFormat
- Streaming responses
How to use
1. In the Agentic Studio, open or create a workflow.
2. In the node palette on the left, find LLM Processor under the Ai Ml category (or use the search bar).
3. Drag the node onto the canvas.
4. Double-click the node to open its configuration dialog.
5. Fill in the required parameters (see Configuration below).
6. Connect an upstream node's output to the required Input Text port.
7. Optionally, connect the Context Data port if the node needs extra context.
8. Connect the LLM Response and Structured Output ports to the next node(s) downstream.
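The wiring that results from the steps above can be pictured as a small graph. Below is a minimal sketch using a hypothetical JSON-style workflow definition; the node IDs, the `DocumentReader`/`ResultWriter` neighbors, and the field names are illustrative, not the actual Agentic Studio file format.

```python
# Hypothetical workflow fragment: structure and names are illustrative only.
workflow = {
    "nodes": [
        {"id": "reader", "type": "DocumentReader"},   # upstream node (hypothetical)
        {"id": "llm", "type": "LLMProcessor"},
        {"id": "writer", "type": "ResultWriter"},     # downstream node (hypothetical)
    ],
    "edges": [
        # Upstream output feeds the required Input Text port.
        {"from": "reader.text", "to": "llm.inputText"},
        # Optional Context Data port.
        {"from": "reader.metadata", "to": "llm.contextData"},
        # Both outputs connect to the downstream node.
        {"from": "llm.llmResponse", "to": "writer.text"},
        {"from": "llm.structuredOutput", "to": "writer.data"},
    ],
}

# Sanity check: every edge endpoint references a declared node id.
node_ids = {n["id"] for n in workflow["nodes"]}
for edge in workflow["edges"]:
    assert edge["from"].split(".")[0] in node_ids
    assert edge["to"].split(".")[0] in node_ids
```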
Inputs
| Port | Type | Required | Description |
|---|---|---|---|
| Input Text | text | ✓ | Plain text string |
| Context Data | json | Optional | JSON data object |
Outputs
| Port | Type | Description |
|---|---|---|
| LLM Response | text | Plain text string |
| Structured Output | json | JSON data object |
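The port types above map to plain Python values: text ports carry strings, json ports carry JSON-serializable objects. The mock below only illustrates those shapes; the real node calls an LLM, and the function name and response contents here are invented.

```python
import json

def mock_llm_processor(input_text, context_data=None):
    """Illustrative stand-in for the node, showing port shapes only.

    Inputs:  Input Text (str, required), Context Data (dict, optional).
    Outputs: LLM Response (str), Structured Output (JSON-serializable dict).
    """
    context = context_data or {}
    llm_response = f"Summary of {len(input_text)} characters of input."
    structured_output = {
        "summary": llm_response,
        "context_keys": sorted(context),
    }
    return llm_response, structured_output

text_out, json_out = mock_llm_processor(
    "Patient presented with mild fever...",
    {"patient_id": "anon-123", "encounter": "2024-05-01"},
)
assert isinstance(text_out, str)
assert json.loads(json.dumps(json_out)) == json_out  # round-trips as JSON
```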
Configuration
Open the configuration dialog by double-clicking the LLM Processor node on the canvas.
| Parameter | What to enter |
|---|---|
| model | AI model to use, e.g. claude-3-5-sonnet or gpt-4o. Affects cost and quality. |
| temperature | Creativity of the output: 0.0 for deterministic, 1.0 for creative (default: 0.3). |
| systemPrompt | Background context given to the AI before the main prompt. |
| outputFormat | Output format: json, csv, fhir-bundle, or hl7. |
| maxTokens | Maximum length of the AI response in tokens (1 token ≈ 4 characters). |
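Put together, a filled-in configuration might look like the dict below. The values are illustrative; the keys mirror the parameter table above, and the last lines apply the documented rule of thumb that 1 token ≈ 4 characters.

```python
# Illustrative configuration; values are examples, not recommendations.
config = {
    "model": "claude-3-5-sonnet",   # affects cost and quality
    "temperature": 0.3,             # default; 0.0 deterministic, 1.0 creative
    "systemPrompt": "You are a clinical documentation assistant.",
    "outputFormat": "fhir-bundle",  # one of: json, csv, fhir-bundle, hl7
    "maxTokens": 1024,
}

# Sanity checks matching the documented ranges.
assert 0.0 <= config["temperature"] <= 1.0
assert config["outputFormat"] in {"json", "csv", "fhir-bundle", "hl7"}

# 1 token ~= 4 characters, so 1024 tokens allow roughly a
# 4,096-character response.
approx_chars = config["maxTokens"] * 4
```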
When to use this node
- Clinical summarization (e.g. condensing visit notes into a brief summary)
- Data extraction (pulling structured fields out of free text)
- Natural language queries
Need help configuring this node?
Go to Settings → Connectors to set up the connection this node depends on, then reference the connector ID in the node configuration dialog.
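Once the connector exists, its ID goes into the node configuration. A minimal sketch, assuming a hypothetical `connectorId` key (the actual field name in the configuration dialog may differ):

```python
# Hypothetical: referencing a connector created under Settings -> Connectors.
# "connectorId" and the ID value are illustrative, not documented keys.
node_config = {
    "connectorId": "my-llm-connector",  # ID assigned in Settings -> Connectors
    "model": "gpt-4o",
}
assert node_config["connectorId"]  # the node fails without a connector reference
```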