LLM Processor

Category: Ai Ml · Standards: HIPAA (with BAA) · Data anonymization

Process data using Large Language Models (GPT-4, Claude, etc.)

What this node does

  • Multi-model support
  • Prompt templates
  • Structured output
  • Streaming

How to use

  1. In the Hydra Builder, open or create a workflow
  2. In the node palette on the left, find LLM Processor under the Ai Ml category (or use the search bar)
  3. Drag the node onto the canvas
  4. Double-click the node to open its configuration dialog
  5. Fill in the required parameters (see Configuration below)
  6. Connect an upstream node's output to the Input Text input port
  7. Optionally connect the Context Data port if needed
  8. Connect the LLM Response and Structured Output output ports to the next node downstream

Inputs

| Port | Type | Required | Description |
|---|---|---|---|
| Input Text | text | Yes | Plain text string |
| Context Data | json | Optional | JSON data object |

Outputs

| Port | Type | Description |
|---|---|---|
| LLM Response | text | Plain text string |
| Structured Output | json | JSON data object |
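
The port tables above can be read as a simple function contract. The sketch below is illustrative only (Hydra executes the node internally; the function name and placeholder behavior are assumptions), but it shows which ports are required and the shape of each output:

```python
# Illustrative sketch of the LLM Processor I/O contract.
# Port names come from the tables above; the function body is a placeholder.
from typing import Optional, Tuple

def llm_processor(input_text: str,
                  context_data: Optional[dict] = None) -> Tuple[str, dict]:
    """Input Text (required) and Context Data (optional JSON) in;
    LLM Response (text) and Structured Output (json) out."""
    if not input_text:
        raise ValueError("Input Text is required")
    # Placeholder behavior: echo the inputs in the expected output shapes.
    llm_response = f"Processed: {input_text}"
    structured_output = {
        "input_length": len(input_text),
        "context_provided": context_data is not None,
    }
    return llm_response, structured_output
```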

Configuration

Open the configuration dialog by double-clicking the LLM Processor node on the canvas.

| Parameter | What to enter |
|---|---|
| model | AI model to use, e.g. claude-3-5-sonnet or gpt-4o. Affects cost and quality. |
| temperature | Creativity of the output: 0.0 for deterministic, 1.0 for creative (default: 0.3). |
| systemPrompt | Background context given to the AI before the main prompt. |
| outputFormat | Output format: json, csv, fhir-bundle, or hl7. |
| maxTokens | Maximum length of the AI response in tokens (1 token ≈ 4 characters). |
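
A filled-in configuration might look like the sketch below. The parameter names come from the table above; the specific values are illustrative, and the token-estimate helper simply applies the 1-token-per-4-characters rule of thumb stated for maxTokens:

```python
# Illustrative LLM Processor configuration (values are examples, not defaults).
llm_processor_config = {
    "model": "claude-3-5-sonnet",  # affects cost and quality
    "temperature": 0.3,            # 0.0 deterministic .. 1.0 creative
    "systemPrompt": "You are a clinical documentation assistant.",
    "outputFormat": "json",        # json, csv, fhir-bundle, or hl7
    "maxTokens": 1024,             # response length cap in tokens
}

def estimate_tokens(text: str) -> int:
    """Rough estimate using the 1 token ≈ 4 characters rule of thumb."""
    return max(1, len(text) // 4)

# Budget check: a ~2,000-character response fits within maxTokens here,
# since 2000 // 4 = 500 tokens.
```

Because the rule of thumb is approximate, leave headroom in maxTokens rather than sizing it exactly to the expected response length.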

When to use this node

  • Clinical summarization
  • Data extraction
  • Natural language queries

Need help configuring this node?

Go to Settings → Connectors to set up the connection this node depends on, then reference the connector ID in the node configuration dialog.
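
Once a connector exists, its ID is referenced from the node configuration. The fragment below is a hypothetical sketch: both the key name "connectorId" and the ID value "openai-prod" are assumptions, not the actual Hydra schema.

```python
# Hypothetical node configuration referencing a connector created under
# Settings → Connectors. Key name and ID value are assumed for illustration.
node_config = {
    "connectorId": "openai-prod",  # ID of the connector this node depends on
    "model": "gpt-4o",
    "temperature": 0.3,
}
```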