nebulaflow

LLM Nodes (Agent Nodes)

LLM (Large Language Model) nodes, also called Agent nodes, let you call language models such as GPT-5.1 or Claude from within your workflows. These nodes support streaming output, tool usage, image attachments, and chat history.

Configuration

Model Selection

Prompt (Content)

System Prompt Override

Reasoning Effort

Tools (Builtin Tools)

Dangerously Allow All

Timeout

Image Attachments

Input/Output

Inputs

Outputs

Variable Usage

Template Variables

Use positional inputs in the prompt:

Analyze the following data: ${1}
Provide a summary in ${2} sentences.
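Conceptually, this substitution works like a regex replacement over the prompt template. A minimal sketch, assuming ${n} maps to the n-th positional input (`render_prompt` is illustrative, not part of nebulaflow's API):

```python
import re

def render_prompt(template: str, inputs: list[str]) -> str:
    """Replace ${1}, ${2}, ... with the corresponding positional input.

    ${1} refers to the first input, so indices are 1-based.
    """
    def substitute(match: re.Match) -> str:
        index = int(match.group(1)) - 1
        return inputs[index]

    return re.sub(r"\$\{(\d+)\}", substitute, template)

prompt = render_prompt(
    "Analyze the following data: ${1}\nProvide a summary in ${2} sentences.",
    ["quarterly sales figures", "3"],
)
```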

Context Variables

Access workflow context (not yet implemented in the current version).

Node References

Reference previous node outputs by their connection order: the first connected node's output is available as ${1}, the second as ${2}, and so on.

Examples

Basic Text Generation

You are a helpful assistant. Write a welcome message for new users.

Data Transformation

Convert the following data to JSON format:
${1}

Code Generation

You are a code assistant. Write a Python function to ${1}.

Analysis

Analyze the following data and provide insights:
${1}

Best Practices

Prompt Engineering

  1. Be Specific: Clear, detailed prompts yield better results.
  2. Provide Examples: Include examples when possible.
  3. Use Delimiters: Separate different parts of your prompt.
  4. Specify Format: Request specific output formats (e.g., JSON, Markdown).
  5. Set Constraints: Define length and style requirements.

Error Handling

  1. Validate Input: Check data before sending to LLM.
  2. Handle Failures: Use condition nodes to detect errors.
  3. Retry Logic: Implement retries for transient failures.
  4. Timeouts: Set appropriate timeout values.
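The retry advice above can be sketched as a small wrapper with exponential backoff. A minimal, hypothetical example; the function names and defaults are illustrative, not nebulaflow APIs:

```python
import time

def call_with_retries(call, max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky LLM call with exponential backoff.

    `call` is any zero-argument function; transient failures are assumed
    to raise an exception. The final failure is re-raised to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, ... between attempts.
            time.sleep(base_delay * 2 ** attempt)
```

In a workflow this would wrap the LLM request itself; combine it with a condition node downstream to route unrecoverable failures.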

Performance

  1. Batch Prompts: Combine related queries when possible.
  2. Cache Results: Store frequently used responses.
  3. Use Smaller Models: For simple tasks, use faster models.
  4. Optimize Tokens: Be concise to reduce costs.

Security

  1. Never Send Secrets: Don’t include API keys or passwords.
  2. Sanitize Input: Clean user data before processing.
  3. Rate Limiting: Prevent abuse with rate limits.
  4. Audit Logging: Log all LLM interactions.
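Sanitizing input before it reaches a prompt might look like the following sketch. The patterns are examples only, assuming OpenAI-style `sk-` keys and `key=value` secrets; extend them for the formats you actually handle:

```python
import re

# Hypothetical patterns; adapt to your own secret formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # OpenAI-style API keys
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),
]

def sanitize(text: str) -> str:
    """Redact obvious secrets before text is sent to an LLM."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```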

Common Patterns

Chain of Thought

Start → LLM (Reasoning) → LLM (Refinement) → End

Data Extraction

Input → LLM (Extract) → Transform → End

Content Generation

Prompt → LLM (Generate) → Validate → LLM (Edit) → End

Analysis Pipeline

Data → LLM (Analyze) → Condition → LLM (Report) → End

Troubleshooting

Model Not Available

Poor Output Quality

High Token Usage

Timeout Errors

Amp SDK Not Available

AMP_API_KEY Not Set

Integration with Other Nodes

Before LLM

After LLM

Configuration Example

llm_node:
  title: "My Agent"
  model: "openai/gpt-5.1"
  content: "Analyze the following data: ${1}"
  systemPromptTemplate: "You are a data analyst."
  reasoningEffort: "medium"
  disabledTools: ["Bash"]  # Disable Bash tool
  dangerouslyAllowAll: false
  timeoutSec: 300
  attachments:
    - kind: "image"
      source: "file"
      path: "screenshots/error.png"

Next Steps