NebulaFlow provides powerful integration capabilities to connect with external services, APIs, and LLM providers. This guide covers the available integrations and how to configure them.
NebulaFlow supports three kinds of integration:

- LLM providers: connect to Large Language Models for intelligent processing.
- Shell commands: execute shell commands and scripts through CLI nodes.
- External APIs: connect to external APIs, typically via CLI nodes and tools such as curl.
The Amp SDK is the primary LLM provider for NebulaFlow.
Environment Variable:
export AMP_API_KEY="your_amp_api_key_here"
Workspace Settings:
Create .nebulaflow/settings.json in your workspace root:
{
"nebulaflow": {
"settings": {
"amp.dangerouslyAllowAll": true,
"amp.experimental.commandApproval.enabled": false,
"amp.commands.allowlist": ["*"],
"amp.commands.strict": false,
"internal.primaryModel": "openrouter/anthropic/claude-3-5-sonnet"
}
}
}
The Amp SDK supports various settings that can be configured:
| Setting | Type | Description |
|---|---|---|
| `amp.dangerouslyAllowAll` | boolean | Bypass safety checks (use with caution) |
| `amp.experimental.commandApproval.enabled` | boolean | Enable command approval workflow |
| `amp.commands.allowlist` | string[] | List of allowed commands (`["*"]` for all) |
| `amp.commands.strict` | boolean | Strict command validation |
| `internal.primaryModel` | string | Default model ID |
| `openrouter.models` | array | OpenRouter model configurations |
You can configure multiple models in your workspace settings:
{
"nebulaflow": {
"settings": {
"openrouter.models": [
{
"model": "openrouter/anthropic/claude-3-5-sonnet",
"provider": "anthropic",
"maxOutputTokens": 4096,
"contextWindow": 200000
},
{
"model": "openrouter/openai/gpt-5.2-codex",
"provider": "openai",
"maxOutputTokens": 8192,
"contextWindow": 128000
}
]
}
}
}
Example:
Text Node (input: "What is 2+2?")
└── LLM Node (model: gpt-4, prompt: "Calculate: ${1}")
└── Preview Node (display result)
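The LLM node in that flow could be defined roughly as follows (a sketch using the LLMNode fields documented later in this guide; the surrounding Text and Preview nodes are omitted):

{
  type: NodeType.LLM,
  data: {
    title: 'Calculator',
    content: 'Calculate: ${1}',   // ${1} resolves to the Text node's output
    model: { id: 'gpt-4' }
  }
}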
OpenRouter SDK provides access to multiple LLM providers through a single API.
Environment Variable:
export OPENROUTER_API_KEY="your_openrouter_api_key_here"
Workspace Settings:
{
"nebulaflow": {
"settings": {
"openrouter.key": "sk-or-...",
"openrouter.models": [
{
"model": "openrouter/anthropic/claude-3-5-sonnet",
"provider": "anthropic"
}
]
}
}
}
OpenRouter supports models from various providers:
Models are loaded automatically when you request them:
Example model IDs:
- `openrouter/anthropic/claude-3-5-sonnet`
- `openrouter/openai/gpt-4o`
- `openrouter/google/gemini-pro`

LLM nodes use the following interface:

interface LLMNode {
type: NodeType.LLM
data: {
title: string
content: string // Prompt template
model: { id: string; title?: string }
reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
systemPromptTemplate?: string
disabledTools?: string[]
timeoutSec?: number
dangerouslyAllowAll?: boolean
attachments?: AttachmentRef[]
}
}
{
type: NodeType.LLM,
data: {
title: 'Code Review Agent',
content: 'Review the following code: ${1}',
model: { id: 'openrouter/anthropic/claude-3-5-sonnet' },
reasoningEffort: 'high',
systemPromptTemplate: 'You are a senior code reviewer.',
disabledTools: ['bash', 'filesystem'],
timeoutSec: 300,
dangerouslyAllowAll: false,
attachments: [
{
id: 'image1',
kind: 'image',
source: 'file',
path: '/path/to/diagram.png'
}
]
}
}
Use template variables to reference upstream node outputs:
# Basic variable reference
"Process this: ${1}"
# Multiple variables
"Analyze ${1} and summarize: ${2}"
# Named variables (if using Variable nodes)
"User query: ${userQuery}"
LLM nodes support tool/function calling:
// Tools are automatically resolved via Amp SDK
// Disabled tools prevent certain capabilities
disabledTools: ['bash', 'filesystem']
Control the reasoning effort level with the `reasoningEffort` field, one of `minimal`, `low`, `medium`, or `high`.
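For example, a quick classification step might run at low effort while a review step uses high effort (a sketch using fields from the LLMNode interface above):

{
  type: NodeType.LLM,
  data: {
    title: 'Quick Classifier',
    content: 'Classify the sentiment of: ${1}',
    model: { id: 'openrouter/anthropic/claude-3-5-sonnet' },
    reasoningEffort: 'low'   // 'minimal' | 'low' | 'medium' | 'high'
  }
}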
Support for image attachments:
attachments: [
{
id: 'image1',
kind: 'image',
source: 'file',
path: '/path/to/image.png',
altText: 'Diagram showing workflow'
}
]
CLI nodes execute shell commands via Node.js child_process.
interface CLINode {
type: NodeType.CLI
data: {
title: string
content: string // Command to execute
mode?: 'command' | 'script'
shell?: 'bash' | 'sh' | 'zsh' | 'pwsh' | 'cmd'
safetyLevel?: 'safe' | 'advanced'
streamOutput?: boolean
stdin?: {
source?: 'none' | 'parents-all' | 'parent-index' | 'literal'
parentIndex?: number
literal?: string
stripCodeFences?: boolean
normalizeCRLF?: boolean
}
env?: {
exposeParents?: boolean
names?: string[]
static?: Record<string, string>
}
flags?: {
exitOnError?: boolean
unsetVars?: boolean
pipefail?: boolean
noProfile?: boolean
nonInteractive?: boolean
executionPolicyBypass?: boolean
}
}
}
{
type: NodeType.CLI,
data: {
title: 'Git Status',
content: 'git status',
mode: 'command',
shell: 'bash',
streamOutput: true
}
}
{
type: NodeType.CLI,
data: {
title: 'Process File',
content: 'cat ${1} | grep "error"',
mode: 'command',
stdin: {
source: 'parent-index',
parentIndex: 0
}
}
}
{
type: NodeType.CLI,
data: {
title: 'Run Script',
content: '#!/bin/bash\necho "Hello from script"\nls -la',
mode: 'script',
shell: 'bash'
}
}
{
type: NodeType.CLI,
data: {
title: 'Build with Env',
content: 'npm run build',
env: {
static: {
NODE_ENV: 'production',
API_KEY: '${apiKey}'
},
exposeParents: true,
names: ['PATH', 'HOME']
}
}
}
CLI nodes run at one of two safety levels (`safetyLevel: 'safe' | 'advanced'`). Safe Mode is the default; Advanced Mode permits escalated flags such as `executionPolicyBypass: true`.
CLI nodes require approval by default: execution is held in the `pending_approval` state until the command is approved.
Bypass approval:
{
data: {
dangerouslyAllowAll: true,
flags: { executionPolicyBypass: true }
}
}
Connect to REST APIs using CLI nodes with curl or similar tools.
{
type: NodeType.CLI,
data: {
title: 'Fetch Data',
content: 'curl https://api.example.com/data',
streamOutput: true
}
}
{
type: NodeType.CLI,
data: {
title: 'Send Data',
content: 'curl -X POST -H "Content-Type: application/json" -d \'${1}\' https://api.example.com/data',
stdin: {
source: 'parent-index',
parentIndex: 0
}
}
}
{
type: NodeType.CLI,
data: {
title: 'Authenticated Request',
content: 'curl -H "Authorization: Bearer ${apiKey}" https://api.example.com/protected',
env: {
static: {
apiKey: '${apiKey}'
}
}
}
}
{
type: NodeType.CLI,
data: {
title: 'GraphQL Query',
content: 'curl -X POST -H "Content-Type: application/json" -d \'{"query": "${1}"}\' https://api.example.com/graphql',
stdin: {
source: 'parent-index',
parentIndex: 0
}
}
}
{
type: NodeType.CLI,
data: {
title: 'Process Webhook',
content: 'node /path/to/webhook-processor.js',
mode: 'script',
stdin: {
source: 'parent-index',
parentIndex: 0
}
}
}
API responses and webhook payloads can be post-processed with jq or Node.js scripts.
Configure workspace-specific settings in .nebulaflow/settings.json:
{
"nebulaflow": {
"settings": {
"amp.dangerouslyAllowAll": false,
"amp.experimental.commandApproval.enabled": true,
"amp.commands.allowlist": ["git", "npm", "node"],
"amp.commands.strict": true,
"internal.primaryModel": "openrouter/anthropic/claude-3-5-sonnet",
"openrouter.key": "sk-or-...",
"openrouter.models": [
{
"model": "openrouter/anthropic/claude-3-5-sonnet",
"provider": "anthropic",
"maxOutputTokens": 4096,
"contextWindow": 200000
},
{
"model": "openrouter/openai/gpt-5.2-codex",
"provider": "openai",
"maxOutputTokens": 8192,
"contextWindow": 128000
}
]
}
}
}
| Key | Type | Description |
|---|---|---|
| `amp.dangerouslyAllowAll` | boolean | Bypass Amp safety checks |
| `amp.experimental.commandApproval.enabled` | boolean | Enable command approval |
| `amp.commands.allowlist` | string[] | Allowed commands (`["*"]` for all) |
| `amp.commands.strict` | boolean | Strict command validation |
| `internal.primaryModel` | string | Default model ID |
| `openrouter.key` | string | OpenRouter API key |
| `openrouter.models` | array | OpenRouter model configurations |
# Amp SDK (Required for LLM nodes)
export AMP_API_KEY="your_amp_api_key_here"
# OpenRouter SDK (Optional)
export OPENROUTER_API_KEY="your_openrouter_api_key_here"
# Disable hybrid parallel execution
export NEBULAFLOW_DISABLE_HYBRID_PARALLEL=1
# Set default concurrency
export NEBULAFLOW_DEFAULT_CONCURRENCY=10
# Set per-type limits
export NEBULAFLOW_LLM_CONCURRENCY=8
export NEBULAFLOW_CLI_CONCURRENCY=8
# Set maximum safe iterations
export NEBULAFLOW_MAX_ITERATIONS=1000
Use case: Generate and execute commands
Text Node (user request)
└── LLM Node (generate command)
└── CLI Node (execute command)
└── Preview Node (display result)
Example:
"List all JavaScript files"
└── LLM: "Generate find command: ${1}"
└── CLI: "find . -name \"*.js\""
└── Preview: Display results
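A sketch of the two middle nodes in that pipeline, based on the LLM and CLI node shapes documented above (the CLI node receives the generated command as `${1}` and stays behind approval by default):

// LLM node: turn the user request into a shell command
{
  type: NodeType.LLM,
  data: {
    title: 'Command Generator',
    content: 'Generate a single find command for: ${1}. Reply with the command only.',
    model: { id: 'openrouter/anthropic/claude-3-5-sonnet' }
  }
}

// CLI node: execute whatever the LLM produced
{
  type: NodeType.CLI,
  data: {
    title: 'Run Generated Command',
    content: '${1}',
    mode: 'command',
    shell: 'bash',
    streamOutput: true
  }
}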
Use case: Fetch data and process with LLM
CLI Node (fetch API data)
└── LLM Node (analyze data)
└── Preview Node (display insights)
Example:
CLI: "curl https://api.github.com/repos/owner/repo"
└── LLM: "Summarize this repository: ${1}"
└── Preview: Display summary
Use case: Compare responses from different models
Text Node (query)
├── LLM Node (Model A)
├── LLM Node (Model B)
└── Accumulator Node (collect responses)
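For example, the two LLM branches might differ only in their model IDs (a sketch based on the LLMNode interface; the Text and Accumulator nodes are omitted):

// Both nodes receive the same query as ${1}
{
  type: NodeType.LLM,
  data: {
    title: 'Model A',
    content: '${1}',
    model: { id: 'openrouter/anthropic/claude-3-5-sonnet' }
  }
}

{
  type: NodeType.LLM,
  data: {
    title: 'Model B',
    content: '${1}',
    model: { id: 'openrouter/openai/gpt-4o' }
  }
}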
Use case: Call API based on condition
Text Node (input)
└── IF Node (check condition)
├── True: CLI (call API A)
└── False: CLI (call API B)
└── Preview Node (display result)
Never commit API keys to version control:
# Good: Use environment variables
export AMP_API_KEY="your_key"
# Bad: Hardcoding in workflows
# content: "curl -H \"Authorization: Bearer sk-...\""
Validate and sanitize inputs:
// Good: Use stdin for data
stdin: {
source: 'parent-index',
parentIndex: 0
}
// Bad: Direct string interpolation
content: "echo ${userInput}" // Risk of injection
Use approval for dangerous operations:
// Safe: Requires approval
data: {
content: "rm -rf /path/to/dir",
needsUserApproval: true
}
// Dangerous: Bypasses approval
data: {
content: "rm -rf /path/to/dir",
dangerouslyAllowAll: true
}
Secure handling of secrets:
// Good: Use environment variables
env: {
static: {
API_KEY: '${apiKey}'
}
}
// Bad: Hardcoded secrets
env: {
static: {
API_KEY: 'sk-...'
}
}
Cause: SDK not properly linked
Solution:
npm i /home/prinova/CodeProjects/upstreamAmp/sdk
Cause: Environment variable missing
Solution:
export AMP_API_KEY="your_key"
Cause: Model ID incorrect or not available
Solution: verify the model ID matches an entry configured in `openrouter.models`.
Cause: Command not in PATH
Solution:
{
data: {
content: "/full/path/to/command",
env: { exposeParents: true }
}
}
Cause: Insufficient permissions
Solution:
{
data: {
content: "chmod +x script.sh && ./script.sh",
flags: { executionPolicyBypass: true }
}
}
Cause: Command taking too long
Solution:
{
data: {
content: "long-running-command",
flags: { exitOnError: true }
}
}
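If the problem is genuinely runtime rather than exit status, one option is to bound the command with the coreutils `timeout` utility inside the node (a sketch assuming a Unix shell; 300 seconds is an arbitrary limit):

{
  data: {
    content: "timeout 300 long-running-command",   // kill the command after 300s
    flags: { exitOnError: true }
  }
}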
Cause: API endpoint unreachable
Solution: confirm the URL is correct and the endpoint is reachable from your network (for example with `curl -v`).
Cause: Invalid API key or token
Solution: verify the API key or token is valid and that the corresponding environment variable is set.
Cause: Too many requests
Solution:
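Lowering the per-type limits documented above (NEBULAFLOW_LLM_CONCURRENCY, NEBULAFLOW_CLI_CONCURRENCY) reduces request rate. Another common mitigation is retrying with exponential backoff around the failing call; a minimal sketch (the `runRequest` function is hypothetical and stands in for whatever issues the request):

async function withBackoff<T>(
  runRequest: () => Promise<T>,
  maxRetries = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await runRequest()
    } catch {
      // Wait 1s, 2s, 4s, ... before retrying
      const delayMs = 1000 * 2 ** attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  return runRequest()   // final attempt; let any error propagate
}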