This guide covers how to configure NebulaFlow for your environment, workspace, and individual workflows. Configuration includes environment variables, VS Code settings, workspace-specific settings, and node-level settings.
Environment variables are used for API keys and debugging flags. They can be set in your shell, in a .env file in your workspace root, or in your system environment.
The following variables are recognized:

- AMP_API_KEY: Required for the Amp SDK (LLM node execution). Set this to your Amp API key.
- OPENROUTER_API_KEY: Optional for OpenRouter SDK integration. Set this if you want to use OpenRouter models.
- NEBULAFLOW_DEBUG_LLM: Set to 1 to enable verbose logging for LLM node execution (warns on errors reading workspace settings).
- NEBULAFLOW_DISABLE_HYBRID_PARALLEL: When truthy, disables hybrid parallel execution (affects looped graphs).
- NEBULAFLOW_FILTER_PAUSE_SEEDS: When truthy, filters pause seeds in certain resume scenarios.
- NEBULAFLOW_SHELL_MAX_OUTPUT: Maximum characters to capture from shell command output (default: 1000000). Output is truncated if it exceeds this limit.

Linux/macOS (temporary):
export AMP_API_KEY="your_amp_api_key_here"
export OPENROUTER_API_KEY="your_openrouter_api_key_here" # optional
Linux/macOS (permanent):
Add the lines to your shell profile file (e.g., ~/.bashrc, ~/.zshrc, ~/.profile):
export AMP_API_KEY="your_amp_api_key_here"
export OPENROUTER_API_KEY="your_openrouter_api_key_here"
Windows (PowerShell):
$env:AMP_API_KEY="your_amp_api_key_here"
$env:OPENROUTER_API_KEY="your_openrouter_api_key_here"
Windows (Command Prompt):
set AMP_API_KEY=your_amp_api_key_here
set OPENROUTER_API_KEY=your_openrouter_api_key_here
Note: Environment variables set in a terminal session only apply to that session. For permanent changes, set them system-wide or use a .env file in your workspace (see below).
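The truncation behavior described for NEBULAFLOW_SHELL_MAX_OUTPUT can be approximated in plain shell. This is an illustration of the documented behavior, not the extension's actual implementation:

```shell
# Keep at most MAX characters of a command's output (sketch only).
MAX="${NEBULAFLOW_SHELL_MAX_OUTPUT:-1000000}"
printf '%s' "some long command output" | head -c "$MAX"
```

With `NEBULAFLOW_SHELL_MAX_OUTPUT=5`, the pipeline above would emit only the first five characters.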
.env File

You can also place a .env file in your workspace root (or the first workspace folder). NebulaFlow will read environment variables from this file when the extension activates.
Create a file named .env in your workspace:
AMP_API_KEY=your_amp_api_key_here
OPENROUTER_API_KEY=your_openrouter_api_key_here
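NebulaFlow loads the .env file itself on activation, but if you want the same variables in a terminal session (for example, to test a command manually), a minimal sketch is to source the file yourself. This assumes simple KEY=VALUE lines with no quoting or multiline values:

```shell
#!/bin/sh
# Export every KEY=VALUE pair from ./.env into the current session (sketch).
set -a        # auto-export every variable assigned while this is on
. ./.env      # source the file: each KEY=VALUE line becomes an env var
set +a
echo "AMP_API_KEY is ${AMP_API_KEY:+set}"
```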
NebulaFlow provides VS Code configuration options that can be set in the Settings UI (Ctrl+, or Cmd+,) or in .vscode/settings.json.
The following settings are available:

- nebulaFlow.storageScope (string, "workspace" or "user"; default: "user"): Where NebulaFlow stores its data.
  - user: Global storage in your user folder (default).
  - workspace: Storage in the current workspace (.nebulaflow/ directory).
- nebulaFlow.globalStoragePath (string, default: ""): Optional custom path for global storage.

Example .vscode/settings.json:

{
"nebulaFlow.storageScope": "workspace",
"nebulaFlow.globalStoragePath": "/path/to/custom/storage"
}
You can configure LLM settings per workspace by creating a .nebulaflow/settings.json file in your workspace root. This file can contain Amp SDK settings, model configurations, and more.
The file must be placed in the first workspace folder at .nebulaflow/settings.json.
The file must contain a nebulaflow.settings object that maps directly to Amp SDK settings keys.
{
"nebulaflow": {
"settings": {
"openrouter.key": "sk-or-...",
"internal.primaryModel": "openrouter/xiaomi/mimo-v2-flash:free"
}
}
}
- openrouter.key: OpenRouter API key (overrides the OPENROUTER_API_KEY environment variable).
- internal.primaryModel: Workspace-wide default model for LLM nodes (used when a node has no model selected).
- openrouter.models: Array of model configurations for OpenRouter models (see example below).
- amp.dangerouslyAllowAll: Boolean to auto-approve all tool calls (use with caution).
- tools.disable: Array of tool names to disable (e.g., ["bash"]).
- reasoning.effort: Default reasoning effort for LLM nodes (minimal, low, medium, high).

Example with OpenRouter model configurations:

{
"nebulaflow": {
"settings": {
"openrouter.models": [
{
"model": "openrouter/z-ai/glm-4.7-flash",
"provider": "z-ai",
"maxOutputTokens": 131000,
"contextWindow": 200000,
"temperature": 0.5
},
{
"model": "openrouter/openai/gpt-5.2-codex",
"provider": "openai",
"isReasoning": true,
"reasoning_effort": "medium",
"maxOutputTokens": 128000,
"contextWindow": 400000
}
]
}
}
}
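As a quick sanity check, you can list the model IDs declared in a settings file like the one above. This is a naive grep/sed extraction for illustration; a JSON-aware tool such as jq would be more robust:

```shell
# List the "model" values declared in .nebulaflow/settings.json (sketch).
grep -o '"model": *"[^"]*"' .nebulaflow/settings.json \
  | sed 's/.*"\([^"]*\)"$/\1/'
```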
The model for an LLM node is resolved in this order:

1. The model selected on the node itself.
2. internal.primaryModel in .nebulaflow/settings.json.
3. openai/gpt-5.1 (if neither of the above is set).

The Model combobox is populated from the Amp SDK listModels() API and also includes the configured workspace default and any OpenRouter models defined in openrouter.models.
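This fallback chain can be sketched in shell using default-value expansion; the variable names here are illustrative, not the extension's internals:

```shell
# Pick the first non-empty value: node selection, then workspace default,
# then the built-in fallback model.
node_model=""                # model selected on the node, if any
workspace_default=""         # internal.primaryModel from workspace settings
model="${node_model:-${workspace_default:-openai/gpt-5.1}}"
echo "$model"                # prints openai/gpt-5.1 when both are empty
```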
Each node type has specific configuration options. These are set in the Property Editor when a node is selected.
Common options include:

- Reasoning effort: minimal, low, medium, high (default: medium).
- Disabled tools: tool names to disable (e.g., ["bash"]).
- Execution mode: command (one-liner) or script (multiline via stdin).
- Shell: bash, sh, zsh, pwsh, etc. Defaults to the system shell.
- Input mode: none, parents-all, parent-index, literal.
- Input names: INPUT_1, INPUT_2, or custom names.
- Safety: safe (sanitization) or advanced (no sanitization). Default: safe.
- Loop mode: fixed or while.

CLI nodes (and LLM tool calls) can require approval before execution. This is controlled by the Safety and Approval settings.
When approval is required, the Right Sidebar shows a preview of the command/script and a structured summary (Mode, Shell, Safety, Stdin, Flags). You can approve or reject.
Workflows are stored in .nebulaflow/workflows/ (versioned 1.x), nodes in .nebulaflow/nodes/, and subflows in .nebulaflow/subflows/. The storage location is determined by nebulaFlow.storageScope and nebulaFlow.globalStoragePath.
Troubleshooting tips:

- If you use a .env file, verify it is in the correct workspace folder and contains the variable.
- Check that AMP_API_KEY is set correctly.
- Check that OPENROUTER_API_KEY is set or configured in .nebulaflow/settings.json.
- Check the nebulaFlow.storageScope setting.
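The checks above can be bundled into a quick diagnostic script. This is a sketch that assumes you run it from the workspace root:

```shell
#!/bin/sh
# Report the state of the common NebulaFlow configuration sources (sketch).
[ -f .env ] && echo ".env found" || echo "no .env in this folder"
[ -n "$AMP_API_KEY" ] && echo "AMP_API_KEY set" || echo "AMP_API_KEY missing"
[ -f .nebulaflow/settings.json ] && echo "workspace settings present" \
  || echo "no .nebulaflow/settings.json"
```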