Troubleshooting
This guide covers common errors and solutions when using NebulaFlow. If you encounter an issue not listed here, please open an issue on GitHub or check the FAQ.
Table of Contents
- Common Errors
- Environment Setup
- LLM Nodes
- CLI Nodes
- Workflow Execution
- Webview & Extension
- Getting Help
Common Errors
“Amp SDK not available”
Symptom: Error message “Amp SDK not available” when executing an LLM node.
Cause: The Amp SDK is not properly linked or missing from the extension’s dependencies.
Solution:
- If installed from source: Run npm install in the extension directory (/home/prinova/CodeProjects/nebulaflow) to ensure dependencies are installed.
- If building from source: Run npm run build to bundle the SDK.
- If using a pre‑built extension: Ensure the .vsix file is complete; try reinstalling the extension from the marketplace.
- Verify SDK location: Check that node_modules/@prinova/amp-sdk exists. If missing, run npm install /home/prinova/CodeProjects/upstreamAmp/sdk (see the command sequence after this list).
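A typical from-source recovery sequence, assuming the directory layout referenced above (adjust the paths to match your checkout):

cd /home/prinova/CodeProjects/nebulaflow                  # extension source directory used in this guide
npm install                                               # install declared dependencies
npm install /home/prinova/CodeProjects/upstreamAmp/sdk    # only if node_modules/@prinova/amp-sdk is still missing
npm run build                                             # bundle the SDK into the extension
ls node_modules/@prinova/amp-sdk                          # should list the SDK after a successful install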
“AMP_API_KEY is not set”
Symptom: Error message “AMP_API_KEY is not set” when executing an LLM node.
Cause: The environment variable for the Amp SDK is not set in the VS Code process environment.
Solution:
- Set the AMP_API_KEY environment variable in your shell before launching VS Code:
export AMP_API_KEY="your-api-key-here"
- Restart VS Code after setting the variable (environment changes are not picked up automatically).
- Using a .env file: Create a .env file in your workspace root with AMP_API_KEY=your-api-key-here. Ensure the workspace folder is open (see the sketch after this list).
- Check variable scope: If you use a terminal profile or shell configuration (.bashrc, .zshrc), make sure the variable is exported.
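A minimal sketch of the shell setup, assuming a bash or zsh environment; launching VS Code from the same shell ensures it inherits the variable:

export AMP_API_KEY="your-api-key-here"   # add this line to ~/.bashrc or ~/.zshrc to persist it
code .                                   # start VS Code from this shell so the variable is inherited

Alternatively, a .env file in the workspace root containing AMP_API_KEY=your-api-key-here serves the same purpose when the workspace folder is open.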
“CLI Node execution failed”
Symptom: CLI node fails with an error message like “CLI Node execution failed: …”.
Cause: Command not found, permission issues, or safety sanitization blocking the command.
Solution:
- Verify command exists: Ensure the command is in your system’s PATH.
- Check shell configuration: The node uses the shell specified in its properties (default: bash on Linux/macOS, pwsh on Windows). Ensure the shell is installed and configured correctly.
- Safety level: In command mode, the safe safety level uses a denylist. If your command is blocked, switch to the advanced safety level (use with caution).
- Script mode: If using script mode, verify stdin source settings and that the script is valid for the chosen shell.
- File permissions: On Linux/macOS, ensure the script file has execute permissions (chmod +x script.sh); see the checks after this list.
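A few quick checks, assuming a Linux/macOS shell; replace the placeholder command name with whatever your CLI node actually runs:

command -v your-command   # placeholder name; prints the resolved path if the command is on PATH
bash --version            # confirm the shell selected in the node properties is installed
chmod +x script.sh        # grant execute permission to a script the node invokes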
“LLM Node requires a non-empty prompt”
Symptom: LLM node fails with this error.
Cause: The node’s content (prompt) is empty, or all inputs are empty.
Solution:
- Check node content: Open the node’s property editor and ensure there is text in the “Prompt” field.
- Check inputs: If the prompt uses template variables (e.g., ${1}), ensure the parent node produces non‑empty output (see the example prompt after this list).
- Use a Text node: If you need static text, add a Text node before the LLM node.
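A minimal prompt sketch that references the first parent’s output through the ${1} template variable described above; the wording is illustrative only:

Summarize the following text in three bullet points:
${1}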
“Workflow Error: …”
Symptom: A generic workflow error appears during execution.
Cause: An unexpected error occurred in a node (e.g., attachment loading failure, network error, invalid configuration).
Solution:
- Check the error message: The error details are shown in the VS Code error notification.
- Identify the failing node: Look at the execution status in the webview; the node will be highlighted with an error border.
- Inspect node configuration: Open the node’s property editor and verify all settings.
- Use Preview nodes: Add Preview nodes before and after the suspected node to inspect data flow.
- Enable debug logging: Set NEBULAFLOW_DEBUG_LLM=1 (for LLM nodes) or NEBULAFLOW_DEBUG=1 for general debugging, as shown below.
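A sketch of enabling debug logging before launch, using the variable names from this guide:

export NEBULAFLOW_DEBUG=1        # general debug logging
export NEBULAFLOW_DEBUG_LLM=1    # additional logging for LLM nodes
code .                           # relaunch VS Code from this shell so it inherits the variables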
“Resume failed: node not found”
Symptom: Trying to resume a paused workflow fails with this message.
Cause: The node ID stored in the pause state does not exist in the current workflow graph (e.g., after editing the workflow).
Solution:
- Restart the workflow: Execute the workflow from the beginning instead of resuming.
- Check node IDs: If you modified the workflow, ensure you haven’t deleted or changed the node that was paused.
- Clear pause state: Currently, there is no UI to clear pause state; restarting the workflow is the only option.
Environment Setup
Extension fails to activate
Symptom: The NebulaFlow extension does not appear in the Extensions view or fails to load.
Solution:
- VS Code version: Ensure you are using VS Code ≥ 1.90.0 (check Help → About).
- Extension logs: Open the Output panel (View → Output) and select “NebulaFlow” from the dropdown to see activation logs.
- Reload window: Try Developer: Reload Window (Ctrl+Shift+P → “Developer: Reload Window”).
- Reinstall extension: Uninstall and reinstall from the marketplace.
Webview won’t load (blank canvas)
Symptom: Opening the workflow editor shows a blank canvas or an error.
Solution:
- Build webview assets: If you installed from source, run npm run build:webview or npm run build (see the commands after this list).
- Check webview files: Verify dist/webviews/workflow.html exists.
- Developer Tools: Open VS Code Developer Tools (Help → Toggle Developer Tools) and check the Console for errors.
- Extension reload: Reload the window (Ctrl+Shift+P → “Developer: Reload Window”).
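From a source checkout, the following commands rebuild and verify the webview bundle (script and file names as referenced above):

npm run build:webview              # or: npm run build
ls dist/webviews/workflow.html     # should exist after a successful build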
LLM node fails with model errors
Symptom: LLM node returns an error about model not available or invalid API key.
Solution:
- Verify API key: Ensure AMP_API_KEY is set correctly (see above).
- Check model name: Verify the selected model is supported by the Amp SDK or OpenRouter.
- OpenRouter configuration: For OpenRouter models, ensure OPENROUTER_API_KEY is set or configured in .nebulaflow/settings.json (see the sketch after this list).
- API rate limits: Wait a few minutes and try again; check the provider’s rate limits.
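A sketch of the two OpenRouter configuration options; the environment variable name comes from this guide, while the exact settings.json key is an assumption to verify against the configuration reference:

export OPENROUTER_API_KEY="your-openrouter-key"   # option 1: environment variable
# option 2: .nebulaflow/settings.json in the workspace (key name is an assumption; check the settings documentation)
# { "openRouterApiKey": "your-openrouter-key" }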
CLI node fails to execute commands
Symptom: Shell command exits with error or is blocked.
Solution:
- Command exists: Verify the command is in your system’s PATH.
- Shell configuration: Check that the shell (bash, zsh, pwsh, etc.) is correctly installed and configured.
- Safety settings: In command mode, the safe safety level uses a denylist. If your command is blocked, switch to the advanced safety level (use with caution).
- Approval required: If the node is set to require approval, you must approve it in the VS Code notification.
- Script mode: For script mode, verify stdin source settings and that the script is valid for the chosen shell.
Permission denied on Linux/macOS
Symptom: CLI node fails with “Permission denied” error.
Solution:
- File permissions: Ensure the script file has execute permissions (chmod +x script.sh).
- User permissions: Verify your user has permission to execute the command.
- Safety level: Use the safe safety level for command mode to avoid sanitization issues.
Extension not appearing in Extensions view
Symptom: NebulaFlow does not appear in the Extensions marketplace.
Solution:
- Refresh Extensions view: Click the refresh icon in the Extensions view.
- Network connection: Ensure you have internet access.
- VS Code version: Update VS Code to the latest version.
- Install from VSIX: Download the .vsix file from GitHub releases and install manually (Extensions → … → Install from VSIX), as shown below.
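The same installation can be done from the command line with the VS Code CLI; the filename here is only an example and should match the release you downloaded:

code --install-extension nebulaflow-1.2.3.vsix   # example filename; use the file you downloaded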
LLM Nodes
Model not available
Symptom: LLM node fails because the selected model is not available.
Solution:
- Check API key: Ensure AMP_API_KEY is set correctly.
- Verify model name: Spelling must match exactly (case‑sensitive). Check the Amp SDK or OpenRouter documentation for supported models.
- API rate limits: Wait and retry; some providers have rate limits.
- Network connectivity: Ensure you can reach the API endpoint.
Poor output quality
Symptom: LLM responses are inconsistent or not as expected.
Solution:
- Adjust reasoning effort: Lower the reasoning effort (e.g., minimal or low) for more consistent output.
- Improve prompt clarity: Be explicit about the desired output format and constraints.
- Add examples: Include examples in the prompt or system prompt.
- Use system prompt: Define the role and constraints in the system prompt template.
High token usage
Symptom: LLM nodes consume many tokens, increasing cost.
Solution:
- Reduce reasoning effort: Lower the reasoning effort setting.
- Use concise prompts: Remove unnecessary text from prompts.
- Implement summarization: Use a summarization step before sending large inputs.
- Cache repeated queries: Use Variable nodes to store and reuse results.
Timeout errors
Symptom: LLM node times out after the configured timeout (default 300 seconds).
Solution:
- Increase timeout: Set a higher timeoutSec in the node properties (see the example after this list).
- Check network connectivity: Ensure your internet connection is stable.
- Verify API endpoint: The Amp SDK or OpenRouter endpoint should be reachable.
- Consider model response time: Some models are slower; choose a faster model if possible.
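As a concrete example (the property name comes from this guide; it is set in the node’s property editor, not in a file):

timeoutSec: 600   # seconds; doubles the default of 300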
Amp SDK not available (LLM node)
Symptom: Error “Amp SDK not available” specifically for LLM nodes.
Solution:
- Install SDK: Run npm install /home/prinova/CodeProjects/upstreamAmp/sdk in the extension directory.
- Restart VS Code: After installation, reload the window.
- Check node_modules: Verify node_modules/@prinova/amp-sdk exists.
AMP_API_KEY not set (LLM node)
Symptom: Error “AMP_API_KEY is not set” for LLM nodes.
Solution:
- Set the environment variable before launching VS Code.
- Restart VS Code after setting the variable.
- Use a .env file in your workspace root.
CLI Nodes
Command not found
Symptom: CLI node fails because the command is not in PATH.
Solution:
- Check PATH: Run which <command> in a terminal to verify, as shown below.
- Use absolute path: Provide the full path to the executable in the node content.
- Shell configuration: Ensure the shell used by the node has the correct PATH.
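Quick checks from a terminal, assuming bash; substitute your actual command for the placeholder:

which your-command        # placeholder name; prints the absolute path if the command is found
echo "$PATH"              # confirm the directory containing the command is listed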
Safety sanitization blocking valid commands
Symptom: Command is blocked by the denylist in safe safety level.
Solution:
- Switch to advanced safety: In node properties, set safetyLevel to advanced. Warning: This disables sanitization; only use with trusted commands.
- Modify command: If possible, rewrite the command to avoid blocked patterns (e.g., use ls instead of rm).
Script mode issues
Symptom: Script fails to execute or produce output.
Solution:
- Verify stdin source: Check the stdin configuration (parents‑all, parent‑index, literal); see the minimal script after this list.
- Check shell compatibility: Ensure the script is compatible with the selected shell (bash, pwsh, etc.).
- Strip code fences: If your input contains Markdown code fences, enable stripCodeFences.
- Normalize CRLF: If mixing Windows/Unix line endings, enable normalizeCRLF.
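A minimal script-mode sketch for bash that reads whatever the parent nodes produced from stdin (assuming the stdin source is set to parents‑all, as described above):

#!/usr/bin/env bash
# Reads parent output from stdin and reports its size; replace with your real processing.
set -euo pipefail
input="$(cat)"
printf 'Received %d characters\n' "${#input}"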
Approval required but not prompted
Symptom: CLI node is set to require approval, but no prompt appears.
Solution:
- Check notification settings: Ensure VS Code notifications are enabled.
- Check execution status: The node may be waiting for approval; look for a notification in the VS Code notification center.
- Restart VS Code: Sometimes notifications get stuck; restart VS Code.
Workflow Execution
Workflow doesn’t execute
Symptom: Clicking “Execute” does nothing.
Solution:
- Check API keys: Ensure AMP_API_KEY is set (for LLM nodes).
- Verify connections: All nodes must have proper connections (edges) from their dependencies.
- Node validation: Check each node’s configuration for errors (highlighted in red).
- Start node: Ensure there is at least one node with no incoming edges (a start node).
Node stuck in “pending” state
Symptom: Node never starts executing.
Solution:
- Check dependencies: Ensure all parent nodes have completed successfully.
- Circular dependencies: Verify there are no cycles in the workflow graph (except loops).
- Node disabled: Check if the node is disabled (node settings).
- Parallel execution: Some nodes may be waiting for parallel branches; add Preview nodes to inspect.
Workflow execution hangs
Symptom: Workflow stops responding or never completes.
Solution:
- Check for infinite loops: Ensure Loop Start/Loop End nodes have proper termination conditions.
- LLM timeout: Increase timeout for LLM nodes.
- CLI command hanging: Ensure CLI commands are non‑interactive or use appropriate flags.
- Approval waiting: If a node is waiting for approval, approve it in the VS Code notification.
Subflow not working
Symptom: Subflow node fails or doesn’t execute.
Solution:
- Subflow saved: Ensure the subflow is saved and has valid input/output nodes.
- Subflow ID: Verify the subflow ID matches the one referenced in the Subflow Node.
- Port mappings: Check that input/output port connections are correct.
- Open subflow: Double‑click the subflow node to edit and debug internally.
Pause/resume issues
Symptom: Pausing a workflow doesn’t work or resuming fails.
Solution:
- Pause button: Click the Pause button in the execution toolbar.
- Resume button: Click Resume to continue from the paused node.
- Node not found: If resuming fails with “node not found”, restart the workflow from the beginning.
- Clear state: Currently, there is no UI to clear pause state; restarting is the only option.
Webview & Extension
Webview shows blank canvas
Symptom: The workflow editor is empty.
Solution:
- Build webview assets: Run npm run build:webview or npm run build.
- Check console errors: Open VS Code Developer Tools (Help → Toggle Developer Tools) and look for errors.
- Reload window: Developer: Reload Window (Ctrl+Shift+P).
- Reinstall extension: If problems persist, uninstall and reinstall the extension.
Node errors not displayed
Symptom: Node fails but no error border or message appears.
Solution:
- Check VS Code notifications: Errors may appear as VS Code notifications.
- Enable debug logging: Set the NEBULAFLOW_DEBUG=1 environment variable.
- Check execution logs: Look at the Output panel for NebulaFlow logs.
- Refresh webview: Reload the window.
Property editor not updating
Symptom: Changing node properties doesn’t reflect in the editor.
Solution:
- Select node: Click the node to open its property editor.
- Refresh: Click elsewhere and reselect the node.
- Reload window: Developer: Reload Window.
Extension crashes or becomes unresponsive
Symptom: VS Code becomes slow or the extension stops working.
Solution:
- Check VS Code logs: Help → Toggle Developer Tools → Console.
- Disable other extensions: Temporarily disable other extensions to rule out conflicts.
- Update VS Code: Ensure you are using the latest stable version.
- File an issue: Provide steps to reproduce on GitHub.
Workflow execution is slow
Symptom: Workflow takes longer than expected.
Solution:
- Parallel execution: NebulaFlow runs nodes in parallel when possible; ensure your workflow graph allows parallelism.
- LLM model choice: Use faster models (e.g., smaller models) for quick responses.
- Reduce LLM calls: Combine multiple LLM nodes into one where possible.
- CLI command efficiency: Optimize shell commands for speed.
High token usage
Symptom: LLM nodes consume many tokens, increasing cost.
Solution:
- Reduce reasoning effort: Lower the reasoning effort setting.
- Use concise prompts: Remove unnecessary text from prompts.
- Implement summarization: Add a summarization step before sending large inputs.
- Cache repeated queries: Use Variable nodes to store and reuse results.
Memory usage high
Symptom: VS Code or extension uses excessive memory.
Solution:
- Close unused workflows: Close workflow editor tabs when not in use.
- Restart VS Code: Periodically restart VS Code to clear memory.
- Check for memory leaks: If reproducible, file an issue with steps.
Getting Help
If you cannot resolve your issue with this guide:
- Check the FAQ: See the FAQ for answers to additional common questions.
- Review documentation: Browse the User Guide and API Reference.
- GitHub Issues: Search existing issues on GitHub and open a new one if needed.
- Community: Join the community discussions on GitHub or Discord (link in README).
Last Updated: 2026-01-21