NebulaFlow is a VS Code extension that provides a visual workflow editor for designing and executing LLM+CLI workflows. It orchestrates workflow execution, manages the webview interface, and handles communication between the extension host and the workflow UI.
The extension activates when VS Code loads it. During activation, it:

- Creates a `VSCodeHost` adapter that provides access to VS Code APIs
- Registers the `nebulaFlow.openWorkflow` command

When the extension deactivates, it cancels all active workflows to ensure a clean shutdown.
`nebulaFlow.openWorkflow`

Description: Opens the NebulaFlow workflow editor in a new webview panel.
Usage: Run from VS Code Command Palette (Ctrl/Cmd+Shift+P) and search for “NebulaFlow: Open Workflow Editor”
Behavior:

- Creates a webview panel and loads its content from `dist/webviews/workflow.html`

The extension provides the following configuration options:
`nebulaFlow.storageScope` (string, one of `"workspace"` or `"user"`, default `"user"`)

- `workspace`: store in the current workspace (`.nebulaflow/` directory)
- `user`: store in the user home directory (global storage)

`nebulaFlow.globalStoragePath` (string, default `""`, application-scoped)

- Custom path for global storage; when empty, `.nebulaflow/` is used

The extension also reads these environment variables:

- `AMP_API_KEY` (required for LLM nodes)
- `OPENROUTER_API_KEY` (optional)

The extension and webview communicate using a custom message protocol defined in `workflow/Core/Contracts/Protocol.ts`. All messages are typed for type safety.
Webview (React UI) ←→ Extension (VS Code)
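Because all messages are typed, the extension host can narrow incoming `postMessage` values with a discriminated union and a type guard. A minimal sketch, with an abridged union and an assumed envelope shape (the real definitions live in `workflow/Core/Contracts/Protocol.ts`):

```typescript
// Hypothetical envelope: a discriminated union over the command name.
// Only a few commands are shown; Protocol.ts defines the real set.
type WebviewRequest =
  | { command: "open_external_link"; url: string }
  | { command: "load_workflow" }
  | { command: "abort_workflow" }

// Shallow narrowing of an untyped message from the webview: only verifies
// that a string command field is present before dispatch.
function isWebviewRequest(msg: unknown): msg is WebviewRequest {
  return (
    typeof msg === "object" &&
    msg !== null &&
    typeof (msg as { command?: unknown }).command === "string"
  )
}
```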
These are requests from the webview to the extension:
| Command | Description | Payload |
|---|---|---|
| `open_external_link` | Open URL in external browser | `{ url: string }` |
| `save_workflow` | Save workflow to disk | `{ nodes, edges, state }` |
| `load_workflow` | Load workflow from disk | - |
| `load_last_workflow` | Load last opened workflow | - |
| `execute_workflow` | Start workflow execution | `{ nodes, edges, state, resume }` |
| `abort_workflow` | Stop all workflow execution | - |
| `pause_workflow` | Pause running workflow | - |
| `get_models` | Request available LLM models | - |
| `save_customNode` | Save custom node definition | `{ node: WorkflowNodeDTO }` |
| `delete_customNode` | Delete custom node | `{ nodeId: string }` |
| `rename_customNode` | Rename custom node | `{ oldNodeTitle, newNodeTitle }` |
| `get_custom_nodes` | Request custom nodes | - |
| `get_storage_scope` | Request storage scope info | - |
| `toggle_storage_scope` | Toggle storage scope | - |
| `reset_results` | Clear execution results | - |
| `clear_workflow` | Clear current workflow | - |
| `create_subflow` | Create new subflow | `{ subflow: SubflowDefinitionDTO }` |
| `get_subflow` | Get subflow by ID | `{ id: string }` |
| `get_subflows` | List all subflows | - |
| `duplicate_subflow` | Duplicate subflow | `{ id, nodeId }` |
| `copy_selection` | Copy selected nodes/edges | `{ nodes, edges }` |
| `paste_selection` | Paste clipboard content | - |
| `execute_node` | Execute single node | `{ node, inputs, runId, variables }` |
| `llm_node_chat` | Chat with LLM node | `{ node, threadID, message, mode }` |
| `node_approved` | Approve node execution | `{ nodeId, modifiedCommand }` |
| `node_rejected` | Reject node execution | `{ nodeId, reason }` |
| `calculate_tokens` | Calculate token count | `{ text, nodeId }` |
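On the webview side, requests like these are typically sent through `acquireVsCodeApi().postMessage()`. A minimal sketch of a helper that builds the message object, assuming a flat `{ command, ...payload }` envelope (`makeRequest` and the envelope shape are illustrative, not from Protocol.ts):

```typescript
// Hypothetical helper: merges the command name with its payload fields.
function makeRequest(
  command: string,
  payload: Record<string, unknown> = {},
): Record<string, unknown> {
  return { command, ...payload }
}

// In the webview this would be posted with something like:
//   const vscode = acquireVsCodeApi()
//   vscode.postMessage(makeRequest("open_external_link", { url: "https://example.com" }))
```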
These are notifications from the extension to the webview:
| Event | Description | Payload |
|---|---|---|
| `workflow_loaded` | Workflow data loaded from disk | `{ nodes, edges, state }` |
| `workflow_saved` | Workflow successfully saved | `{ path? }` |
| `workflow_save_failed` | Workflow save failed | `{ error? }` |
| `execution_started` | Workflow execution started | - |
| `execution_completed` | Workflow execution completed | `{ stoppedAtNodeId? }` |
| `execution_paused` | Workflow execution paused | `{ stoppedAtNodeId? }` |
| `node_execution_status` | Node execution status update | `{ nodeId, status, result, command }` |
| `token_count` | Token count result | `{ count, nodeId }` |
| `node_assistant_content` | LLM assistant content stream | `{ nodeId, threadID?, content, mode? }` |
| `node_output_chunk` | CLI output stream | `{ nodeId, chunk, stream }` |
| `models_loaded` | Available LLM models loaded | `{ models: Model[] }` |
| `provide_custom_nodes` | Custom nodes loaded | `{ nodes: WorkflowNodeDTO[] }` |
| `storage_scope` | Storage scope information | `{ scope, basePath? }` |
| `subflow_saved` | Subflow saved successfully | `{ id }` |
| `provide_subflow` | Subflow data provided | `{ subflow: SubflowDefinitionDTO }` |
| `provide_subflows` | List of subflows provided | `{ id, title, version }[]` |
| `subflow_copied` | Subflow duplicated | `{ nodeId, oldId, newId }` |
| `clipboard_paste` | Clipboard paste result | `{ nodes, edges }` |
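On receipt, the webview typically routes these events to handlers keyed by event name. A minimal sketch, assuming an `{ event, payload }` envelope (the dispatcher and envelope shape are illustrative; Protocol.ts defines the real types):

```typescript
// Hypothetical event envelope and dispatcher.
interface EventMessage {
  event: string
  payload?: unknown
}

type EventHandlers = Partial<Record<string, (payload: unknown) => void>>

// Returns true when a handler consumed the event, false for unknown events.
function dispatchEvent(msg: EventMessage, handlers: EventHandlers): boolean {
  const handler = handlers[msg.event]
  if (handler === undefined) return false
  handler(msg.payload)
  return true
}
```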
```typescript
interface WorkflowNodeDTO {
  id: string
  type: string
  data: Record<string, unknown>
  position: { x: number; y: number }
  selected?: boolean
}

interface EdgeDTO {
  id: string
  source: string
  target: string
  sourceHandle?: string
  targetHandle?: string
}

interface WorkflowPayloadDTO {
  nodes?: WorkflowNodeDTO[]
  edges?: EdgeDTO[]
  state?: WorkflowStateDTO
  resume?: ResumeDTO
}

interface NodeExecutionPayload {
  nodeId: string
  status: 'running' | 'completed' | 'error' | 'interrupted' | 'pending_approval'
  result?: string
  multi?: string[] // For multi-output nodes
  command?: string
}
```
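The webview can fold these status updates into a per-node map as they stream in. A minimal sketch mirroring `NodeExecutionPayload` (`applyStatus` is an illustrative name, not from the source):

```typescript
type NodeStatus = "running" | "completed" | "error" | "interrupted" | "pending_approval"

interface NodeExecutionPayload {
  nodeId: string
  status: NodeStatus
  result?: string
  multi?: string[]
  command?: string
}

// Immutable update: returning a new record keeps React change detection cheap.
function applyStatus(
  results: Record<string, NodeExecutionPayload>,
  update: NodeExecutionPayload,
): Record<string, NodeExecutionPayload> {
  return { ...results, [update.nodeId]: update }
}
```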
Streamed content from LLM nodes:
```typescript
type AssistantContentItem =
  | { type: 'text'; text: string }
  | { type: 'user_message'; text: string }
  | { type: 'thinking'; thinking: string }
  | { type: 'tool_use'; id: string; name: string; inputJSON?: string }
  | { type: 'tool_result'; toolUseID: string; resultJSON?: string }
  | { type: 'server_tool_use'; name: string; inputJSON?: string }
  | { type: 'server_web_search_result'; query?: string; resultJSON?: string }
```
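A UI rendering this stream can exhaustively switch on `type`. A minimal sketch that flattens a few item kinds into display text (`renderTranscript` is illustrative, and the union is abridged from the full definition above):

```typescript
// Abridged union; see the full AssistantContentItem definition above.
type AssistantContentItem =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "tool_use"; id: string; name: string; inputJSON?: string }
  | { type: "tool_result"; toolUseID: string; resultJSON?: string }

function renderTranscript(items: AssistantContentItem[]): string {
  return items
    .map((item) => {
      switch (item.type) {
        case "text":
          return item.text
        case "thinking":
          return `[thinking] ${item.thinking}`
        case "tool_use":
          return `[tool: ${item.name}]`
        case "tool_result":
          return `[result for ${item.toolUseID}]`
      }
    })
    .join("\n")
}
```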
NebulaFlow supports two storage scopes:
- **User scope**: `~/.nebulaflow/` (or a custom path via `globalStoragePath`)
- **Workspace scope**: `<workspace-root>/.nebulaflow/`

Custom nodes are user-defined node types that extend NebulaFlow's capabilities:

- Each is stored as a `WorkflowNodeDTO` with a custom `type` and `data`

Workflow execution state is persisted across sessions:
```typescript
interface WorkflowStateDTO {
  nodeResults: Record<string, NodeSavedState>
  ifElseDecisions?: Record<string, 'true' | 'false'>
  nodeAssistantContent?: Record<string, AssistantContentItem[]>
  nodeThreadIDs?: Record<string, string>
}
```
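When resuming, the executor can skip nodes whose saved results are already complete. A minimal sketch, assuming `NodeSavedState` carries a `status` field (its real shape, like `WorkflowStateDTO`, lives in Protocol.ts):

```typescript
// Assumed shapes; the real definitions live in Protocol.ts.
interface NodeSavedState {
  status: string
  result?: string
}

interface WorkflowStateDTO {
  nodeResults: Record<string, NodeSavedState>
  ifElseDecisions?: Record<string, "true" | "false">
}

// Collect the ids of nodes whose saved status marks them as finished.
function completedNodeIds(state: WorkflowStateDTO): Set<string> {
  return new Set(
    Object.entries(state.nodeResults)
      .filter(([, saved]) => saved.status === "completed")
      .map(([id]) => id),
  )
}
```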
Execution proceeds as follows:

- The webview sends the `execute_workflow` command
- Node progress streams back via `node_execution_status` and `node_output_chunk`
- Completion is reported with the `execution_completed` event
- CLI commands run in a `child_process`

CLI nodes require approval before execution:

- The node reports `pending_approval` status and waits until the user approves (`node_approved`) or rejects (`node_rejected`) it

When `context.extensionMode === vscode.ExtensionMode.Development`:
- Install the local SDK with `npm i /home/prinova/CodeProjects/upstreamAmp/sdk`
- Set `AMP_API_KEY` in your environment
- Build the webview with `npm run build` or `npm run watch:webview`

Errors are reported to users via:

- `vscode.window.showErrorMessage()` for critical failures

The extension integrates with the Amp SDK for LLM operations:
```typescript
import { AmpClient } from '@prinova/amp-sdk'

const client = new AmpClient({
  apiKey: process.env.AMP_API_KEY,
  // Additional configuration
})
```
Optional integration for alternative LLM providers:
```typescript
// Configuration via environment variables
// OPENROUTER_API_KEY
```
```
src/extension.ts                      # Extension entry point
workflow/Application/register.ts      # Command registration & setup
workflow/Core/Contracts/Protocol.ts   # Message protocol types
workflow/WorkflowExecution/           # Execution orchestration
workflow/DataAccess/fs.ts             # File system operations
workflow/Shared/Host/VSCodeHost.ts    # VS Code API adapter
dist/webviews/workflow.html           # Webview entry point
```
- Message types are defined in `Protocol.ts`
- Set `AMP_API_KEY` before using LLM nodes
- Run `npm run build` to build webview assets
- Verify `dist/webviews/workflow.html` exists
- Confirm `AMP_API_KEY` is set