NebulaFlow workflows are directed acyclic graphs (DAGs) composed of nodes and edges that execute in a parallel, streaming fashion. The workflow engine uses Kahn's topological sort with ordered edges to determine execution order, enabling efficient parallel execution while respecting dependencies.
A workflow consists of nodes and the edges that connect them.

Each node has:

- `id`: Unique identifier
- `type`: Node type (e.g., `llm`, `cli`, `loop-start`)
- `data`: Configuration and state (title, content, active status, etc.)
- `position`: Visual coordinates in the webview
- `selected`: Selection state (UI only)

Edges define:

- `id`: Unique identifier
- `source`: Source node ID
- `target`: Target node ID
- `sourceHandle` / `targetHandle`: Optional handle identifiers for multi-port nodes

The workflow also maintains several runtime contexts during execution.
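The node and edge shapes described above can be sketched as TypeScript interfaces. Field names are taken from this section; the exact types in the codebase may differ:

```typescript
// Sketch of the workflow graph types described above.
// Field names come from this document; exact types may differ in the codebase.
interface WorkflowNode {
  id: string                          // unique identifier
  type: string                        // e.g. 'llm', 'cli', 'loop-start'
  data: {                             // configuration and state
    title?: string
    content?: string
    active?: boolean
  }
  position: { x: number; y: number }  // visual coordinates in the webview
  selected?: boolean                  // UI-only selection state
}

interface WorkflowEdge {
  id: string
  source: string                      // source node ID
  target: string                      // target node ID
  sourceHandle?: string               // optional handle for multi-port nodes
  targetHandle?: string
}
```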
The workflow engine uses a parallel scheduler that executes nodes concurrently once their dependencies are satisfied. The scheduler implements Kahn's algorithm with ordered edges:
1. Initialize in-degree for all nodes
2. Populate ready queue with nodes having zero in-degree
3. While ready queue is not empty:
a. Sort ready queue by edge order
b. Start nodes up to concurrency limits
c. Wait for node completion
d. Decrement in-degree of children
e. Add newly ready nodes to queue
4. Handle completion, errors, and pauses
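The ordering portion of the steps above (steps 1–3, minus concurrency, errors, and pauses) can be sketched as follows. This is an illustrative reconstruction, not the engine's actual code; the `order` field on edges is an assumed representation of "ordered edges":

```typescript
// Minimal sketch of Kahn's algorithm with ordered edges.
// Illustrative only: the real scheduler also handles concurrency limits,
// node completion, errors, and pauses.
type Edge = { source: string; target: string; order: number }

function topoOrder(nodes: string[], edges: Edge[]): string[] {
  // 1. Initialize in-degree for all nodes.
  const inDegree = new Map<string, number>()
  const children = new Map<string, Edge[]>()
  for (const n of nodes) { inDegree.set(n, 0); children.set(n, []) }
  const orderOf = new Map<string, number>()
  for (const e of edges) {
    inDegree.set(e.target, (inDegree.get(e.target) ?? 0) + 1)
    children.get(e.source)?.push(e)
    // Remember the smallest incoming edge order per node for sorting.
    orderOf.set(e.target, Math.min(orderOf.get(e.target) ?? Infinity, e.order))
  }
  // 2. Populate ready queue with zero in-degree nodes.
  const ready = nodes.filter(n => inDegree.get(n) === 0)
  const result: string[] = []
  // 3. Drain the ready queue.
  while (ready.length > 0) {
    // a. Sort ready queue by edge order (sources without edges sort first).
    ready.sort((a, b) => (orderOf.get(a) ?? -1) - (orderOf.get(b) ?? -1))
    const n = ready.shift()!
    result.push(n)
    // d–e. Decrement children's in-degree; enqueue newly ready nodes.
    for (const e of children.get(n) ?? []) {
      const d = (inDegree.get(e.target) ?? 0) - 1
      inDegree.set(e.target, d)
      if (d === 0) ready.push(e.target)
    }
  }
  return result
}
```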
Loops require special handling because they create cycles in the execution graph. The scheduler uses a hybrid approach to resolve them.
### LLM Node (`llm`)

Executes language model calls with streaming output.
Configuration:
- `model`: Model ID from available models
- `systemPromptTemplate`: Optional system prompt
- `timeoutSec`: Request timeout (default: 300s)
- `reasoningEffort`: Reasoning effort level (minimal/low/medium/high)
- `attachments`: Image attachments
- `disabledTools`: Tools to disable
- `dangerouslyAllowAll`: Allow all tools (security flag)

Execution:
### CLI Node (`cli`)

Executes shell commands with streaming output.
Configuration:
- `mode`: `command` or `script`
- `shell`: Shell type (bash/sh/zsh/pwsh/cmd)
- `safetyLevel`: `safe` or `advanced`
- `streamOutput`: Stream stdout/stderr
- `stdin`: Input configuration (none/parents-all/parent-index/literal)
- `env`: Environment variable configuration
- `flags`: Execution flags (`exitOnError`, `pipefail`, etc.)

Execution:
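As an illustration, the `exitOnError` and `pipefail` flags map naturally onto shell `set` options. The helper below is a hypothetical sketch, not the engine's actual implementation; only the flag names come from the configuration above:

```typescript
// Hypothetical sketch: translate CLI-node flags into a shell preamble.
// Flag names (exitOnError, pipefail) come from the configuration above;
// the mapping itself is an assumption, not the engine's actual code.
interface CliFlags { exitOnError?: boolean; pipefail?: boolean }

function shellPreamble(shell: 'bash' | 'sh' | 'zsh', flags: CliFlags): string {
  const opts: string[] = []
  if (flags.exitOnError) opts.push('set -e')  // abort on first failing command
  // pipefail is not POSIX sh; only emit it for shells that support it.
  if (flags.pipefail && shell !== 'sh') opts.push('set -o pipefail')
  return opts.join('\n')
}
```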
### Loop Start Node (`loop-start`)

Initiates a loop block.
Configuration:
- `iterations`: Number of iterations
- `loopVariable`: Variable name for iteration counter
- `overrideIterations`: Allow iteration count override
- `loopMode`: `fixed` or `while-variable-not-empty`
- `collectionVariable`: Variable to iterate over
- `maxSafeIterations`: Safety limit

Execution:
### Loop End Node (`loop-end`)

Marks the end of a loop block.
Execution:
### If/Else Node (`if-else`)

Branches execution based on a condition.
Configuration:
- `truePathActive`: Visual indicator for the true branch
- `falsePathActive`: Visual indicator for the false branch

Execution:
### Variable Node (`variable`)

Sets a variable value.
Configuration:
- `variableName`: Name of the variable
- `initialValue`: Optional initial value

Execution:
### Accumulator Node (`accumulator`)

Appends to a variable across iterations.
Configuration:
- `variableName`: Name of the accumulator variable
- `initialValue`: Optional initial value

Execution:
### Text Format Node (`text-format`)

Provides text input to the workflow.
Configuration:
- `content`: Text content

Execution:
### Preview Node (`preview`)

Displays text content without execution.
Configuration:
- `content`: Text to display

Execution:
### Subflow Node (`subflow`)

Executes a reusable workflow.
Configuration:
- `subflowId`: ID of the subflow to execute
- `inputPortCount`: Number of input ports
- `outputPortCount`: Number of output ports

Execution:
Workflow state is persisted across save/load operations:
```typescript
interface WorkflowStateDTO {
  nodeResults: Record<string, NodeSavedState>
  ifElseDecisions?: Record<string, 'true' | 'false'>
  nodeAssistantContent?: Record<string, AssistantContentItem[]>
  nodeThreadIDs?: Record<string, string>
}
```
Workflows can be resumed from any point:
```typescript
interface ResumeDTO {
  fromNodeId?: string
  seeds?: {
    outputs?: Record<string, string>
    decisions?: Record<string, 'true' | 'false'>
    variables?: Record<string, string>
  }
}
```
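For example, a resume request that restarts at one node while seeding an upstream output, a branch decision, and a variable could look like the following (all IDs and values are made up for illustration):

```typescript
// Example ResumeDTO payload. Node IDs and values are illustrative only.
const resume = {
  fromNodeId: 'llm-1',                                // restart here
  seeds: {
    outputs: { 'input-1': 'seeded upstream output' }, // stand-in for a prior result
    decisions: { 'ifelse-1': 'true' as const },       // replay a branch decision
    variables: { counter: '3' },                      // preset a variable
  },
}
```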
Resume Behavior:

- Nodes before `fromNodeId` are not re-executed

Nodes can require user approval before execution:

- The node enters `pending_approval` status

Nodes execute when all dependencies are satisfied.
Fail-Fast Mode (default): execution stops as soon as any node fails.

Continue-Subgraph Mode: when a node fails, only its downstream nodes are blocked; independent branches continue.

Workflows can be paused mid-execution.
Subflows are reusable workflow components. They execute as single nodes in parent workflows, exchanging data through ports:
```typescript
interface SubflowPortDTO {
  id: string
  name: string
  index: number
}
```
Ports are ordered by index for consistent mapping.
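Sorting by `index` keeps the port mapping stable regardless of the order ports happen to appear in an array. A minimal sketch (helper name is hypothetical):

```typescript
// Sketch: order subflow ports by index for stable input/output mapping.
// The helper name is hypothetical; only SubflowPortDTO comes from the docs.
interface SubflowPortDTO { id: string; name: string; index: number }

function orderedPorts(ports: SubflowPortDTO[]): SubflowPortDTO[] {
  // Copy before sorting so the caller's array is left untouched.
  return [...ports].sort((a, b) => a.index - b.index)
}
```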
Safety features include:

- `needsUserApproval` for dangerous commands
- `maxSafeIterations` for loops

Webview → Extension:
- `save_workflow`: Save current workflow
- `load_workflow`: Load saved workflow
- `execute_workflow`: Start execution
- `node_approved` / `node_rejected`: Approval responses
- `execute_node`: Execute single node

Extension → Webview:
- `workflow_loaded`: Workflow data loaded
- `execution_started`: Execution began
- `node_execution_status`: Node status update
- `node_assistant_content`: LLM streaming content
- `execution_completed`: Execution finished

LLM and CLI nodes stream output in chunks:
- `node_output_chunk`: Individual output chunks
- `node_assistant_content`: Structured assistant content

Environment variables:

- `AMP_API_KEY`: Required for LLM nodes
- `OPENROUTER_API_KEY`: Alternative LLM provider
- `NEBULAFLOW_DISABLE_HYBRID_PARALLEL`: Disable hybrid execution (debug)

Error handling mode: `fail-fast` or `continue-subgraph`

Example workflows:

Input → LLM → Preview
Input → IF/ELSE → LLM (true) → Preview
→ CLI (false) → Preview
Input → Loop Start → LLM → Accumulator → Loop End → Preview
Input → Subflow (Process Data) → Output
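The first example (Input → LLM → Preview) could be represented with node and edge arrays of the shape described earlier. IDs, the model name, and the content are illustrative:

```typescript
// Illustrative graph for the Input → LLM → Preview example.
// IDs, model name, and content are made up; the node types come from this doc.
const nodes = [
  { id: 'input-1', type: 'text-format', data: { content: 'Summarize this text.' } },
  { id: 'llm-1', type: 'llm', data: { model: 'example-model' } },
  { id: 'preview-1', type: 'preview', data: {} },
]
const edges = [
  { id: 'e1', source: 'input-1', target: 'llm-1' },
  { id: 'e2', source: 'llm-1', target: 'preview-1' },
]
```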
Node Never Starts:
Loop Doesn’t Iterate:

- Check that `iterations` is set
- Verify the `loopVariable` name

LLM Timeout:

- Increase `timeoutSec`

CLI Command Fails:

- Check the `safetyLevel` setting