Reflow Documentation

Welcome to the Reflow documentation! Reflow is a powerful, actor-based workflow execution engine built in Rust that supports multi-language scripting and cross-platform deployment.

What is Reflow?

Reflow is a modular workflow engine that uses the actor model for concurrent, message-passing execution. It supports:

  • Zeal IDE Integration: Real-time event streaming to Zeal via ZIP protocol (WebSocket + HTTP traces)
  • 6,700+ API Actors: Pre-generated actors for 88 API services (Slack, GitHub, Stripe, etc.)
  • Actor-Based Architecture: Isolated, concurrent actors with message passing
  • Graph-Based Workflows: Visual workflow representation with history/undo
  • Unified Rust Runtime Crate: reflow_rt is the crates.io-facing API surface for graph, actor, network, and component crates
  • Real-Time Observability: EventBridge pipeline forwarding execution events to TraceCollector and ZipSession
  • Media Processing: Image, audio, video, and optional graph-driven ML pipelines
  • REST API + WebSocket: HTTP and WebSocket interfaces for headless workflow execution
  • Cross-Platform: Native Rust execution + WebAssembly for browsers

Documentation Structure

Getting Started

Quick start guide, installation, and basic concepts

Runtime Crate

The public Rust runtime API surface, feature flags, and graph/component re-exports

Architecture

System architecture, actor model, execution engine, and event pipeline

Zeal Integration

ZIP session, template registration, real-time event streaming to Zeal IDE

REST API

HTTP and WebSocket API for direct workflow execution

Core API

Detailed API documentation for actors, messaging, and graphs

Components

Standard component library: flow control, transforms, logic, media, optional ML, and 6,700+ API actors

Observability

EventBridge, TraceCollector, ZIP event translation, and trace sessions

Deployment

Deployment options and operational considerations

Reference

Complete API reference and configuration options

Community and Support

  • GitHub Issues: Report bugs and request features
  • Discussions: Community Q&A and announcements
  • Contributing: See CONTRIBUTING.md for development guidelines

License

This project is dual-licensed under MIT or Apache-2.0; see the LICENSE files for details.

Getting Started with Reflow

Welcome to Reflow! This guide will help you get up and running with the actor-based workflow engine.

What You'll Learn

This getting started guide covers:

  1. Installation - Setting up Reflow on your system
  2. Basic Concepts - Understanding actors, messages, and workflows
  3. Development Setup - Setting up your development environment
  4. First Workflow - Creating your first workflow

Quick Overview

Reflow is an actor-based workflow engine that allows you to:

  • Create actors that process data and communicate via messages
  • Connect actors into workflows that define data flow and processing logic
  • Execute workflows with multi-language support (JavaScript/Deno, Python, WASM)
  • Deploy workflows natively or in WebAssembly environments

Prerequisites

Before getting started with Reflow, you should have:

  • Rust (1.85 or later) - for building and running Reflow
  • Basic understanding of concurrent programming concepts
  • Familiarity with at least one of: JavaScript, Python, or Rust

Architecture at a Glance

┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Actor A   │───▶│   Actor B   │───▶│   Actor C   │
│ (JavaScript)│    │  (Python)   │    │    (Rust)   │
└─────────────┘    └─────────────┘    └─────────────┘
       │                  │                  │
       ▼                  ▼                  ▼
┌─────────────────────────────────────────────────────┐
│              Message Bus & Routing                  │
└─────────────────────────────────────────────────────┘

Key Concepts

  • Actor: An isolated unit of computation that processes messages
  • Message: Data passed between actors
  • Port: Input/output connections on actors
  • Workflow: A graph of connected actors
  • Runtime: The execution environment (Deno, Python, etc.)
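Putting the vocabulary together, here is a rough sketch of how these pieces map onto code, using the Network API that the First Workflow chapter walks through in full (DoublerActor here stands in for any actor type you have defined):

fn main() -> Result<(), anyhow::Error> {
    use reflow_network::network::{Network, NetworkConfig};

    // Runtime: the Network executes a workflow graph of actor instances.
    let mut network = Network::new(NetworkConfig::default());

    // Actor: register a type, then add a named node instance of it.
    network.register_actor("doubler_process", DoublerActor::new())?;
    network.add_node("double", "doubler_process")?;

    // Ports and messages come into play once nodes are connected;
    // see the First Workflow chapter for the full wiring.
    Ok(())
}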

Next Steps

  1. Start with Installation to set up Reflow
  2. Read Basic Concepts to understand the fundamentals
  3. Follow the First Workflow tutorial
  4. Explore the Examples for more complex use cases

Getting Help

Ready to start? Let's install Reflow!

Installation

This guide covers installing and setting up Reflow on your system. Pick a path based on the language you're integrating from.

Quickest path: a language SDK

Most users start with one of the language SDKs. They wrap the Rust runtime in idiomatic shapes and ship pre-built native binaries for darwin / linux / windows — no Rust toolchain required to use them.

Language            | Install                                                                        | Native lib
Node.js             | npm install @offbit-ai/reflow                                                  | bundled per-platform via optionalDependencies
Python              | pip install offbit-reflow                                                      | bundled in the wheel (abi3-py39)
Go                  | go get github.com/offbit-ai/reflow/sdk/go@v0.2.1 + run scripts/install_lib.sh | external libreflow_rt_capi, fetched by the install script
JVM (Java + Kotlin) | dependencies { implementation("ai.offbit:reflow:0.2.2") }                      | bundled in the fat jar (classpath resource)
C++                 | add_subdirectory(third_party/reflow/sdk/cpp)                                   | external libreflow_rt_capi from the sdk/go/v* release tarball

The SDK chapters walk through each one with a hello-world example. Optional actor packs (loadPack(...)) bring heavier palettes — GPU, ML, browser automation, ~6,700 SaaS API actors — into any SDK install at runtime.

Building from source (Rust)

The rest of this page covers building the Rust runtime from source — what you'll want if you're embedding Reflow in your own native host, contributing to the runtime itself, or rebuilding libreflow_rt_capi for a platform we don't ship pre-built.

Prerequisites

Before installing Reflow, ensure you have:

Required

  • Rust 1.85 or later
  • Git for cloning the repository

Optional (for scripting support)

  • Deno 1.30+ for JavaScript/TypeScript actors
  • Python 3.8+ for Python actors
  • Docker for isolated Python execution

Installation Methods

Method 1: Use Reflow as a Rust Library

For application code, depend on the unified runtime crate:

[dependencies]
reflow_rt = "0.1"

Method 2: Build from Source

  1. Clone the repository:

    git clone https://github.com/offbit-ai/reflow.git
    cd reflow
    
  2. Build the project:

    cargo build --release
    
  3. Run examples or package crates locally:

    cargo test -p reflow_rt
    cargo package -p reflow_rt --list
    

Method 3: Use Lower-Level Crates Directly

reflow_rt is the recommended user-facing entry point. Lower-level crates remain available when a project needs a narrower dependency surface:

[dependencies]
reflow_graph = "0.1"
reflow_actor = "0.1"
reflow_network = "0.1"
reflow_components = { version = "0.1", default-features = false }

Feature Flags

reflow_rt keeps optional component families out of the default install path:

[dependencies]
reflow_rt = { version = "0.1", features = ["gpu", "media", "ml"] }

Available Features

Feature           | Description                                                   | Requirements
gpu               | GPU-backed rendering and compute components                   | Native GPU backend
av-core           | Audio/signal processing components                            | None
window-events     | Window/input event components                                 | None
camera-native     | Native camera capture                                         | Platform camera backend
media             | Typed frame/tensor packet crates                              | None
ml                | CV/ML actors, model manifests, taskpacks, and mock inference  | None
external-litert   | Real LiteRT adapter support                                   | LiteRT native runtime
api-services      | Generated API-service actors                                  | Larger compile surface
network-flowtrace | Debug tracing support in reflow_network                       | None

Runtime Dependencies

JavaScript/TypeScript (Deno)

Install Deno:

# macOS/Linux
curl -fsSL https://deno.land/x/install/install.sh | sh

# Windows (PowerShell)
iwr https://deno.land/x/install/install.ps1 -useb | iex

# Using package managers
brew install deno          # macOS
scoop install deno         # Windows
snap install deno          # Linux

Python Support

Install Python 3.8+:

# macOS
brew install python

# Ubuntu/Debian
sudo apt update
sudo apt install python3 python3-pip

# Windows
# Download from https://python.org

For Docker-based Python execution:

# Install Docker
# macOS/Windows: Docker Desktop
# Linux: docker.io package
sudo apt install docker.io  # Ubuntu/Debian

Verification

Verify your installation:

# Check the user-facing runtime crate
cargo check -p reflow_rt

# Verify the crate package contents
cargo package -p reflow_rt --list

Platform-Specific Notes

macOS

  • Use Homebrew for easy dependency management
  • Xcode Command Line Tools required for Rust compilation

Linux

  • Ensure build-essential is installed
  • Some distributions may need pkg-config and libssl-dev
# Ubuntu/Debian
sudo apt install build-essential pkg-config libssl-dev

# CentOS/RHEL
sudo yum groupinstall "Development Tools"
sudo yum install openssl-devel

Windows

  • Use Windows Subsystem for Linux (WSL) for best experience
  • Visual Studio Build Tools required for Rust compilation
  • Consider using scoop or chocolatey for dependency management

Configuration

Environment Variables

Set these environment variables for optimal performance:

# Enable shared Python environment (optional)
export USE_SHARED_ENV=true

# Set Python path (if needed)
export PYTHON_PATH=/usr/bin/python3

# Configure Deno permissions (optional)
export DENO_PERMISSIONS="--allow-all"

Config File

Create a reflow.toml configuration file:

[runtime]
default_engine = "deno"
enable_networking = true
enable_filesystem = true

[deno]
allow_all = false
allow_net = true
allow_read = true

[python]
use_docker = false
shared_environment = true

[performance]
thread_pool_size = 8
max_memory_mb = 1024

Next Steps

Now that Reflow is installed:

  1. Learn the basics: Read Basic Concepts
  2. Set up development: Follow Development Setup
  3. Create your first workflow: Try First Workflow
  4. Explore examples: Check out the Examples

Troubleshooting

Common Issues

Rust compilation errors:

# Update Rust to latest version
rustup update

Deno not found:

# Add Deno to PATH
export PATH="$HOME/.deno/bin:$PATH"

Python import errors:

# Install required Python packages
pip install numpy pandas  # or other dependencies

Permission denied errors:

# Fix file permissions
chmod +x reflow

For more troubleshooting, see the Troubleshooting Guide.

Basic Concepts

This guide introduces the fundamental concepts of Reflow's actor-based workflow engine.

Core Concepts

Actors

Actors are the building blocks of Reflow workflows. Each actor is an isolated unit of computation that:

  • Processes incoming messages
  • Maintains its own state
  • Communicates only through message passing
  • Runs concurrently with other actors
// Example: Simple actor that doubles numbers (using actor macro)
use std::collections::HashMap;
use reflow_network::{
    actor::ActorContext,
    message::Message,
};
use actor_macro::actor;

#[actor(
    DoublerActor,
    inports::<100>(number),
    outports::<50>(result)
)]
async fn doubler_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    if let Some(Message::Integer(n)) = payload.get("number") {
        Ok([
            ("result".to_owned(), Message::integer(n * 2))
        ].into())
    } else {
        Err(anyhow::anyhow!("Expected integer input"))
    }
}

// Alternative: Manual implementation
use reflow_network::actor::{Actor, ActorBehavior, Port, ActorLoad};
use parking_lot::Mutex;
use std::sync::Arc;

pub struct ManualDoublerActor {
    inports: Port,
    outports: Port,
    load: Arc<Mutex<ActorLoad>>,
}

impl ManualDoublerActor {
    pub fn new() -> Self {
        Self {
            inports: flume::unbounded(),
            outports: flume::unbounded(),
            load: Arc::new(Mutex::new(ActorLoad::new(0))),
        }
    }
}

impl Actor for ManualDoublerActor {
    fn get_behavior(&self) -> ActorBehavior {
        Box::new(|context: ActorContext| {
            Box::pin(async move {
                let payload = context.get_payload();
                if let Some(Message::Integer(n)) = payload.get("number") {
                    Ok([
                        ("result".to_owned(), Message::Integer(n * 2))
                    ].into())
                } else {
                    Err(anyhow::anyhow!("Expected integer input"))
                }
            })
        })
    }
    
    fn get_inports(&self) -> Port { self.inports.clone() }
    fn get_outports(&self) -> Port { self.outports.clone() }
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> { self.load.clone() }
    
    fn create_process(&self) -> std::pin::Pin<Box<dyn std::future::Future<Output = ()> + 'static + Send>> {
        // Process creation implementation...
        todo!("See creating-actors.md for complete implementation")
    }
}

Messages

Messages are the data that flows between actors. Reflow supports various message types:

pub enum Message {
    String(String),
    Integer(i64),
    Float(f64),
    Boolean(bool),
    Array(Vec<Message>),
    Object(HashMap<String, Message>),
    Binary(Vec<u8>),
    Null,
    Error(String),
}

Ports

Ports are the communication channels between actors:

  • Input ports (inports): Receive messages from other actors
  • Output ports (outports): Send messages to other actors
┌─────────────┐
│    Actor    │
│  ┌───────┐  │
│  │ Logic │  │
│  └───────┘  │
│             │
│ in1 ──────→ │ ──────→ out1
│ in2 ──────→ │ ──────→ out2
└─────────────┘
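In the Rust examples in this guide, a port pair is created as a flume channel (see the manual actor above: inports: flume::unbounded()). A minimal standalone sketch of that shape, assuming a Port is simply a (Sender, Receiver) pair carrying named payload maps:

use std::collections::HashMap;

// Stand-in payload type for this sketch; the real type is Reflow's Message.
type Payload = HashMap<String, String>;
type Port = (flume::Sender<Payload>, flume::Receiver<Payload>);

fn main() {
    let inports: Port = flume::unbounded();

    // Upstream (another actor or the network) sends a packet addressed to "in1".
    inports.0
        .send(HashMap::from([("in1".to_string(), "hello".to_string())]))
        .unwrap();

    // The actor drains its inport channel and dispatches by port name.
    let packet = inports.1.recv().unwrap();
    assert_eq!(packet.get("in1").map(String::as_str), Some("hello"));
}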

Workflows (Graphs)

Workflows are directed graphs of connected actors that define:

  • Data flow between actors
  • Processing logic and transformations
  • Execution order and dependencies
┌─────────┐    ┌─────────┐    ┌─────────┐
│ Source  │───▶│Transform│───▶│  Sink   │
│ Actor   │    │ Actor   │    │ Actor   │
└─────────┘    └─────────┘    └─────────┘

Actor State

Each actor can maintain its own state that persists between message processing:

// Example: Counter actor with state (using actor macro)
use reflow_network::actor::MemoryState;

#[actor(
    CounterActor,
    state(MemoryState),
    inports::<100>(increment),
    outports::<50>(count)
)]
async fn counter_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    
    let mut state_guard = state.lock();
    let memory_state = state_guard
        .as_mut_any()
        .downcast_mut::<MemoryState>()
        .expect("Expected MemoryState");
    
    // Initialize state if needed
    if !memory_state.contains_key("count") {
        memory_state.insert("count", serde_json::json!(0));
    }
    
    // Get current count
    let current_count = memory_state.get("count")
        .and_then(|v| v.as_i64())
        .unwrap_or(0);
    
    // Increment by 1 or by specified amount
    let increment_by = if let Some(Message::Integer(amount)) = payload.get("increment") {
        *amount
    } else {
        1 // Default increment
    };
    
    let new_count = current_count + increment_by;
    
    // Update state
    memory_state.insert("count", serde_json::json!(new_count));
    
    Ok([
        ("count".to_owned(), Message::Integer(new_count))
    ].into())
}

Actor Types

Native Actors (Rust)

Built directly in Rust for maximum performance:

struct ProcessorActor {
    // Implementation in Rust
}

Script Actors

Execute scripts in various languages:

JavaScript/TypeScript (Deno)

// JavaScript actor function
function process(inputs, context) {
    const data = inputs.data;
    return { result: data.toUpperCase() };
}

Python

# Python actor script
import numpy as np

inputs = Context.get_inputs()
data = np.array(inputs["data"])
__return_value = data.sum()

WebAssembly

// WASM actor using Reflow Plugin SDK (reflow_wasm_actor)
use reflow_wasm_actor::*;
use std::collections::HashMap;

// Define plugin metadata
fn metadata() -> PluginMetadata {
    PluginMetadata {
        component: "ProcessorActor".to_string(),
        description: "Processes input data".to_string(),
        inports: vec![
            port_def!("input", "Input data", "String", required),
        ],
        outports: vec![
            port_def!("output", "Processed output", "String"),
        ],
        config_schema: None,
    }
}

// Implement actor behavior
fn process_actor(context: ActorContext) -> Result<ActorResult, Box<dyn std::error::Error>> {
    let mut outputs = HashMap::new();
    
    // Get input and process it
    if let Some(Message::String(input)) = context.payload.get("input") {
        let processed = format!("Processed: {}", input);
        outputs.insert("output".to_string(), Message::String(processed));
    }
    
    Ok(ActorResult {
        outputs,
        state: None, // No state changes
    })
}

// Register the plugin
actor_plugin!(
    metadata: metadata(),
    process: process_actor
);

Message Passing Patterns

Point-to-Point

One actor sends to one specific actor:

Actor A ───────▶ Actor B

Broadcast

One actor sends to multiple actors:

        ┌─────▶ Actor B
Actor A ┤
        └─────▶ Actor C

Collect/Merge

Multiple actors send to one actor:

Actor A ┐
        ├─────▶ Actor C
Actor B ┘
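With the Connector API from the First Workflow chapter, these patterns fall out of how you wire edges: a broadcast is two connections sharing the same from endpoint, and a merge is two connections sharing the same to endpoint. A sketch of the broadcast case (assuming, as the examples later in this book suggest, that the same outport may appear in multiple connections):

use reflow_network::{
    connector::{ConnectionPoint, Connector},
    network::Network,
};

// Fan actor "a"'s outport out to consumers "b" and "c" by adding two edges
// that share the same `from` endpoint.
fn wire_broadcast(network: &mut Network) {
    for consumer in ["b", "c"] {
        network.add_connection(Connector {
            from: ConnectionPoint {
                actor: "a".to_owned(),
                port: "Out".to_owned(),
                ..Default::default()
            },
            to: ConnectionPoint {
                actor: consumer.to_owned(),
                port: "In".to_owned(),
                ..Default::default()
            },
        });
    }
}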

Concurrency Model

Actor Isolation

  • Each actor runs in its own execution context
  • No shared memory between actors
  • Thread-safe by design

Message Processing

  • Actors process messages asynchronously
  • Messages are queued for processing
  • Backpressure handling prevents overflow
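The capacities in the actor macro's port declarations (inports::<100>(...)) are where backpressure becomes concrete: a bounded inport can only buffer that many pending messages before upstream senders are slowed or rejected. A plain bounded-channel sketch of the same mechanism:

fn main() {
    // A bounded channel standing in for a bounded inport (capacity 2 here).
    let (tx, rx) = flume::bounded::<i64>(2);

    tx.send(1).unwrap();
    tx.send(2).unwrap();

    // The queue is full: a non-blocking send fails instead of growing without
    // bound, and a blocking send would wait until the consumer drains a slot.
    assert!(tx.try_send(3).is_err());

    assert_eq!(rx.recv().unwrap(), 1); // the consumer drains one message...
    tx.try_send(3).unwrap();           // ...and upstream can proceed again
}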

Parallelism

  • Multiple actors can run simultaneously
  • Work is distributed across available CPU cores
  • Network can span multiple machines

Error Handling

Actor-Level Errors

// Errors are returned as Error messages
Err(anyhow::anyhow!("Processing failed"))

Network-Level Errors

// Error propagation through the network
HashMap::from([
    ("error".to_string(), Message::Error("Network timeout".to_string()))
])

Recovery Patterns

  • Dead letter queues for failed messages
  • Circuit breakers for failing actors
  • Supervisor actors for monitoring
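These are patterns you assemble from actors rather than built-in primitives. As one illustration, a hypothetical circuit-breaker bookkeeping struct that an actor could keep in its state between messages (generic Rust, not a Reflow API):

// Trips after N consecutive failures; a supervisor or router actor can then
// stop forwarding to the failing actor and divert to a dead-letter path.
struct CircuitBreaker {
    consecutive_failures: u32,
    trip_threshold: u32,
    open: bool,
}

impl CircuitBreaker {
    fn record(&mut self, succeeded: bool) {
        if succeeded {
            self.consecutive_failures = 0;
            self.open = false;
        } else {
            self.consecutive_failures += 1;
            if self.consecutive_failures >= self.trip_threshold {
                self.open = true;
            }
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker { consecutive_failures: 0, trip_threshold: 3, open: false };
    for ok in [false, false, false] {
        cb.record(ok);
    }
    assert!(cb.open); // downstream is failing; route messages elsewhere
}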

Lifecycle Management

Actor Creation

let actor = MyActor::new(config);
let process = actor.create_process();
tokio::spawn(process);

Actor Termination

// Graceful shutdown
drop(inports); // Closes input channels
// Actor completes current message and exits

State Persistence

// State can be persisted and restored
let state = actor.get_state();
// Serialize state for persistence

Advanced Concepts

Hot Code Reloading

  • Script actors can be updated without stopping the workflow
  • State preservation during updates

Multi-tenancy

  • Isolated workspaces for different users/projects
  • Resource quotas and permissions

Distributed Execution

  • Actors can run on different machines
  • Network-transparent message passing

Best Practices

Actor Design

  • Keep actors small and focused
  • Avoid blocking operations in actor logic
  • Use async/await for I/O operations

Message Design

  • Use typed messages when possible
  • Keep messages small and serializable
  • Include error context in error messages
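For instance, with the Message enum shown earlier in this chapter, an error can carry its context inline rather than a bare failure string. A small sketch (re-declaring only the variants it uses):

#[derive(Debug)]
enum Message {
    Integer(i64),
    Error(String),
}

fn main() {
    // Small, typed payload...
    let ok = Message::Integer(42);

    // ...and an error that says what failed, where, and what was expected.
    let err = Message::Error(format!(
        "parse failed for field 'count': expected integer, got '{}'",
        "forty-two"
    ));

    println!("{:?} {:?}", ok, err);
}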

Workflow Design

  • Design for failure (circuit breakers, timeouts)
  • Monitor actor performance and health
  • Use appropriate parallelism levels

Next Steps

Now that you understand the basic concepts:

  1. Set up development: Development Setup
  2. Create your first workflow: First Workflow
  3. Learn about specific actors: Actor API
  4. Explore scripting: JavaScript Runtime

Further Reading

Development Setup

This guide helps you set up a development environment for building workflows with Reflow.

Development Environment

  • IDE: Visual Studio Code or RustRover
  • Version Control: Git
  • Package Manager: Cargo for Rust dependencies
  • Terminal: Modern terminal with good Unicode support

VS Code Extensions

For the best development experience with VS Code:

{
  "recommendations": [
    "rust-lang.rust-analyzer",
    "vadimcn.vscode-lldb",
    "serayuzgur.crates",
    "tamasfe.even-better-toml",
    "ms-vscode.vscode-json"
  ]
}

Project Structure

Creating a New Reflow Project

# Create a new Rust project
cargo new my-reflow-app
cd my-reflow-app

# Add Reflow dependencies
cargo add reflow_rt
cargo add tokio --features rt-multi-thread,macros
cargo add serde_json anyhow
A typical project layout:

my-reflow-app/
├── Cargo.toml
├── src/
│   ├── main.rs
│   ├── actors/
│   │   ├── mod.rs
│   │   └── custom_actor.rs
│   ├── workflows/
│   │   ├── mod.rs
│   │   └── data_pipeline.rs
│   └── scripts/
│       ├── process.js
│       └── transform.py
├── config/
│   └── reflow.toml
├── tests/
│   └── integration_tests.rs
└── examples/
    └── basic_workflow.rs

Cargo.toml Configuration

[package]
name = "my-reflow-app"
version = "0.1.0"
edition = "2021"

[dependencies]
reflow_rt = "0.1"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
serde_json = "1"
anyhow = "1"

[dev-dependencies]
tokio-test = "0.4"

[[example]]
name = "basic_workflow"
path = "examples/basic_workflow.rs"

Development Workflow

1. Setting Up the Main Application

Create src/main.rs:

use reflow_rt::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut graph = Graph::new("development", false, None);
    graph.add_node("tap", "tpl_passthrough", None);

    let network = Network::with_graph(NetworkConfig::default(), &graph);
    let _ = network;

    Ok(())
}

2. Creating Custom Actors

Create src/actors/mod.rs:

pub mod custom_actor;

pub use custom_actor::CustomActor;

Create src/actors/custom_actor.rs:

use reflow_rt::actor_runtime::{Actor, ActorBehavior, ActorContext, Port};
use reflow_rt::actor_runtime::message::Message;
use std::collections::HashMap;

pub struct CustomActor {
    inports: Port,
    outports: Port,
}

impl CustomActor {
    pub fn new() -> Self {
        Self {
            inports: flume::unbounded(),
            outports: flume::unbounded(),
        }
    }
}

impl Actor for CustomActor {
    fn get_behavior(&self) -> ActorBehavior {
        Box::new(|context: ActorContext| {
            Box::pin(async move {
                let payload = context.get_payload();
                
                // Your processing logic here
                let result = HashMap::from([
                    ("output".to_string(), Message::String("processed".to_string()))
                ]);
                
                Ok(result)
            })
        })
    }
    
    fn get_inports(&self) -> Port {
        self.inports.clone()
    }
    
    fn get_outports(&self) -> Port {
        self.outports.clone()
    }
    
    fn create_process(&self) -> std::pin::Pin<Box<dyn std::future::Future<Output = ()> + 'static + Send>> {
        // Default implementation from trait
        todo!("Implement process creation")
    }
}

3. Organizing Workflows

Create src/workflows/mod.rs:

pub mod data_pipeline;

pub use data_pipeline::create_data_pipeline;

Create src/workflows/data_pipeline.rs:

use reflow_network::{
    connector::{ConnectionPoint, Connector},
    network::{Network, NetworkConfig},
};
use crate::actors::CustomActor;

pub async fn create_data_pipeline() -> Result<Network, anyhow::Error> {
    let mut network = Network::new(NetworkConfig::default());

    // Register the actor type, then add one node instance per pipeline stage
    // (same register/add_node pattern as the First Workflow chapter)
    network.register_actor("custom_process", CustomActor::new())?;
    network.add_node("source", "custom_process")?;
    network.add_node("transform", "custom_process")?;
    network.add_node("sink", "custom_process")?;

    // Connect the pipeline: source -> transform -> sink
    for (from, to) in [("source", "transform"), ("transform", "sink")] {
        network.add_connection(Connector {
            from: ConnectionPoint {
                actor: from.to_owned(),
                port: "output".to_owned(),
                ..Default::default()
            },
            to: ConnectionPoint {
                actor: to.to_owned(),
                port: "input".to_owned(),
                ..Default::default()
            },
        });
    }

    Ok(network)
}

Testing Setup

Unit Tests

Add a test module to src/actors/custom_actor.rs:

#[cfg(test)]
mod tests {
    use super::*;
    use tokio_test;
    
    #[tokio::test]
    async fn test_custom_actor() {
        let actor = CustomActor::new();
        let behavior = actor.get_behavior();
        
        // Create test context
        let payload = HashMap::from([
            ("input".to_string(), Message::String("test".to_string()))
        ]);
        
        // Test behavior
        // Note: You'll need to create proper ActorContext for testing
        let result = behavior(/* test context */).await;
        assert!(result.is_ok());
    }
}

Integration Tests

Create tests/integration_tests.rs:

use my_reflow_app::workflows::create_data_pipeline;

#[tokio::test]
async fn test_data_pipeline() {
    let network = create_data_pipeline().await.unwrap();
    
    // Test the complete workflow
    // Send test data and verify output
}

Configuration Management

Environment Configuration

Create config/reflow.toml:

[development]
log_level = "debug"
thread_pool_size = 4

[production]
log_level = "info"
thread_pool_size = 8

[scripting]
deno_permissions = ["--allow-net", "--allow-read"]
python_interpreter = "python3"

[networking]
bind_address = "127.0.0.1:8080"
enable_metrics = true

Loading Configuration

use serde::{Deserialize, Serialize};
use std::fs;

#[derive(Debug, Deserialize, Serialize)]
struct Config {
    development: Option<EnvConfig>,
    production: Option<EnvConfig>,
    scripting: Option<ScriptingConfig>,
    networking: Option<NetworkingConfig>,
}

#[derive(Debug, Deserialize, Serialize)]
struct EnvConfig {
    log_level: String,
    thread_pool_size: usize,
}

#[derive(Debug, Deserialize, Serialize)]
struct ScriptingConfig {
    deno_permissions: Vec<String>,
    python_interpreter: String,
}

#[derive(Debug, Deserialize, Serialize)]
struct NetworkingConfig {
    bind_address: String,
    enable_metrics: bool,
}

fn load_config() -> Result<Config, Box<dyn std::error::Error>> {
    // Parsing uses the `toml` crate; add it with `cargo add toml`.
    let config_str = fs::read_to_string("config/reflow.toml")?;
    let config: Config = toml::from_str(&config_str)?;
    Ok(config)
}

Development Scripts

Makefile

Create a Makefile for common tasks:

.PHONY: build test run clean docs

build:
	cargo build

test:
	cargo test

run:
	cargo run

clean:
	cargo clean

docs:
	cargo doc --open

check:
	cargo check
	cargo clippy -- -D warnings
	cargo fmt -- --check

dev:
	cargo watch -x run

install-tools:
	cargo install cargo-watch
	cargo install cargo-expand

Development Commands

# Development workflow
make build          # Build the project
make test           # Run tests
make check          # Run linting and formatting checks
make dev            # Run with auto-reload on changes

# Documentation
make docs           # Generate and open documentation
cargo doc --document-private-items  # Include private items

Debugging

Logging Setup

use tracing::{info, warn, error, debug};
use tracing_subscriber;

// In main.rs
fn init_logging() {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();
}

// In your actors
debug!("Processing message: {:?}", message);
info!("Actor started successfully");
warn!("High memory usage detected");
error!("Failed to process message: {}", error);

Debug Configuration

Add to Cargo.toml:

[profile.dev]
debug = true
debug-assertions = true
overflow-checks = true

[dependencies]
tracing = "0.1"
tracing-subscriber = "0.3"

Using Debugger

For VS Code, create .vscode/launch.json:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "lldb",
            "request": "launch",
            "name": "Debug Reflow App",
            "cargo": {
                "args": ["build", "--bin=my-reflow-app"],
                "filter": {
                    "name": "my-reflow-app",
                    "kind": "bin"
                }
            },
            "args": [],
            "cwd": "${workspaceFolder}"
        }
    ]
}

Performance Profiling

Basic Profiling

use std::time::Instant;

// Time critical sections
let start = Instant::now();
// ... your code
let duration = start.elapsed();
println!("Time elapsed: {:?}", duration);

Advanced Profiling Tools

# Install profiling tools
cargo install cargo-profiler
cargo install flamegraph

# Generate flame graphs
cargo flamegraph --bin my-reflow-app

# Memory profiling with valgrind (Linux)
cargo build --release
valgrind --tool=massif ./target/release/my-reflow-app

Continuous Integration

GitHub Actions

Create .github/workflows/ci.yml:

name: CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions-rs/toolchain@v1
      with:
        toolchain: stable
    - name: Build
      run: cargo build --verbose
    - name: Run tests
      run: cargo test --verbose
    - name: Check formatting
      run: cargo fmt -- --check
    - name: Run clippy
      run: cargo clippy -- -D warnings

Next Steps

Now that your development environment is set up:

  1. Create your first workflow: First Workflow
  2. Learn about actors: Creating Actors
  3. Explore scripting: Deno Runtime
  4. See examples: Examples

Resources

Your First Workflow

This tutorial will guide you through creating and running your first Reflow workflow using the actual implementation patterns. We'll build a simple data processing pipeline that demonstrates the core concepts.

Overview

We'll create a workflow that:

  1. Processes input numbers (Sum Actor)
  2. Squares the result (Square Actor)
  3. Prints the result (Print Actor)
┌─────────┐    ┌─────────┐    ┌─────────┐
│   Sum   │───▶│ Square  │───▶│  Print  │
│ Actor   │    │ Actor   │    │ Actor   │
└─────────┘    └─────────┘    └─────────┘

Prerequisites

Before starting, make sure you have completed the Installation guide (Rust 1.85 or later installed).

Step 1: Create a New Project

# Create a new Rust project
cargo new hello-reflow
cd hello-reflow

# Add Reflow dependencies
cargo add reflow_network
cargo add actor_macro
cargo add tokio --features full
cargo add serde --features derive
cargo add serde_json anyhow
cargo add parking_lot

Your Cargo.toml should look like this (shown here with path dependencies pointing at a local Reflow checkout; replace the paths with your clone location, or use published versions where available):

[package]
name = "hello-reflow"
version = "0.1.0"
edition = "2021"

[dependencies]
reflow_network = { path = "../path/to/reflow/crates/reflow_network" }
actor_macro = { path = "../path/to/reflow/crates/actor_macro" }
tokio = { version = "1.0", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
parking_lot = "0.12"

Step 2: Create Your First Actors

Create src/main.rs with the correct actor patterns:

use std::collections::HashMap;
use reflow_network::{
    actor::{ActorContext, MemoryState},
    network::{Network, NetworkConfig},
    connector::{ConnectionPoint, Connector, InitialPacket},
    message::Message,
};
use actor_macro::actor;

// Sum Actor - adds two input numbers
#[actor(
    SumActor,
    inports::<100>(A, B),
    outports::<100>(Out),
    await_all_inports
)]
async fn sum_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();

    let a_msg = payload.get("A").expect("expected to get data from port A");
    let b_msg = payload.get("B").expect("expected to get data from port B");

    let a = match a_msg {
        Message::Integer(value) => *value,
        _ => 0,
    };

    let b = match b_msg {
        Message::Integer(value) => *value,
        _ => 0,
    };

    let result = a + b;
    println!("Sum Actor: {} + {} = {}", a, b, result);

    Ok([("Out".to_owned(), Message::integer(result))].into())
}

// Square Actor - squares the input number
#[actor(
    SquareActor,
    inports::<100>(In),
    outports::<50>(Out)
)]
async fn square_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let message = payload.get("In").expect("expected input");
    
    let input = match message {
        Message::Integer(value) => *value,
        _ => 0,
    };

    let result = input * input;
    println!("Square Actor: {} squared = {}", input, result);

    Ok([("Out".to_owned(), Message::Integer(result))].into())
}

// Print Actor - displays the final result
#[actor(
    PrintActor,
    inports::<100>(Value),
    outports::<50>(Done)
)]
async fn print_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let message = payload.get("Value").expect("expected value");
    
    match message {
        Message::Integer(value) => {
            println!("🎉 Final Result: {}", value);
        },
        _ => {
            println!("📄 Final Result: {:?}", message);
        }
    }

    Ok([("Done".to_owned(), Message::Boolean(true))].into())
}

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    println!("🚀 Starting Hello Reflow workflow...");
    
    // Create network with default configuration
    let mut network = Network::new(NetworkConfig::default());

    // Register actor types
    network.register_actor("sum_process", SumActor::new())?;
    network.register_actor("square_process", SquareActor::new())?;
    network.register_actor("print_process", PrintActor::new())?;

    // Add actor instances (nodes)
    network.add_node("sum", "sum_process")?;
    network.add_node("square", "square_process")?;
    network.add_node("print", "print_process")?;

    // Connect the workflow: sum -> square -> print
    network.add_connection(Connector {
        from: ConnectionPoint {
            actor: "sum".to_owned(),
            port: "Out".to_owned(),
            ..Default::default()
        },
        to: ConnectionPoint {
            actor: "square".to_owned(),
            port: "In".to_owned(),
            ..Default::default()
        },
    });

    network.add_connection(Connector {
        from: ConnectionPoint {
            actor: "square".to_owned(),
            port: "Out".to_owned(),
            ..Default::default()
        },
        to: ConnectionPoint {
            actor: "print".to_owned(),
            port: "Value".to_owned(),
            ..Default::default()
        },
    });

    // Add initial data to start the workflow
    network.add_initial(InitialPacket {
        to: ConnectionPoint {
            actor: "sum".to_owned(),
            port: "A".to_owned(),
            initial_data: Some(Message::Integer(5)),
        },
    });

    network.add_initial(InitialPacket {
        to: ConnectionPoint {
            actor: "sum".to_owned(),
            port: "B".to_owned(),
            initial_data: Some(Message::Integer(3)),
        },
    });

    // Start the network
    network.start().await?;

    // Give the workflow time to complete
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;

    println!("✅ Workflow completed!");
    
    Ok(())
}

Step 3: Run the Workflow

cargo run

You should see output like:

🚀 Starting Hello Reflow workflow...
Sum Actor: 5 + 3 = 8
Square Actor: 8 squared = 64
🎉 Final Result: 64
✅ Workflow completed!

Step 4: Understanding the Code

Actor Macro Usage

The #[actor] macro simplifies actor creation:

#[actor(
    SumActor,                    // Generated struct name
    inports::<100>(A, B),        // Input ports with capacity
    outports::<100>(Out),        // Output ports with capacity
    await_all_inports            // Wait for all inputs before processing
)]
async fn sum_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error>

Function Signature

All actor functions must have this exact signature:

  • async fn - Asynchronous function
  • context: ActorContext - Single parameter containing payload, state, config
  • Result<HashMap<String, Message>, anyhow::Error> - Return type

Network API Pattern

  1. Register actor types: network.register_actor("name", ActorStruct::new())
  2. Add node instances: network.add_node("instance_id", "actor_type")
  3. Connect with Connector structs
  4. Initialize with InitialPacket structs

Step 5: Add State Management

Let's create a stateful actor that counts operations:

// Counter Actor - keeps track of how many values it has processed
#[actor(
    CounterActor,
    state(MemoryState),
    inports::<100>(Value),
    outports::<50>(Count, Total)
)]
async fn counter_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    let input = payload.get("Value").expect("expected value");
    
    let value = match input {
        Message::Integer(n) => *n,
        _ => 0,
    };

    // Update state
    let (count, total) = {
        let mut state_guard = state.lock();
        let memory_state = state_guard
            .as_mut_any()
            .downcast_mut::<MemoryState>()
            .expect("Expected MemoryState");
        
        // Get current count and total
        let current_count = memory_state
            .get("count")
            .and_then(|v| v.as_i64())
            .unwrap_or(0);
        
        let current_total = memory_state
            .get("total")
            .and_then(|v| v.as_i64())
            .unwrap_or(0);
        
        // Update values
        let new_count = current_count + 1;
        let new_total = current_total + value;
        
        memory_state.insert("count", serde_json::json!(new_count));
        memory_state.insert("total", serde_json::json!(new_total));
        
        (new_count, new_total)
    };

    println!("Counter Actor: processed {} values, total sum: {}", count, total);

    Ok([
        ("Count".to_owned(), Message::Integer(count)),
        ("Total".to_owned(), Message::Integer(total)),
    ].into())
}

Step 6: Multiple Input Example

Create an actor that waits for multiple inputs:

// Multiply Actor - multiplies two inputs
#[actor(
    MultiplyActor,
    inports::<100>(X, Y),
    outports::<50>(Result),
    await_all_inports  // This makes it wait for both X and Y
)]
async fn multiply_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();

    let x = match payload.get("X").expect("expected X") {
        Message::Integer(value) => *value,
        _ => 1,
    };

    let y = match payload.get("Y").expect("expected Y") {
        Message::Integer(value) => *value,
        _ => 1,
    };

    let result = x * y;
    println!("Multiply Actor: {} × {} = {}", x, y, result);

    Ok([("Result".to_owned(), Message::Integer(result))].into())
}

Step 7: Complex Workflow Example

Here's a more complex workflow that demonstrates multiple patterns:

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    println!("🚀 Starting Complex Reflow workflow...");
    
    let mut network = Network::new(NetworkConfig::default());

    // Register all actor types
    network.register_actor("sum_process", SumActor::new())?;
    network.register_actor("multiply_process", MultiplyActor::new())?;
    network.register_actor("counter_process", CounterActor::new())?;
    network.register_actor("print_process", PrintActor::new())?;

    // Create network topology
    network.add_node("sum1", "sum_process")?;
    network.add_node("multiply1", "multiply_process")?;
    network.add_node("counter1", "counter_process")?;
    network.add_node("print1", "print_process")?;

    // Connect workflow
    network.add_connection(Connector {
        from: ConnectionPoint {
            actor: "sum1".to_owned(),
            port: "Out".to_owned(),
            ..Default::default()
        },
        to: ConnectionPoint {
            actor: "multiply1".to_owned(),
            port: "X".to_owned(),
            ..Default::default()
        },
    });

    network.add_connection(Connector {
        from: ConnectionPoint {
            actor: "multiply1".to_owned(),
            port: "Result".to_owned(),
            ..Default::default()
        },
        to: ConnectionPoint {
            actor: "counter1".to_owned(),
            port: "Value".to_owned(),
            ..Default::default()
        },
    });

    network.add_connection(Connector {
        from: ConnectionPoint {
            actor: "counter1".to_owned(),
            port: "Total".to_owned(),
            ..Default::default()
        },
        to: ConnectionPoint {
            actor: "print1".to_owned(),
            port: "Value".to_owned(),
            ..Default::default()
        },
    });

    // Initial data
    network.add_initial(InitialPacket {
        to: ConnectionPoint {
            actor: "sum1".to_owned(),
            port: "A".to_owned(),
            initial_data: Some(Message::Integer(10)),
        },
    });

    network.add_initial(InitialPacket {
        to: ConnectionPoint {
            actor: "sum1".to_owned(),
            port: "B".to_owned(),
            initial_data: Some(Message::Integer(5)),
        },
    });

    network.add_initial(InitialPacket {
        to: ConnectionPoint {
            actor: "multiply1".to_owned(),
            port: "Y".to_owned(),
            initial_data: Some(Message::Integer(3)),
        },
    });

    // Start the network
    network.start().await?;
    
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
    
    println!("✅ Complex workflow completed!");
    
    Ok(())
}

Expected output:

🚀 Starting Complex Reflow workflow...
Sum Actor: 10 + 5 = 15
Multiply Actor: 15 × 3 = 45
Counter Actor: processed 1 values, total sum: 45
🎉 Final Result: 45
✅ Complex workflow completed!

Key Concepts Demonstrated

Actor Macro Features

  • Port Definitions: inports::<capacity>(Port1, Port2)
  • State Management: state(MemoryState) for stateful actors
  • Input Synchronization: await_all_inports waits for all inputs

Network Configuration

  • Registration: Register actor types before use
  • Instantiation: Create specific instances with unique IDs
  • Connection: Use structured Connector objects
  • Initialization: Send initial data with InitialPacket

Message Flow

  • Messages flow through typed ports
  • Actors process inputs and produce outputs
  • State is maintained per actor instance

Error Handling

Actors can return errors that will be logged:

#[actor(
    ValidatorActor,
    inports::<100>(Input),
    outports::<50>(Valid, Invalid)
)]
async fn validator_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let input = payload.get("Input").expect("expected input");
    
    match input {
        Message::Integer(n) if *n > 0 => {
            Ok([("Valid".to_owned(), input.clone())].into())
        },
        Message::Integer(n) if *n <= 0 => {
            Ok([("Invalid".to_owned(), input.clone())].into())
        },
        _ => {
            Err(anyhow::anyhow!("Expected integer input, got {:?}", input))
        }
    }
}

Next Steps

Now that you understand the basic patterns:

  1. Learn more actor patterns: Creating Actors
  2. Explore message types: Message Passing
  3. Add scripting: JavaScript Integration
  4. Use pre-built components: Standard Library
  5. See more examples: Examples

Troubleshooting

Common Issues

Compilation errors with actor macro: Make sure actor_macro is in your dependencies

Port connection errors: Verify port names match exactly between connections

Runtime panics: Check that initial data types match what actors expect

Deadlocks: Ensure await_all_inports actors receive all required inputs

For more help, see the Troubleshooting Guide.

Complete Example Code

The complete working examples are available in the examples directory.

Language SDKs

Reflow's runtime is Rust, but its native shape is a language-agnostic actor model — graphs of typed-port nodes connected by message edges. Each first-party SDK is a thin native binding plus an idiomatic surface in that language. They all link to (or embed) the same reflow_rt_capi C ABI, so behavior is identical across languages.

At a glance

SDK     | Package                            | Native lib                                  | Authoring        | Notes
Node.js | @offbit-ai/reflow (npm)            | .node addon (per-platform optional dep)     | napi-rs          | Install just works — npm picks the right .node
Python  | offbit-reflow (PyPI)               | inside the wheel (abi3-py39)                | pyo3             | Pre-built wheels for darwin / linux / windows
Go      | github.com/offbit-ai/reflow/sdk/go | external libreflow_rt_capi                  | cgo              | Run scripts/install_lib.sh after go get to drop the platform tarball into the module
JVM     | ai.offbit:reflow (Maven Central)   | bundled in the fat jar (classpath resource) | JNI + Kotlin DSL | Single dep; loader extracts the right native lib at runtime
C++     | header-only at sdk/cpp/            | external libreflow_rt_capi                  | RAII wrapper     | C++17, CMake add_subdirectory or find_package

What's the same across languages

Each SDK exposes the same five core types with idiomatic naming:

  • Message — typed payload (Flow / Boolean / Integer / Float / String / Object / Array / Bytes / Stream).
  • Graph — declarative DAG: add nodes, connect ports, register groups, expose subgraph ports.
  • Actor — either a registered template id (the bundled catalog) or a callback-driven actor authored in the host language.
  • Network — the executor that runs a graph; takes initial packets, emits a runtime event stream.
  • EventStream — async event tap for tracing / observability.

Plus a pack loader that lets any SDK installation pull optional actor palettes (GPU renderers, ML, browser automation, ~6,700 SaaS API actors) in at runtime via loadPack(path).

What's slightly different

  • Concurrency model. Node uses ThreadSafeFunction callbacks; Python uses GIL-aware shims; Go uses cgo callbacks; JVM uses JNIEnv::attach_current_thread; C++ pushes raw C ABI threading concerns to the user. The SDK READMEs cover the per-language gotchas.
  • JSON conversion. Each SDK auto-converts where idiomatic (Map/dict/map[string]any/Map<String,Object>/std::string). C++ returns std::string and lets you pick the JSON parser.
  • Async patterns. Node returns Promises, Python exposes async-friendly entry points, Go uses channels, JVM has Kotlin suspend/Flow adapters, C++ uses callbacks.

Picking an SDK

  • Browser / Electron / VS Code extension → Node.
  • ML pipelines, data engineering, CLI tooling → Python.
  • Backend services, CLI binaries, embedded gateways → Go.
  • Android, JVM-based platforms (Kafka, Spark, Flink), desktop with Compose → JVM.
  • Embedded, native engines, plugins for existing C++ apps → C++.
  • Roll your own runtime → use reflow_rt_capi directly; every SDK above is built on it.

Versioning

The SDKs ship independently but track the same runtime semantics. Tag schemes:

Tag pattern    | Triggers
node-vX.Y.Z    | Builds + publishes @offbit-ai/reflow@X.Y.Z to npm
python-vX.Y.Z  | Builds + publishes offbit-reflow X.Y.Z to PyPI
sdk/go/vX.Y.Z  | Builds libreflow_rt_capi for 5 triples + GitHub Release
sdk/jvm/vX.Y.Z | Builds + publishes ai.offbit:reflow:X.Y.Z to Maven Central
pack-vX.Y.Z    | Builds + publishes the 6 first-party .rflpack bundles

Pack ABI versions are toolchain-locked — pair a pack-v* release with the SDK release built from the same workspace revision.

Node.js SDK

@offbit-ai/reflow — Node 18+, prebuilt addons for darwin / linux / windows.

Install

npm install @offbit-ai/reflow

npm resolves the right per-platform .node addon via optionalDependencies; no compilation step on the user side.

Hello world

import {
  Graph, Network, Actor, Message,
} from "@offbit-ai/reflow";

class Doubler extends Actor {
  static component = "doubler";
  static inports = ["in"];
  static outports = ["out"];

  run(ctx) {
    const n = ctx.inputs.in?.data ?? 0;
    ctx.done({ out: Message.integer(Number(n) * 2) });
  }
}

const net = new Network();
net.registerActor("tpl_doubler", new Doubler());
net.addNode("a", "tpl_doubler");
net.addInitial("a", "in", { type: "Integer", data: 21 });
net.start();

for await (const ev of net.events()) {
  console.log(ev);
  if (ev._type === "ActorCompleted") break;
}

Authoring graphs

const g = new Graph("demo");
g.addNode("a", "tpl_x");
g.addNode("b", "tpl_y");
g.addConnection("a", "out", "b", "in");
g.addGroup("pipe", ["a", "b"], { caption: "pipeline" });
g.renameNode("a", "alpha");
console.log(g.groups());           // [{ id: "pipe", nodes: ["alpha","b"], … }]
console.log(g.connections());      // …

The full graph API (T1 mutators + T2 queries: renames, groups, port lifecycle, metadata setters) is mirrored in the SDK; see the SDK README for the complete surface.

Bundled component catalog

Every install ships the av-core slice of reflow_components — ~270 templates covering animation, flow control, math, vector / 2D graphics, asset DB, scene graph, HTTP integration, stream ops, DSP, procedural generation. Heavier palettes (GPU, ML, browser, video, window events, ~6,700 API actors) install as .rflpack bundles.

import { templateActor, templateList, loadPack } from "@offbit-ai/reflow";

console.log(templateList().filter(id => id.startsWith("tpl_math_")));

// Plug in the ML pack at runtime — adds 12 more template ids.
loadPack("./reflow.pack.ml-0.2.0.rflpack");
const inferenceActor = templateActor("tpl_ml_run_inference");

Subgraphs

import { SubgraphBuilder } from "@offbit-ai/reflow";
const sub = new SubgraphBuilder(graphExportJson);
sub.registerActor("my_custom", new MyCustom());
sub.fillFromCatalog();   // resolve remaining components from bundled + loaded packs
net.registerActor("tpl_sub", sub.build());

Streams

const stream = net.createStream({ bufferSize: 64, contentType: "image/jpeg" });
producer.emit({ stream });        // Flow + chunked frames
for await (const frame of stream) console.log(frame.kind, frame.data?.length);

See also

Python SDK

offbit-reflow — Python 3.9+, abi3 wheels for darwin / linux / windows.

Install

pip install offbit-reflow

PyPI ships pre-built wheels (one per platform, abi3-py39 so 3.9–3.13+ work from the same wheel). No build dependencies on the user side.

Hello world

import offbit_reflow as reflow

class Doubler(reflow.Actor):
    component = "doubler"
    inports = ["in"]
    outports = ["out"]

    async def run(self, ctx):
        n = ctx.inputs.get("in", {}).get("data", 0)
        ctx.emit("out", reflow.Message.integer(int(n) * 2))

net = reflow.Network()
net.register_actor("tpl_doubler", Doubler())
net.add_node("a", "tpl_doubler")
net.add_initial("a", "in", {"type": "Integer", "data": 21})
net.start()

for ev in net.events():
    print(ev)
    if ev["_type"] == "ActorCompleted":
        break

Authoring graphs

g = reflow.Graph("demo")
g.add_node("a", "tpl_x")
g.add_node("b", "tpl_y")
g.add_connection("a", "out", "b", "in")
g.add_group("pipe", ["a", "b"], {"caption": "pipeline"})
g.rename_node("a", "alpha")
print(g.groups())
print(g.connections())

Full graph API: rename / port lifecycle / metadata setters / group CRUD / queries — same surface across every SDK. See sdk/python/README.md for the complete signature list.

Bundled component catalog

The wheel ships ~270 pure-Rust + av-core templates. Heavier palettes (GPU, ML, browser, video, window events, ~6,700 API actors) install as .rflpack bundles.

print([t for t in reflow.template_list() if t.startswith("tpl_math_")])

reflow.load_pack("./reflow.pack.ml-0.2.0.rflpack")
infer = reflow.template_actor("tpl_ml_run_inference")

Subgraphs

sub = reflow.SubgraphBuilder(graph_export_json)
sub.register_actor("my_custom", MyCustom())
sub.fill_from_catalog()
net.register_actor("tpl_sub", sub.build())

Streams

stream = net.create_stream(buffer_size=64, content_type="image/jpeg")
producer.emit({"stream": stream})
for frame in stream:
    print(frame["kind"], len(frame.get("data", b"")))

See also

Go SDK

github.com/offbit-ai/reflow/sdk/go — Go 1.21+, links the runtime via cgo.

Unlike the Node / Python / JVM SDKs, the Go module doesn't bundle the native runtime: go get fetches the module source, and the bundled install script then drops the matching libreflow_rt_capi.{so,dylib,dll} next to it.

Install

go get github.com/offbit-ai/reflow/sdk/go@v0.2.1

# After go get, run the bundled installer to fetch the matching native lib:
cd "$(go env GOMODCACHE)/github.com/offbit-ai/reflow/sdk/go@v0.2.1"
./scripts/install_lib.sh v0.2.1

install_lib.sh downloads the per-triple tarball from the sdk/go/v* GitHub Release and unpacks it into lib/<goos>_<goarch>/ and include/, where cgo finds it at compile time.

For repo-local development (you've cloned the monorepo and want to test against your local Rust changes), use scripts/link_dev_lib.sh instead — it symlinks target/<profile>/libreflow_rt_capi.* into the same sdk/go/{lib,include}/ layout.

Hello world

package main

import (
    "fmt"
    "time"
    reflow "github.com/offbit-ai/reflow/sdk/go"
)

type Doubler struct{ reflow.BaseActor }

func newDoubler() *Doubler {
    return &Doubler{BaseActor: reflow.BaseActor{
        ComponentName: "doubler",
        InportsList:   []string{"in"},
        OutportsList:  []string{"out"},
    }}
}

func (d *Doubler) Run(ctx *reflow.ActorContext) error {
    in := ctx.Input("in")
    if in == nil { return nil }
    n, _ := in.AsInteger()
    return ctx.Emit("out", reflow.MessageInteger(n*2))
}

func main() {
    net := reflow.NewNetwork()
    defer net.Close()
    _ = net.RegisterActor("tpl_doubler", newDoubler())
    _ = net.AddNode("a", "tpl_doubler", nil)
    _ = net.AddInitial("a", "in", map[string]any{"type": "Integer", "data": 21}, nil)
    _ = net.Start()
    time.Sleep(200 * time.Millisecond)

    fmt.Println("done")
}

Authoring graphs

g := reflow.NewGraph("demo", false)
defer g.Close()
_ = g.AddNode("a", "tpl_x", nil)
_ = g.AddNode("b", "tpl_y", nil)
_ = g.AddConnection("a", "out", "b", "in", nil)
_ = g.AddGroup("pipe", []string{"a", "b"}, map[string]any{"caption": "pipeline"})
_ = g.RenameNode("a", "alpha")

groupsJSON, _ := g.GroupsJSON()  // []byte; parse with encoding/json

The full graph API is mirrored — see sdk/go/README.md for the complete method list. Read-side methods all return []byte (JSON) so callers pick their own decoder.

Bundled component catalog

ids, _ := reflow.TemplateList()
actor, _ := reflow.TemplateActor("tpl_http_request")
_ = net.RegisterActor("tpl_http_request", actor)

Packs

templates, _ := reflow.LoadPack("./reflow.pack.ml-0.2.0.rflpack")
fmt.Println("loaded:", templates)
infer, _ := reflow.TemplateActor("tpl_ml_run_inference")

See Packs for the full pack workflow.

Subgraphs

b, _ := reflow.NewSubgraphBuilder(exportJSON)
_ = b.RegisterGoActor("my_custom", NewCustom())
_ = b.FillFromCatalog()
sg, _ := b.Build()
_ = net.RegisterActor("tpl_sub", sg)

Streams

s := reflow.NewStream(reflow.StreamOptions{BufferSize: 64, ContentType: "image/jpeg"})
producer.Emit("stream", reflow.MessageStream(s))
reader := s.Reader()
for {
    frame, err := reader.Recv()
    if err != nil { break }
    fmt.Println(frame.Kind, len(frame.Data))
}

See also

JVM SDK (Java + Kotlin)

ai.offbit:reflow — JDK 17+, single fat jar with native libs bundled as classpath resources for darwin / linux / windows.

Install

// Gradle (Kotlin DSL)
dependencies {
    implementation("ai.offbit:reflow:0.2.2")
}
<!-- Maven -->
<dependency>
    <groupId>ai.offbit</groupId>
    <artifactId>reflow</artifactId>
    <version>0.2.2</version>
</dependency>

A JNI loader inside the jar detects the host triple at startup, extracts the matching libreflow_rt_jvm.{dylib,so,dll} to java.io.tmpdir, and System.loads it. No separate native-lib install step.

Hello world (Kotlin DSL)

import ai.offbit.reflow.*

network {
    val doubler = actor {
        component = "doubler"
        inports = listOf("in")
        outports = listOf("out")
        onRun { ctx ->
            val n = parseIntegerInput(ctx.inputsJson(), "in")
            ctx.emit("out", Message.integer(n * 2))
            ctx.done()
        }
    }
    registerActor("tpl_doubler", doubler)
    addNode("d", "tpl_doubler")
    addInitial("d", "in", """{"type":"Integer","data":21}""")
    start()
}

Hello world (Java)

import ai.offbit.reflow.*;

class Doubler implements Actor {
    @Override public String component() { return "doubler"; }
    @Override public List<String> inports() { return List.of("in"); }
    @Override public List<String> outports() { return List.of("out"); }
    @Override public void run(ActorCallContext ctx) {
        long n = parseInteger(ctx.inputsJson(), "in");
        ctx.emit("out", Message.integer(n * 2));
        ctx.done();
    }
}

try (var net = new Network()) {
    net.registerActor("tpl_doubler", new Doubler());
    net.addNode("d", "tpl_doubler");
    net.addInitial("d", "in", "{\"type\":\"Integer\",\"data\":21}");
    net.start();
}

Authoring graphs

val g = Graph("demo")
g.addNode("a", "tpl_x")
 .addNode("b", "tpl_y")
 .addConnection("a", "out", "b", "in")
 .addGroup("pipe", "[\"a\",\"b\"]", "{\"caption\":\"pipeline\"}")
 .renameNode("a", "alpha")

println(g.groupsJson())
println(g.connectionsJson())

The full graph API is mirrored on Graph — see sdk/jvm/README.md for the complete method list. Query methods (*_json) return String; pair with Jackson, Moshi, or kotlinx.serialization to parse.
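For instance, with Jackson (assuming the groups payload is a JSON array; any JSON library works):

import com.fasterxml.jackson.databind.ObjectMapper

val mapper = ObjectMapper()
val groups = mapper.readTree(g.groupsJson())   // generic JsonNode tree, no schema assumed
groups.forEach { println(it) }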

Bundled component catalog

long httpActor = Templates.templateActor("tpl_http_request");
net.registerActor("tpl_http_request", httpActor);
String allTemplates = Templates.templateListJson();

Packs

import ai.offbit.reflow.Packs

Packs.loadPack("./reflow.pack.ml-0.2.0.rflpack")
val infer = Templates.templateActor("tpl_ml_run_inference")
println(Packs.listPacks())

See Packs for the full pack workflow.

Subgraphs

SubgraphBuilder(graphExportJson).use { sub ->
    sub.registerActor("my_custom", MyCustom())
    sub.fillFromCatalog()
    val sg = sub.build()
    net.registerActor("tpl_sub", sg)
}

Streams

val stream = net.createStream(bufferSize = 64, contentType = "image/jpeg")
producer.emit(Message.stream(stream))
stream.reader().use { reader ->
    while (true) {
        val frame = reader.recv() ?: break
        println("${frame.kind} ${frame.data.size}")
    }
}

API reference

See also

C++ SDK

Header-only C++17 RAII wrapper over libreflow_rt_capi. Lives at sdk/cpp/ in the repo. No package manager today — pull the header directly + link the runtime library.

Requirements

  • C++17 compiler (clang ≥ 9, gcc ≥ 9, MSVC 2019+)
  • libreflow_rt_capi.{so,dylib,dll} — install via either the sdk/go/v* GitHub Release tarballs (pre-built per triple) or by building from the monorepo (cargo build -p reflow_rt_capi --release).

Install

# Drop the header tree under your third_party/.
git clone --branch sdk/go/v0.2.1 --depth 1 \
    https://github.com/offbit-ai/reflow third_party/reflow

# Grab the matching native lib for your platform.
VER=0.2.1
TRIPLE=aarch64-apple-darwin
curl -LO https://github.com/offbit-ai/reflow/releases/download/sdk/go/v$VER/reflow-rt-capi-$TRIPLE-v$VER.tar.gz
sudo tar -xzf reflow-rt-capi-$TRIPLE-v$VER.tar.gz -C /usr/local

CMake:

add_subdirectory(third_party/reflow/sdk/cpp)
target_link_libraries(myapp PRIVATE reflow::cpp)

If find_library(reflow_rt_capi) doesn't pick up the runtime automatically:

set(REFLOW_RT_CAPI_LIB "/usr/local/lib/libreflow_rt_capi.dylib")
add_subdirectory(third_party/reflow/sdk/cpp)

Hello world

#include <reflow/reflow.hpp>
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    reflow::Network net;

    auto doubler = reflow::Actor::from_callback(
        "doubler", {"in"}, {"out"},
        [](reflow::Context& ctx) {
            auto in = ctx.input_json("in");
            if (!in) return;
            auto pos = in->find("\"data\":");
            if (pos == std::string::npos) return;
            int64_t n = std::stoll(in->substr(pos + 7));
            ctx.emit("out", reflow::Message::integer(n * 2));
        });

    std::atomic<int64_t> got{0};
    auto collector = reflow::Actor::from_callback(
        "collector", {"in"}, {},
        [&](reflow::Context& ctx) {
            if (auto m = ctx.take_input("in")) {
                auto j = m->as_json();
                auto pos = j.find("\"data\":");
                if (pos != std::string::npos) got = std::stoll(j.substr(pos + 7));
            }
        });

    net.register_actor("tpl_doubler", std::move(doubler));
    net.register_actor("tpl_collector", std::move(collector));
    net.add_node("a", "tpl_doubler");
    net.add_node("b", "tpl_collector");
    net.add_connection("a", "out", "b", "in");
    net.add_initial("a", "in", R"({"type":"Integer","data":21})");
    net.start();

    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    std::cout << "got: " << got.load() << "\n";  // → 42
    net.shutdown();
}

Authoring graphs

reflow::Graph g("demo");
g.add_node("a", "tpl_x");
g.add_node("b", "tpl_y");
g.add_connection("a", "out", "b", "in");
g.add_group("pipe", R"(["a","b"])", R"({"caption":"pipeline"})");
g.rename_node("a", "alpha");

if (auto node = g.get_node_json("alpha")) std::cout << *node << "\n";
std::cout << g.groups_json() << "\n";
std::cout << g.connections_json() << "\n";

std::optional<std::string> for nullable returns (get_node_json, get_connection_json); plain std::string everywhere else. Pick your own JSON parser — nlohmann/json, simdjson, RapidJSON, etc.
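For example, with nlohmann/json (the array shape of groups_json() is an assumption here):

#include <nlohmann/json.hpp>

auto groups = nlohmann::json::parse(g.groups_json());
for (const auto& grp : groups) std::cout << grp.dump() << "\n";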

Error handling

Every C ABI call is checked. Non-OK status throws reflow::Error, which carries the original rfl_status and the runtime's last error string:

try {
    net.add_initial("missing-actor", "in", R"({"type":"Flow"})");
} catch (const reflow::Error& e) {
    std::cerr << e.what() << " status=" << e.status() << "\n";
}

Packs

auto templates = reflow::pack::load("./reflow.pack.ml-0.2.0.rflpack");
auto manifest  = reflow::pack::inspect_json("./reflow.pack.ml-0.2.0.rflpack");
auto loaded    = reflow::pack::list_json();

See Packs for the full pack workflow.

Subgraphs

reflow::SubgraphBuilder sub(graph_export_json);
sub.register_actor("my_custom", std::move(custom_actor));
sub.fill_from_catalog();
auto sg = sub.build();
net.register_actor("tpl_sub", std::move(sg));

ABI lockstep

The C++ wrapper has no version of its own — it tracks the libreflow_rt_capi it links against. Pull the header from the same tag as the runtime tarball you install. reflow::pack::abi_version() returns the runtime's pack-ABI hash so you can sanity-check at startup.
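A startup check might look like this (a sketch, assuming abi_version() returns the hash as an integer; 1380148208 is just the value used elsewhere in these docs, substitute whatever reflow-pack abi prints for your pinned release):

constexpr uint32_t kExpectedPackAbi = 1380148208;  // from your pinned release

if (reflow::pack::abi_version() != kExpectedPackAbi) {
    throw std::runtime_error("reflow runtime ABI mismatch: reinstall the matching tarball");
}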

See also

Actor Packs

A .rflpack is a multi-platform native plugin bundle that publishes additional actor templates into a Reflow runtime at load time. Every SDK can load packs the same way, via a single loadPack(path) call.

The default SDK install is intentionally lightweight — ~270 templates covering data, animation, math, scene, HTTP, and basic media. Heavier palettes (GPU renderers, ML inference, browser automation, ~6,700 SaaS API actors) ship as packs so you only pay for what you use.

Why packs

Without packs, the only way to extend the SDK catalog is to recompile the runtime with new Cargo features and rebuild every per-platform native artifact. Packs make actor catalogs a runtime concern:

  • Smaller default install. No GPU drivers, no LiteRT, no Chromium dependencies in the base SDK.
  • Modular delivery. Ship third-party actor palettes as drop-in .rflpack files.
  • Faster iteration. New actors → new pack → users loadPack(...) without bumping the SDK.
  • Same lifecycle as bundled actors. templateActor("tpl_pack_owned_id") and graph references work identically once a pack is loaded.

Bundle format

A .rflpack is a zip archive:

mypack-0.2.0.rflpack
├── manifest.json
└── lib/
    ├── aarch64-apple-darwin/libmypack.dylib
    ├── x86_64-apple-darwin/libmypack.dylib
    ├── x86_64-unknown-linux-gnu/libmypack.so
    ├── aarch64-unknown-linux-gnu/libmypack.so
    └── x86_64-pc-windows-msvc/mypack.dll

manifest.json declares the pack name, version, supported triples, advertised template ids, and the runtime ABI version it was built against:

{
  "manifest_version": 1,
  "name": "offbit.ml",
  "version": "0.2.0",
  "reflow_pack_abi_version": 1380148208,
  "entrypoint": "reflow_pack_register",
  "targets": {
    "aarch64-apple-darwin":     { "file": "lib/aarch64-apple-darwin/libmypack.dylib" },
    "x86_64-pc-windows-msvc":   { "file": "lib/x86_64-pc-windows-msvc/mypack.dll" }
  },
  "templates": ["tpl_ml_run_inference", "tpl_cv_image_to_tensor", "..."]
}

The host validates the manifest, picks the dylib for the current triple, dlopens it, and asks the pack to register its templates with the runtime. Templates published by a pack are first-class — they participate in templateList(), subgraph resolution, network registration, and so on, identical to bundled templates.

First-party packs

Six are published on every pack-vX.Y.Z GitHub Release:

Pack                        Templates   Pulls in
reflow.pack.browser         1           chromiumoxide
reflow.pack.video_encode    1           openh264
reflow.pack.ml              12          CV ops + LiteRT inference
reflow.pack.gpu             6           wgpu SDF / scene / 2D renderers
reflow.pack.window_events   5           keyboard / mouse / gamepad / touch / window
reflow.pack.api_services    ~6700       generated Slack / Stripe / Jira / Notion / …

Per-pack template inventories live in sdk/packs/README.md.

Loading a pack

Every SDK exposes the same four entry points, named idiomatically:

Operation          Node                Python               Go                  JVM                        C++
Load               loadPack(path)      load_pack(path)      LoadPack(path)      Packs.loadPack(path)       reflow::pack::load(path)
Inspect manifest   inspectPack(path)   inspect_pack(path)   InspectPack(path)   Packs.inspectPack(path)    reflow::pack::inspect_json(path)
List loaded        listPacks()         list_packs()         ListPacks()         Packs.listPacks()          reflow::pack::list_json()
Host ABI version   packAbiVersion()    pack_abi_version()   PackABIVersion()    Packs.packAbiVersion()     reflow::pack::abi_version()

loadPack is idempotent — repeated calls with the same pack name are no-ops and return the previously published template ids.

import { loadPack, templateList, templateActor } from "@offbit-ai/reflow";

const ids = loadPack("./reflow.pack.ml-0.2.0.rflpack");
console.log(ids);                                           // ["tpl_ml_run_inference", …]
console.log(templateList().filter(t => t.startsWith("tpl_ml")));
const infer = templateActor("tpl_ml_run_inference");        // resolves through the pack

Either ship the .rflpack alongside your application binary or download it on first run from a GitHub Release:

VER=0.2.0
curl -LO https://github.com/offbit-ai/reflow/releases/download/pack-v$VER/reflow.pack.ml-$VER.rflpack

ABI lockstep

The pack handshake is toolchain-locked. Every pack stamps a reflow_pack_abi_version value at build time, computed from fnv1a(rustc_verbose_version || PACK_ABI_REVISION). The host loader refuses to dlopen a pack whose ABI doesn't match the SDK release the user installed.
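For reference, 32-bit FNV-1a over the concatenated input looks like this (a sketch of the stamping scheme; the exact byte layout of the input is an assumption):

fn fnv1a(bytes: &[u8]) -> u32 {
    let mut hash: u32 = 0x811c9dc5;            // FNV offset basis
    for &b in bytes {
        hash ^= b as u32;
        hash = hash.wrapping_mul(0x0100_0193); // FNV prime
    }
    hash
}

let rustc_verbose_version = "rustc 1.85.0 ...";   // output of `rustc -vV`
let pack_abi_revision = "1";                      // loader's PACK_ABI_REVISION constant
let stamp = fnv1a(format!("{rustc_verbose_version}{pack_abi_revision}").as_bytes());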

Practically:

  • A pack-v0.2.0 release is paired with node-v0.2.0 / python-v0.2.1 / sdk/jvm/v0.2.0 / sdk/go/v0.2.0. CI builds them from the same workspace revision with the same rustc.
  • Mixing pack and SDK versions (e.g. pack-v0.2.0 with node-v0.3.0) errors at load with pack ABI X != host ABI Y.
  • Third-party packs follow the same rule: the pack author rebuilds when the runtime upgrades. We document reflow_pack_loader::REFLOW_PACK_ABI_VERSION so authors can lock against a specific runtime.

This is the trade-off chosen for v1: Arc<dyn Actor> over the C ABI requires layout agreement, which means same rustc + same reflow_actor crate. A callback-only ABI that survives toolchain mismatches is a future option (see reflow_pack_loader).

See also

Authoring a Pack

A pack is a Rust cdylib crate using reflow_pack_sdk. The SDK provides:

  • The #[reflow_pack] attribute macro that emits the C ABI entrypoints.
  • A safe PackHost API for registering template factories.
  • Re-exports of Actor, Message, ActorContext, etc.

You write Rust actors as you would for any Reflow runtime, then ship the resulting .rflpack from a CI workflow.

Skeleton

# my_pack/Cargo.toml
[package]
name = "my_pack"
version = "0.1.0"
edition = "2024"

[lib]
crate-type = ["cdylib"]

[dependencies]
reflow_pack_sdk = { version = "0.2.0", path = "../reflow/crates/reflow_pack_sdk" }
anyhow = "1"
flume = "0.11"
parking_lot = "0.12"

// my_pack/src/lib.rs
use reflow_pack_sdk::{
    reflow_pack, Actor, ActorBehavior, ActorContext, ActorLoad,
    ActorState, MemoryState, Message, PackHost, Port,
};

use parking_lot::Mutex;
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;

struct EchoActor {
    inports: Port,
    outports: Port,
    load: Arc<ActorLoad>,
}

impl EchoActor {
    fn new() -> Self {
        Self {
            inports: flume::bounded(16),
            outports: flume::bounded(16),
            load: Arc::new(ActorLoad::new(0)),
        }
    }
}

impl Actor for EchoActor {
    fn get_behavior(&self) -> ActorBehavior {
        Box::new(|ctx: ActorContext| -> Pin<
            Box<
                dyn Future<Output = Result<HashMap<String, Message>, anyhow::Error>>
                    + Send + 'static,
            >,
        > {
            Box::pin(async move {
                let payload = ctx.get_payload().clone();
                let input = payload.get("input").cloned().unwrap_or(Message::Flow);
                let mut out = HashMap::new();
                out.insert("output".to_string(), input);
                Ok(out)
            })
        })
    }
    fn get_inports(&self)  -> Port { self.inports.clone() }
    fn get_outports(&self) -> Port { self.outports.clone() }
    fn inport_names(&self)  -> Vec<String> { vec!["input".into()] }
    fn outport_names(&self) -> Vec<String> { vec!["output".into()] }
    fn create_state(&self)  -> Arc<Mutex<dyn ActorState>> {
        Arc::new(Mutex::new(MemoryState::default()))
    }
    fn load_count(&self) -> Arc<ActorLoad> { Arc::clone(&self.load) }
    fn create_instance(&self) -> Arc<dyn Actor> { Arc::new(EchoActor::new()) }
}

#[reflow_pack]
fn register(host: &mut PackHost) {
    host.register("my.pack.echo", || Arc::new(EchoActor::new()));
}

The #[reflow_pack] macro expands register into the two C symbols the loader looks up: reflow_pack_abi_version (returns the host ABI hash) and reflow_pack_register (calls back into the host vtable to register every template factory).
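Conceptually (illustrative only; the real expansion is generated code and its exact signatures may differ):

#[no_mangle]
pub extern "C" fn reflow_pack_abi_version() -> u32 {
    1380148208 // stamped at build time from REFLOW_PACK_ABI_VERSION
}

#[no_mangle]
pub extern "C" fn reflow_pack_register(host: *mut core::ffi::c_void) {
    // calls back into the host vtable to publish each template factory
}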

Reflow.pack.toml

A small companion file that drives the reflow-pack CLI, which assembles the multi-platform .rflpack zip:

[pack]
name = "my.pack.echo"
version = "0.1.0"
description = "Echo actor that copies input → output"
authors = ["Your Name"]
license = "MIT"
templates = ["my.pack.echo"]

# Paths are relative to this file. CI populates one entry per built triple.
[targets.files]
aarch64-apple-darwin = "../../target/release/libmy_pack.dylib"

Build

# 1. Build the cdylib for each triple you want to ship.
cargo build --release -p my_pack --target aarch64-apple-darwin
cargo build --release -p my_pack --target x86_64-unknown-linux-gnu
# … plus any others you target

# 2. Build the packaging CLI.
cargo build --release -p reflow_pack_cli

# 3. Read the host ABI version. The pack must be stamped with the
#    same number, so build the CLI with the SAME rustc as the runtime
#    your users will load this pack into.
target/release/reflow-pack abi
# abi_version = 1380148208
# host_triple = aarch64-apple-darwin

# 4. Bundle.
REFLOW_PACK_ABI_VERSION=1380148208 target/release/reflow-pack build \
    --manifest my_pack/Reflow.pack.toml \
    --out-dir target/packs

# 5. Inspect.
target/release/reflow-pack inspect target/packs/my.pack.echo-0.1.0.rflpack

CI: ship a multi-platform pack

.github/workflows/publish-packs.yml is a concrete example. The pattern:

  1. Matrix-build the cdylib on all five supported runners (mac aarch64 / mac x86_64 / linux x86_64 / linux aarch64 / windows x86_64).
  2. Upload the per-triple cdylib as a GitHub Actions artifact.
  3. Assembly job downloads every artifact, generates a Reflow.pack.toml pointing at all of them, runs reflow-pack build to zip into a single .rflpack, attaches to a Release.

Pin dtolnay/rust-toolchain@stable everywhere so every cdylib in the bundle shares one ABI hash.

Distribution options

  • GitHub Release attachment — easiest; users curl the .rflpack.
  • Internal artifact registry — anything that serves files works (Artifactory, Nexus, S3, GCS).
  • Bundled with your application — drop the .rflpack next to your binary and loadPack(__dirname + "/x.rflpack") at startup.
  • Inside an npm / PyPI package — ship the .rflpack as a data file; your install script runs loadPack on first import.

ABI lockstep — what to communicate to users

A pack tied to runtime ABI version X only loads into SDK releases built against the same X. Make it easy on consumers:

  • Tag your pack with the same vX.Y.Z as the SDK release it targets.
  • Document the supported SDK versions in your README.
  • If you maintain multiple SDK targets (e.g. one for node-v0.2.0 and one for node-v0.3.0), publish two .rflpack files and label them clearly.

When the runtime bumps PACK_ABI_REVISION (vtable shape changes), all packs need to be rebuilt. Watch the reflow_pack_loader/build.rs constant for changes.

Troubleshooting

  • "pack ABI X != host ABI Y" — the pack was built with a different rustc or a different PACK_ABI_REVISION. Rebuild from the same toolchain as the host.
  • "pack has no build for triple T" — the manifest doesn't include the user's platform. Add the missing triple to [targets.files] and re-bundle.
  • "unable to execute patch / no NASM" — Linux cross-compile environments may need apt install patch nasm gcc-aarch64-linux-gnu g++-aarch64-linux-gnu. See publish-packs.yml for the working CI matrix.

See also

Architecture Overview

This document provides a high-level overview of Reflow's architecture, covering its core components, design principles, and system interactions.

System Architecture

Reflow follows a modular, actor-based architecture designed for scalability, reliability, and multi-language support.

graph TB
    subgraph "reflow_server"
        REST[REST API + WebSocket]
        ENG[Execution Engine]
        EB[EventBridge]
        TC[TraceCollector]
        ZIP[ZipSession]
        ZC[Zeal Converter]
    end

    subgraph "reflow_components"
        FC[Flow Control]
        TR[Transforms]
        INT[Integration]
        LG[Logic]
        MD[Media]
        API[API Actors x6,697]
    end

    subgraph "reflow_network"
        NET[Network]
        GR[Graph]
        MSG[Message System]
        CON[Connectors]
    end

    subgraph "External"
        ZEAL[Zeal IDE]
        CLIENT[HTTP / WS Clients]
    end

    CLIENT --> REST
    REST --> ENG
    ENG --> NET
    NET --> CON
    CON --> MSG

    ENG --> EB
    EB --> TC
    EB --> ZIP

    TC -->|HTTP traces| ZEAL
    ZIP -->|WebSocket events| ZEAL
    ZIP -->|Register templates| ZEAL

    ZC --> ENG

Core Components

1. Actor System (reflow_network)

The foundation of Reflow, implementing the actor model for concurrent computation:

  • Actors: Isolated units of computation
  • Messages: Immutable data passed between actors
  • Ports: Communication channels (input/output)
  • Network: Manages actor lifecycle and message routing

2. Script Runtime (reflow_script)

Multi-language execution environment supporting:

  • Deno Runtime: JavaScript/TypeScript execution
  • Python Engine: Python script execution (with optional Docker isolation)
  • WebAssembly: WASM plugin system via Extism
  • Script Context: Execution environment and state management

3. Component Library (reflow_components)

Pre-built, reusable workflow components organized by category:

  • Flow Control: ConditionalBranchActor, SwitchCaseActor, LoopActor
  • Transform: DataTransformActor, DataOperationsActor, inline JS evaluation via rquickjs
  • Integration: HttpRequestActor
  • Logic: RulesEngineActor
  • Media: ImageInputActor, AudioInputActor, VideoInputActor
  • API Actors (feature-gated): 6,697 pre-generated actors across 88 API services (Slack, GitHub, Stripe, etc.)

Script execution (JavaScript, Python, SQL) is handled externally by dynASB — this crate only contains native actors.

4. Execution Server (reflow_server)

The server wraps the engine and components into a deployable node:

  • ExecutionEngine: Creates isolated Network per execution, translates NetworkEvents into EngineEvents
  • EventBridge: Per-execution consumer task forwarding events to TraceCollector + ZipSession
  • ZipSession: Outbound WebSocket connection to Zeal IDE for real-time event streaming and template registration
  • TraceCollector: HTTP-based trace session submission to Zeal's TracesAPI with batched events
  • REST API: Axum-based HTTP + WebSocket API for headless workflow execution
  • Zeal Converter: Translates Zeal workflow format into Reflow graph format

5. Network Layer (reflow_network)

Handles execution and communication:

  • Message Routing: Efficient message delivery via flume channels
  • Graph Management: Workflow topology and execution
  • Connection Management: Inter-actor connectivity via Connector + ConnectionPoint
  • NetworkEvent Stream: ActorStarted, ActorCompleted, ActorFailed, MessageSent, NetworkIdle, NetworkShutdown

Design Principles

Actor Model

Reflow is built on the Actor Model of computation:

#![allow(unused)]
fn main() {
pub trait Actor {
    fn get_behavior(&self) -> ActorBehavior;
    fn get_inports(&self) -> Port;
    fn get_outports(&self) -> Port;
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>>;
}
}

Key Properties:

  • Isolation: No shared state between actors
  • Concurrency: Actors run concurrently
  • Message Passing: Communication via immutable messages
  • Location Transparency: Actors can be local or remote

Immutable Messages

All communication uses immutable message types:

#![allow(unused)]
fn main() {
pub enum Message {
    String(String),
    Integer(i64),
    Float(f64),
    Boolean(bool),
    Array(Vec<Message>),
    Object(HashMap<String, Message>),
    Binary(Vec<u8>),
    Null,
    Error(String),
}
}

Async-First Design

Built on Rust's async/await system using Tokio:

  • Non-blocking I/O operations
  • Efficient resource utilization
  • Scalable concurrent execution
  • Backpressure handling

Execution Model

Actor Lifecycle

stateDiagram-v2
    [*] --> Created
    Created --> Initialized
    Initialized --> Running
    Running --> Processing
    Processing --> Running
    Running --> Stopping
    Stopping --> Stopped
    Stopped --> [*]
    
    Processing --> Error
    Error --> Running
    Error --> Stopping

  1. Creation: Actor instance created with configuration
  2. Initialization: Resources allocated, connections established
  3. Running: Actor ready to process messages
  4. Processing: Executing behavior function on incoming messages
  5. Stopping: Graceful shutdown initiated
  6. Stopped: All resources cleaned up

Message Flow

sequenceDiagram
    participant S as Source Actor
    participant M as Message Bus
    participant T as Target Actor
    
    S->>M: Send Message
    M->>M: Route Message
    M->>T: Deliver Message
    T->>T: Process Message
    T->>M: Send Response
    M->>S: Deliver Response

Graph Execution

Workflows are executed as directed acyclic graphs (DAGs):

  • Topological Ordering: Ensures correct execution sequence (see the sketch below)
  • Parallel Execution: Independent branches run concurrently
  • Backpressure: Prevents resource exhaustion
  • Error Propagation: Failures are handled gracefully
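A minimal sketch of the ordering idea (Kahn's algorithm; this is not reflow_network's actual scheduler, and node names here are arbitrary):

#![allow(unused)]
fn main() {
use std::collections::{HashMap, VecDeque};

fn topo_order(edges: &[(&str, &str)]) -> Vec<String> {
    let mut indegree: HashMap<&str, usize> = HashMap::new();
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(from, to) in edges {
        indegree.entry(from).or_default();
        *indegree.entry(to).or_default() += 1;
        adj.entry(from).or_default().push(to);
    }

    // Nodes with no pending inputs are ready; everything popped in one
    // "wave" could run concurrently.
    let mut ready: VecDeque<&str> = indegree
        .iter()
        .filter(|&(_, &d)| d == 0)
        .map(|(&n, _)| n)
        .collect();

    let mut order = Vec::new();
    while let Some(n) = ready.pop_front() {
        order.push(n.to_string());
        for &next in adj.get(n).into_iter().flatten() {
            let d = indegree.get_mut(next).unwrap();
            *d -= 1;
            if *d == 0 { ready.push_back(next); }
        }
    }
    order
}
}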

Runtime Architecture

Native Runtime (Rust)

Direct Rust implementation for maximum performance:

#![allow(unused)]
fn main() {
struct NativeActor {
    behavior: Box<dyn Fn(ActorContext) -> Pin<Box<dyn Future<Output = Result<HashMap<String, Message>, Error>> + Send>>>,
    // ... other fields
}
}

Script Runtimes

Deno Runtime

  • Sandbox: Secure execution environment
  • Permissions: Fine-grained access control
  • TypeScript: Full TypeScript support
  • NPM: Package ecosystem access

Python Runtime

  • Isolation: Process-level or Docker isolation
  • Libraries: Full Python ecosystem support
  • Async: Async/await support
  • Error Handling: Exception propagation

WebAssembly Runtime

  • Portability: Cross-platform execution
  • Security: Sandboxed execution
  • Performance: Near-native speed
  • Multi-language: Support for multiple source languages

Memory Management

Ownership Model

Follows Rust's ownership principles:

  • Single Ownership: Each value has a single owner
  • Borrowing: Temporary access without ownership transfer
  • Lifetimes: Compile-time memory safety guarantees
  • Reference Counting: Shared ownership where needed

Message Serialization

Efficient serialization for message passing:

#![allow(unused)]
fn main() {
// Compressed serialization for performance
let compressed = compress_message(&message)?;
let serialized = bitcode::serialize(&compressed)?;

// Network transmission
send_over_network(serialized).await?;

// Deserialization
let message = bitcode::deserialize(&received_data)?;
let decompressed = decompress_message(&message)?;
}

Networking Architecture

Local Communication

graph LR
    A1[Actor 1] --> C1[Channel]
    C1 --> A2[Actor 2]
    A2 --> C2[Channel]
    C2 --> A3[Actor 3]

Local Channels:

  • Flume: High-performance async channels
  • Zero-copy: Direct memory access where possible
  • Backpressure: Flow control mechanisms

Distributed Communication

graph TB
    subgraph "Node 1"
        A1[Actor A]
        A2[Actor B]
    end
    
    subgraph "Network Layer"
        N1[Network Bridge]
        N2[Network Bridge]
    end
    
    subgraph "Node 2"
        A3[Actor C]
        A4[Actor D]
    end
    
    A1 --> N1
    N1 --> N2
    N2 --> A3

Network Features:

  • WebSocket: Real-time communication
  • Compression: Efficient data transfer
  • Encryption: Secure communication
  • Discovery: Automatic node discovery

Error Handling

Hierarchical Error Management

graph TD
    A[Actor Error] --> B[Network Error Handler]
    B --> C[Workflow Error Handler]
    C --> D[Application Error Handler]
    
    B --> E[Circuit Breaker]
    C --> F[Retry Logic]
    D --> G[Dead Letter Queue]

Error Strategies:

  • Isolation: Errors don't affect other actors
  • Propagation: Structured error reporting
  • Recovery: Automatic retry and fallback
  • Monitoring: Error tracking and alerting

Security Model

Sandboxing

Each runtime environment provides isolation:

  • Deno: V8 isolates with permission system
  • Python: Process isolation or containerization
  • WASM: Memory-safe execution environment
  • Native: Rust's memory safety guarantees

Permission System

Fine-grained access control:

#![allow(unused)]
fn main() {
pub struct Permissions {
    pub file_system: FileSystemPermissions,
    pub network: NetworkPermissions,
    pub environment: EnvironmentPermissions,
}
}

Performance Characteristics

Throughput

  • Message Rate: >1M messages/second (local)
  • Latency: <1ms (local), <10ms (network)
  • Memory: ~1KB per actor overhead
  • CPU: Scales with core count

Scalability

  • Horizontal: Distribute across machines
  • Vertical: Utilize all CPU cores
  • Elastic: Dynamic resource allocation
  • Backpressure: Graceful degradation under load

Configuration

Runtime Configuration

[actor_system]
thread_pool_size = 8
max_actors_per_node = 10000
message_buffer_size = 1000

[networking]
bind_address = "0.0.0.0:8080"
compression_enabled = true
encryption_enabled = true

[runtimes.deno]
permissions = ["--allow-net", "--allow-read"]
memory_limit = "512MB"

[runtimes.python]
use_docker = false
shared_environment = true

Observability Pipeline

Reflow's observability is built on an event pipeline that connects the execution engine to Zeal IDE:

sequenceDiagram
    participant N as Network
    participant E as ExecutionEngine
    participant EB as EventBridge
    participant TC as TraceCollector
    participant ZS as ZipSession
    participant Z as Zeal IDE

    N->>E: NetworkEvent (ActorCompleted, MessageSent, etc.)
    E->>E: Translate to EngineEvent (add duration, output_size)
    E->>EB: Send via flume channel
    EB->>TC: Forward event
    EB->>ZS: Forward event
    TC->>Z: Submit trace events (HTTP batch)
    ZS->>Z: Emit ZIP event (WebSocket)

EventBridge

One bridge task is spawned per execution. It drains the engine's event channel and forwards to both consumers:

  • TraceCollector: Buffers events (batch size 50), submits them as TraceEvents via Zeal's TracesAPI over HTTP
  • ZipSession: Translates EngineEvents to ZipExecutionEvents and pushes them over WebSocket in real-time
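In shape, the bridge is a small fan-out loop. A minimal sketch (names and signatures are illustrative, not the actual reflow_server task):

#![allow(unused)]
fn main() {
// One task per execution: drain the engine channel, forward to both sinks.
async fn event_bridge<E: Clone>(
    events: flume::Receiver<E>,
    trace_tx: flume::Sender<E>,
    zip_tx: flume::Sender<E>,
) {
    while let Ok(ev) = events.recv_async().await {
        let _ = trace_tx.send_async(ev.clone()).await; // buffered + batched over HTTP
        let _ = zip_tx.send_async(ev).await;           // translated + pushed over WebSocket
    }
}
}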

EngineEvent Types

The engine translates low-level NetworkEvents into rich events with timing and size metadata:

Event            Description
Started          Execution begun
NodeExecuting    Actor began processing (with component name)
ActorCompleted   Actor finished (with duration_ms, output_size, output_connections)
ActorFailed      Actor errored (with error message and connections)
MessageSent      Data transferred between actors (with size in bytes)
Completed        Execution finished (with duration_ms, nodes_executed, nodes_failed)
Failed           Execution failed (with error and optional duration)

See Observability Architecture and Event Types for details.

Extension Points

Custom Actors

#![allow(unused)]
fn main() {
impl Actor for CustomActor {
    fn get_behavior(&self) -> ActorBehavior {
        Box::new(|context| {
            Box::pin(async move {
                // Custom processing logic
                Ok(HashMap::new())
            })
        })
    }
}
}

Custom Runtimes

#![allow(unused)]
fn main() {
#[async_trait]
impl ScriptEngine for CustomEngine {
    async fn init(&mut self, config: &ScriptConfig) -> Result<()>;
    async fn call(&mut self, context: &ScriptContext) -> Result<HashMap<String, Message>>;
    async fn cleanup(&mut self) -> Result<()>;
}
}

Next Steps

For detailed information on specific components:

Actor Model

This document provides an in-depth look at how Reflow implements the Actor Model of computation.

Introduction

The Actor Model is a mathematical model of concurrent computation that treats "actors" as the universal primitives of concurrent computation. In Reflow, actors are isolated computational units that communicate exclusively through message passing.

Core Principles

1. Everything is an Actor

In Reflow's actor system:

  • Data processing units are actors
  • Message routers are actors
  • Database connections are actors
  • Network services are actors

2. Actors Communicate via Messages

#![allow(unused)]
fn main() {
// Messages are immutable and serializable
pub enum Message {
    String(String),
    Integer(i64),
    Float(f64),
    Boolean(bool),
    Array(Vec<Message>),
    Object(HashMap<String, Message>),
    Binary(Vec<u8>),
    Null,
    Error(String),
}
}

3. Actors Have Private State

#![allow(unused)]
fn main() {
pub trait ActorState: Send + Sync + 'static {
    fn as_any(&self) -> &dyn Any;
    fn as_mut_any(&mut self) -> &mut dyn Any;
}

#[derive(Default, Debug, Clone)]
pub struct MemoryState(pub HashMap<String, Value>);
}

4. Actors Process Messages Sequentially

Each actor processes one message at a time, ensuring thread safety without locks.

Actor Implementation

Actor Trait

#![allow(unused)]
fn main() {
pub trait Actor: Send + Sync + 'static {
    /// Defines how the actor processes messages
    fn get_behavior(&self) -> ActorBehavior;
    
    /// Access to input ports
    fn get_inports(&self) -> Port;
    
    /// Access to output ports
    fn get_outports(&self) -> Port;
    
    /// Create the actor's execution process
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>>;
    
    /// Load counting for backpressure (optional)
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> {
        Arc::new(Mutex::new(ActorLoad::new(0)))
    }
}
}

Actor Behavior

The behavior function defines how an actor responds to messages:

#![allow(unused)]
fn main() {
pub type ActorBehavior = Box<
    dyn Fn(ActorContext) -> Pin<Box<dyn Future<Output = Result<HashMap<String, Message>, anyhow::Error>> + Send + 'static>>
        + Send + Sync + 'static,
>;
}

Actor Context

The context provides access to the actor's environment:

#![allow(unused)]
fn main() {
pub struct ActorContext {
    pub payload: ActorPayload,
    pub outports: Port,
    pub state: Arc<Mutex<dyn ActorState>>,
    pub config: HashMap<String, Value>,
    load: Arc<Mutex<ActorLoad>>,
}

impl ActorContext {
    pub fn get_state(&self) -> Arc<Mutex<dyn ActorState>>;
    pub fn get_config(&self) -> &HashMap<String, Value>;
    pub fn get_payload(&self) -> &ActorPayload;
    pub fn get_outports(&self) -> Port;
    pub fn done(&self);
}
}

Actor Types

1. Native Actors

Written directly in Rust for maximum performance:

#![allow(unused)]
fn main() {
use reflow_network::actor::{Actor, ActorBehavior, ActorContext, Port, MemoryState};
use reflow_network::message::Message;
use std::collections::HashMap;

pub struct FilterActor {
    threshold: f64,
    inports: Port,
    outports: Port,
}

impl FilterActor {
    pub fn new(threshold: f64) -> Self {
        Self {
            threshold,
            inports: flume::unbounded(),
            outports: flume::unbounded(),
        }
    }
}

impl Actor for FilterActor {
    fn get_behavior(&self) -> ActorBehavior {
        let threshold = self.threshold;
        
        Box::new(move |context: ActorContext| {
            Box::pin(async move {
                let payload = context.get_payload();
                let mut results = HashMap::new();
                
                if let Some(Message::Float(value)) = payload.get("input") {
                    if *value > threshold {
                        results.insert("output".to_string(), Message::Float(*value));
                    }
                }
                
                Ok(results)
            })
        })
    }
    
    fn get_inports(&self) -> Port { self.inports.clone() }
    fn get_outports(&self) -> Port { self.outports.clone() }
    
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>> {
        // Implementation details...
        todo!()
    }
}
}

2. Script Actors

Execute scripts in various languages:

#![allow(unused)]
fn main() {
use reflow_script::{ScriptActor, ScriptConfig, ScriptRuntime, ScriptEnvironment};

// JavaScript Actor
let js_config = ScriptConfig {
    environment: ScriptEnvironment::SYSTEM,
    runtime: ScriptRuntime::JavaScript,
    source: include_bytes!("script.js").to_vec(),
    entry_point: "process".to_string(),
    packages: None,
};

let js_actor = ScriptActor::new(js_config);
}

3. Component Actors

Pre-built components from the library:

#![allow(unused)]
fn main() {
use reflow_components::flow_control::ConditionalActor;
use reflow_components::data_operations::MapActor;

let conditional = ConditionalActor::new(|msg| {
    if let Message::Integer(n) = msg {
        *n > 0
    } else {
        false
    }
});

let mapper = MapActor::new(|msg| {
    if let Message::Integer(n) = msg {
        Message::Integer(n * 2)
    } else {
        msg.clone()
    }
});
}

Message Passing Semantics

Asynchronous Messaging

Messages are sent asynchronously without blocking:

#![allow(unused)]
fn main() {
// Send message without waiting
outport.send_async(message).await?;

// Receive message when available
let message = inport.recv_async().await?;
}

Message Ordering

  • Messages between the same pair of actors maintain order
  • No global ordering guarantees across different actor pairs
  • Use synchronization actors for coordination when needed

Message Delivery

  • At-most-once: Messages may be lost but never duplicated
  • Best-effort: System attempts delivery but doesn't guarantee it
  • Backpressure: Slow consumers cause senders to block

Actor Lifecycle Management

Creation and Initialization

#![allow(unused)]
fn main() {
// Create actor
let actor = MyActor::new(config);

// Initialize ports and state
let inports = actor.get_inports();
let outports = actor.get_outports();

// Start actor process
let process = actor.create_process();
tokio::spawn(process);
}

Message Processing Loop

#![allow(unused)]
fn main() {
pub fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>> {
    let inports = self.get_inports();
    let behavior = self.get_behavior();
    let state = Arc::new(Mutex::new(MemoryState::default()));
    let outports = self.get_outports();
    
    Box::pin(async move {
        while let Ok(payload) = inports.1.recv_async().await {
            let context = ActorContext::new(
                payload,
                outports.clone(),
                state.clone(),
                HashMap::new(),
                Arc::new(Mutex::new(ActorLoad::new(0))),
            );
            
            match behavior(context).await {
                Ok(result) => {
                    if !result.is_empty() {
                        let _ = outports.0.send_async(result).await;
                    }
                },
                Err(e) => {
                    let error_msg = HashMap::from([
                        ("error".to_string(), Message::Error(e.to_string()))
                    ]);
                    let _ = outports.0.send_async(error_msg).await;
                }
            }
        }
    })
}
}

Termination

Actors terminate when:

  • Input ports are closed (no more messages)
  • Explicit shutdown signal
  • Unrecoverable error occurs

State Management

Actor State Types

#![allow(unused)]
fn main() {
// Simple memory state
let state = MemoryState::default();

// Custom state implementation
struct CounterState {
    count: AtomicU64,
}

impl ActorState for CounterState {
    fn as_any(&self) -> &dyn Any { self }
    fn as_mut_any(&mut self) -> &mut dyn Any { self }
}
}

State Persistence

#![allow(unused)]
fn main() {
// Access state in behavior
fn get_behavior(&self) -> ActorBehavior {
    Box::new(|context: ActorContext| {
        Box::pin(async move {
            let state = context.get_state();
            let mut state_guard = state.lock();
            
            // Read/modify state
            if let Some(memory_state) = state_guard.as_mut_any().downcast_mut::<MemoryState>() {
                memory_state.insert("counter", serde_json::json!(42));
            }
            
            Ok(HashMap::new())
        })
    })
}
}

Error Handling

Actor-Level Errors

#![allow(unused)]
fn main() {
// Return error from behavior
Err(anyhow::anyhow!("Processing failed: {}", reason))

// Handle errors in message processing
match behavior(context).await {
    Ok(result) => send_result(result).await,
    Err(e) => send_error(e).await,
}
}

Error Propagation

#![allow(unused)]
fn main() {
// Error message format
let error_message = HashMap::from([
    ("error".to_string(), Message::Error("Database connection failed".to_string())),
    ("code".to_string(), Message::Integer(500)),
    ("timestamp".to_string(), Message::String(Utc::now().to_rfc3339())),
]);
}

Supervision Strategies

#![allow(unused)]
fn main() {
// Supervisor actor monitors children
struct SupervisorActor {
    children: Vec<ActorRef>,
    restart_policy: RestartPolicy,
}

enum RestartPolicy {
    OneForOne,    // Restart only failed actor
    OneForAll,    // Restart all actors
    RestForOne,   // Restart failed and subsequent actors
}
}

Concurrency and Parallelism

Actor Isolation

  • Each actor runs in isolation
  • No shared mutable state
  • Communication only via messages
  • Thread-safe by design

Parallel Execution

#![allow(unused)]
fn main() {
// Multiple actors can run simultaneously
tokio::spawn(actor1.create_process());
tokio::spawn(actor2.create_process());
tokio::spawn(actor3.create_process());

// Actors on different CPU cores
let rt = tokio::runtime::Builder::new_multi_thread()
    .worker_threads(num_cpus::get())
    .build()?;
}

Load Balancing

#![allow(unused)]
fn main() {
// Round-robin message distribution
struct LoadBalancerActor {
    workers: Vec<Port>,
    current: AtomicUsize,
}

impl LoadBalancerActor {
    fn next_worker(&self) -> &Port {
        let index = self.current.fetch_add(1, Ordering::Relaxed) % self.workers.len();
        &self.workers[index]
    }
}
}

Performance Considerations

Memory Usage

  • Actors have minimal overhead (~1KB per actor)
  • Messages are reference-counted when possible
  • State is lazily allocated

Message Throughput

  • Local messages: >1M messages/second
  • Network messages: 10K-100K messages/second
  • Batch processing for high throughput

Backpressure Handling

#![allow(unused)]
fn main() {
// Check actor load before sending
let load = actor.load_count();
if load.lock().get() > MAX_LOAD {
    // Apply backpressure
    tokio::time::sleep(Duration::from_millis(10)).await;
}
}

Advanced Patterns

Actor Pooling

#![allow(unused)]
fn main() {
struct ActorPool<T: Actor> {
    actors: Vec<T>,
    distributor: LoadBalancerActor,
}

impl<T: Actor> ActorPool<T> {
    pub fn new(size: usize, factory: impl Fn() -> T) -> Self {
        let actors: Vec<T> = (0..size).map(|_| factory()).collect();
        // ... setup distributor
    }
}
}

Hot Swapping

#![allow(unused)]
fn main() {
// Replace actor behavior without stopping
actor.update_behavior(new_behavior).await?;

// Migrate state to new actor version
let old_state = old_actor.get_state();
new_actor.set_state(old_state).await?;
}

Circuit Breaker

#![allow(unused)]
fn main() {
struct CircuitBreakerActor {
    target: ActorRef,
    failure_count: AtomicU32,
    state: AtomicU8, // Open, Closed, HalfOpen
}

impl CircuitBreakerActor {
    fn should_allow_request(&self) -> bool {
        match self.state.load(Ordering::Relaxed) {
            0 => true,  // Closed
            1 => false, // Open
            2 => true,  // HalfOpen
            _ => false,
        }
    }
}
}

Testing Actors

Unit Testing

#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_filter_actor() {
    let actor = FilterActor::new(5.0);
    let behavior = actor.get_behavior();
    
    // Create test context
    let payload = HashMap::from([
        ("input".to_string(), Message::Float(10.0))
    ]);
    
    let context = create_test_context(payload);
    let result = behavior(context).await.unwrap();
    
    assert!(result.contains_key("output"));
}
}

Integration Testing

#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_actor_pipeline() {
    let source = SourceActor::new();
    let filter = FilterActor::new(5.0);
    let sink = SinkActor::new();
    
    // Connect actors
    connect_actors(&source, &filter).await;
    connect_actors(&filter, &sink).await;
    
    // Start pipeline
    let handles = vec![
        tokio::spawn(source.create_process()),
        tokio::spawn(filter.create_process()),
        tokio::spawn(sink.create_process()),
    ];
    
    // Test data flow
    // ... assertions
}
}

Best Practices

Actor Design

  1. Keep actors small and focused - Single responsibility principle
  2. Avoid blocking operations - Use async/await for I/O
  3. Handle errors gracefully - Don't let actors crash
  4. Design for failure - Expect message loss and actor failures

Message Design

  1. Keep messages immutable - Never modify messages after sending
  2. Use appropriate message sizes - Balance between batching and latency
  3. Include context - Messages should carry enough information
  4. Handle malformed messages - Validate input gracefully

State Management

  1. Minimize state - Less state means fewer bugs
  2. Make state serializable - Enable persistence and distribution
  3. Avoid shared state - Each actor owns its state
  4. Design for recovery - State should be reconstructible

Next Steps

Message Passing

This document details Reflow's message passing system, which is the primary communication mechanism between actors.

Message Types

Reflow uses a strongly-typed message system with built-in serialization support:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Serialize, Deserialize, Encode, Decode, PartialEq)]
pub enum Message {
    Flow,
    Event(EncodableValue),
    Boolean(bool),
    Integer(i64),
    Float(f64),
    String(Arc<String>),
    Object(Arc<EncodableValue>),
    Array(Arc<Vec<EncodableValue>>),
    Stream(Arc<Vec<u8>>),
    Encoded(Arc<Vec<u8>>),
    Optional(Option<Arc<EncodableValue>>),
    Any(Arc<EncodableValue>),
    Error(Arc<String>),
}
}

EncodableValue

Reflow uses EncodableValue as a wrapper for complex data types:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Serialize, Deserialize, Encode, Decode, PartialEq, Eq)]
pub struct EncodableValue {
    pub(crate) data: Vec<u8>,
}

impl EncodableValue {
    pub fn new<T: Encode>(value: &T) -> Self {
        Self {
            data: bitcode::encode(value),
        }
    }

    pub fn decode<'a, T: Decode<'a>>(&'a self) -> Option<T> {
        bitcode::decode(&self.data).ok()
    }
}
}

Message Conversion

#![allow(unused)]
fn main() {
use serde_json::Value;

// From JSON values
let msg = Message::from(serde_json::json!(42));

// To JSON values  
let json: Value = message.into();

// Type checking
if let Message::Integer(n) = message {
    println!("Number: {}", n);
}

// Working with EncodableValue - modern approach
let data = serde_json::json!({"key": "value"});
let encodable = EncodableValue::from(data);
let object_msg = Message::object(encodable);

// Create arrays with EncodableValue - modern approach
let array_items = vec![
    EncodableValue::from(serde_json::json!("hello")),
    EncodableValue::from(serde_json::json!(42)),
];
let array_msg = Message::array(array_items);

// Alternative: using helper methods for simple values
let string_msg = Message::string("hello world".to_string());
let int_msg = Message::integer(42);
let bool_msg = Message::boolean(true);
let float_msg = Message::float(3.14);
let error_msg = Message::error("Something went wrong".to_string());
}

Communication Channels

Ports

Ports are the communication endpoints for actors:

#![allow(unused)]
fn main() {
pub type Port = (
    flume::Sender<HashMap<String, Message>>,
    flume::Receiver<HashMap<String, Message>>,
);

// Actor payload format
pub type ActorPayload = HashMap<String, Message>;
}

Channel Properties

  • Asynchronous: Non-blocking send/receive operations
  • Bounded: Configurable buffer sizes for backpressure (example after this list)
  • Multi-producer, Single-consumer: Multiple senders, one receiver per port
  • Type-safe: Compile-time message type checking
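For example, a bounded port (the capacity of 100 is arbitrary here):

#![allow(unused)]
fn main() {
// Senders park asynchronously once 100 payloads are in flight,
// which is how backpressure propagates upstream.
let port: Port = flume::bounded(100);
}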

Message Flow Patterns

Point-to-Point

Direct communication between two actors:

#![allow(unused)]
fn main() {
// Actor A sends to Actor B - using helper method
let message = HashMap::from([
    ("data".to_string(), Message::string("hello".to_string()))
]);
sender.send_async(message).await?;
}

Broadcast

One actor sends to multiple receivers:

#![allow(unused)]
fn main() {
// Using actor macro for broadcast
#[actor(
    BroadcastActor,
    inports::<100>(input),
    outports::<50>(output1, output2, output3)
)]
async fn broadcast_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    if let Some(input_msg) = payload.get("input") {
        // Broadcast to all output ports
        Ok([
            ("output1".to_owned(), input_msg.clone()),
            ("output2".to_owned(), input_msg.clone()),
            ("output3".to_owned(), input_msg.clone()),
        ].into())
    } else {
        Err(anyhow::anyhow!("No input to broadcast"))
    }
}

// Manual implementation for dynamic outputs
struct ManualBroadcastActor {
    inports: Port,
    outports: Port,
    outputs: Vec<flume::Sender<HashMap<String, Message>>>,
    load: Arc<Mutex<ActorLoad>>,
}

impl ManualBroadcastActor {
    async fn broadcast(&self, message: HashMap<String, Message>) -> Result<(), anyhow::Error> {
        for output in &self.outputs {
            output.send_async(message.clone()).await?;
        }
        Ok(())
    }
}
}

Fan-In (Merge)

Multiple actors send to one receiver:

#![allow(unused)]
fn main() {
struct MergeActor {
    inputs: Vec<flume::Receiver<HashMap<String, Message>>>,
    output: flume::Sender<HashMap<String, Message>>,
}

impl MergeActor {
    async fn merge_loop(&self) {
        use futures::stream::{select_all, StreamExt};

        // Merge all input receivers into one stream so every message from
        // every input keeps being polled, not just one message per input.
        let mut merged = select_all(self.inputs.iter().map(|rx| rx.stream()));

        while let Some(message) = merged.next().await {
            let _ = self.output.send_async(message).await;
        }
    }
}
}

Serialization and Transport

Local Serialization

For local communication, messages use efficient in-memory representation:

#![allow(unused)]
fn main() {
// Zero-copy for simple types
let msg = Message::Integer(42); // No allocation

// Reference counting for complex types
let complex = Message::Object(data); // Arc<EncodableValue>, cheap to clone
}

Network Serialization

For distributed communication:

#![allow(unused)]
fn main() {
use bitcode;
use flate2::Compression;

// Compress and serialize
let compressed = compress_message(&message, Compression::default())?;
let bytes = bitcode::serialize(&compressed)?;

// Send over network
network_send(bytes).await?;

// Receive and deserialize
let received = network_receive().await?;
let message = bitcode::deserialize(&received)?;
let decompressed = decompress_message(&message)?;
}

Message Routing

Router Actor

#![allow(unused)]
fn main() {
pub struct RouterActor {
    routes: HashMap<String, flume::Sender<HashMap<String, Message>>>,
    default_route: Option<flume::Sender<HashMap<String, Message>>>,
}

impl RouterActor {
    pub fn route_message(&self, key: &str, message: HashMap<String, Message>) -> Result<()> {
        if let Some(sender) = self.routes.get(key) {
            sender.try_send(message)?;
        } else if let Some(default) = &self.default_route {
            default.try_send(message)?;
        }
        Ok(())
    }
}
}

Content-Based Routing

#![allow(unused)]
fn main() {
impl RouterActor {
    fn route_by_content(&self, message: &HashMap<String, Message>) -> Option<&str> {
        // Route based on message content
        if let Some(Message::String(msg_type)) = message.get("type") {
            match msg_type.as_str() {
                "user_event" => Some("user_handler"),
                "system_event" => Some("system_handler"),
                "error" => Some("error_handler"),
                _ => None,
            }
        } else {
            None
        }
    }
}
}

Error Handling

Error Message Format

#![allow(unused)]
fn main() {
// Standard error message structure
let error_msg = HashMap::from([
    ("error".to_string(), Message::error("Processing failed".to_string())),
    ("code".to_string(), Message::integer(500)),
    ("source".to_string(), Message::string("database_actor".to_string())),
    ("timestamp".to_string(), Message::string(Utc::now().to_rfc3339())),
    ("details".to_string(), Message::object(error_details)),
]);
}

Dead Letter Queue

#![allow(unused)]
fn main() {
pub struct DeadLetterQueue {
    storage: Arc<Mutex<Vec<(String, HashMap<String, Message>)>>>,
    max_size: usize,
}

impl DeadLetterQueue {
    pub async fn store_failed_message(
        &self, 
        reason: String, 
        message: HashMap<String, Message>
    ) {
        let mut storage = self.storage.lock();
        if storage.len() >= self.max_size {
            storage.remove(0); // Remove oldest
        }
        storage.push((reason, message));
    }
}
}

Backpressure Management

Flow Control

#![allow(unused)]
fn main() {
pub struct FlowControlActor {
    input: flume::Receiver<HashMap<String, Message>>,
    output: flume::Sender<HashMap<String, Message>>,
    buffer_size: usize,
    current_load: Arc<AtomicUsize>,
}

impl FlowControlActor {
    async fn process_with_backpressure(&self) {
        while let Ok(message) = self.input.recv_async().await {
            // Check current load
            let load = self.current_load.load(Ordering::Relaxed);
            
            if load > self.buffer_size {
                // Apply backpressure - slow down
                tokio::time::sleep(Duration::from_millis(10)).await;
            }
            
            self.current_load.fetch_add(1, Ordering::Relaxed);
            
            // Process message
            if let Err(flume::TrySendError::Full(message)) = self.output.try_send(message) {
                // Output buffer full: fall back to an async send so the
                // message is not dropped while backpressure applies
                let _ = self.output.send_async(message).await;
            }
            
            self.current_load.fetch_sub(1, Ordering::Relaxed);
        }
    }
}
}

Message Ordering

Ordered Delivery

#![allow(unused)]
fn main() {
pub struct OrderedDeliveryActor {
    sequence_number: AtomicU64,
    expected_sequence: AtomicU64,
    buffer: Arc<Mutex<BTreeMap<u64, HashMap<String, Message>>>>,
}

impl OrderedDeliveryActor {
    fn add_sequence_number(&self, mut message: HashMap<String, Message>) -> HashMap<String, Message> {
        let seq = self.sequence_number.fetch_add(1, Ordering::Relaxed);
        message.insert("sequence".to_string(), Message::Integer(seq as i64));
        message
    }
    
    async fn deliver_in_order(&self, message: HashMap<String, Message>) {
        if let Some(Message::Integer(seq)) = message.get("sequence") {
            let seq = *seq as u64;
            let expected = self.expected_sequence.load(Ordering::Relaxed);
            
            if seq == expected {
                // Deliver immediately
                self.deliver_message(message).await;
                self.expected_sequence.fetch_add(1, Ordering::Relaxed);
                
                // Check buffer for next messages
                self.deliver_buffered_messages().await;
            } else {
                // Buffer out-of-order message
                self.buffer.lock().insert(seq, message);
            }
        }
    }
}
}

Performance Optimization

Message Batching

#![allow(unused)]
fn main() {
use crate::message::{Message, EncodableValue};

pub struct BatchingActor {
    batch_size: usize,
    batch_timeout: Duration,
    current_batch: Vec<HashMap<String, Message>>,
    input: flume::Receiver<HashMap<String, Message>>,
    output: flume::Sender<HashMap<String, Message>>,
}

impl BatchingActor {
    async fn process_with_batching(&mut self) {
        let mut interval = tokio::time::interval(self.batch_timeout);
        
        loop {
            tokio::select! {
                // Receive new message
                Ok(message) = self.input.recv_async() => {
                    self.current_batch.push(message);
                    
                    if self.current_batch.len() >= self.batch_size {
                        self.flush_batch().await;
                    }
                }
                
                // Timeout - flush partial batch
                _ = interval.tick() => {
                    if !self.current_batch.is_empty() {
                        self.flush_batch().await;
                    }
                }
            }
        }
    }
    
    async fn flush_batch(&mut self) {
        if !self.current_batch.is_empty() {
            // Convert to EncodableValue for proper serialization
            let batch_items: Vec<EncodableValue> = self.current_batch
                .drain(..)
                .map(|msg| EncodableValue::from(serde_json::to_value(msg).unwrap()))
                .collect();
            
            let batch = Message::array(batch_items);
            
            let batch_message = HashMap::from([
                ("batch".to_string(), batch)
            ]);
            
            let _ = self.output.send_async(batch_message).await;
        }
    }
}
}

Zero-Copy Optimization

#![allow(unused)]
fn main() {
// Use the Arc-backed Stream variant for binary data; clones share one buffer
let data = vec![1u8, 2, 3, 4];
let message = Message::Stream(std::sync::Arc::new(data));

// Reference counting for large objects
use std::sync::Arc;

struct LargeData {
    content: Vec<u8>,
}

let large_data = Arc::new(LargeData { content: vec![0; 1000000] });
// Pass Arc around instead of cloning large data
}

Message Validation

Schema Validation

#![allow(unused)]
fn main() {
use serde_json::Value;

pub struct MessageValidator {
    schemas: HashMap<String, Value>, // JSON Schema
}

impl MessageValidator {
    pub fn validate_message(
        &self, 
        message_type: &str, 
        message: &HashMap<String, Message>
    ) -> Result<(), ValidationError> {
        if let Some(schema) = self.schemas.get(message_type) {
            let json_value: Value = message.clone().into();
            validate_json_schema(&json_value, schema)?;
        }
        Ok(())
    }
}
}

Type Safety

#![allow(unused)]
fn main() {
// Type-safe message builders
pub struct UserEventBuilder {
    user_id: Option<String>,
    event_type: Option<String>,
    timestamp: Option<String>,
}

impl UserEventBuilder {
    pub fn user_id(mut self, id: String) -> Self {
        self.user_id = Some(id);
        self
    }
    
    pub fn event_type(mut self, event_type: String) -> Self {
        self.event_type = Some(event_type);
        self
    }
    
    pub fn build(self) -> Result<HashMap<String, Message>, BuildError> {
        let user_id = self.user_id.ok_or(BuildError::MissingUserId)?;
        let event_type = self.event_type.ok_or(BuildError::MissingEventType)?;
        
        Ok(HashMap::from([
            ("user_id".to_string(), Message::string(user_id)),
            ("event_type".to_string(), Message::string(event_type)),
            ("timestamp".to_string(), Message::string(Utc::now().to_rfc3339())),
        ]))
    }
}
}

Testing Message Passing

Mock Channels

#![allow(unused)]
fn main() {
use std::collections::VecDeque;
use std::sync::Arc;
use parking_lot::Mutex; // lock() returns the guard directly, matching the calls below

pub struct MockChannel {
    sent_messages: Arc<Mutex<Vec<HashMap<String, Message>>>>,
    responses: Arc<Mutex<VecDeque<HashMap<String, Message>>>>,
}

impl MockChannel {
    pub fn new() -> Self {
        Self {
            sent_messages: Arc::new(Mutex::new(Vec::new())),
            responses: Arc::new(Mutex::new(VecDeque::new())),
        }
    }
    
    pub fn expect_message(&self, message: HashMap<String, Message>) {
        self.responses.lock().push_back(message);
    }
    
    pub fn verify_sent(&self, expected: &HashMap<String, Message>) -> bool {
        self.sent_messages.lock().contains(expected)
    }
}
}

Integration Testing

#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_message_pipeline() {
    let (tx1, rx1) = flume::unbounded();
    let (tx2, rx2) = flume::unbounded();
    
    // Create test actors; the source and sink are driven directly from the
    // test, since run() consumes the actor it is called on
    let source = TestSourceActor::new(tx1);
    let processor = TestProcessorActor::new(rx1, tx2);
    let sink = TestSinkActor::new(rx2);
    
    // Start the processor in the background
    tokio::spawn(processor.run());
    
    // Test message flow
    let test_message = HashMap::from([
        ("data".to_string(), Message::string("test".to_string()))
    ]);
    
    source.send(test_message.clone()).await;
    
    // Verify the message traversed the pipeline
    let received = sink.receive_next().await;
    assert_eq!(received.get("data"), test_message.get("data"));
}
}

Best Practices

Message Design

  1. Keep messages immutable - Never modify after creation
  2. Use appropriate granularity - Not too fine, not too coarse
  3. Include enough context - Messages should be self-contained
  4. Design for evolution - Use versioned message formats (see the sketch below)

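For instance, a versioned format can be as simple as an explicit schema version carried next to the payload (the field names here are illustrative, not a Reflow convention):

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// Carry an explicit schema version so producers and consumers can evolve independently
let event = HashMap::from([
    ("schema_version".to_string(), Message::string("2".to_string())),
    ("user_id".to_string(), Message::string("user_123".to_string())),
    ("event_type".to_string(), Message::string("login".to_string())),
]);

// A consumer branches on the version before decoding the rest of the payload
}
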
Performance

  1. Batch when possible - Reduce overhead
  2. Use appropriate data types - Binary for large data
  3. Implement backpressure - Prevent resource exhaustion
  4. Monitor message rates - Track performance metrics

Error Handling

  1. Use structured errors - Include error codes and context (see the sketch below)
  2. Implement dead letter queues - Don't lose failed messages
  3. Design for retry - Make operations idempotent
  4. Log message failures - Enable debugging

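A minimal sketch of a structured, retryable error routed to a dead letter queue (the error type and channel layout are illustrative, not part of Reflow's API):

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// Hypothetical structured error: machine-readable code plus debugging context
#[derive(Debug, Clone)]
pub struct ProcessingError {
    pub code: String,                     // e.g. "VALIDATION_FAILED"
    pub message: String,                  // human-readable description
    pub retryable: bool,                  // safe to retry if the operation is idempotent
    pub context: HashMap<String, String>, // actor id, port, message id, ...
}

// Instead of dropping a failed message, park it with its error for later replay
fn route_to_dead_letters(
    dead_letters: &flume::Sender<(HashMap<String, Message>, ProcessingError)>,
    failed: HashMap<String, Message>,
    error: ProcessingError,
) {
    let _ = dead_letters.send((failed, error));
}
}
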
Graph System Architecture

Reflow's graph system provides a comprehensive flow-based programming (FBP) foundation for building visual workflow editors, data processing pipelines, and complex computational graphs. The system supports real-time validation, automatic layout, performance analysis, and both native Rust and WebAssembly implementations.

Core Concepts

Graph Structure

A Reflow graph consists of:

  • Nodes: Processing units that represent actors or components
  • Connections: Data flow paths between node ports
  • Ports: Input/output endpoints with typed interfaces
  • Initial Information Packets (IIPs): Static data injected into the graph
  • Groups: Logical collections of related nodes

#![allow(unused)]
fn main() {
use reflow_network::graph::{Graph, GraphNode, GraphConnection, GraphEdge, PortType};
use std::collections::HashMap;

// Create a new graph
let mut graph = Graph::new("MyWorkflow", false, None);

// Add nodes
graph.add_node("source", "DataSource", None);
graph.add_node("processor", "DataProcessor", None);
graph.add_node("sink", "DataSink", None);

// Connect nodes
graph.add_connection("source", "output", "processor", "input", None);
graph.add_connection("processor", "output", "sink", "input", None);
}

Port Type System

Reflow uses a sophisticated type system to ensure data compatibility between connected nodes:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq)]
pub enum PortType {
    Any,                              // Accepts any data type
    Flow,                            // Control flow signals
    Event,                           // Event-driven data
    Boolean,                         // Boolean values
    Integer,                         // Integer numbers
    Float,                          // Floating-point numbers
    String,                         // Text data
    Object(String),                 // Structured objects with schema
    Array(Box<PortType>),          // Arrays of typed elements
    Stream,                        // Streaming data
    Encoded,                       // Binary encoded data
    Option(Box<PortType>),         // Optional values
}
}

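Composite types nest, so typed collections and optional values compose naturally:

#![allow(unused)]
fn main() {
// Composite port types nest via Box
let int_array = PortType::Array(Box::new(PortType::Integer));
let maybe_text = PortType::Option(Box::new(PortType::String));
let user_object = PortType::Object("UserRecord".to_string());
}
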
Type Compatibility

The system automatically validates type compatibility when connections are made:

#![allow(unused)]
fn main() {
// These connections are valid
graph.add_connection("int_source", "out", "float_sink", "in", None); // Integer → Float
graph.add_connection("any_source", "out", "string_sink", "in", None); // Any → String
graph.add_connection("data", "out", "stream", "in", None);            // Any → Stream

// This would be invalid and rejected
// graph.add_connection("string_source", "out", "int_sink", "in", None); // String ↛ Integer
}

Graph Operations

Node Management

#![allow(unused)]
fn main() {
// Add node with metadata
let metadata = HashMap::from([
    ("x".to_string(), json!(100)),
    ("y".to_string(), json!(200)),
    ("description".to_string(), json!("Processes incoming data"))
]);
graph.add_node("processor", "DataProcessor", Some(metadata));

// Update node metadata
graph.set_node_metadata("processor", HashMap::from([
    ("color".to_string(), json!("#ff0000"))
]));

// Remove node (also removes all connections)
graph.remove_node("processor");
}

Connection Management

#![allow(unused)]
fn main() {
// Add connection with metadata
let conn_metadata = HashMap::from([
    ("weight".to_string(), json!(0.8)),
    ("priority".to_string(), json!("high"))
]);
graph.add_connection("source", "data", "sink", "input", Some(conn_metadata));

// Get connection details
if let Some(connection) = graph.get_connection("source", "data", "sink", "input") {
    println!("Connection: {:?}", connection);
}

// Remove specific connection
graph.remove_connection("source", "data", "sink", "input");
}

Initial Information Packets (IIPs)

IIPs allow you to inject static data into the graph at startup:

#![allow(unused)]
fn main() {
use serde_json::json;

// Add configuration data
graph.add_initial(
    json!({"database_url": "postgresql://localhost/mydb"}),
    "database_connector",
    "config",
    None
);

// Add initial data with index for array ports
graph.add_initial_index(
    json!("input_file.txt"),
    "file_reader",
    "filenames",
    0,
    None
);
}

Graph Ports

Expose internal node ports as graph-level interfaces:

#![allow(unused)]
fn main() {
// Add input port to graph
graph.add_inport(
    "data_input",           // External port name
    "processor",            // Internal node
    "input",               // Internal port
    PortType::Any,         // Port type
    None                   // Metadata
);

// Add output port to graph
graph.add_outport(
    "processed_data",      // External port name
    "processor",           // Internal node
    "output",              // Internal port
    PortType::Object("ProcessedData".to_string()),
    None
);
}

Graph Validation

Automatic Validation

The graph system performs continuous validation:

#![allow(unused)]
fn main() {
// Validate entire graph
let validation_result = graph.validate_flow()?;

if !validation_result.cycles.is_empty() {
    println!("Cycles detected: {:?}", validation_result.cycles);
}

if !validation_result.orphaned_nodes.is_empty() {
    println!("Orphaned nodes: {:?}", validation_result.orphaned_nodes);
}

for mismatch in validation_result.port_mismatches {
    println!("Port mismatch: {}", mismatch);
}
}

Cycle Detection

Advanced cycle detection with path tracking:

#![allow(unused)]
fn main() {
// Detect first cycle
if let Some(cycle) = graph.detect_cycles() {
    println!("Cycle found: {:?}", cycle);
}

// Comprehensive cycle analysis
let cycle_analysis = graph.analyze_cycles();
println!("Total cycles: {}", cycle_analysis.total_cycles);
println!("Nodes in cycles: {:?}", cycle_analysis.nodes_in_cycles);
}

Performance Analysis

Parallelism Detection

Identify opportunities for parallel execution:

#![allow(unused)]
fn main() {
let parallelism = graph.analyze_parallelism();

// Parallel branches that can execute simultaneously
for branch in parallelism.parallel_branches {
    println!("Parallel branch: {:?}", branch.nodes);
}

// Pipeline stages for sequential execution
for stage in parallelism.pipeline_stages {
    println!("Stage {}: {:?}", stage.level, stage.nodes);
}
}

Bottleneck Analysis

Find performance bottlenecks:

#![allow(unused)]
fn main() {
let bottlenecks = graph.detect_bottlenecks();

for bottleneck in bottlenecks {
    match bottleneck {
        Bottleneck::HighDegree(node) => {
            println!("High-degree bottleneck at node: {}", node);
        }
        Bottleneck::SequentialChain(chain) => {
            println!("Sequential chain that could be parallelized: {:?}", chain);
        }
    }
}
}

Resource Analysis

Estimate execution requirements:

#![allow(unused)]
fn main() {
let analysis = graph.analyze_for_runtime();

println!("Estimated execution time: {:.2}s", analysis.estimated_execution_time);
println!("Resource requirements: {:?}", analysis.resource_requirements);

for suggestion in analysis.optimization_suggestions {
    match suggestion {
        OptimizationSuggestion::ParallelizableChain { nodes } => {
            println!("Consider parallelizing: {:?}", nodes);
        }
        OptimizationSuggestion::RedundantNode { node, reason } => {
            println!("Redundant node {}: {}", node, reason);
        }
        OptimizationSuggestion::ResourceBottleneck { resource, severity } => {
            println!("Resource bottleneck in {}: {:.1}%", resource, severity * 100.0);
        }
        OptimizationSuggestion::DataTypeOptimization { from, to, suggestion } => {
            println!("Optimize {} → {}: {}", from, to, suggestion);
        }
    }
}
}

Graph Layout

Automatic Layout

The system provides intelligent automatic layout:

#![allow(unused)]
fn main() {
// Calculate optimal positions
let positions = graph.calculate_layout();

for (node_id, position) in positions {
    println!("Node {}: x={:.1}, y={:.1}", node_id, position.x, position.y);
}

// Apply layout to graph metadata
graph.auto_layout()?;
}

Manual Positioning

Set custom node positions:

#![allow(unused)]
fn main() {
// Set specific position
graph.set_node_position("processor", 150.0, 100.0)?;

// Set position with custom dimensions and anchor
let metadata = HashMap::from([
    ("position".to_string(), json!({"x": 200, "y": 150})),
    ("dimensions".to_string(), json!({
        "width": 120,
        "height": 80,
        "anchor": {"x": 0.5, "y": 0.5}  // Center anchor
    }))
]);
graph.set_node_metadata("custom_node", metadata);
}

Event System

Real-time Updates

Subscribe to graph changes:

#![allow(unused)]
fn main() {
use reflow_network::graph::GraphEvents;

// Graph creates event channel automatically
let (sender, receiver) = graph.event_channel;

// Listen for events
while let Ok(event) = receiver.recv() {
    match event {
        GraphEvents::AddNode(node_data) => {
            println!("Node added: {:?}", node_data);
        }
        GraphEvents::AddConnection(conn_data) => {
            println!("Connection added: {:?}", conn_data);
        }
        GraphEvents::RemoveNode(node_data) => {
            println!("Node removed: {:?}", node_data);
        }
        // ... handle other events
        _ => {}
    }
}
}

Event Types

Complete list of graph events:

  • AddNode / RemoveNode / RenameNode / ChangeNode
  • AddConnection / RemoveConnection / ChangeConnection
  • AddInitial / RemoveInitial
  • AddGroup / RemoveGroup / RenameGroup / ChangeGroup
  • AddInport / RemoveInport / RenameInport / ChangeInport
  • AddOutport / RemoveOutport / RenameOutport / ChangeOutport
  • ChangeProperties
  • StartTransaction / EndTransaction / Transaction

Serialization

Export Format

Graphs can be serialized to JSON for storage and interchange:

#![allow(unused)]
fn main() {
// Export to JSON-compatible format
let export = graph.export();
let json_string = serde_json::to_string_pretty(&export)?;

// Load from JSON
let loaded_graph = Graph::load(export, None);
}

Export Structure

{
  "caseSensitive": false,
  "properties": {
    "name": "MyWorkflow",
    "description": "A sample workflow"
  },
  "processes": {
    "source": {
      "id": "source",
      "component": "DataSource",
      "metadata": {"x": 0, "y": 0}
    }
  },
  "connections": [
    {
      "from": {"nodeId": "source", "portId": "output"},
      "to": {"nodeId": "sink", "portId": "input"},
      "metadata": {}
    }
  ],
  "inports": {},
  "outports": {},
  "groups": []
}

WebAssembly Support

Browser Integration

The graph system compiles to WebAssembly for browser usage:

import { Graph } from 'reflow-network';

// Create graph in browser
const graph = new Graph("WebWorkflow", false, {});

// Add nodes and connections
graph.addNode("input", "InputNode", {x: 0, y: 0});
graph.addNode("output", "OutputNode", {x: 200, y: 0});
graph.addConnection("input", "out", "output", "in", {});

// Subscribe to events
graph.subscribe((event) => {
    console.log("Graph event:", event);
});

// Export for persistence
const exported = graph.toJSON();
localStorage.setItem('workflow', JSON.stringify(exported));

TypeScript Support

Full TypeScript definitions are generated:

interface GraphNode {
    id: string;
    component: string;
    metadata?: Map<string, any>;
}

interface GraphConnection {
    from: GraphEdge;
    to: GraphEdge;
    metadata?: Map<string, any>;
    data?: any;
}

type PortType = 
  | { type: "flow" }
  | { type: "event" }
  | { type: "boolean" }
  | { type: "integer" }
  | { type: "float" }
  | { type: "string" }
  | { type: "object", value: string }
  | { type: "array", value: PortType }
  | { type: "stream" }
  | { type: "encoded" }
  | { type: "any" }
  | { type: "option", value: PortType };

Graph History

Undo/Redo System

Track changes for undo/redo functionality:

#![allow(unused)]
fn main() {
// Create graph with history tracking
let (mut graph, mut history) = Graph::with_history();

// Make changes
graph.add_node("test", "TestNode", None);
graph.add_connection("test", "out", "sink", "in", None);

// Undo last change
if let Some(event) = history.undo() {
    // Apply inverse operation
    history.apply_inverse(&mut graph, event)?;
}

// Redo change
if let Some(event) = history.redo() {
    // Reapply operation
    history.apply_event(&mut graph, event)?;
}
}

Advanced Features

Subgraph Analysis

Extract and analyze subgraphs:

#![allow(unused)]
fn main() {
// Get reachable subgraph from a node
if let Some(subgraph) = graph.get_reachable_subgraph("start_node") {
    let analysis = graph.analyze_subgraph(&subgraph);

    println!("Subgraph nodes: {}", analysis.node_count);
    println!("Max depth: {}", analysis.max_depth);
    println!("Is cyclic: {}", analysis.is_cyclic);
    println!("Branching factor: {:.2}", analysis.branching_factor);
}
}

Graph Traversal

Efficient traversal algorithms:

#![allow(unused)]
fn main() {
// Depth-first traversal
graph.traverse_depth_first("start_node", |node| {
    println!("Visiting node: {}", node.id);
})?;

// Breadth-first traversal
graph.traverse_breadth_first("start_node", |node| {
    println!("Processing: {} ({})", node.id, node.component);
})?;
}

Node Groups

Organize nodes into logical groups:

#![allow(unused)]
fn main() {
// Create group
graph.add_group("data_processing", vec!["filter".to_string(), "transform".to_string()], None);

// Add node to existing group
graph.add_to_group("data_processing", "validator");

// Remove from group
graph.remove_from_group("data_processing", "validator");
}

Best Practices

Performance Optimization

  1. Use indexed operations: The graph uses internal indices for O(1) lookups
  2. Batch modifications: Group related changes to minimize event overhead
  3. Validate incrementally: Use targeted validation for better performance
  4. Cache analysis results: Store expensive analysis results when graph is stable

Memory Management

  1. Clean up connections: Always remove connections before removing nodes
  2. Limit history size: Use with_history_and_limit() for bounded memory usage (see the sketch below)
  3. Dispose of event listeners: Unsubscribe from events when no longer needed

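For example (assuming the limit is the maximum number of tracked events):

#![allow(unused)]
fn main() {
// Keep at most the last 100 events in the undo history (limit semantics assumed)
let (mut graph, mut history) = Graph::with_history_and_limit(100);
}
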
Error Handling

  1. Check return values: Most operations return Result types
  2. Validate before execution: Use validation methods before running workflows
  3. Handle cycles gracefully: Implement cycle detection in your workflow runtime
  4. Monitor resource usage: Track memory and CPU usage for large graphs

Integration Examples

Visual Editor Integration

#![allow(unused)]
fn main() {
// In a visual editor, sync UI with graph events
graph.subscribe(|event| {
    match event {
        GraphEvents::AddNode(data) => ui.add_node_widget(data),
        GraphEvents::RemoveNode(data) => ui.remove_node_widget(data.id),
        GraphEvents::AddConnection(data) => ui.draw_connection(data),
        _ => {}
    }
});
}

Workflow Execution

#![allow(unused)]
fn main() {
// Convert graph to executable network
let network = Network::from_graph(&graph)?;

// Execute with runtime
let runtime = Runtime::new();
runtime.execute(network).await?;
}

Distributed Networks

Reflow's distributed network system enables bi-directional communication between separate Reflow instances, allowing you to build scalable, multi-node workflows while maintaining the familiar actor-based programming model.

Overview

The distributed network architecture extends Reflow's local actor model to support remote communication across network boundaries. This enables:

  • Cross-Network Actor Communication: Actors in one Reflow instance can send messages to actors in remote instances
  • Network-Transparent Operation: Remote actors appear as local actors in your workflows
  • Bi-directional Message Flow: Full duplex communication between distributed nodes
  • Automatic Discovery: Networks can discover and register with each other automatically
  • Conflict Resolution: Smart handling of actor name conflicts across networks

Architecture Components

┌─────────────────────────────────────────────────────────────────────┐
│                    Distributed Reflow Network                      │
├─────────────────────────────────────────────────────────────────────┤
│  Instance A (Server)           │  Instance B (Client)               │
│ ┌─────────────────────────────┐ │ ┌─────────────────────────────────┐ │
│ │ Local Network               │ │ │ Local Network                   │ │
│ │ ├─ Actor A1 ─┐              │ │ │ ├─ Actor B1 ─┐                  │ │
│ │ ├─ Actor A2 ─┤              │ │ │ ├─ Actor B2 ─┤                  │ │
│ │ └─ Actor A3 ─┘              │ │ │ └─ Actor B3 ─┘                  │ │
│ └─────────────────────────────┘ │ └─────────────────────────────────┘ │
│            │                    │                    │                │
│ ┌─────────────────────────────┐ │ ┌─────────────────────────────────┐ │
│ │ Network Bridge              │◄─┤ │ Network Bridge                  │ │
│ │ ├─ Discovery Service        │ │ │ ├─ Discovery Service             │ │
│ │ ├─ Message Router           │ │ │ ├─ Message Router                │ │
│ │ ├─ Connection Manager       │ │ │ ├─ Connection Manager            │ │
│ │ └─ Remote Actor Proxy       │ │ │ └─ Remote Actor Proxy            │ │
│ └─────────────────────────────┘ │ └─────────────────────────────────┘ │
│            │                    │                    │                │
│ ┌─────────────────────────────┐ │ ┌─────────────────────────────────┐ │
│ │ Transport Layer             │◄─┤ │ Transport Layer                 │ │
│ │ ├─ WebSocket/TCP Server     │ │ │ ├─ WebSocket/TCP Client          │ │
│ │ ├─ Protocol Handler         │ │ │ ├─ Protocol Handler              │ │
│ │ └─ Serialization            │ │ │ └─ Serialization                 │ │
│ └─────────────────────────────┘ │ └─────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘

Core Components

  1. DistributedNetwork: Main orchestrator that combines local networks with distributed communication
  2. NetworkBridge: Handles all cross-network communication and actor registration
  3. DiscoveryService: Automatic network discovery and registration
  4. MessageRouter: Routes messages between local and remote actors
  5. RemoteActorProxy: Local representatives of remote actors
  6. TransportLayer: WebSocket/TCP communication infrastructure

Basic Setup

Creating a Distributed Network

#![allow(unused)]
fn main() {
use reflow_network::distributed_network::{DistributedNetwork, DistributedConfig};
use reflow_network::network::NetworkConfig;

// Configure the distributed network
let config = DistributedConfig {
    network_id: "main_workflow_engine".to_string(),
    instance_id: "server_001".to_string(),
    bind_address: "0.0.0.0".to_string(),
    bind_port: 8080,
    discovery_endpoints: vec![
        "http://discovery.example.com:3000".to_string()
    ],
    auth_token: Some("secure_token".to_string()),
    max_connections: 100,
    heartbeat_interval_ms: 30000,
    local_network_config: NetworkConfig::default(),
};

// Create and start the distributed network
let mut distributed_network = DistributedNetwork::new(config).await?;
distributed_network.start().await?;
}

Registering Local Actors

#![allow(unused)]
fn main() {
use your_actors::DataProcessorActor;

// Register actors that will be available to remote networks
distributed_network.register_local_actor(
    "data_processor",
    DataProcessorActor::new(),
    Some(HashMap::from([
        ("capability".to_string(), serde_json::Value::String("data_processing".to_string())),
        ("version".to_string(), serde_json::Value::String("1.0.0".to_string())),
    ]))
)?;
}

Connecting to Remote Networks

#![allow(unused)]
fn main() {
// Connect to another network
distributed_network.connect_to_network("192.168.1.100:8080").await?;

// Register a remote actor for local use
distributed_network.register_remote_actor(
    "remote_validator",      // Remote actor ID
    "validation_network"     // Remote network ID
).await?;
}

Actor Communication Patterns

Direct Remote Messaging

#![allow(unused)]
fn main() {
use reflow_network::message::Message;

// Send message to remote actor
distributed_network.send_to_remote_actor(
    "validation_network",    // Target network
    "remote_validator",      // Target actor
    "input",                 // Target port
    Message::String("validate this data".to_string().into())
).await?;
}

Workflow Integration

Remote actors integrate seamlessly into local workflows:

#![allow(unused)]
fn main() {
// Get local network handle
let local_network = distributed_network.get_local_network();
let mut network = local_network.write();

// Add local actor
network.add_node("local_collector", "data_collector", None)?;

// Add remote actor (appears as local)
network.add_node("remote_processor", "remote_validator@validation_network", None)?;

// Connect them in a workflow
network.add_connection(Connector {
    from: ConnectionPoint {
        actor: "local_collector".to_string(),
        port: "output".to_string(),
        ..Default::default()
    },
    to: ConnectionPoint {
        actor: "remote_processor".to_string(),
        port: "input".to_string(),
        ..Default::default()
    },
})?;
}

Network Discovery

Automatic Discovery

The discovery service can automatically find and register remote networks:

#![allow(unused)]
fn main() {
// Enable automatic discovery
let config = DistributedConfig {
    // ... other config
    discovery_endpoints: vec![
        "http://service-discovery.local:3000".to_string(),
        "http://backup-discovery.local:3000".to_string(),
    ],
    // ...
};
}

Manual Network Registration

#![allow(unused)]
fn main() {
// Manually connect to specific networks
let networks_to_connect = vec![
    "analytics.company.com:8080",
    "ml-pipeline.company.com:8080",
    "data-warehouse.company.com:8080",
];

for endpoint in networks_to_connect {
    match distributed_network.connect_to_network(endpoint).await {
        Ok(_) => println!("Connected to {}", endpoint),
        Err(e) => eprintln!("Failed to connect to {}: {}", endpoint, e),
    }
}
}

Conflict Resolution

When multiple networks have actors with the same name, Reflow provides several resolution strategies:

Automatic Aliasing

#![allow(unused)]
fn main() {
// Register remote actor with automatic conflict resolution
let alias = distributed_network.register_remote_actor_with_strategy(
    "data_processor",                    // Remote actor name (conflicts with local)
    "analytics_network",                 // Remote network
    ConflictResolutionStrategy::AutoAlias // Strategy
).await?;

println!("Remote actor available as: {}", alias);
// Output: "Remote actor available as: analytics_network_data_processor"
}

Manual Aliasing

#![allow(unused)]
fn main() {
// Provide custom aliases for clarity
distributed_network.register_remote_actor_with_strategy(
    "validator",
    "security_network",
    ConflictResolutionStrategy::ManualAlias("security_validator".to_string())
).await?;
}

Security Considerations

Authentication

#![allow(unused)]
fn main() {
let config = DistributedConfig {
    // Use authentication tokens
    auth_token: Some("your_secure_token_here".to_string()),
    // ... other config
};
}

Network Isolation

#![allow(unused)]
fn main() {
// Restrict which networks can connect
let config = DistributedConfig {
    // Only allow specific discovery endpoints
    discovery_endpoints: vec![
        "https://trusted-discovery.company.com:3000".to_string()
    ],
    max_connections: 10, // Limit concurrent connections
    // ... other config
};
}

Monitoring and Health Checks

Connection Status

#![allow(unused)]
fn main() {
// Check network health
let bridge_status = distributed_network.get_bridge_status().await?;
println!("Connected networks: {}", bridge_status.connected_networks.len());

for (network_id, status) in &bridge_status.connected_networks {
    println!("  {}: {:?}", network_id, status);
}
}

Heartbeat Monitoring

#![allow(unused)]
fn main() {
let config = DistributedConfig {
    heartbeat_interval_ms: 15000, // 15 second heartbeats
    // ... other config
};
}

Error Handling

Connection Failures

#![allow(unused)]
fn main() {
use reflow_network::distributed_network::DistributedError;

match distributed_network.connect_to_network("unreachable:8080").await {
    Ok(_) => println!("Connected successfully"),
    Err(DistributedError::ConnectionTimeout) => {
        eprintln!("Connection timed out - network may be down");
    },
    Err(DistributedError::AuthenticationFailed) => {
        eprintln!("Authentication failed - check token");
    },
    Err(e) => eprintln!("Other error: {}", e),
}
}

Message Delivery Failures

#![allow(unused)]
fn main() {
// Messages automatically retry with backoff
match distributed_network.send_to_remote_actor(
    "target_network", "target_actor", "input", message
).await {
    Ok(_) => println!("Message sent successfully"),
    Err(e) => {
        eprintln!("Failed to send message: {}", e);
        // Message will be retried automatically
    }
}
}

Performance Considerations

Connection Pooling

#![allow(unused)]
fn main() {
let config = DistributedConfig {
    max_connections: 50, // Adjust based on load
    // ... other config
};
}

Message Batching

Messages are batched automatically for efficiency, and large payloads are compressed in transit:

#![allow(unused)]
fn main() {
// Large messages are automatically compressed
let large_data = Message::Object(/* large JSON object */);
distributed_network.send_to_remote_actor(
    "target_network", "target_actor", "bulk_input", large_data
).await?;
}

Best Practices

Network Design

  1. Use Descriptive Network IDs: Choose meaningful names like analytics_cluster instead of network1
  2. Plan for Conflicts: Use descriptive actor names to minimize naming conflicts
  3. Group Related Services: Co-locate related actors in the same network for efficiency
  4. Design for Failure: Always handle network partitions and connection failures gracefully

Actor Organization

#![allow(unused)]
fn main() {
// Good: Descriptive, specific names
distributed_network.register_local_actor("customer_data_validator", validator, None)?;
distributed_network.register_local_actor("payment_processor", processor, None)?;

// Avoid: Generic names likely to conflict
// distributed_network.register_local_actor("validator", validator, None)?;
// distributed_network.register_local_actor("processor", processor, None)?;
}

Resource Management

#![allow(unused)]
fn main() {
// Always clean up connections
struct DistributedWorkflow {
    network: DistributedNetwork,
}

impl Drop for DistributedWorkflow {
    fn drop(&mut self) {
        // Gracefully shutdown connections
        if let Err(e) = tokio::task::block_in_place(|| {
            tokio::runtime::Handle::current().block_on(self.network.shutdown())
        }) {
            eprintln!("Error during cleanup: {}", e);
        }
    }
}
}

Troubleshooting

Common Issues

  1. Connection Refused: Check firewall settings and ensure target network is running
  2. Authentication Failed: Verify auth tokens match between networks
  3. Actor Not Found: Ensure remote actor is registered and network is connected
  4. Message Timeouts: Check network latency and increase timeout values if needed

Debug Logging

Enable detailed logging for troubleshooting:

#![allow(unused)]
fn main() {
use tracing_subscriber;

// Enable debug logging
tracing_subscriber::fmt()
    .with_max_level(tracing::Level::DEBUG)
    .init();
}

Health Check Endpoint

Networks automatically expose health endpoints:

# Check network health
curl http://your-network:8080/health

# Get network status
curl http://your-network:8080/status

PeerMesh — Server-Side Distributed Orchestration

When running as a Reflow server node connected to Zeal, the PeerMesh manages peer-to-peer connections for distributed execution. It creates one DistributedNetwork per execution and responds to orchestration commands from Zeal.

Architecture

#![allow(unused)]
fn main() {
pub struct PeerMesh {
    networks: RwLock<HashMap<String, DistributedNetwork>>,
    node_id: String,
    bind_address: String,
    base_port: u16,
}
}

The PeerMesh:

  • Creates a DistributedNetwork per execution, each binding on an incrementing port
  • Responds to subgraph.assign commands from Zeal (via ZipSession) to take ownership of subgraph execution
  • Responds to peer.connect commands to establish peer-to-peer links between nodes
  • Tears down per-execution networks on completion

Integration with Zeal

When Zeal orchestrates a distributed workflow:

  1. Zeal sends subgraph.assign to each Reflow node via the ZIP WebSocket
  2. The PeerMesh creates a DistributedNetwork for the assigned execution
  3. Zeal sends peer.connect to establish connections between nodes
  4. The PeerMesh calls connect_peer() to link networks via WebSocket bridges
  5. Remote actors are registered as RemoteActorProxy instances in the local network
  6. On execution completion, teardown_execution() cleans up the distributed network

#![allow(unused)]
fn main() {
// PeerMesh responds to Zeal commands
peer_mesh.connect_peer(execution_id, peer_address).await?;
peer_mesh.register_remote_actor(execution_id, actor_id, network_id).await?;
peer_mesh.teardown_execution(execution_id).await;
}

Distributed Composition Planning

For workflows spanning multiple Reflow nodes, the DistributedComposition system plans execution across network boundaries:

#![allow(unused)]
fn main() {
pub struct DistributedGraphComposition {
    pub local_sources: Vec<GraphSource>,
    pub remote_sources: Vec<RemoteGraphConfig>,
    pub local_connections: Vec<CompositionConnection>,
    pub distributed_connections: Vec<DistributedConnection>,
    pub execution_targets: HashMap<String, String>,  // graph → network_id
}
}

The DistributedNamespaceResolver maps processes to their home networks using qualified names like {network_id}/{namespace}/{process}:

#![allow(unused)]
fn main() {
let mut resolver = DistributedNamespaceResolver::new("local");
resolver.register_local_graph("data_pipeline", &graph)?;
resolver.register_remote_graph("ml_node_1", "ml_pipeline", &remote_graph)?;

// Find edges that cross network boundaries
let cross_edges = resolver.find_cross_network_connections(&connections)?;
}

The planner produces a DistributedCompositionPlan with:

  • local_composition — the graph to execute on this node
  • proxy_actors — ProxyActorSpec entries for actors that proxy to remote networks
  • cross_network_edges — connections requiring proxy bridges
  • remote_executions — graphs delegated to other nodes

SubgraphActor

The SubgraphActor wraps an entire inner Network as a single Actor, allowing a graph to be embedded inside another graph as a composable unit. This is the foundation of hierarchical workflow composition in reflow_network.

Architecture

graph TB
    subgraph "Parent Network"
        EXT_IN[External Inport] --> SA[SubgraphActor]
        SA --> EXT_OUT[External Outport]
    end

    subgraph "SubgraphActor (inner Network)"
        IN_A[Actor A] --> IN_B[Actor B]
        IN_B --> OB[OutportBridge]
    end

    EXT_IN -.->|inport_map| IN_A
    OB -.->|external_sender| EXT_OUT

The parent network treats the SubgraphActor as an opaque actor with external inports and outports. Internally, it manages a full Network with its own actors, connectors, and message routing.

SubgraphActor Struct

#![allow(unused)]
fn main() {
pub struct SubgraphActor {
    inner_network: Arc<Mutex<Network>>,
    inport_map: HashMap<String, (String, String)>,   // ext_port → (actor_id, port_name)
    outport_map: HashMap<String, (String, String)>,   // ext_port → (actor_id, port_name)
    inports: Port,
    outports: Port,
    load: Arc<ActorLoad>,
    shutdown_tx: Arc<tokio::sync::watch::Sender<bool>>,
    shutdown_rx: tokio::sync::watch::Receiver<bool>,
}
}

  • inner_network — the wrapped Network containing all inner actors and connections
  • inport_map / outport_map — maps external port names to (inner_actor_id, inner_port_name) tuples
  • inports / outports — the external ports exposed to the parent network
  • shutdown_tx / shutdown_rx — tokio watch channel for graceful termination

Constructing from GraphExport

The primary constructor from_graph_export() builds a SubgraphActor from a serialized graph:

  1. Creates a new inner Network
  2. Registers all actors from the actors map
  3. Adds nodes and internal connections from the graph export
  4. Inport mapping — maps each external inport to (inner_node_id, inner_port_name) for direct message injection
  5. Outport bridging — for each external outport, creates an OutportBridge actor inside the inner network, connected to the source actor's port via a standard Connector

#![allow(unused)]
fn main() {
let actors: HashMap<String, Arc<dyn Actor>> = /* component instances */;
let subgraph = SubgraphActor::from_graph_export(graph_export, actors)?;
// subgraph implements Actor — add it to a parent Network as any other actor
}

OutportBridge

The OutportBridge is a lightweight internal actor that forwards messages from an inner actor's outport to the SubgraphActor's external outport channel:

#![allow(unused)]
fn main() {
struct OutportBridge {
    external_sender: flume::Sender<Packet>,  // → SubgraphActor outport
    external_port_name: String,
    inner_port_name: String,
    inports: Port,
    outports: Port,
    load: Arc<ActorLoad>,
}
}

Bridge actors are registered inside the inner network with generated IDs (__outport_bridge_{ext_port}) and connected to source actors via standard Connectors. The bridge's create_process() receives packets on its inport, extracts the message by inner_port_name, wraps it with external_port_name, and sends it via external_sender.

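A simplified sketch of that forwarding step (stand-in types; the real bridge operates on Packets inside its create_process() loop):

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// Stand-in for reflow's Message type, for illustration only
enum Message { String(String) }

// Take the message addressed to the inner port and re-emit it under the
// external port name, so it surfaces on the SubgraphActor's outport.
fn forward(
    packet: &mut HashMap<String, Message>,
    inner_port_name: &str,
    external_port_name: &str,
    external_sender: &flume::Sender<(String, Message)>,
) {
    if let Some(msg) = packet.remove(inner_port_name) {
        let _ = external_sender.send((external_port_name.to_string(), msg));
    }
}
}
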
Boundary Mapping

Inbound flow — The create_process() loop receives packets on external inports, looks up the target (actor_id, port) in inport_map, and calls network.send_to_actor() to inject the message into the inner network.

Outbound flow — Inner actors send messages through normal Connectors to OutportBridge actors. Each bridge extracts the message and sends it via external_sender, making it appear on the SubgraphActor's external outport.

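A sketch of the inbound lookup (names follow the description above; exact signatures are assumptions):

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// Resolve an external inport to its inner (actor_id, port_name) target
fn resolve_inbound<'a>(
    inport_map: &'a HashMap<String, (String, String)>,
    external_port: &str,
) -> Option<(&'a str, &'a str)> {
    inport_map
        .get(external_port)
        .map(|(actor, port)| (actor.as_str(), port.as_str()))
}

// e.g. inport_map maps "data_input" to ("processor", "input"); the routing
// loop then calls network.send_to_actor("processor", "input", msg)
}
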
Actor Trait Implementation

The SubgraphActor implements Actor:

  • get_behavior() — returns a no-op closure; all routing is handled in create_process()
  • get_inports() / get_outports() — return the external port pairs
  • create_process() — starts the inner network, then runs an inbound routing loop using tokio::select! with the shutdown signal
  • shutdown() — signals the routing loop to stop and shuts down the inner network

Lifecycle

  1. create_process() starts the inner network (initializing all inner actors, connectors, and OutportBridge actors) and begins the inbound routing loop
  2. The inner network runs its actors concurrently
  3. shutdown() sends true on the watch channel, breaking the routing loop, and calls network.shutdown() (see the sketch below)

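A minimal sketch of that loop using a tokio watch channel (message and routing details elided):

#![allow(unused)]
fn main() {
async fn routing_loop(
    mut shutdown_rx: tokio::sync::watch::Receiver<bool>,
    inbound: flume::Receiver<String>, // stand-in for external inport packets
) {
    loop {
        tokio::select! {
            // shutdown() sends `true` on the watch channel
            _ = shutdown_rx.changed() => {
                if *shutdown_rx.borrow() { break; }
            }
            // Otherwise keep routing inbound packets into the inner network
            Ok(packet) = inbound.recv_async() => {
                let _ = packet; // route via inport_map ...
            }
        }
    }
    // After the loop breaks, the caller shuts down the inner network
}
}
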
Multi-Graph Composition

Reflow's multi-graph composition system enables automatic discovery and intelligent composition of multiple graph files into unified, executable workflows. This system transforms complex multi-graph projects into seamlessly integrated workflows through workspace discovery and intelligent stitching.

Overview

The multi-graph composition architecture provides:

  • Automatic Workspace Discovery: Recursively finds all *.graph.json and *.graph.yaml files in your project
  • Folder-Based Namespacing: Uses directory structure as natural namespaces for organization
  • Smart Auto-Connections: Automatically detects compatible interfaces between graphs
  • Dependency Resolution: Resolves inter-graph dependencies and execution ordering
  • One-Command Composition: Single command transforms entire workspace into executable workflow
  • Interface Analysis: Analyzes graph interfaces for compatibility and suggests connections

Architecture Components

┌─────────────────────────────────────────────────────────────────────┐
│                    Workspace Discovery System                      │
├─────────────────────────────────────────────────────────────────────┤
│ File Discovery Layer                                               │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ *.graph.json    │ *.graph.yaml    │ Pattern Matching │ Filters  │ │
│ │ (data_flow)     │ (ml_pipeline)   │ (glob patterns)  │ (exclude)│ │
│ │ - 3 processes   │ - 5 processes   │ - depth limits   │ - test/  │ │
│ └─────────────────────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────────────┤
│ Namespace & Analysis Layer                                          │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Namespace Mgr   │ Interface Analysis │ Dependency Res │ Auto-Connect │ │
│ │ • Folder-based  │ • Exposed ports    │ • Graph deps   │ • Port match │ │
│ │ • Conflict res  │ • Required ports   │ • Order deps   │ • Confidence │ │
│ │ • Custom rules  │ • Compatibility    │ • Validation   │ • Heuristics │ │
│ └─────────────────────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────────────┤
│ Unified Network Instance                                            │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ data/           │ ml/            │ monitoring/    │ shared/      │ │
│ │ ├─ ingestion/   │ ├─ training/   │ ├─ metrics     │ ├─ logging  │ │
│ │ │  └─ collector │ │  └─ trainer   │ ├─ alerts     │ ├─ auth     │ │
│ │ └─ processing/  │ └─ inference/  │ └─ dashboard   │ └─ config   │ │
│ │    └─ transformer│   └─ predictor│                │              │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘

Core Components

  1. WorkspaceDiscovery: Discovers and loads all graph files in a workspace
  2. GraphLoader: Loads and validates individual graph files
  3. GraphComposer: Orchestrates composition of multiple graphs
  4. NamespaceManager: Manages namespaces and resolves conflicts
  5. DependencyResolver: Analyzes and resolves graph dependencies
  6. InterfaceAnalyzer: Detects compatible interfaces for auto-connections

Workspace Structure

Multi-graph workspaces organize graph files using directory structure as namespaces:

workspace/
├── data/
│   ├── ingestion/
│   │   ├── api_collector.graph.json      → namespace: data/ingestion
│   │   └── file_reader.graph.yaml        → namespace: data/ingestion
│   ├── processing/
│   │   ├── cleaner.graph.json            → namespace: data/processing
│   │   ├── transformer.graph.json        → namespace: data/processing
│   │   └── validator.graph.yaml          → namespace: data/processing
│   └── storage/
│       ├── database_writer.graph.json    → namespace: data/storage
│       └── cache_manager.graph.yaml      → namespace: data/storage
├── ml/
│   ├── training/
│   │   ├── model_trainer.graph.json      → namespace: ml/training
│   │   └── feature_engineer.graph.yaml   → namespace: ml/training
│   ├── inference/
│   │   ├── predictor.graph.json          → namespace: ml/inference
│   │   └── batch_scorer.graph.json       → namespace: ml/inference
│   └── evaluation/
│       └── model_evaluator.graph.yaml    → namespace: ml/evaluation
├── monitoring/
│   ├── metrics.graph.json                → namespace: monitoring
│   ├── alerts.graph.yaml                 → namespace: monitoring
│   └── dashboard.graph.json              → namespace: monitoring
└── shared/
    ├── logging.graph.yaml                 → namespace: shared
    ├── auth.graph.json                    → namespace: shared
    └── config.graph.json                  → namespace: shared

Basic Usage

Simple Workspace Discovery

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::{WorkspaceDiscovery, WorkspaceConfig};

// Configure workspace discovery
let config = WorkspaceConfig {
    root_path: PathBuf::from("./my_workspace"),
    graph_patterns: vec![
        "**/*.graph.json".to_string(),
        "**/*.graph.yaml".to_string(),
    ],
    excluded_paths: vec![
        "**/node_modules/**".to_string(),
        "**/target/**".to_string(),
        "**/test/**".to_string(),
    ],
    max_depth: Some(8),
    namespace_strategy: NamespaceStrategy::FolderStructure,
    ..WorkspaceConfig::default()
};

// Discover all graphs in workspace
let discovery = WorkspaceDiscovery::new(config);
let workspace = discovery.discover_workspace().await?;

println!("🎉 Discovered {} graphs across {} namespaces", 
    workspace.graphs.len(), 
    workspace.namespaces.len()
);
}

Automatic Composition

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{GraphComposer, GraphComposition};

// Create composer and auto-compose workspace
let mut composer = GraphComposer::new();
let composition = GraphComposition::from_workspace(workspace)?;

// Compose into single executable graph
let unified_graph = composer.compose_graphs(composition).await?;

// The unified graph can now be executed as a single workflow
let mut network = Network::new(NetworkConfig::default());
let graph = Graph::load(unified_graph, None);
// Use the composed graph...
}

Graph Dependencies and Interfaces

Declaring Dependencies

Graphs can declare explicit dependencies on other graphs:

{
  "caseSensitive": false,
  "properties": {
    "name": "ml_trainer",
    "namespace": "ml/training",
    "version": "1.0.0"
  },
  "processes": {
    "feature_engineer": {
      "component": "FeatureEngineerActor",
      "metadata": {}
    },
    "model_trainer": {
      "component": "ModelTrainerActor",
      "metadata": {}
    }
  },
  "connections": [
    {
      "from": { "nodeId": "feature_engineer", "portId": "Output" },
      "to": { "nodeId": "model_trainer", "portId": "Input" }
    }
  ],
  "inports": {
    "training_data": {
      "nodeId": "feature_engineer",
      "portId": "Input"
    }
  },
  "outports": {
    "trained_model": {
      "nodeId": "model_trainer",
      "portId": "Output"
    }
  },
  
  "graphDependencies": [
    {
      "graphName": "data_transformer",
      "namespace": "data/processing",
      "versionConstraint": ">=1.0.0",
      "required": true,
      "description": "Requires clean data from transformer"
    }
  ],
  "externalConnections": [
    {
      "connectionId": "transformer_to_trainer",
      "targetGraph": "data_transformer",
      "targetNamespace": "data/processing",
      "fromProcess": "normalizer",
      "fromPort": "Output",
      "toProcess": "feature_engineer",
      "toPort": "Input",
      "description": "Use cleaned data for training"
    }
  ],
  "providedInterfaces": {
    "trained_model_output": {
      "interfaceId": "trained_model_output",
      "processName": "model_trainer",
      "portName": "Output",
      "dataType": "TrainedModel",
      "description": "Trained ML model"
    }
  },
  "requiredInterfaces": {
    "clean_data_input": {
      "interfaceId": "clean_data_input",
      "processName": "feature_engineer",
      "portName": "Input",
      "dataType": "CleanedDataRecord",
      "description": "Clean data from processing pipeline",
      "required": true
    }
  }
}

Interface-Based Connections

Graphs can connect via defined interfaces rather than direct process connections:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::GraphConnectionBuilder;

// Build connections between discovered graphs
let mut connection_builder = GraphConnectionBuilder::new(workspace);

// Connect using interfaces (recommended)
connection_builder
    .connect_interface(
        "data_transformer",     // Source graph
        "clean_data_output",    // Source interface
        "ml_trainer",           // Target graph
        "clean_data_input"      // Target interface
    )?
    .connect_interface(
        "ml_trainer",
        "trained_model_output",
        "ml_predictor",
        "model_input"
    )?;

let connections = connection_builder.build();
}

Namespace Management

Folder-Based Namespacing

By default, directory structure becomes namespace hierarchy:

#![allow(unused)]
fn main() {
// File: data/processing/transformer.graph.json
// Namespace: "data/processing"
// Qualified name: "data/processing/transformer"

// File: ml/training/trainer.graph.json  
// Namespace: "ml/training"
// Qualified name: "ml/training/trainer"
}

Custom Namespace Strategies

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::NamespaceStrategy;

// Semantic-based namespacing
let config = WorkspaceConfig {
    namespace_strategy: NamespaceStrategy::custom(
        "semantic_based", 
        Some(serde_json::json!({
            "keywords": {
                "ml": ["model", "train", "predict", "feature"],
                "data": ["ingest", "collect", "process", "clean"],
                "monitoring": ["metric", "alert", "dashboard", "log"]
            }
        }))
    )?,
    // ... other config
};
}

Conflict Resolution

When graphs have conflicting names, the system provides several resolution strategies:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::NamespaceConflictPolicy;

let namespace_manager = GraphNamespaceManager::new(NamespaceConflictPolicy::AutoResolve);

// Automatic resolution generates unique names:
// "data_processor" -> "data_processor" (first)
// "data_processor" -> "data_processor_1" (second)
// "data_processor" -> "data_processor_2" (third)
}

Advanced Features

Workspace Configuration

# workspace.config.yaml
workspace:
  root_path: "./my_project"
  
  graph_patterns:
    - "**/*.graph.json"
    - "**/*.graph.yaml"
  
  excluded_paths:
    - "**/node_modules/**"
    - "**/target/**" 
    - "**/.git/**"
    - "**/test/**"
  
  max_depth: 8
  
  namespace_strategy:
    type: "folder_structure"
  
  auto_connect: true
  dependency_resolution: "automatic"

composer:
  enable_auto_connections: true
  connection_confidence_threshold: 0.75
  validate_before_compose: true
  output_path: "./workspace.composed.graph.json"

Auto-Connection Discovery

The system can automatically suggest connections between compatible graph interfaces:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::InterfaceAnalyzer;

let analyzer = InterfaceAnalyzer::new();
let suggestions = analyzer.analyze_workspace(&workspace).await?;

for suggestion in suggestions.auto_connections {
    println!("🔗 Suggested connection: {} -> {}",
        suggestion.from_interface, suggestion.to_interface);
    println!("   Confidence: {:.2}", suggestion.confidence);
    println!("   Reason: {}", suggestion.reasoning);
}
}

Dependency Resolution

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::DependencyResolver;

let resolver = DependencyResolver::new();
let ordered_graphs = resolver.resolve_dependencies(&workspace.graphs)?;

println!("📊 Dependency Resolution Order:");
for (i, graph) in ordered_graphs.iter().enumerate() {
    println!("  {}. {}", i + 1, graph.get_name());
}
}

Programmatic API

Workspace Discovery API

#![allow(unused)]
fn main() {
// Programmatic workspace discovery
let mut discovery = WorkspaceDiscovery::new(config);

// Custom filtering
discovery.add_filter(|path: &Path| -> bool {
    // Only include graphs with "production" in the name
    path.to_string_lossy().contains("production")
});

// Custom namespace generation
discovery.set_namespace_generator(|path: &Path| -> String {
    // Custom logic for namespace generation
    if path.to_string_lossy().contains("critical") {
        format!("critical/{}", path.parent().unwrap().file_name().unwrap().to_string_lossy())
    } else {
        path.parent().unwrap().to_string_lossy().to_string()
    }
});

let workspace = discovery.discover_workspace().await?;
}

Dynamic Graph Loading

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::GraphSource;

// Load graphs from different sources
let sources = vec![
    GraphSource::JsonFile("./graphs/processor.graph.json".to_string()),
    GraphSource::NetworkApi("http://config-server/graphs/ml_model".to_string()),
    GraphSource::JsonContent(json_string),
];

let loader = GraphLoader::new();
let graphs = loader.load_multiple_graphs(sources).await?;
}

Custom Graph Composition

#![allow(unused)]
fn main() {
// Custom composition logic
let composition = GraphComposition {
    sources: workspace.graph_sources(),
    connections: vec![
        CompositionConnection {
            from: CompositionEndpoint {
                process: "data/processing/cleaner".to_string(),
                port: "Output".to_string(),
                index: None,
            },
            to: CompositionEndpoint {
                process: "ml/training/trainer".to_string(),
                port: "Input".to_string(),
                index: None,
            },
            metadata: Some(HashMap::from([
                ("priority".to_string(), serde_json::Value::String("high".to_string())),
            ])),
        }
    ],
    shared_resources: vec![
        SharedResource {
            name: "logger".to_string(),
            component: "LoggerActor".to_string(),
            metadata: Some(HashMap::from([
                ("level".to_string(), serde_json::Value::String("info".to_string())),
            ])),
        }
    ],
    properties: HashMap::from([
        ("name".to_string(), serde_json::Value::String("workspace_composition".to_string())),
        ("version".to_string(), serde_json::Value::String("1.0.0".to_string())),
    ]),
    case_sensitive: Some(false),
    metadata: None,
};
}

Command Line Interface

Discovery Commands

# Discover all graphs in workspace
reflow workspace discover --path ./my_project

# Output discovery results
reflow workspace discover --path ./my_project --output workspace.json

# Analyze workspace structure and dependencies
reflow workspace analyze --path ./my_project --output analysis.json

# List discovered graphs and namespaces
reflow workspace list --path ./my_project --format table

Composition Commands

# Auto-compose with high confidence connections
reflow workspace compose \
    --path ./my_project \
    --auto-connect \
    --confidence-threshold 0.8 \
    --validate \
    --output workspace.composed.graph.json

# Use configuration file
reflow workspace compose --config workspace.config.yaml

# Validate workspace before composition
reflow workspace validate --path ./my_project

Export Commands

# Export enhanced graph schemas
reflow workspace export --path ./my_project --enhanced --output enhanced_graphs/

# Generate workspace documentation
reflow workspace docs --path ./my_project --output docs/

Best Practices

Graph Organization

  1. Use Descriptive Namespaces: Organize graphs logically by function, not just technology
  2. Define Clear Interfaces: Use provided/required interfaces for loose coupling
  3. Minimize Dependencies: Reduce inter-graph dependencies for flexibility
  4. Version Your Graphs: Include version information for dependency management

Directory Structure

my_project/
├── core/                    # Core business logic graphs
│   ├── user_management/
│   ├── order_processing/
│   └── payment_handling/
├── integrations/            # External system integrations
│   ├── crm_sync/
│   ├── analytics_export/
│   └── notification_service/
├── pipelines/              # Data processing pipelines
│   ├── etl/
│   ├── ml_training/
│   └── reporting/
└── utilities/              # Shared utility graphs
    ├── logging/
    ├── monitoring/
    └── configuration/

Interface Design

{
  "providedInterfaces": {
    "user_data": {
      "interfaceId": "user_data",
      "processName": "user_processor",
      "portName": "Output",
      "dataType": "UserRecord",
      "description": "Processed user data with validation",
      "required": false,
      "metadata": {
        "schema_version": "1.2.0",
        "format": "json",
        "compression": "none"
      }
    }
  },
  "requiredInterfaces": {
    "raw_user_input": {
      "interfaceId": "raw_user_input",
      "processName": "user_processor", 
      "portName": "Input",
      "dataType": "RawUserData",
      "description": "Raw user data for processing",
      "required": true,
      "metadata": {
        "max_size": "10MB",
        "format": "json"
      }
    }
  }
}

Error Handling

Discovery Errors

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::DiscoveryError;

match discovery.discover_workspace().await {
    Ok(workspace) => {
        // Process workspace
    },
    Err(DiscoveryError::GlobError(e)) => {
        eprintln!("Pattern matching error: {}", e);
    },
    Err(DiscoveryError::LoadError(path, e)) => {
        eprintln!("Failed to load {}: {}", path.display(), e);
    },
    Err(DiscoveryError::ValidationError(e)) => {
        eprintln!("Graph validation failed: {}", e);
    },
    Err(e) => eprintln!("Discovery error: {}", e),
}
}

Composition Errors

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::CompositionError;

match composer.compose_graphs(composition).await {
    Ok(graph) => {
        // Use composed graph
    },
    Err(CompositionError::DependencyError(e)) => {
        eprintln!("Dependency resolution failed: {}", e);
    },
    Err(CompositionError::NamespaceError(e)) => {
        eprintln!("Namespace conflict: {}", e);
    },
    Err(e) => eprintln!("Composition error: {}", e),
}
}

Performance Considerations

Large Workspaces

#![allow(unused)]
fn main() {
// Optimize for large workspaces
let config = WorkspaceConfig {
    max_depth: Some(6),  // Limit directory traversal depth
    excluded_paths: vec![
        "**/node_modules/**".to_string(),
        "**/target/**".to_string(),
        "**/.git/**".to_string(),
        "**/build/**".to_string(),
        "**/dist/**".to_string(),
    ],
    // ... other config
};
}

Parallel Loading

#![allow(unused)]
fn main() {
// Discovery automatically parallelizes graph loading
let workspace = discovery.discover_workspace().await?;
// Graphs are loaded concurrently for better performance
}

Caching

#![allow(unused)]
fn main() {
// Enable caching for repeated discoveries
let config = WorkspaceConfig {
    cache_discoveries: true,
    cache_ttl_seconds: Some(300), // 5 minutes
    // ... other config
};
}

Distributed Composition

When a multi-graph workspace spans multiple Reflow nodes, the DistributedGraphComposition system extends local composition with cross-network awareness.

DistributedGraphComposition

#![allow(unused)]
fn main() {
pub struct DistributedGraphComposition {
    pub local_sources: Vec<GraphSource>,
    pub remote_sources: Vec<RemoteGraphConfig>,
    pub local_connections: Vec<CompositionConnection>,
    pub distributed_connections: Vec<DistributedConnection>,
    pub properties: HashMap<String, Value>,
    pub execution_targets: HashMap<String, String>,  // graph → network_id
}
}

Remote sources describe graphs fetched from other networks:

#![allow(unused)]
fn main() {
pub struct RemoteGraphConfig {
    pub network_id: String,
    pub graph_name: String,
    pub execution_target: Option<String>,  // where the graph executes
}
}

Distributed Connections

Cross-network connections use DistributedEndpoints with an optional network_id (None = local):

#![allow(unused)]
fn main() {
pub struct DistributedConnection {
    pub from: DistributedEndpoint,
    pub to: DistributedEndpoint,
    pub metadata: Option<HashMap<String, Value>>,
}

pub struct DistributedEndpoint {
    pub network_id: Option<String>,  // None = local
    pub process: String,             // "namespace/process"
    pub port: String,
    pub index: Option<usize>,
}
}

Namespace Resolution

The DistributedNamespaceResolver maps every process to its home network using qualified names {network_id}/{namespace}/{process}:

#![allow(unused)]
fn main() {
let mut resolver = DistributedNamespaceResolver::new("local");
resolver.register_local_graph("data_pipeline", &graph)?;
resolver.register_remote_graph("gpu_cluster", "ml", &remote_graph)?;

// Detect connections that cross network boundaries
let cross_edges = resolver.find_cross_network_connections(&connections)?;
}

Each CrossNetworkEdge records from_network, to_network, and the port details, plus a proxy_actor_name() method that generates a name like "ml/trainer@gpu_cluster".
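
For orientation, here is a minimal sketch of what such an edge record could look like; the field and method names follow the description above, but the exact definition in reflow_network may differ:

#![allow(unused)]
fn main() {
// Illustrative shape only, inferred from the description above;
// not the crate's exact definition.
pub struct CrossNetworkEdge {
    pub from_network: String,
    pub to_network: String,
    pub from_process: String,
    pub from_port: String,
    pub to_process: String,
    pub to_port: String,
}

impl CrossNetworkEdge {
    // Produces a proxy name like "ml/trainer@gpu_cluster"
    pub fn proxy_actor_name(&self) -> String {
        format!("{}@{}", self.to_process, self.to_network)
    }
}
}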

Composition Planning

plan_distributed_composition() produces a DistributedCompositionPlan:

#![allow(unused)]
fn main() {
pub struct DistributedCompositionPlan {
    pub local_composition: GraphComposition,
    pub proxy_actors: Vec<ProxyActorSpec>,
    pub cross_network_edges: Vec<CrossNetworkEdge>,
    pub remote_executions: HashMap<String, String>,
}
}

The planner:

  1. Identifies cross-network edges from distributed connections
  2. Creates ProxyActorSpec entries for each unique remote target
  3. Rewrites local connections to route through proxy actors
  4. Builds the local composition with proxy-aware wiring
  5. Tracks which graphs are delegated to remote nodes

Each proxy spec describes a local stand-in for a remote actor:

#![allow(unused)]
fn main() {
pub struct ProxyActorSpec {
    pub proxy_name: String,         // e.g., "ml/trainer@gpu_cluster"
    pub remote_network_id: String,
    pub remote_actor_id: String,
}
}

At execution time, execute_distributed_plan() materializes proxy specs into RemoteActorProxy instances (30s forward timeout) that bridge messages through the NetworkBridge WebSocket layer.
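
Putting the two calls together, a hedged end-to-end sketch (only the function and type names are taken from this section; the argument lists are assumptions):

#![allow(unused)]
fn main() {
// Sketch: plan, inspect, then execute. Exact signatures may differ.
let plan = plan_distributed_composition(&composition, &resolver)?;

// See which graphs will be delegated to remote nodes
for (graph, network_id) in &plan.remote_executions {
    println!("{} runs on {}", graph, network_id);
}

// Materializes ProxyActorSpec entries into RemoteActorProxy instances
// and starts the local composition with proxy-aware wiring.
let network = execute_distributed_plan(plan).await?;
}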

Next Steps

dynASB — Lightweight Actor FaaS

Reflow integrates with dynASB as a lightweight Function-as-a-Service (FaaS) backend for remote script actor execution. Script actors are deployed as functions into dynASB microVMs and communicate with Reflow over WebSocket JSON-RPC.

Overview

sequenceDiagram
    participant R as Reflow Engine
    participant C as DynASBClient
    participant D as dynASB API
    participant VM as microVM
    participant WS as WebSocket

    R->>C: deploy(name, runtime, code)
    C->>D: POST /api/v1/functions
    D->>VM: Boot microVM
    C->>D: GET /api/v1/functions/{id} (poll)
    D-->>C: status: Ready

    R->>C: create_actor(func, metadata)
    C->>WS: Connect ws://{ws_url}/{function_id}
    C-->>R: WebSocketScriptActor

    R->>WS: JSON-RPC "process" call
    WS->>VM: Execute handler
    VM-->>WS: Result
    WS-->>R: Response

    R->>C: undeploy(function_id)
    C->>D: POST /api/v1/functions/{id}/undeploy
    D->>VM: Shutdown

Configuration

#![allow(unused)]
fn main() {
pub struct DynASBConfig {
    pub api_url: String,    // e.g., "http://localhost:8080"
    pub ws_url: String,     // e.g., "ws://localhost:8080/ws"
    pub redis_url: String,  // For actor state persistence
}
}

Deployment Lifecycle

1. Deploy

The DynASBClient deploys script code to dynASB via its REST API:

#![allow(unused)]
fn main() {
let client = DynASBClient::new(DynASBConfig {
    api_url: "http://localhost:8080".into(),
    ws_url: "ws://localhost:8080/ws".into(),
    redis_url: "redis://localhost:6379".into(),
});

let func = client.deploy(
    "my_transform",             // function name
    "javascript",               // runtime
    "export function handler(input) { ... }", // code
    "handler",                  // entry point
    None,                       // optional dependencies
    Some(30),                   // timeout_seconds
).await?;
}

This sends a POST /api/v1/functions request. dynASB boots a microVM for the function and returns a DynASBFunction handle:

#![allow(unused)]
fn main() {
pub struct DynASBFunction {
    pub function_id: String,
    pub name: String,
    pub runtime: String,
    pub status: DeploymentStatus,
    pub deployment_time_ms: u64,
    pub vm_id: Option<String>,
}
}

2. Health Check & Readiness

After deployment, the microVM needs time to boot. Poll until ready:

#![allow(unused)]
fn main() {
let status = client.wait_until_ready(
    &func.function_id,
    Duration::from_secs(60),   // timeout
    Duration::from_millis(500), // poll interval
).await?;
}

This calls GET /api/v1/functions/{id} in a loop, checking the DeploymentStatus:

| Status | Meaning |
|---|---|
| Deploying | Function deployed, microVM booting |
| Ready | VM ready, health check passed |
| Unhealthy | Health check failed or function errored |
| Stopping | Function is being undeployed |
| Stopped | Function has been removed |


Terminal states (Unhealthy, Stopped) cause wait_until_ready to return an error immediately.

3. Create Actor

Once ready, create a WebSocketScriptActor that communicates with the function over JSON-RPC 2.0:

#![allow(unused)]
fn main() {
let actor = client.create_actor(&func, script_metadata).await?;
// actor implements Actor — register it in the network
}

Under the hood this:

  1. Constructs a WebSocket URL: {ws_url}/{function_id}
  2. Creates a WebSocketRpcClient pointing to that URL
  3. Wraps it in a WebSocketScriptActor with the script metadata and Redis URL for state persistence

The actor communicates with the microVM using JSON-RPC 2.0 "process" method calls — the same protocol used by all WebSocket script actors in Reflow.
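
For illustration, a request/response exchange might look like the following; the envelope is standard JSON-RPC 2.0, while the params/result payload shapes are assumptions:

#![allow(unused)]
fn main() {
// Illustrative frames only. Reflow sends the request; the microVM replies.
let request = json!({
    "jsonrpc": "2.0", "id": 1, "method": "process",
    "params": { "input": { "value": 42 } }   // payload shape assumed
});
let response = json!({
    "jsonrpc": "2.0", "id": 1,
    "result": { "output": { "value": 84 } }  // payload shape assumed
});
}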

4. Undeploy

When done, remove the function:

#![allow(unused)]
fn main() {
client.undeploy(&func.function_id).await?;
// or undeploy everything:
client.undeploy_all().await?;
}

This sends POST /api/v1/functions/{id}/undeploy to shut down the microVM.

5. Automatic Cleanup

The DynASBClient implements Drop — if any functions remain deployed when the client is dropped, a background task spawns to undeploy them all:

#![allow(unused)]
fn main() {
impl Drop for DynASBClient {
    fn drop(&mut self) {
        if !self.deployed.is_empty() {
            let ids: Vec<String> = self.deployed.keys().cloned().collect();
            let api_url = self.config.api_url.clone();
            let http = self.http.clone();
            tokio::spawn(async move {
                for id in ids {
                    let url = format!("{}/api/v1/functions/{}/undeploy", api_url, id);
                    let _ = http.post(&url).send().await;
                }
            });
        }
    }
}
}

Deployment Metadata

deployment_metadata() produces a metadata map whose keys are prefixed with dynasb., suitable for injection into GraphNode metadata:

#![allow(unused)]
fn main() {
let meta = client.deployment_metadata(&func.function_id);
// Keys: dynasb.function_id, dynasb.name, dynasb.runtime,
//       dynasb.status, dynasb.vm_id, dynasb.deployment_time_ms
}

This allows the execution engine and observability pipeline to track which actors are running on dynASB microVMs.
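
A sketch of injecting those keys into a node; it assumes GraphNode exposes a mutable metadata map, which is an assumption about the field shape:

#![allow(unused)]
fn main() {
// Hypothetical wiring: copy dynASB metadata onto the hosting GraphNode.
let meta = client.deployment_metadata(&func.function_id);
for (key, value) in meta {
    node.metadata.insert(key, value);  // e.g., "dynasb.function_id" → "..."
}
}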

Integration with Reflow

dynASB serves as the remote execution backend for script actors that need isolation or dedicated resources. The integration point is the WebSocketScriptActor, which is the same actor type used for all WebSocket-based script runtimes — dynASB simply provides the deployment and lifecycle management layer on top.

[Script Discovery] → [DynASBClient.deploy()] → [microVM boots]
                   → [DynASBClient.create_actor()] → [WebSocketScriptActor]
                   → [Network registers actor] → [JSON-RPC "process" calls]
                   → [DynASBClient.undeploy()] → [microVM shuts down]

Next Steps

AssetDB — Entity-Component-System Data Store

AssetDB is Reflow's persistent, queryable world state. It replaces in-memory actor state for anything that needs to be shared across systems, inspected by tools, or survive workflow restarts.

Design Principles

  1. Entities are data, not actors. An entity's physics, camera, material — these are JSON components in the DB, not Rust structs.
  2. Systems are DAG actors. The DAG wires which systems run, on which entities, in what order.
  3. Any tool can query. Zeal editor, debug inspector, Python scripts, unit tests — all read/write the same DB. Not coupled to Reflow.
  4. Explicit over magic. The DAG shows the flow. No hidden subscriptions (except opt-in :bind).

Entity-Component Model

An entity is a name prefix. A component is a type suffix. The ID convention is entity:component.

player:transform    → { "position": [0, 1, 0], "rotation": [0, 0, 0, 1] }
player:rigidbody    → { "bodyType": "dynamic", "mass": 80 }
player:collider     → { "shape": "capsule", "radius": 0.3, "height": 1.8 }
player:mesh         → <binary 24-byte stride>
player:material     → { "albedo": [0.8, 0.2, 0.1], "roughness": 0.5 }

sun:light           → { "type": "directional", "color": [1, 1, 0.9] }
main:camera         → { "mode": "thirdPerson", "target": "player", "fov": 60 }

API

Put / Get (entity-style)

#![allow(unused)]
fn main() {
let db = AssetDB::open("./game.db")?;    // FileBackend (native)
let db = AssetDB::in_memory()?;           // MemoryBackend (wasm/testing)

// Put binary data (mesh, texture)
db.put("snake:mesh", &mesh_bytes, json!({"stride": 24}))?;

// Put JSON data (transform, material, config)
db.put_json("player:transform", json!({"position": [0, 1, 0]}), json!({}))?;

// Get
let asset = db.get("snake:mesh")?;
let data: Vec<u8> = asset.data;
let meta: Value = asset.entry.metadata;

// Check existence
db.has("player:rigidbody");  // true/false
}

Component Access (ECS-style)

#![allow(unused)]
fn main() {
// Set component on entity
db.set_component_json("player", "transform", json!({...}), json!({}))?;

// Get component
let tf = db.get_component("player", "transform")?;

// List all components on an entity
db.components_of("player")?;           // → ["transform", "rigidbody", "mesh"]

// Find entities with specific components
db.entities_with(&["rigidbody", "transform"])?;  // → ["player", "enemy_1"]

// Entity snapshot (all components as JSON)
db.entity_snapshot("player")?;
// → { "transform": {...}, "rigidbody": {...}, "material": {...} }

// Spawn from template
db.spawn_from("crate_template", "crate_42")?;

// Destroy entity (removes all components)
db.destroy_entity("crate_42")?;
}

Tags

#![allow(unused)]
fn main() {
db.tag("sword:mesh", &["weapon", "melee"])?;

db.query_dsl(&json!({"tags": ["weapon"]}))?;              // has ANY tag
db.query_dsl(&json!({"tags": {"$all": ["weapon", "melee"]}}))?;  // has ALL tags
}

Query DSL

Queries describe the shape of what you're looking for. The config IS the query.

{ "type": "mesh", "tags": ["snake"], "$sort": "newest", "$limit": 5 }
{ "name": { "$contains": "body" }, "metadata.stride": 24 }
{ "type": { "$in": ["mesh", "texture"] }, "size": { "$gt": 10000 } }
{ "tags": { "$all": ["weapon", "melee"] } }

Operators: $gt, $gte, $lt, $lte, $in, $contains, $startsWith, $all, $between, $not, $exists

Control keys: $sort (newest/oldest/largest/smallest/name), $limit
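
Several operators and control keys can be combined in a single query_dsl() call, for example (operator argument shapes, such as $between taking a two-element array, are assumptions):

#![allow(unused)]
fn main() {
// Meshes between 10 KB and 1 MB that carry a "stride" metadata key,
// newest first, at most 10 results.
let results = db.query_dsl(&json!({
    "type": "mesh",
    "size": { "$between": [10_000, 1_000_000] },
    "metadata.stride": { "$exists": true },
    "$sort": "newest",
    "$limit": 10
}))?;
}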

Storage Backends

| Backend | Target | Persistence | Use case |
|---|---|---|---|
| FileBackend | Native | Directory on disk | Desktop, CI |
| MemoryBackend | Any | None (RAM) | Testing |
| IndexedDbBackend | Wasm | Browser IndexedDB | Web editor |
| S3Backend (planned) | Any | S3 bucket | Cloud workflows |

All backends implement the StorageBackend trait:

#![allow(unused)]
fn main() {
pub trait StorageBackend: Send + Sync {
    fn read_manifest(&self) -> Result<Vec<AssetEntry>>;
    fn write_manifest(&self, entries: &[AssetEntry]) -> Result<()>;
    fn read_blob(&self, hash: &str) -> Result<Vec<u8>>;
    fn write_blob(&self, hash: &str, data: &[u8]) -> Result<()>;
    fn blob_exists(&self, hash: &str) -> bool;
    fn delete_blob(&self, hash: &str) -> Result<()>;
}
}

Compression

Binary blobs are transparently LZ4-compressed (via lz4_flex, pure Rust, wasm-safe). Compression is selective:

  • Mesh/animation blobs: compressed (~15-25% savings)
  • JSON components: compressed (~60-80% savings)
  • Textures/audio/video: stored raw (already compressed)
  • Small blobs (<256 bytes): stored raw

get() always returns uncompressed data. Callers never see compression.

Content Addressing

Blobs are stored by content hash. Identical data → same hash → same blob. Re-importing the same asset costs zero additional storage. put() is an upsert — same entity ID overwrites, but if the content hasn't changed, no disk write occurs.
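
A quick illustration of the dedup behavior using the put() API shown earlier (the asset file path is hypothetical):

#![allow(unused)]
fn main() {
let db = AssetDB::in_memory()?;
let bytes = std::fs::read("assets/crate.mesh")?;  // hypothetical path

// Two entity IDs, identical bytes → one stored blob.
db.put("crate_1:mesh", &bytes, json!({"stride": 24}))?;
db.put("crate_2:mesh", &bytes, json!({"stride": 24}))?;

// Upsert with unchanged content → no disk write.
db.put("crate_1:mesh", &bytes, json!({"stride": 24}))?;
}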

System Actors

Systems read components, process, write results back. The DAG determines execution order and which entities each system operates on.

| System | Template | Reads | Writes |
|---|---|---|---|
| ScenePhysicsSystem | tpl_scene_physics | rigidbody, collider, transform | transform, velocity |
| SceneCameraSystem | tpl_scene_camera | camera | camera_matrices |
| SceneLightCollector | tpl_scene_light_collector | light, transform | packed GPU buffer |
| SceneMaterialSystem | tpl_scene_material | material | packed GPU buffer |
| TweenSystem | tpl_tween_system | tween | target property |
| TimelineSystem | tpl_timeline_system | timeline | target properties |
| StateMachineSystem | tpl_state_machine_system | state_machine, triggers | state_machine |
| BehaviorSystem | tpl_behavior_system | behavior | target properties |
| LayoutSyncSystem | tpl_layout_sync | dom, style, transform, bind | triggers, computed |

Entity Selectors

Every system accepts explicit entity targeting:

{ "entity": "player" }                           // single entity
{ "entities": ["sun", "torch", "lamp"] }         // explicit list
{ "selector": { "tags": ["enemy"] } }            // query-based

The entity_id inport allows dynamic selection per-tick from the DAG.

Layout Sync — DOM / Layout Tree Integration

The layout sync system bridges AssetDB with the DOM (browser) or a native layout tree. The layout tree is the source of truth at initialization — AssetDB observes and drives it.

Flow

Startup:
  DOM / Layout Tree ──hydrate()──→ AssetDB

Each tick:
  Layout ──poll_events()──→ :triggers components
                               ↓
                        Systems process (behavior, tween, state machine)
                               ↓
  AssetDB ──sync()────────→ Layout (DOM updates)

Inline queries:
  @layout(entity:property) ──→ resolved directly from layout backend
                                (never writes to AssetDB)

LayoutBackend Trait

Pluggable backend for different platforms:

#![allow(unused)]
fn main() {
pub trait LayoutBackend: Send + Sync {
    fn hydrate(&self, db: &Arc<AssetDB>) -> Result<()>;
    fn sync(&self, db: &Arc<AssetDB>) -> Result<()>;
    fn poll_events(&self, db: &Arc<AssetDB>) -> Result<()>;
    fn query(&self, entity: &str, property: &str) -> Option<f64>;
    fn query_string(&self, entity: &str, property: &str) -> Option<String>;
    fn hit_test(&self, x: f64, y: f64) -> Option<String>;
    fn parent_of(&self, entity: &str) -> Option<String>;
    fn children_of(&self, entity: &str) -> Option<Vec<String>>;
}
}

| Backend | Target | Description |
|---|---|---|
| HeadlessLayoutBackend | Native / Testing | In-memory layout nodes |
| DomLayoutBackend | Wasm / Browser | web-sys DOM access |

Queryable Properties

| Property | Type | Description |
|---|---|---|
| x, y | f64 | Bounding box position |
| width, height | f64 | Bounding box dimensions |
| scrollX, scrollY | f64 | Scroll offset |
| scrollProgress | f64 | Normalized scroll 0..1 |
| inViewport | f64 | 1.0 if visible, 0.0 if not |
| opacity | f64 | Computed opacity |
| parentWidth, parentHeight | f64 | Parent dimensions |
| viewportWidth, viewportHeight | f64 | Viewport dimensions |
| tag | String | Element tag name |
| text | String | Text content |

@layout() Variables

Behavior rules and expressions can reference layout-computed values inline without writing to AssetDB:

{
  "entity": "hero",
  "component": "behavior",
  "data": {
    "rules": [{
      "name": "parallax",
      "target": "transform.position.y",
      "expr": "scroll * -200",
      "vars": {
        "scroll": "@layout(page:scrollProgress)"
      }
    }]
  }
}

The @layout(entity:property) prefix tells the variable resolver to query the layout backend directly. The value is resolved at evaluation time — ephemeral, never stored.

Two-Way Binding

Entities with a :bind component get automatic bi-directional sync. No explicit DAG wiring needed for the data flow itself.

Full Bind

{ "entity": "slider", "component": "bind", "data": true }

Syncs all standard properties: transform, style, value, scroll.

Selective Bind

{
  "entity": "input_field",
  "component": "bind",
  "data": {
    "value": true,
    "scroll": true,
    "transform": false,
    "style": false
  }
}

Only syncs the properties you opt into.

What Happens Each Tick

For bound entities, the LayoutSyncSystem automatically:

| Direction | What | When |
|---|---|---|
| Pull | Layout position/scroll/value → AssetDB | Before systems run |
| Push | AssetDB transform/style changes → Layout | After systems run |

Traceability

All bind operations include metadata.source = "layout_pull" so you can trace them in Reflow's tracing system. The bound_changes outport emits every pull/push operation for the DAG inspector:

[
  {"entity": "slider", "direction": "pull", "property": "transform"},
  {"entity": "slider", "direction": "push"}
]

Bind vs Explicit Wiring

| Approach | When to use |
|---|---|
| :bind component | Standard form inputs, draggable elements, scroll-driven animations. Set it and forget it. |
| Explicit DAG wiring | Complex logic between layout and state. Custom pull/push conditions. Multi-step transformations. |

Both are visible in the DAG inspector. Bind is convenience — it doesn't bypass the system, it just saves you from wiring the obvious.

DOM Component Schemas

:dom — Element definition

{
  "tag": "button",
  "text": "Click me",
  "parent": "nav",
  "width": 120,
  "height": 40
}

:style — Visual properties

{
  "opacity": 1.0,
  "backgroundColor": "#007bff",
  "borderRadius": "8px",
  "transform": "scale(1.0)"
}

:transform — Position/rotation/scale

{
  "position": [100, 200, 0],
  "rotation": [0, 0, 0],
  "scale": [1, 1, 1]
}

:triggers — Events (consumed each tick)

["pointerEnter", "pointerDown", "scroll"]

Written by poll_events() or external input systems. Read and cleared by StateMachineSystem / BehaviorSystem.

:bind — Two-way sync toggle

true

or selective:

{ "transform": true, "value": true, "scroll": false }

Example: Animated Button

Set up an interactive button with hover/press states and tween animations, all driven by data:

#![allow(unused)]
fn main() {
let db = get_or_create_db("./app.db")?;

// Element
db.set_component_json("btn", "dom", json!({
    "tag": "button", "text": "Submit"
}), json!({}))?;

db.set_component_json("btn", "transform", json!({
    "position": [0, 0, 0], "scale": [1, 1, 1]
}), json!({}))?;

db.set_component_json("btn", "style", json!({
    "opacity": 1.0, "backgroundColor": "#007bff"
}), json!({}))?;

// State machine
db.put_json("btn:state_machine", json!({
    "current": "idle",
    "states": {
        "idle": { "onEnter": { "tween": "btn_scale_normal" } },
        "hover": { "onEnter": { "tween": "btn_scale_up" } },
        "pressed": { "onEnter": { "tween": "btn_scale_down" } }
    },
    "transitions": [
        { "from": "idle", "to": "hover", "trigger": "pointerEnter" },
        { "from": "hover", "to": "idle", "trigger": "pointerLeave" },
        { "from": "hover", "to": "pressed", "trigger": "pointerDown" },
        { "from": "pressed", "to": "hover", "trigger": "pointerUp" }
    ]
}), json!({}))?;

// Tweens
db.put_json("btn_scale_up:tween", json!({
    "target": "btn:transform.scale",
    "from": [1, 1, 1], "to": [1.05, 1.05, 1.05],
    "duration": 0.15, "easing": "easeOutCubic",
    "state": "paused"
}), json!({}))?;

db.put_json("btn_scale_down:tween", json!({
    "target": "btn:transform.scale",
    "from": [1.05, 1.05, 1.05], "to": [0.95, 0.95, 0.95],
    "duration": 0.1, "easing": "easeOutCubic",
    "state": "paused"
}), json!({}))?;

db.put_json("btn_scale_normal:tween", json!({
    "target": "btn:transform.scale",
    "from": [0.95, 0.95, 0.95], "to": [1, 1, 1],
    "duration": 0.2, "easing": "easeOutCubic",
    "state": "paused"
}), json!({}))?;

// Bind for auto sync
db.set_component_json("btn", "bind", json!(true), json!({}))?;
}

DAG:

IntervalTrigger(16ms) → LayoutSync(phase: "both", entity: "btn")
                      → StateMachineSystem(entity: "btn")
                      → TweenSystem(entity: "btn")

The button responds to hover/press with smooth scale animations. No Rust code for the interaction logic — it's all data in the AssetDB.

Observability Overview

Reflow provides a comprehensive observability framework that enables deep introspection into distributed actor networks. The observability system captures detailed execution traces, performance metrics, and data flow patterns across all components in your system.

Key Features

🔍 Comprehensive Event Tracing

  • Actor Lifecycle: Track creation, startup, execution, completion, and failures
  • Message Flow: Monitor all message passing between actors with detailed metadata
  • Data Flow Tracing: NEW - Automatic tracing of data flow between connected actors
  • State Changes: Capture state transitions with diff support for time-travel debugging
  • Network Events: Monitor distributed network operations and health

📊 Real-time Monitoring

  • Live Event Streaming: WebSocket-based real-time event notifications
  • Performance Metrics: CPU usage, memory consumption, throughput measurements
  • Custom Dashboards: Build monitoring interfaces using the WebSocket API
  • Alerting: Set up custom alerts based on event patterns and thresholds

🗄️ Flexible Storage

  • SQLite: Embedded database perfect for development and small deployments
  • PostgreSQL: Production-ready backend with ACID guarantees and concurrent access
  • Memory: High-performance in-memory storage for testing and temporary analysis

🌐 Distributed Tracing

  • Cross-Network Visibility: Trace execution across multiple network instances
  • Causality Tracking: Maintain event dependency chains across distributed components
  • Span Integration: Compatible with OpenTelemetry and Jaeger for unified observability

Architecture Overview

graph TB
    subgraph "Client Applications"
        A1[Actor Network 1]
        A2[Actor Network 2] 
        A3[Actor Network N]
    end
    
    subgraph "Tracing Infrastructure"
        TC[TracingClient]
        WS[WebSocket Protocol]
        TS[Tracing Server]
    end
    
    subgraph "Storage Layer"
        SQLite[(SQLite)]
        Postgres[(PostgreSQL)]
        Memory[(Memory)]
    end
    
    subgraph "Analysis & Monitoring"
        RT[Real-time Dashboard]
        HQ[Historical Queries]
        AL[Alerting]
    end
    
    A1 -->|Events| TC
    A2 -->|Events| TC
    A3 -->|Events| TC
    TC -->|BatchedEvents| WS
    WS --> TS
    TS --> SQLite
    TS --> Postgres
    TS --> Memory
    TS -->|Live Events| RT
    TS -->|Query Results| HQ
    TS -->|Notifications| AL

Event Types

Core Actor Events

  • ActorCreated: Actor instance creation with configuration
  • ActorStarted: Actor begins execution
  • ActorCompleted: Successful actor completion
  • ActorFailed: Actor error with detailed error information

Communication Events

  • MessageSent: Message transmission between actors
  • MessageReceived: Message reception confirmation
  • DataFlow: Automatic data flow tracing between connected actors
  • PortConnected: Port connection establishment
  • PortDisconnected: Port disconnection

System Events

  • StateChanged: Actor state modifications with diffs
  • NetworkEvent: Distributed network operations

Integration Patterns

Automatic Integration

The tracing framework integrates automatically with Reflow networks:

#![allow(unused)]
fn main() {
use reflow_network::{Network, NetworkConfig};
use reflow_network::tracing::TracingConfig;

// Enable tracing with minimal configuration
let tracing_config = TracingConfig {
    server_url: "ws://localhost:8080".to_string(),
    enabled: true,
    ..Default::default()
};

let network_config = NetworkConfig {
    tracing: tracing_config,
    ..Default::default()
};

let network = Network::new(network_config);
// All actor operations are now automatically traced!
}

Manual Event Recording

For custom events and detailed control:

#![allow(unused)]
fn main() {
use reflow_tracing_protocol::{TraceEvent, TracingIntegration};

// Record custom events
if let Some(tracing) = global_tracing() {
    tracing.trace_actor_created("custom_actor").await?;
    tracing.trace_data_flow(
        "source_actor", "output",
        "target_actor", "input",
        "CustomMessage", 1024
    ).await?;
}
}

Data Flow Tracing

The latest enhancement to the observability framework provides automatic data flow tracing:

Automatic Capture

  • Zero Configuration: Works out-of-the-box with existing actor networks
  • Connector Integration: Captures data flow at the connector level for accuracy
  • Bidirectional Tracking: Traces both source and destination information
  • Performance Metadata: Includes message size, type, and timing information

Rich Context

#![allow(unused)]
fn main() {
// Data flow events automatically include:
DataFlow {
    from_actor: "data_processor",
    from_port: "output",
    to_actor: "analytics_engine", 
    to_port: "input",
    message_type: "ProcessedData",
    size_bytes: 2048,
    timestamp: "2025-01-07T06:00:00Z",
    causality_chain: [...],
    performance_metrics: {...}
}
}

Use Cases

Development & Debugging

  • Execution Visualization: See exactly how data flows through your system
  • Performance Profiling: Identify bottlenecks and optimization opportunities
  • Error Investigation: Trace error propagation through actor networks
  • State Debugging: Time-travel debugging with state diffs

Production Monitoring

  • Health Monitoring: Track system health and detect anomalies
  • Performance Monitoring: Monitor throughput, latency, and resource usage
  • Capacity Planning: Analyze usage patterns for scaling decisions
  • Incident Response: Rapid diagnosis of production issues

Analytics & Optimization

  • Usage Patterns: Understand how your system is actually used
  • Performance Optimization: Data-driven optimization decisions
  • Architecture Evolution: Make informed architectural changes
  • Compliance: Maintain audit trails for regulatory requirements

Getting Started

  1. Quick Start Guide - Get tracing running in 5 minutes
  2. Architecture Deep Dive - Understand the technical details
  3. Configuration Guide - Customize for your environment
  4. Deployment Guide - Production deployment patterns

Next Steps

Observability Quick Start

Get Reflow's observability framework running in under 5 minutes. This guide will walk you through setting up tracing for a simple actor network and viewing the results.

Prerequisites

  • Rust 1.85 or later
  • Basic familiarity with Reflow actors

Step 1: Start the Tracing Server

First, start the reflow_tracing server:

# From the project root
cd examples/tracing_integration
./scripts/start_server.sh

This starts the tracing server on ws://127.0.0.1:8080 with SQLite storage.

Step 2: Create a Simple Traced Network

Create a new Rust project or add to an existing one:

# Cargo.toml
[dependencies]
reflow_network = { path = "../../crates/reflow_network" }
reflow_actor = { path = "../../crates/reflow_actor" }
reflow_tracing_protocol = { path = "../../crates/reflow_tracing_protocol" }
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
tracing = "0.1"
tracing-subscriber = "0.3"

Create a simple actor network with tracing enabled:

// src/main.rs
use anyhow::Result;
use reflow_network::{Network, NetworkConfig};
use reflow_network::tracing::{TracingConfig, init_global_tracing};
use reflow_actor::{Actor, ActorBehavior, ActorContext, Port, ActorLoad, ActorConfig};
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
use parking_lot::Mutex;

// Simple data processor actor
#[derive(Clone)]
struct DataProcessor {
    name: String,
}

impl DataProcessor {
    fn new(name: &str) -> Self {
        Self { name: name.to_string() }
    }
}

impl Actor for DataProcessor {
    fn get_behavior(&self) -> ActorBehavior {
        let name = self.name.clone();
        Box::new(move |context: ActorContext| {
            let actor_name = name.clone();
            Box::pin(async move {
                println!("🎬 {} processing messages", actor_name);
                
                let mut results = HashMap::new();
                for (port, message) in context.get_payload() {
                    println!("📨 {} received on {}: {:?}", actor_name, port, message);
                    
                    // Simulate processing
                    tokio::time::sleep(Duration::from_millis(100)).await;
                    
                    let output = reflow_actor::message::Message::string(
                        format!("Processed by {}: {:?}", actor_name, message)
                    );
                    results.insert("output".to_string(), output);
                }
                
                Ok(results)
            })
        })
    }

    fn get_inports(&self) -> Port { flume::unbounded() }
    fn get_outports(&self) -> Port { flume::unbounded() }
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> { 
        Arc::new(Mutex::new(ActorLoad::new(0))) 
    }
    
    fn create_process(&self, _config: ActorConfig) -> std::pin::Pin<Box<dyn std::future::Future<Output = ()> + Send + 'static>> {
        Box::pin(async {})
    }
    
    fn shutdown(&self) {}
}

#[tokio::main]
async fn main() -> Result<()> {
    // Initialize logging
    tracing_subscriber::fmt::init();
    
    println!("🚀 Starting traced actor network");
    
    // Step 1: Configure tracing
    let tracing_config = TracingConfig {
        server_url: "ws://127.0.0.1:8080".to_string(),
        batch_size: 10,
        batch_timeout: Duration::from_millis(500),
        enable_compression: false,
        enabled: true,
        retry_config: reflow_network::tracing::RetryConfig {
            max_retries: 3,
            initial_delay: Duration::from_millis(100),
            max_delay: Duration::from_secs(5),
            backoff_multiplier: 2.0,
        },
    };
    
    // Step 2: Initialize global tracing
    init_global_tracing(tracing_config.clone())?;
    println!("✅ Global tracing initialized");
    
    // Step 3: Create network with tracing enabled
    let network_config = NetworkConfig {
        tracing: tracing_config,
        ..Default::default()
    };
    
    let mut network = Network::new(network_config);
    println!("✅ Network created with tracing");
    
    // Step 4: Register actors
    let processor1 = DataProcessor::new("processor1");
    let processor2 = DataProcessor::new("processor2");
    
    network.register_actor("processor", processor1)?;
    network.register_actor("formatter", processor2)?;
    
    // Step 5: Add nodes to network
    network.add_node("proc1", "processor", None)?;
    network.add_node("proc2", "formatter", None)?;
    
    // Step 6: Start the network (automatic tracing begins here)
    network.start()?;
    println!("✅ Network started - tracing active");
    
    // Step 7: Send some messages (these will be automatically traced)
    println!("📨 Sending test messages...");
    
    for i in 1..=3 {
        let message = reflow_actor::message::Message::string(
            format!("Test message {}", i)
        );
        network.send_to_actor("proc1", "input", message)?;
        tokio::time::sleep(Duration::from_millis(300)).await;
    }
    
    // Step 8: Execute actors directly for more detailed tracing
    let result = network.execute_actor(
        "proc2",
        HashMap::from([
            ("input".to_string(), reflow_actor::message::Message::string("Direct execution test".to_string()))
        ])
    ).await?;
    
    println!("✅ Direct execution result: {:?}", result);
    
    // Step 9: Let the system run to generate traces
    tokio::time::sleep(Duration::from_secs(2)).await;
    
    // Step 10: Manual tracing API demonstration
    if let Some(tracing) = reflow_network::tracing::global_tracing() {
        println!("🔍 Demonstrating manual tracing API...");
        
        tracing.trace_actor_created("manual_actor").await?;
        tracing.trace_message_sent(
            "manual_actor", 
            "output", 
            "ManualMessage", 
            256
        ).await?;
        
        println!("✅ Manual events recorded");
    }
    
    // Graceful shutdown
    println!("🛑 Shutting down...");
    network.shutdown();
    tokio::time::sleep(Duration::from_millis(500)).await;
    
    println!("🎉 Quick start complete! Check the tracing server for events.");
    println!("💡 Next: Run the monitoring client to see live events:");
    println!("   cargo run --bin monitoring_client");
    
    Ok(())
}

Step 3: Run Your Application

cargo run

You should see output like:

🚀 Starting traced actor network
✅ Global tracing initialized
✅ Network created with tracing
✅ Network started - tracing active
📨 Sending test messages...
🎬 processor1 processing messages
📨 processor1 received on input: String("Test message 1")
...
🎉 Quick start complete! Check the tracing server for events.

Step 4: View Live Events

In another terminal, run the monitoring client:

cd examples/tracing_integration
cargo run --bin monitoring_client

You'll see real-time trace events:

🔍 Monitoring live trace events...
📊 Connected to tracing server at ws://127.0.0.1:8080

[2025-01-07T06:00:00Z] ActorCreated: processor1
[2025-01-07T06:00:00Z] ActorCreated: processor2  
[2025-01-07T06:00:00Z] MessageSent: processor1 -> output (String, 256 bytes)
[2025-01-07T06:00:00Z] DataFlow: processor1:output -> processor2:input (String, 256 bytes)
[2025-01-07T06:00:01Z] ActorCompleted: processor1
...

What Just Happened?

Your simple actor network generated several types of trace events:

  1. Actor Creation: When actors were instantiated
  2. Message Sending: When messages were sent between actors
  3. Data Flow: Automatic tracing of data flowing between connected actors
  4. Actor Completion: When actors finished processing

All of this happened automatically - the tracing framework integrated seamlessly with your existing Reflow network.

Exploring the Data

SQLite Database

The trace data is stored in examples/tracing_integration/data/traces.db. You can explore it directly:

sqlite3 examples/tracing_integration/data/traces.db
.tables
SELECT * FROM trace_events LIMIT 5;

Query API

Use the monitoring client with query options:

# Get last 10 events
cargo run --bin monitoring_client -- --query --limit 10

# Filter by actor
cargo run --bin monitoring_client -- --actor-ids processor1

# Filter by event type
cargo run --bin monitoring_client -- --event-types ActorCreated,MessageSent

Next Steps

🎯 Learn More About Event Types

⚙️ Customize Your Setup

🚀 Production Deployment

🔧 Integration

Troubleshooting

Connection Issues

If the client can't connect to the tracing server:

# Check if server is running
curl -I http://127.0.0.1:8080
# or
telnet 127.0.0.1 8080

No Events Appearing

  • Ensure enabled: true in your TracingConfig
  • Check that init_global_tracing() was called before network operations
  • Verify the server URL is correct

Performance Impact

For production systems, consider:

  • Increasing batch_size to reduce network overhead
  • Enabling compression with enable_compression: true
  • Using PostgreSQL backend for better concurrent performance

Get help in our troubleshooting guide or check the architecture documentation for deeper understanding.

Observability Architecture

Reflow's observability is built on an event pipeline that translates low-level network events into rich engine events and forwards them to Zeal IDE for visibility, tracing, and replay.

System Components

1. ExecutionEngine

The engine creates an isolated Network per workflow execution and translates NetworkEvents (from reflow_network) into enriched EngineEvents with timing, size, and connection metadata.

The engine maintains a HashMap<String, Instant> to track per-actor start times, computing duration_ms when actors complete.
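
A minimal sketch of that bookkeeping (variable names illustrative):

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use std::time::Instant;

let mut start_times: HashMap<String, Instant> = HashMap::new();

// On NetworkEvent::ActorStarted:
start_times.insert(actor_id.clone(), Instant::now());

// On NetworkEvent::ActorCompleted:
let duration_ms = start_times
    .remove(&actor_id)
    .map(|t| t.elapsed().as_millis() as u64);
}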

2. EventBridge

The bridge connects the engine's per-execution event channel to two consumers:

#![allow(unused)]
fn main() {
pub struct EventBridge {
    trace_collector: Option<Arc<TraceCollector>>,
    zip_session: Option<Arc<ZipSession>>,
}
}

One bridge task is spawned per execution via bridge.attach(workflow_id, execution_id, event_rx). The task:

  1. Begins a trace session via TraceCollector
  2. Drains the flume::Receiver<EngineEvent> channel
  3. Forwards each event to both TraceCollector and ZipSession
  4. Tracks terminal state (success/failure based on Completed and Failed events)
  5. Completes the trace session when the channel closes (sender dropped)
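
Put together, the bridge task looks roughly like this sketch; process_event() and emit_engine_event() are named later on this page, while the session-lifecycle method names on TraceCollector are assumptions:

#![allow(unused)]
fn main() {
// Sketch of the per-execution bridge task. Forwarding errors are
// swallowed so they never interrupt execution (see Graceful Degradation).
tokio::spawn(async move {
    let _ = trace_collector.begin_session(&workflow_id, &execution_id).await;
    let mut success = true;
    while let Ok(event) = event_rx.recv_async().await {
        // Terminal state is tracked from Completed/Failed events.
        if matches!(event.event_type, EngineEventType::Failed { .. }) {
            success = false;
        }
        let _ = trace_collector.process_event(&event).await;
        let _ = zip_session.emit_engine_event(&event).await;
    }
    // Channel closed (sender dropped) → complete the trace session.
    let _ = trace_collector.complete_session(&execution_id, success).await;
});
}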

3. TraceCollector

Submits per-node execution data to Zeal's TracesAPI over HTTP. Manages trace session lifecycle:

#![allow(unused)]
fn main() {
pub struct TraceCollector {
    traces_api: tokio::sync::Mutex<TracesAPI>,
    sessions: tokio::sync::Mutex<HashMap<String, ActiveSession>>,
    batch_size: usize,  // default: 50
}
}

Each ActiveSession tracks:

  • session_id — returned by Zeal on session creation
  • start_time — for duration calculation
  • nodes_completed / nodes_failed — aggregate counters
  • total_data_processed — bytes processed across all nodes
  • pending_events — buffered TraceEvents awaiting flush

Events are flushed when the buffer reaches batch_size (50) or when the session completes.
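
The buffering rule in code form (a sketch; the submit call on TracesAPI is an assumed name):

#![allow(unused)]
fn main() {
// Buffer each event; flush when the buffer reaches batch_size (50).
session.pending_events.push(trace_event);
if session.pending_events.len() >= self.batch_size {
    let batch: Vec<TraceEvent> = session.pending_events.drain(..).collect();
    // POST /api/traces/sessions/{session_id}/events
    traces_api.submit_events(&session.session_id, batch).await?;
}
}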

4. ZipSession

Manages the outbound connection to Zeal IDE:

  • Template Registration: Registers all actor templates (native + API) on startup
  • WebSocket Channel: Opens a tokio-tungstenite connection to /ws/zip
  • Event Translation: Converts EngineEvents to ZipExecutionEvents using zeal-sdk helpers
  • Event Emission: Pushes ZIP events as JSON text frames over WebSocket
#![allow(unused)]
fn main() {
pub struct ZipSession {
    config: ZipSessionConfig,
    client: ZealClient,
    ws: ZipWebSocket,       // Mutex<Option<WsSink>>
    engine: Arc<ExecutionEngine>,
    shutdown: Arc<Notify>,
}
}

Event Flow

sequenceDiagram
    participant N as Network
    participant E as ExecutionEngine
    participant EB as EventBridge
    participant TC as TraceCollector
    participant ZS as ZipSession
    participant Z as Zeal IDE

    N->>E: NetworkEvent (ActorStarted, ActorCompleted, MessageSent, NetworkIdle)
    E->>E: Translate to EngineEvent (enrich with duration_ms, output_size)
    E->>EB: flume channel
    par TraceCollector
        EB->>TC: process_event()
        TC->>TC: Buffer TraceEvent (batch_size=50)
        TC->>Z: POST /api/traces/sessions/{id}/events
    and ZipSession
        EB->>ZS: emit_engine_event()
        ZS->>ZS: translate_event() → ZipExecutionEvent
        ZS->>Z: WebSocket text frame (JSON)
    end

NetworkEvent → EngineEvent Translation

The engine's run_execution() loop translates each NetworkEvent into an EngineEvent:

| NetworkEvent | EngineEventType | Enrichments |
|---|---|---|
| ActorStarted { actor_id } | (records start time) | Stored in timing map |
| ActorCompleted { actor_id, output } | ActorCompleted | duration_ms, output_size, output_connections |
| ActorFailed { actor_id, error } | ActorFailed | error, output_connections |
| MessageSent { from, to, data } | MessageSent | size (serialized bytes) |
| NetworkIdle / NetworkShutdown | Completed | duration_ms, nodes_executed, nodes_failed |

The engine waits for NetworkIdle or NetworkShutdown before emitting the Completed event — it does not fire prematurely after network.start().

EngineEvent → ZIP Event Translation

The ZipSession::translate_event() method maps engine events to Zeal SDK types:

| EngineEventType | ZipExecutionEvent | Options |
|---|---|---|
| Started | ExecutionStarted | workflow_id, execution_id |
| NodeExecuting | NodeExecuting | input connections |
| ActorCompleted | NodeCompleted | NodeCompletedOptions { duration, output_size } |
| ActorFailed | NodeFailed | NodeError { message, code, stack } |
| Completed | ExecutionCompleted | ExecutionSummary { success_count, error_count } |
| Failed | ExecutionFailed | ExecutionError { message }, ExecutionFailedOptions { duration } |

MessageSent and NetworkIdle have no ZIP mapping and are silently dropped.

EngineEvent → TraceEvent Translation

The TraceCollector::process_event() method maps engine events to zeal-sdk trace types:

| EngineEventType | TraceEventType | TraceData |
|---|---|---|
| NodeExecuting | Input | data_type: "lifecycle", preview: {"status": "executing"} |
| ActorCompleted | Output | data_type: "application/json", size, duration, preview |
| ActorFailed | Error | TraceError { message } |
| MessageSent | Output | data_type: "message", size, from/to preview |

Trace Session Lifecycle

stateDiagram-v2
    [*] --> Created: POST /api/traces/sessions
    Created --> Active: Events flowing
    Active --> Active: Buffer events (batch_size=50)
    Active --> Flushing: Buffer full
    Flushing --> Active: POST events batch
    Active --> Completing: Channel closed
    Completing --> Done: POST complete with SessionSummary
    Done --> [*]

The SessionSummary submitted on completion includes:

  • total_nodes — nodes completed + failed
  • successful_nodes / failed_nodes
  • total_duration — wall clock ms
  • total_data_processed — bytes across all nodes
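
An assumed shape for the summary, matching the fields listed above (the real zeal-sdk definition may differ):

#![allow(unused)]
fn main() {
pub struct SessionSummary {
    pub total_nodes: u32,           // completed + failed
    pub successful_nodes: u32,
    pub failed_nodes: u32,
    pub total_duration: u64,        // wall clock ms
    pub total_data_processed: u64,  // bytes across all nodes
}
}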

Configuration

The observability pipeline is activated when zeal_url is set in ServerConfig:

#![allow(unused)]
fn main() {
let config = ServerConfig {
    zeal_url: Some("http://localhost:3000".to_string()),
    // ...
};
}

When zeal_url is None, no EventBridge is created and executions run without observability forwarding. The REST API still works for headless execution.

Graceful Degradation

  • If the WebSocket connection fails during ZipSession::start(), a warning is logged but the session continues (traces still work via HTTP)
  • If TraceCollector fails to begin a session, an error is logged but execution continues
  • Individual event forwarding failures are logged at debug level and do not interrupt execution

Next Steps

Event Types Reference

This reference covers the event types in Reflow's observability pipeline: low-level NetworkEvents from the actor runtime, enriched EngineEvents from the execution engine, and the ZIP events sent to Zeal IDE.

EngineEvent Structure

All engine events share a common structure:

#![allow(unused)]
fn main() {
pub struct EngineEvent {
    pub workflow_id: String,
    pub execution_id: String,
    pub event_type: EngineEventType,
    pub timestamp: u64,
    pub data: serde_json::Value,
}
}

EngineEventType

Started

Emitted when an execution begins.

#![allow(unused)]
fn main() {
EngineEventType::Started
}

ZIP mapping: ZipExecutionEvent::ExecutionStarted

NodeExecuting

Emitted when an actor begins processing. Generated from NetworkEvent::ActorStarted.

#![allow(unused)]
fn main() {
EngineEventType::NodeExecuting {
    node_id: String,     // Actor/node identifier
    component: String,   // Component type name
}
}

ZIP mapping: ZipExecutionEvent::NodeExecuting

ActorCompleted

Emitted when an actor finishes successfully. Generated from NetworkEvent::ActorCompleted. The engine computes duration_ms by comparing the ActorStarted timestamp stored in a HashMap<String, Instant>.

#![allow(unused)]
fn main() {
EngineEventType::ActorCompleted {
    actor_id: String,
    component: String,
    duration_ms: Option<u64>,          // Time from ActorStarted → ActorCompleted
    output_size: Option<u64>,          // Serialized output size in bytes
    output_connections: Vec<String>,   // IDs of outbound connections
}
}

ZIP mapping: ZipExecutionEvent::NodeCompleted with NodeCompletedOptions { duration, output_size }

Trace mapping: TraceEventType::Output with TraceData { size, data_type: "application/json", preview, duration }

ActorFailed

Emitted when an actor errors. Generated from NetworkEvent::ActorFailed.

#![allow(unused)]
fn main() {
EngineEventType::ActorFailed {
    actor_id: String,
    component: String,
    error: String,                     // Error message
    output_connections: Vec<String>,   // Outbound connections (for error routing)
}
}

ZIP mapping: ZipExecutionEvent::NodeFailed with NodeError { message, code, stack }

Trace mapping: TraceEventType::Error with TraceError { message }

MessageSent

Emitted when data flows between actors. Generated from NetworkEvent::MessageSent.

#![allow(unused)]
fn main() {
EngineEventType::MessageSent {
    from_node: String,
    from_port: String,
    to_node: String,
    to_port: String,
    size: usize,    // Serialized message size in bytes
}
}

ZIP mapping: None (silently dropped)

Trace mapping: TraceEventType::Output with TraceData { data_type: "message", size, preview: { to_node, to_port } }

NetworkIdle

Emitted when the network has no more messages to process. Used internally to trigger the Completed event.

#![allow(unused)]
fn main() {
EngineEventType::NetworkIdle
}

ZIP mapping: None

Completed

Emitted when the execution finishes. Generated after NetworkIdle or NetworkShutdown. Includes aggregate statistics.

#![allow(unused)]
fn main() {
EngineEventType::Completed {
    duration_ms: u64,       // Total execution wall-clock time
    nodes_executed: u32,    // Total actors that ran
    nodes_failed: u32,      // Actors that failed
}
}

ZIP mapping: ZipExecutionEvent::ExecutionCompleted with:

#![allow(unused)]
fn main() {
ExecutionCompletedOptions {
    summary: Some(ExecutionSummary {
        success_count: nodes_executed - nodes_failed,
        error_count: nodes_failed,
        warning_count: 0,
    }),
}
}

Failed

Emitted when the execution fails at the engine level (not an individual actor failure).

#![allow(unused)]
fn main() {
EngineEventType::Failed {
    error: String,
    duration_ms: Option<u64>,
}
}

ZIP mapping: ZipExecutionEvent::ExecutionFailed with:

#![allow(unused)]
fn main() {
ExecutionError { message, code: None, node_id: None }
ExecutionFailedOptions { duration }
}

NetworkEvent (Source Events)

These are the raw events from reflow_network that the engine translates:

| NetworkEvent | Description |
|---|---|
| ActorStarted { actor_id } | Actor process began (records start time in HashMap) |
| ActorCompleted { actor_id, output } | Actor finished (triggers EngineEventType::ActorCompleted) |
| ActorFailed { actor_id, error } | Actor errored (triggers EngineEventType::ActorFailed) |
| MessageSent { from_actor, from_port, to_actor, to_port, data } | Data transferred between actors |
| NetworkIdle | No pending messages (triggers completion check) |
| NetworkShutdown | Network stopped |

TraceEvent (Zeal TracesAPI)

Events submitted to Zeal's TracesAPI via HTTP:

#![allow(unused)]
fn main() {
pub struct TraceEvent {
    pub timestamp: i64,
    pub node_id: String,
    pub port_id: Option<String>,
    pub event_type: TraceEventType,   // Input, Output, Error
    pub data: TraceData,
    pub duration: Option<Duration>,
    pub metadata: Option<Value>,
    pub error: Option<TraceError>,
}

pub struct TraceData {
    pub size: usize,
    pub data_type: String,
    pub preview: Option<Value>,
    pub full_data: Option<Value>,
}
}

TraceEventType

| Type | Used For |
|---|---|
| Input | NodeExecuting — actor began processing |
| Output | ActorCompleted, MessageSent — data produced |
| Error | ActorFailed — error occurred |

ZipExecutionEvent (Zeal WebSocket)

Events sent over the ZIP WebSocket to Zeal in real-time:

| Event | Description | Key Fields |
|---|---|---|
| ExecutionStarted | Workflow began | workflow_id, execution_id |
| NodeExecuting | Actor began processing | workflow_id, node_id, input_connections |
| NodeCompleted | Actor finished | workflow_id, node_id, output_connections, duration, output_size |
| NodeFailed | Actor errored | workflow_id, node_id, error message |
| ExecutionCompleted | Workflow finished | duration, nodes_executed, summary |
| ExecutionFailed | Workflow failed | error, duration |

All ZIP events are created using zeal-sdk helper functions (create_execution_started_event, create_node_completed_event, etc.) and serialized as JSON text frames.
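
For example, emitting one NodeCompleted frame with the helper named above (the helper's exact signature and the sink type are assumptions):

#![allow(unused)]
fn main() {
// Build the ZIP event via the zeal-sdk helper, then push it as a
// JSON text frame over the /ws/zip WebSocket.
let event = create_node_completed_event(
    &workflow_id,
    &node_id,
    NodeCompletedOptions { duration: Some(150), output_size: Some(2048) },
);
let frame = serde_json::to_string(&event)?;
ws_sink.send(Message::Text(frame.into())).await?;
}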

Event Lifecycle Example

For a workflow with two actors (A → B):

1. EngineEventType::Started
2. EngineEventType::NodeExecuting { node_id: "A", component: "tpl_http_request" }
3. EngineEventType::ActorCompleted { actor_id: "A", duration_ms: Some(150), output_size: Some(2048) }
4. EngineEventType::MessageSent { from: "A", to: "B", size: 2048 }
5. EngineEventType::NodeExecuting { node_id: "B", component: "tpl_data_transformer" }
6. EngineEventType::ActorCompleted { actor_id: "B", duration_ms: Some(10), output_size: Some(512) }
7. EngineEventType::NetworkIdle
8. EngineEventType::Completed { duration_ms: 165, nodes_executed: 2, nodes_failed: 0 }

This generates:

  • 6 ZIP WebSocket events (ExecutionStarted, NodeExecuting x2, NodeCompleted x2, ExecutionCompleted)
  • 6 TraceEvents (Input x2 for the executing nodes, Output x3 covering the two completions and the MessageSent, plus one Output for the execution completion)
  • 1 trace session begin + 1 trace session complete with summary

Next Steps

Data Flow Tracing

Data Flow Tracing is a core component of Reflow's observability framework, providing automatic and comprehensive tracking of data movement between actors in your network. This feature gives you unprecedented visibility into how information flows through your system.

Overview

Traditional actor monitoring focuses on individual actor behavior - creation, completion, and failures. Data Flow Tracing extends this by capturing the connections between actors, providing insights into:

  • Message Routing: How messages travel through your actor network
  • Data Lineage: Complete paths of data transformation
  • Performance Bottlenecks: Where data flow slows down or gets congested
  • System Dependencies: Which actors depend on which data sources

How Data Flow Tracing Works

Automatic Capture

Data Flow Tracing operates at the connector level, intercepting messages as they flow between actors:

#![allow(unused)]
fn main() {
// Automatic tracing in connector implementation
impl Connector {
    pub async fn send_message(&self, message: Message) -> Result<()> {
        // Send the actual message
        self.channel.send(message.clone()).await?;
        
        // Automatically record data flow event
        if let Some(tracing) = global_tracing() {
            tracing.trace_data_flow(
                &self.from_actor, &self.from_port,
                &self.to_actor, &self.to_port,
                message.type_name(), message.size_bytes()
            ).await?;
        }
        
        Ok(())
    }
}
}

This approach provides several advantages:

  • Zero Configuration: Works immediately with existing actor networks
  • Complete Coverage: Captures all message flows without missing any
  • Accurate Timing: Records actual transmission times
  • Minimal Overhead: Efficient implementation with batching

Event Structure

Data Flow events contain rich metadata about the message transfer:

#![allow(unused)]
fn main() {
pub struct DataFlowEvent {
    // Standard event fields
    event_id: EventId,
    timestamp: DateTime<Utc>,
    event_type: TraceEventType::DataFlow {
        to_actor: String,    // Destination actor
        to_port: String,     // Destination port
    },
    actor_id: String,        // Source actor (from_actor)
    
    // Data flow specific information
    data: TraceEventData {
        port: Some("output".to_string()),  // Source port
        message: Some(MessageSnapshot {
            message_type: "SensorReading".to_string(),
            size_bytes: 256,
            checksum: "sha256:abc123...",
            serialized_data: vec![], // Optional data capture
        }),
        performance_metrics: PerformanceMetrics {
            execution_time_ns: 1_500_000,  // 1.5ms transfer time
            queue_depth: 3,                // Destination queue depth
            throughput_msgs_per_sec: 1000.0,
            memory_usage_bytes: 512,       // Memory for message processing
            cpu_usage_percent: 2.5,
        },
        custom_attributes: HashMap::from([
            ("source_actor", json!("sensor_reader")),
            ("source_port", json!("data")),
            ("destination_actor", json!("data_processor")),
            ("destination_port", json!("input")),
            ("message_id", json!("msg_12345")),
            ("protocol", json!("memory_channel")),
            ("compression", json!("none")),
        ]),
        ..Default::default()
    },
}
}

Use Cases

1. Data Lineage Tracking

Track how data flows and transforms through your entire pipeline:

graph LR
    A[Sensor Reader] -->|SensorReading| B[Data Validator]
    B -->|ValidatedReading| C[Data Transformer]
    C -->|ProcessedData| D[Analytics Engine]
    D -->|Insights| E[Dashboard]
    
    style A fill:#e1f5fe
    style E fill:#f3e5f5

Query for complete data lineage:

#![allow(unused)]
fn main() {
// Find all data flow for a specific message
let query = TraceQuery {
    event_types: Some(vec![TraceEventType::DataFlow { 
        to_actor: "*".to_string(), 
        to_port: "*".to_string() 
    }]),
    custom_filter: Some("message_id = 'msg_12345'"),
    ..Default::default()
};

let lineage = tracing_client.query_traces(query).await?;
}

2. Performance Analysis

Identify bottlenecks in your data processing pipeline:

#![allow(unused)]
fn main() {
// Query for slow data transfers
let slow_transfers = TraceQuery {
    event_types: Some(vec![TraceEventType::DataFlow { 
        to_actor: "*".to_string(), 
        to_port: "*".to_string() 
    }]),
    performance_filter: Some("execution_time_ns > 10000000"), // > 10ms
    time_range: Some((Utc::now() - Duration::hours(1), Utc::now())),
    ..Default::default()
};
}

3. System Dependency Mapping

Understand which actors depend on which data sources:

-- Find most active data flows
SELECT 
    source_actor,
    destination_actor,
    COUNT(*) as message_count,
    AVG(execution_time_ns) as avg_transfer_time,
    SUM(size_bytes) as total_bytes
FROM data_flow_events 
WHERE timestamp > NOW() - INTERVAL '1 hour'
GROUP BY source_actor, destination_actor
ORDER BY message_count DESC;

4. Real-time Monitoring

Monitor data flow in real-time for operational awareness:

#![allow(unused)]
fn main() {
// Subscribe to data flow events for specific actors
let filters = SubscriptionFilters {
    actor_ids: Some(vec!["critical_processor".to_string()]),
    event_types: Some(vec![TraceEventType::DataFlow { 
        to_actor: "*".to_string(), 
        to_port: "*".to_string() 
    }]),
    ..Default::default()
};

tracing_client.subscribe(filters).await?;
}

Configuration

Enabling Data Flow Tracing

Data Flow Tracing is enabled automatically when you enable the observability framework:

#![allow(unused)]
fn main() {
let tracing_config = TracingConfig {
    server_url: "ws://localhost:8080".to_string(),
    enabled: true,                    // Enables all tracing including data flow
    batch_size: 50,                  // Batch size for data flow events
    batch_timeout: Duration::from_millis(1000),
    enable_compression: true,         // Recommended for data flow events
    ..Default::default()
};
}

Selective Tracing

For high-throughput systems, you might want to selectively trace certain data flows:

#![allow(unused)]
fn main() {
// Custom connector with selective tracing
impl SelectiveConnector {
    pub async fn send_message(&self, message: Message) -> Result<()> {
        self.channel.send(message.clone()).await?;
        
        // Only trace certain message types or conditions
        if should_trace_message(&message) {
            if let Some(tracing) = global_tracing() {
                tracing.trace_data_flow(
                    &self.from_actor, &self.from_port,
                    &self.to_actor, &self.to_port,
                    message.type_name(), message.size_bytes()
                ).await?;
            }
        }
        
        Ok(())
    }
}

fn should_trace_message(message: &Message) -> bool {
    // Trace based on message type, size, or other criteria
    match message.type_name() {
        "CriticalAlert" => true,        // Always trace alerts
        "DebugInfo" => false,           // Never trace debug info
        "DataUpdate" if message.size_bytes() > 1024 => true, // Large updates only
        _ => rand::random::<f64>() < 0.1, // Sample 10% of other messages
    }
}
}

Sampling Configuration

For extremely high-throughput scenarios, implement sampling:

#![allow(unused)]
fn main() {
pub struct DataFlowSampler {
    sample_rate: f64,      // 0.0 to 1.0
    always_trace: Vec<String>, // Actor names to always trace
    never_trace: Vec<String>,  // Actor names to never trace
}

impl DataFlowSampler {
    pub fn should_trace(&self, from_actor: &str, to_actor: &str) -> bool {
        if self.never_trace.contains(&from_actor.to_string()) ||
           self.never_trace.contains(&to_actor.to_string()) {
            return false;
        }
        
        if self.always_trace.contains(&from_actor.to_string()) ||
           self.always_trace.contains(&to_actor.to_string()) {
            return true;
        }
        
        rand::random::<f64>() < self.sample_rate
    }
}
}

Advanced Features

Message Content Capture

For debugging purposes, you can optionally capture message content:

#![allow(unused)]
fn main() {
let event = TraceEvent::data_flow_with_content(
    from_actor, from_port,
    to_actor, to_port,
    message_type, size_bytes,
    Some(message.serialize()?) // Optional content capture
);
}

⚠️ Security Warning: Be careful when capturing message content in production. Ensure no sensitive data is included.

Custom Metadata

Add custom metadata to data flow events:

#![allow(unused)]
fn main() {
// Enhanced data flow tracing with custom metadata
pub async fn trace_enhanced_data_flow(
    tracing: &TracingIntegration,
    from_actor: &str, from_port: &str,
    to_actor: &str, to_port: &str,
    message: &Message,
    custom_metadata: HashMap<String, serde_json::Value>
) -> Result<()> {
    let mut event = TraceEvent::data_flow(
        from_actor.to_string(), from_port.to_string(),
        to_actor.to_string(), to_port.to_string(),
        message.type_name(), message.size_bytes()
    );
    
    // Add custom metadata
    event.data.custom_attributes.extend(custom_metadata);
    
    // Add message-specific metadata
    event.data.custom_attributes.insert(
        "message_priority".to_string(), 
        json!(message.priority())
    );
    event.data.custom_attributes.insert(
        "message_correlation_id".to_string(), 
        json!(message.correlation_id())
    );
    
    tracing.record_event(event).await
}
}

Causality Tracking

Link data flow events to their triggering events:

#![allow(unused)]
fn main() {
pub async fn trace_causally_linked_data_flow(
    tracing: &TracingIntegration,
    triggering_event_id: EventId,
    from_actor: &str, from_port: &str,
    to_actor: &str, to_port: &str,
    message: &Message
) -> Result<()> {
    let mut event = TraceEvent::data_flow(
        from_actor.to_string(), from_port.to_string(),
        to_actor.to_string(), to_port.to_string(),
        message.type_name(), message.size_bytes()
    );
    
    // Link to triggering event
    event.causality.parent_event_id = Some(triggering_event_id);
    event.causality.dependency_chain.push(triggering_event_id);
    
    tracing.record_event(event).await
}
}

Performance Considerations

Overhead Analysis

Data Flow Tracing introduces minimal overhead:

  • Memory: ~200 bytes per event
  • CPU: ~0.1ms per event (including serialization)
  • Network: Batched transmission reduces network calls
  • Storage: ~1KB per event when stored
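As a rough sizing check using these figures: at 10,000 events per second, tracing adds about 2 MB/s of in-flight event data (10,000 × 200 bytes), roughly one CPU core (10,000 × 0.1 ms = 1 s of CPU time per second), and about 10 MB/s of storage growth before compression and sampling.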

Optimization Strategies

  1. Batching: Use larger batch sizes for high-throughput scenarios
  2. Compression: Enable compression for network transmission
  3. Sampling: Sample events rather than capturing every one
  4. Filtering: Use selective tracing based on criticality
  5. Async Processing: All tracing operations are non-blocking
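For example, the DataFlowSampler shown earlier can combine strategies 3 and 4. A sketch (actor names are illustrative; constructing the struct literally assumes you are in its defining module or have added a constructor):

#![allow(unused)]
fn main() {
// Sample 5% of traffic, but always trace payment actors and never heartbeats
let sampler = DataFlowSampler {
    sample_rate: 0.05,
    always_trace: vec!["payment_processor".to_string()],
    never_trace: vec!["heartbeat".to_string()],
};

assert!(sampler.should_trace("payment_processor", "ledger"));
assert!(!sampler.should_trace("heartbeat", "monitor"));
}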

Monitoring Performance Impact

Monitor the tracing system's own performance:

#![allow(unused)]
fn main() {
// Monitor tracing overhead
let tracing_metrics = global_tracing()
    .unwrap()
    .get_performance_metrics()
    .await?;

println!("Events per second: {}", tracing_metrics.events_per_second);
println!("Average latency: {}ms", tracing_metrics.avg_latency_ms);
println!("Memory usage: {}MB", tracing_metrics.memory_usage_mb);
}

Visualization and Analysis

Data Flow Diagrams

Generate visual representations of your data flow:

#![allow(unused)]
fn main() {
// Generate data flow graph for the last hour
let flow_data = tracing_client.query_data_flows(
    TraceQuery {
        time_range: Some((Utc::now() - Duration::hours(1), Utc::now())),
        ..Default::default()
    }
).await?;

let graph = DataFlowGraph::from_events(&flow_data);
graph.render_to_file("data_flow_diagram.svg")?;
}

Real-time Dashboard

Build real-time monitoring dashboards:

// WebSocket connection for real-time data flow monitoring
const ws = new WebSocket('ws://tracing-server:8080');

ws.onmessage = (event) => {
    const traceEvent = JSON.parse(event.data);
    if (traceEvent.event_type.DataFlow) {
        updateDataFlowVisualization(traceEvent);
    }
};

Troubleshooting

Common Issues

No Data Flow Events Appearing:

  • Verify tracing is enabled: enabled: true
  • Check that actors are connected via standard connectors
  • Ensure global tracing is initialized before network operations

Too Many Events:

  • Implement sampling: reduce sample_rate
  • Use selective tracing for specific actors only
  • Increase batch_size to reduce network overhead

Performance Impact:

  • Enable compression: enable_compression: true
  • Use PostgreSQL backend for better concurrent performance
  • Consider async event processing

Debugging Data Flow Issues

Use data flow tracing to debug connectivity and performance issues:

#![allow(unused)]
fn main() {
// Debug missing data flows
let missing_flows = TraceQuery {
    actor_filter: Some("source_actor".to_string()),
    event_types: Some(vec![TraceEventType::MessageSent]),
    time_range: Some((start_time, end_time)),
    ..Default::default()
};

let sent_messages = tracing_client.query_traces(missing_flows).await?;

// Check if corresponding DataFlow events exist
for sent_event in sent_messages {
    let corresponding_flow = find_data_flow_for_message(&sent_event).await?;
    if corresponding_flow.is_none() {
        println!("Missing data flow for message: {:?}", sent_event);
    }
}
}

Best Practices

  1. Start Simple: Begin with default settings and tune based on your needs
  2. Monitor Overhead: Keep an eye on the performance impact of tracing
  3. Use Sampling: For high-throughput systems, sample rather than trace everything
  4. Secure Sensitive Data: Never trace sensitive message content
  5. Regular Cleanup: Set up automatic cleanup of old trace data
  6. Correlate Events: Use causality tracking to link related events
  7. Custom Metadata: Add domain-specific metadata for better insights

Data Flow Tracing provides fine-grained, message-level visibility into your actor network's communication patterns. Use it to understand, debug, and optimize your distributed systems with confidence.

Configuration

Reflow's observability framework provides flexible configuration options to suit different deployment scenarios and performance requirements.

Basic Configuration

TracingConfig Structure

#![allow(unused)]
fn main() {
use reflow_network::tracing::TracingConfig;
use std::time::Duration;

let config = TracingConfig {
    server_url: "ws://localhost:8080".to_string(),
    batch_size: 50,
    batch_timeout: Duration::from_millis(1000),
    enable_compression: true,
    enabled: true,
    retry_config: RetryConfig {
        max_retries: 3,
        initial_delay: Duration::from_millis(500),
        max_delay: Duration::from_secs(5),
        backoff_multiplier: 2.0,
    },
};
}

Configuration Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| server_url | String | "ws://localhost:8080" | WebSocket URL of the tracing server |
| batch_size | usize | 50 | Number of events to batch before sending |
| batch_timeout | Duration | 1000ms | Maximum time to wait before sending an incomplete batch |
| enable_compression | bool | true | Enable gzip compression for network transmission |
| enabled | bool | true | Global enable/disable switch for tracing |
| retry_config | RetryConfig | See below | Configuration for retry logic |

Retry Configuration

#![allow(unused)]
fn main() {
pub struct RetryConfig {
    pub max_retries: u32,           // Maximum retry attempts
    pub initial_delay: Duration,    // Initial delay before first retry
    pub max_delay: Duration,        // Maximum delay between retries
    pub backoff_multiplier: f64,    // Exponential backoff multiplier
}
}

Environment-Based Configuration

Using Environment Variables

# Basic tracing configuration
export REFLOW_TRACING_ENABLED=true
export REFLOW_TRACING_SERVER_URL="ws://tracing-server:8080"
export REFLOW_TRACING_BATCH_SIZE=100
export REFLOW_TRACING_BATCH_TIMEOUT_MS=2000

# Compression and retry settings
export REFLOW_TRACING_COMPRESSION=true
export REFLOW_TRACING_MAX_RETRIES=5
export REFLOW_TRACING_INITIAL_DELAY_MS=1000
export REFLOW_TRACING_MAX_DELAY_MS=30000

Configuration from Environment

#![allow(unused)]
fn main() {
use reflow_network::tracing::TracingConfig;

let config = TracingConfig::from_env().unwrap_or_default();
}

File-Based Configuration

TOML Configuration

# tracing.toml
[tracing]
enabled = true
server_url = "ws://localhost:8080"
batch_size = 50
batch_timeout_ms = 1000
enable_compression = true

[tracing.retry]
max_retries = 3
initial_delay_ms = 500
max_delay_ms = 5000
backoff_multiplier = 2.0

[tracing.filters]
# Optional: Configure event filtering
actor_patterns = ["sensor_*", "processor_*"]
exclude_actors = ["debug_*", "test_*"]
event_types = ["ActorCreated", "DataFlow", "ActorFailed"]

Loading from File

#![allow(unused)]
fn main() {
use reflow_network::tracing::TracingConfig;

let config = TracingConfig::from_file("tracing.toml")?;
}

Performance Tuning

High-Throughput Scenarios

For systems with high message throughput:

#![allow(unused)]
fn main() {
let config = TracingConfig {
    batch_size: 200,                // Larger batches
    batch_timeout: Duration::from_millis(5000), // Longer timeout
    enable_compression: true,       // Reduce network overhead
    retry_config: RetryConfig {
        max_retries: 5,            // More resilient
        initial_delay: Duration::from_millis(100),
        max_delay: Duration::from_secs(30),
        backoff_multiplier: 1.5,   // Gentler backoff
    },
    ..Default::default()
};
}

Low-Latency Requirements

For real-time monitoring needs:

#![allow(unused)]
fn main() {
let config = TracingConfig {
    batch_size: 1,                  // Send immediately
    batch_timeout: Duration::from_millis(10), // Very short timeout
    enable_compression: false,      // Reduce CPU overhead
    retry_config: RetryConfig {
        max_retries: 1,            // Fast failure
        initial_delay: Duration::from_millis(50),
        max_delay: Duration::from_millis(500),
        backoff_multiplier: 2.0,
    },
    ..Default::default()
};
}

Memory-Constrained Environments

For embedded or resource-limited deployments:

#![allow(unused)]
fn main() {
let config = TracingConfig {
    batch_size: 10,                 // Small batches
    batch_timeout: Duration::from_millis(500),
    enable_compression: true,       // Save memory in transit
    retry_config: RetryConfig {
        max_retries: 2,            // Limit retry overhead
        initial_delay: Duration::from_millis(1000),
        max_delay: Duration::from_secs(10),
        backoff_multiplier: 2.0,
    },
    ..Default::default()
};
}

Event Filtering

Actor-Based Filtering

#![allow(unused)]
fn main() {
use reflow_network::tracing::{TracingConfig, EventFilter};

let filter = EventFilter::new()
    .include_actors(&["critical_*", "payment_*"])
    .exclude_actors(&["debug_*", "test_*"])
    .include_event_types(&[
        TraceEventType::ActorCreated,
        TraceEventType::ActorFailed,
        TraceEventType::DataFlow { to_actor: "*".to_string(), to_port: "*".to_string() }
    ]);

let config = TracingConfig::default()
    .with_filter(filter);
}

Sampling Configuration

#![allow(unused)]
fn main() {
use reflow_network::tracing::SamplingStrategy;

// Sample 10% of all events
let config = TracingConfig::default()
    .with_sampling(SamplingStrategy::Percentage(10.0));

// Sample every 5th event
let config = TracingConfig::default()
    .with_sampling(SamplingStrategy::EveryNth(5));

// Adaptive sampling based on load
let config = TracingConfig::default()
    .with_sampling(SamplingStrategy::Adaptive {
        base_rate: 10.0,
        max_rate: 100.0,
        load_threshold: 0.8,
    });
}

Security Configuration

TLS/SSL Configuration

#![allow(unused)]
fn main() {
use reflow_network::tracing::{TracingConfig, TlsConfig};

let tls_config = TlsConfig {
    ca_cert_path: Some("ca-cert.pem".to_string()),
    client_cert_path: Some("client-cert.pem".to_string()),
    client_key_path: Some("client-key.pem".to_string()),
    verify_hostname: true,
};

let config = TracingConfig {
    server_url: "wss://secure-tracing-server:8443".to_string(),
    tls_config: Some(tls_config),
    ..Default::default()
};
}

Authentication

#![allow(unused)]
fn main() {
use reflow_network::tracing::AuthConfig;

let auth_config = AuthConfig::ApiKey {
    key: "your-api-key".to_string(),
    header: "X-API-Key".to_string(),
};

let config = TracingConfig {
    auth_config: Some(auth_config),
    ..Default::default()
};
}

Dynamic Configuration

Runtime Configuration Updates

#![allow(unused)]
fn main() {
use reflow_network::tracing;

// Get current global configuration
let current_config = tracing::get_global_config();

// Update configuration at runtime
let updated_config = current_config
    .with_batch_size(100)
    .with_compression(false);

tracing::update_global_config(updated_config)?;
}

Configuration Monitoring

#![allow(unused)]
fn main() {
use reflow_network::tracing::ConfigWatcher;

// Watch for configuration file changes
let watcher = ConfigWatcher::new("tracing.toml")?;
watcher.on_change(|new_config| {
    println!("Configuration updated: {:?}", new_config);
    tracing::update_global_config(new_config)
})?;
}

Validation and Testing

Configuration Validation

#![allow(unused)]
fn main() {
use reflow_network::tracing::TracingConfig;

let config = TracingConfig::default();

// Validate configuration
if let Err(e) = config.validate() {
    eprintln!("Invalid configuration: {}", e);
    return Err(e);
}

// Test connection
if config.test_connection().await.is_err() {
    eprintln!("Cannot connect to tracing server");
}
}

Configuration Examples

#![allow(unused)]
fn main() {
// Development configuration
let dev_config = TracingConfig {
    server_url: "ws://localhost:8080".to_string(),
    batch_size: 10,
    batch_timeout: Duration::from_millis(100),
    enable_compression: false,
    enabled: true,
    ..Default::default()
};

// Production configuration
let prod_config = TracingConfig {
    server_url: "wss://tracing.prod.company.com:443".to_string(),
    batch_size: 100,
    batch_timeout: Duration::from_millis(2000),
    enable_compression: true,
    enabled: true,
    tls_config: Some(TlsConfig::default()),
    auth_config: Some(AuthConfig::from_env()?),
    ..Default::default()
};

// Testing configuration (disabled)
let test_config = TracingConfig {
    enabled: false,
    ..Default::default()
};
}

Best Practices

  1. Start Conservative: Begin with small batch sizes and short timeouts, then tune based on observed performance.

  2. Monitor Overhead: Track the performance impact of tracing and adjust configuration accordingly.

  3. Use Environment Variables: Make configuration environment-specific without code changes.

  4. Enable Compression: For network-constrained environments, compression typically provides significant benefits.

  5. Configure Retries: Set appropriate retry parameters based on your network reliability.

  6. Filter Strategically: Use event filtering to reduce overhead while maintaining necessary observability.

  7. Secure Connections: Always use TLS in production environments.

  8. Test Configuration: Validate configuration in development and staging environments.

  9. Document Settings: Maintain clear documentation of configuration choices and their rationale.

  10. Version Configuration: Track configuration changes alongside code changes.

Storage Backends

Reflow's observability framework supports multiple storage backends to accommodate different operational requirements, from development and testing to large-scale production deployments.

Overview

The tracing system provides a pluggable storage architecture that allows you to choose the most appropriate backend for your needs:

  • Memory Storage: Fast, ephemeral storage for development and testing
  • SQLite Storage: Lightweight, embedded database for small to medium deployments
  • PostgreSQL Storage: Robust, scalable database for production environments
  • ClickHouse Storage: High-performance analytical database for massive scale
  • Custom Storage: Implement your own storage adapter

Memory Storage

When to Use

  • Development and testing environments
  • Temporary trace analysis
  • Systems with limited persistence requirements
  • Quick prototyping and debugging

Configuration

#![allow(unused)]
fn main() {
use reflow_tracing::storage::MemoryStorage;

let storage = MemoryStorage::new();
}

Features

  • Ultra-fast: No disk I/O overhead
  • Zero configuration: Works out of the box
  • Bounded capacity: Configurable memory limits
  • Automatic cleanup: LRU eviction when capacity is reached
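Capacity limits and the eviction policy are configurable: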
#![allow(unused)]
fn main() {
use reflow_tracing::storage::{MemoryStorage, MemoryConfig};

let config = MemoryConfig {
    max_traces: 10_000,
    max_events_per_trace: 1_000,
    max_memory_mb: 256,
    eviction_policy: EvictionPolicy::LRU,
};

let storage = MemoryStorage::with_config(config);
}

Limitations

  • No persistence: Data lost on restart
  • Memory bound: Limited by available RAM
  • Single process: No sharing between instances
  • No complex queries: Basic filtering only

SQLite Storage

When to Use

  • Small to medium production deployments
  • Single-node applications
  • Applications requiring persistence without database administration
  • Development environments with persistence needs

Configuration

#![allow(unused)]
fn main() {
use reflow_tracing::storage::SqliteStorage;

let storage = SqliteStorage::new("traces.db").await?;
}

Features

  • Persistent: Data survives restarts
  • ACID transactions: Data integrity guarantees
  • Full SQL support: Complex queries and analysis
  • Embedded: No separate database server required
  • Backup friendly: Single file for easy backups
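Journaling, caching, and connection behavior are tuned via SqliteConfig: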
#![allow(unused)]
fn main() {
use reflow_tracing::storage::{SqliteStorage, SqliteConfig};

let config = SqliteConfig {
    database_path: "traces.db".to_string(),
    journal_mode: JournalMode::WAL,
    synchronous: SynchronousMode::Normal,
    cache_size_mb: 64,
    busy_timeout_ms: 5000,
    max_connections: 10,
};

let storage = SqliteStorage::with_config(config).await?;
}

Performance Tuning

#![allow(unused)]
fn main() {
// Optimize for write performance
let fast_config = SqliteConfig {
    journal_mode: JournalMode::WAL,      // Write-Ahead Logging
    synchronous: SynchronousMode::Normal, // Balanced durability/speed
    cache_size_mb: 128,                  // Larger cache
    busy_timeout_ms: 10000,              // Handle contention
    ..Default::default()
};

// Optimize for read performance
let read_config = SqliteConfig {
    cache_size_mb: 256,                  // Very large cache
    temp_store: TempStore::Memory,       // In-memory temp tables
    mmap_size_mb: 512,                   // Memory-mapped I/O
    ..Default::default()
};
}

Limitations

  • Single writer: Write concurrency limited
  • File size: Large databases can become unwieldy
  • Network access: No remote access without additional tools

PostgreSQL Storage

When to Use

  • Production environments with multiple instances
  • High-concurrency applications
  • Applications requiring advanced SQL features
  • Distributed systems
  • Long-term data retention requirements

Configuration

#![allow(unused)]
fn main() {
use reflow_tracing::storage::PostgresStorage;

let storage = PostgresStorage::new("postgresql://user:pass@localhost/traces").await?;
}

Features

  • High concurrency: Excellent multi-client performance
  • ACID compliance: Strong consistency guarantees
  • Advanced SQL: Window functions, CTEs, advanced analytics
  • JSON support: Native support for trace event JSON
  • Partitioning: Time-based table partitioning
  • Replication: Built-in streaming replication
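Connection pooling and partitioning are tuned via PostgresConfig: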
#![allow(unused)]
fn main() {
use reflow_tracing::storage::{PostgresStorage, PostgresConfig};

let config = PostgresConfig {
    connection_url: "postgresql://user:pass@localhost/traces".to_string(),
    max_connections: 20,
    min_connections: 5,
    connection_timeout_ms: 5000,
    idle_timeout_ms: 600000,
    max_lifetime_ms: 1800000,
    schema_name: "tracing".to_string(),
    enable_partitioning: true,
    partition_interval: PartitionInterval::Daily,
};

let storage = PostgresStorage::with_config(config).await?;
}

Schema Setup

-- Create dedicated schema
CREATE SCHEMA IF NOT EXISTS tracing;

-- Create partitioned tables
CREATE TABLE tracing.traces (
    trace_id UUID NOT NULL,
    flow_id VARCHAR(255) NOT NULL,
    execution_id UUID NOT NULL,
    start_time TIMESTAMPTZ NOT NULL,
    end_time TIMESTAMPTZ,
    status VARCHAR(50) NOT NULL,
    metadata JSONB,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    -- On partitioned tables the primary key must include the partition key
    PRIMARY KEY (trace_id, start_time)
) PARTITION BY RANGE (start_time);

CREATE TABLE tracing.events (
    event_id UUID NOT NULL,
    -- A foreign key to a partitioned table would have to reference the full
    -- (trace_id, start_time) key, so referential integrity is enforced in the application
    trace_id UUID NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    event_type VARCHAR(100) NOT NULL,
    actor_id VARCHAR(255) NOT NULL,
    data JSONB NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (event_id, timestamp)
) PARTITION BY RANGE (timestamp);

-- Create indexes for performance
CREATE INDEX idx_traces_flow_id ON tracing.traces(flow_id);
CREATE INDEX idx_traces_start_time ON tracing.traces(start_time);
CREATE INDEX idx_events_trace_id ON tracing.events(trace_id);
CREATE INDEX idx_events_timestamp ON tracing.events(timestamp);
CREATE INDEX idx_events_actor_id ON tracing.events(actor_id);
CREATE INDEX idx_events_type ON tracing.events(event_type);

-- GIN index for JSON queries
CREATE INDEX idx_events_data_gin ON tracing.events USING GIN(data);
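Because both tables are declared PARTITION BY RANGE, at least one partition must exist before rows can be inserted (the auto_create_partitions option below automates this). A minimal example with illustrative dates:

-- Initial daily partitions
CREATE TABLE tracing.traces_2024_01_01 PARTITION OF tracing.traces
    FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');

CREATE TABLE tracing.events_2024_01_01 PARTITION OF tracing.events
    FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');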

Partitioning Management

#![allow(unused)]
fn main() {
// Automatic partition management
let config = PostgresConfig {
    enable_partitioning: true,
    partition_interval: PartitionInterval::Daily,
    partition_retention_days: 30,
    auto_create_partitions: true,
    ..Default::default()
};
}

Performance Optimization

-- Optimize PostgreSQL configuration
ALTER SYSTEM SET shared_buffers = '256MB';
ALTER SYSTEM SET effective_cache_size = '1GB';
ALTER SYSTEM SET maintenance_work_mem = '64MB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET wal_buffers = '16MB';
ALTER SYSTEM SET default_statistics_target = 100;
SELECT pg_reload_conf();

ClickHouse Storage

When to Use

  • Very high-volume trace data (millions of events per second)
  • Analytical workloads and reporting
  • Time-series analysis
  • Long-term data retention with compression
  • Real-time dashboards and monitoring

Configuration

#![allow(unused)]
fn main() {
use reflow_tracing::storage::ClickHouseStorage;

let storage = ClickHouseStorage::new("http://localhost:8123").await?;
}

Features

  • Columnar storage: Excellent compression and analytical performance
  • Distributed architecture: Horizontal scaling
  • Real-time ingestion: Handle massive write loads
  • Advanced analytics: Built-in analytical functions
  • Time-series optimized: Purpose-built for time-ordered data
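Cluster, batching, and resource limits are set via ClickHouseConfig: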
#![allow(unused)]
fn main() {
use reflow_tracing::storage::{ClickHouseStorage, ClickHouseConfig};

let config = ClickHouseConfig {
    url: "http://clickhouse:8123".to_string(),
    database: "tracing".to_string(),
    cluster: Some("cluster".to_string()),
    username: Some("default".to_string()),
    password: None,
    compression: CompressionMethod::LZ4,
    batch_size: 10000,
    flush_interval_ms: 5000,
    max_memory_usage: 1_000_000_000, // 1GB
    max_execution_time_ms: 300_000,   // 5 minutes
};

let storage = ClickHouseStorage::with_config(config).await?;
}

Schema Design

-- Optimized ClickHouse schema
CREATE TABLE tracing.events_local ON CLUSTER cluster (
    timestamp DateTime64(3),
    trace_id UUID,
    event_id UUID,
    flow_id String,
    execution_id UUID,
    event_type LowCardinality(String),
    actor_id String,
    port String,
    message_type String,
    message_size UInt32,
    execution_time_ns UInt64,
    memory_usage UInt64,
    cpu_usage Float32,
    data String -- JSON as string for flexibility
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{cluster}/{shard}/events', '{replica}')
PARTITION BY toYYYYMM(timestamp)
ORDER BY (timestamp, trace_id, event_id)
SETTINGS index_granularity = 8192;

-- Distributed table
CREATE TABLE tracing.events ON CLUSTER cluster AS tracing.events_local
ENGINE = Distributed(cluster, tracing, events_local, rand());

-- Materialized views for aggregations
CREATE MATERIALIZED VIEW tracing.event_metrics
ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(timestamp)
ORDER BY (timestamp, actor_id, event_type)
AS SELECT
    toStartOfMinute(timestamp) as timestamp,
    actor_id,
    event_type,
    count() as event_count,
    avg(execution_time_ns) as avg_execution_time,
    max(execution_time_ns) as max_execution_time,
    sum(message_size) as total_bytes
FROM tracing.events_local
GROUP BY timestamp, actor_id, event_type;
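The materialized view can then serve dashboard-style queries cheaply. A sketch (actor name is illustrative; with SummingMergeTree, aggregate at query time since background merges are incremental):

-- Per-minute event counts for one actor over the last hour
SELECT timestamp, sum(event_count) AS events
FROM tracing.event_metrics
WHERE actor_id = 'critical_processor'
  AND timestamp > now() - INTERVAL 1 HOUR
GROUP BY timestamp
ORDER BY timestamp;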

Performance Tuning

<!-- ClickHouse server configuration (recent releases use <clickhouse> as the root element instead of <yandex>) -->
<yandex>
    <profiles>
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>1</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
        </default>
    </profiles>
    
    <users>
        <default>
            <profile>default</profile>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
        </default>
    </users>
</yandex>

Custom Storage Implementation

Storage Trait

#![allow(unused)]
fn main() {
use async_trait::async_trait;
use reflow_tracing::storage::StorageError;

#[async_trait]
pub trait StorageBackend: Send + Sync {
    async fn store_trace(&self, trace: FlowTrace) -> Result<(), StorageError>;
    async fn get_trace(&self, trace_id: TraceId) -> Result<Option<FlowTrace>, StorageError>;
    async fn query_traces(&self, query: TraceQuery) -> Result<Vec<FlowTrace>, StorageError>;
    async fn store_event(&self, trace_id: TraceId, event: TraceEvent) -> Result<(), StorageError>;
    async fn get_events(&self, trace_id: TraceId) -> Result<Vec<TraceEvent>, StorageError>;
    async fn health_check(&self) -> Result<(), StorageError>;
}
}

Example: Redis Storage

#![allow(unused)]
fn main() {
use redis::Client;
use reflow_tracing::storage::{StorageBackend, StorageError};

// Assumes StorageError implements From<redis::RedisError> and
// From<serde_json::Error>, so the `?` conversions below compile.
pub struct RedisStorage {
    client: Client,
}

impl RedisStorage {
    pub fn new(url: &str) -> Result<Self, StorageError> {
        let client = Client::open(url)?;
        Ok(Self { client })
    }
}

#[async_trait]
impl StorageBackend for RedisStorage {
    async fn store_trace(&self, trace: FlowTrace) -> Result<(), StorageError> {
        let mut conn = self.client.get_connection()?;
        let key = format!("trace:{}", trace.trace_id);
        let value = serde_json::to_string(&trace)?;
        
        redis::cmd("SET")
            .arg(&key)
            .arg(&value)
            .arg("EX")
            .arg(3600) // 1 hour TTL
            .query(&mut conn)?;
            
        Ok(())
    }
    
    async fn get_trace(&self, trace_id: TraceId) -> Result<Option<FlowTrace>, StorageError> {
        let mut conn = self.client.get_connection()?;
        let key = format!("trace:{}", trace_id);
        
        let value: Option<String> = redis::cmd("GET")
            .arg(&key)
            .query(&mut conn)?;
            
        match value {
            Some(json) => Ok(Some(serde_json::from_str(&json)?)),
            None => Ok(None),
        }
    }
    
    // Implement other methods...
}
}

Storage Selection Guide

Decision Matrix

| Feature | Memory | SQLite | PostgreSQL | ClickHouse | Custom |
|---|---|---|---|---|---|
| Persistence | No | Yes | Yes | Yes | Depends |
| Concurrency | Medium | Low | High | Very High | Depends |
| Scale | Small | Medium | Large | Massive | Depends |
| Setup Complexity | None | Low | Medium | High | Varies |
| Query Flexibility | Limited | High | Very High | High | Depends |
| Analytics | Basic | Good | Excellent | Outstanding | Depends |
| Operational Overhead | None | Low | Medium | High | Varies |

Recommendations

Development/Testing:

#![allow(unused)]
fn main() {
// Quick start with memory storage
let storage = MemoryStorage::new();
}

Small Production:

#![allow(unused)]
fn main() {
// SQLite for simple deployments
let storage = SqliteStorage::new("traces.db").await?;
}

Medium Production:

#![allow(unused)]
fn main() {
// PostgreSQL for robust applications
let storage = PostgresStorage::new("postgresql://...").await?;
}

Large Scale/Analytics:

#![allow(unused)]
fn main() {
// ClickHouse for high-volume scenarios
let storage = ClickHouseStorage::new("http://clickhouse:8123").await?;
}

Migration Between Backends

Export/Import Tool

#![allow(unused)]
fn main() {
use reflow_tracing::migration::StorageMigrator;

// Migrate from SQLite to PostgreSQL
let migrator = StorageMigrator::new(
    SqliteStorage::new("traces.db").await?,
    PostgresStorage::new("postgresql://...").await?
);

migrator.migrate_all_traces().await?;
}

Backup and Restore

#![allow(unused)]
fn main() {
// Backup to file
let backup_path = "traces_backup.json";
storage.export_to_file(backup_path).await?;

// Restore from file
storage.import_from_file(backup_path).await?;
}

Monitoring Storage Performance

Metrics Collection

#![allow(unused)]
fn main() {
use reflow_tracing::storage::StorageMetrics;

let metrics = storage.get_metrics().await?;
println!("Storage performance:");
println!("  Write latency: {}ms", metrics.avg_write_latency_ms);
println!("  Read latency: {}ms", metrics.avg_read_latency_ms);
println!("  Storage size: {}MB", metrics.storage_size_mb);
println!("  Query performance: {}ms", metrics.avg_query_latency_ms);
}

Health Monitoring

#![allow(unused)]
fn main() {
// Regular health checks
tokio::spawn(async move {
    loop {
        match storage.health_check().await {
            Ok(_) => println!("Storage healthy"),
            Err(e) => eprintln!("Storage unhealthy: {}", e),
        }
        tokio::time::sleep(Duration::from_secs(30)).await;
    }
});
}

Best Practices

  1. Choose Appropriate Backend: Match storage backend to your scale and requirements
  2. Plan for Growth: Start simple but design for scale
  3. Monitor Performance: Track storage metrics and query performance
  4. Regular Backups: Implement automated backup strategies
  5. Partition Large Tables: Use time-based partitioning for better performance
  6. Index Strategically: Create indexes for common query patterns
  7. Manage Retention: Implement data retention policies to control growth (see the sketch after this list)
  8. Test Disaster Recovery: Regularly test backup and restore procedures
  9. Optimize Queries: Use EXPLAIN to understand and optimize query performance
  10. Monitor Resources: Keep an eye on disk space, memory, and CPU usage
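For item 7, a minimal retention job against the PostgreSQL schema shown earlier (the 30-day window is an assumption; where partitioning is enabled, prefer dropping whole partitions):

-- Delete trace data older than 30 days (retention window is an example)
DELETE FROM tracing.events WHERE timestamp < NOW() - INTERVAL '30 days';
DELETE FROM tracing.traces WHERE start_time < NOW() - INTERVAL '30 days';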

Production Deployment

This guide covers deploying Reflow's observability framework in production environments, including scalability considerations, security best practices, and operational procedures.

Architecture Overview

Production Architecture

graph TB
    App1[Reflow App 1] --> LB[Load Balancer]
    App2[Reflow App 2] --> LB
    App3[Reflow App N] --> LB
    
    LB --> TS1[Tracing Server 1]
    LB --> TS2[Tracing Server 2]
    
    TS1 --> DB[(PostgreSQL Primary)]
    TS2 --> DB
    
    DB --> Replica[(PostgreSQL Replica)]
    
    TS1 --> Cache[(Redis Cache)]
    TS2 --> Cache
    
    Grafana[Grafana] --> DB
    Grafana --> Cache
    
    Monitor[Monitoring] --> TS1
    Monitor --> TS2
    Monitor --> DB

Component Responsibilities

  • Reflow Applications: Generate trace events
  • Load Balancer: Distribute connections across tracing servers
  • Tracing Servers: Receive, process, and store trace data
  • PostgreSQL: Primary data storage with replication
  • Redis: Caching and real-time data
  • Grafana: Visualization and dashboards
  • Monitoring: Health checks and alerting

Infrastructure Requirements

Minimum Production Setup

Tracing Server:

  • CPU: 2 cores
  • Memory: 4GB RAM
  • Storage: 50GB SSD
  • Network: 1Gbps

Database (PostgreSQL):

  • CPU: 4 cores
  • Memory: 8GB RAM
  • Storage: 200GB SSD (for data) + 100GB (for WAL)
  • Network: 1Gbps

Cache (Redis):

  • CPU: 2 cores
  • Memory: 4GB RAM
  • Storage: 20GB SSD
  • Network: 1Gbps

High-Scale Production Setup

Tracing Server Cluster:

  • 3+ instances
  • CPU: 8 cores each
  • Memory: 16GB RAM each
  • Storage: 100GB SSD each
  • Network: 10Gbps

Database Cluster:

  • Primary + 2 replicas
  • CPU: 16 cores each
  • Memory: 64GB RAM each
  • Storage: 1TB NVMe SSD each
  • Network: 10Gbps

Cache Cluster:

  • 3-instance Redis cluster
  • CPU: 4 cores each
  • Memory: 16GB RAM each
  • Storage: 50GB SSD each
  • Network: 10Gbps

Container Deployment

Docker Compose

# docker-compose.prod.yml
version: '3.8'

services:
  tracing-server:
    image: reflow/tracing-server:latest
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '4'
          memory: 8G
        reservations:
          cpus: '2'
          memory: 4G
    environment:
      - RUST_LOG=info
      - TRACING_DATABASE_URL=postgresql://user:pass@postgres:5432/tracing
      - TRACING_REDIS_URL=redis://redis:6379
      - TRACING_BIND_ADDRESS=0.0.0.0:8080
      - TRACING_MAX_CONNECTIONS=1000
    ports:
      - "8080:8080"
    networks:
      - tracing-network
    depends_on:
      - postgres
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:15
    environment:
      - POSTGRES_DB=tracing
      - POSTGRES_USER=tracing_user
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
      - POSTGRES_INITDB_ARGS=--auth-host=scram-sha-256
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - tracing-network
    secrets:
      - postgres_password
    command: postgres -c shared_preload_libraries=pg_stat_statements
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U tracing_user -d tracing"]
      interval: 30s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    networks:
      - tracing-network
    command: redis-server --appendonly yes --maxmemory 2gb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 5s
      retries: 3

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "443:443"
    networks:
      - tracing-network
    depends_on:
      - tracing-server

volumes:
  postgres_data:
  redis_data:

networks:
  tracing-network:
    driver: overlay

secrets:
  postgres_password:
    external: true

Kubernetes Deployment

# tracing-server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tracing-server
  labels:
    app: tracing-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tracing-server
  template:
    metadata:
      labels:
        app: tracing-server
    spec:
      containers:
      - name: tracing-server
        image: reflow/tracing-server:latest
        ports:
        - containerPort: 8080
        env:
        - name: RUST_LOG
          value: "info"
        - name: TRACING_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: tracing-secrets
              key: database-url
        - name: TRACING_REDIS_URL
          value: "redis://redis:6379"
        resources:
          requests:
            memory: "2Gi"
            cpu: "1000m"
          limits:
            memory: "4Gi"
            cpu: "2000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: tracing-server
spec:
  selector:
    app: tracing-server
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: ClusterIP

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tracing-server-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - tracing.yourdomain.com
    secretName: tracing-tls
  rules:
  - host: tracing.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tracing-server
            port:
              number: 8080

PostgreSQL Configuration

# postgres-deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_DB
          value: tracing
        - name: POSTGRES_USER
          value: tracing_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: password
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        - name: postgres-config
          mountPath: /etc/postgresql/postgresql.conf
          subPath: postgresql.conf
        resources:
          requests:
            memory: "4Gi"
            cpu: "2000m"
          limits:
            memory: "8Gi"
            cpu: "4000m"
      volumes:
      - name: postgres-config
        configMap:
          name: postgres-config
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 200Gi
      storageClassName: fast-ssd

Configuration Management

Environment-Specific Configuration

# config/production.toml
[server]
bind_address = "0.0.0.0:8080"
max_connections = 1000
worker_threads = 8
keep_alive_timeout = 30

[database]
url = "postgresql://user:pass@postgres-cluster:5432/tracing"
max_connections = 20
min_connections = 5
connection_timeout = 5000
statement_timeout = 30000

[redis]
url = "redis://redis-cluster:6379"
pool_size = 10
connection_timeout = 3000

[tracing]
batch_size = 100
batch_timeout_ms = 2000
max_event_size = 1048576  # 1MB
compression = true

[logging]
level = "info"
format = "json"
target = "stdout"

[metrics]
enabled = true
bind_address = "0.0.0.0:9090"

Secret Management

# Kubernetes secrets
kubectl create secret generic tracing-secrets \
  --from-literal=database-url="postgresql://user:pass@postgres:5432/tracing" \
  --from-literal=redis-url="redis://redis:6379" \
  --from-literal=jwt-secret="your-jwt-secret"

kubectl create secret generic postgres-secrets \
  --from-literal=password="secure-postgres-password"

# Docker secrets
echo "secure-postgres-password" | docker secret create postgres_password -

Security Configuration

TLS/SSL Setup

# nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream tracing_backend {
        server tracing-server:8080;
        keepalive 32;
    }

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name tracing.yourdomain.com;

        ssl_certificate /etc/ssl/certs/tracing.crt;
        ssl_certificate_key /etc/ssl/private/tracing.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;

        location / {
            proxy_pass http://tracing_backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

Authentication Configuration

#![allow(unused)]
fn main() {
// Server configuration with authentication
use reflow_tracing::auth::{AuthConfig, JwtAuth};

let auth_config = AuthConfig {
    jwt_secret: env::var("JWT_SECRET")?,
    token_expiry: Duration::from_secs(24 * 60 * 60), // 24 hours (Duration::from_hours is not yet stable)
    issuer: "reflow-tracing".to_string(),
    audience: "reflow-clients".to_string(),
};

let server_config = ServerConfig {
    auth: Some(auth_config),
    require_auth: true,
    ..Default::default()
};
}

Network Security

# Network policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tracing-network-policy
spec:
  podSelector:
    matchLabels:
      app: tracing-server
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: reflow-client
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - podSelector:
        matchLabels:
          app: redis
    ports:
    - protocol: TCP
      port: 6379

Monitoring and Observability

Prometheus Metrics

# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    
    scrape_configs:
    - job_name: 'tracing-server'
      static_configs:
      - targets: ['tracing-server:9090']
      metrics_path: /metrics
      scrape_interval: 10s
    
    - job_name: 'postgres'
      static_configs:
      - targets: ['postgres-exporter:9187']
    
    - job_name: 'redis'
      static_configs:
      - targets: ['redis-exporter:9121']

Health Checks

#![allow(unused)]
fn main() {
// Health check endpoints
use warp::Filter;

let health = warp::path("health")
    .and(warp::get())
    .map(|| {
        // Check database connectivity
        // Check Redis connectivity
        // Check disk space
        warp::reply::json(&json!({
            "status": "healthy",
            "timestamp": Utc::now(),
            "checks": {
                "database": "ok",
                "redis": "ok",
                "disk_space": "ok"
            }
        }))
    });

let ready = warp::path("ready")
    .and(warp::get())
    .map(|| {
        // Check if server is ready to accept traffic
        warp::reply::json(&json!({
            "status": "ready",
            "timestamp": Utc::now()
        }))
    });
}

Alerting Rules

# alerting-rules.yaml
groups:
- name: tracing-server
  rules:
  - alert: TracingServerDown
    expr: up{job="tracing-server"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Tracing server is down"
      description: "Tracing server {{ $labels.instance }} has been down for more than 5 minutes"

  - alert: HighLatency
    expr: tracing_request_duration_seconds{quantile="0.95"} > 0.5
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "High latency detected"
      description: "95th percentile latency is {{ $value }}s"

  - alert: HighErrorRate
    expr: rate(tracing_requests_total{status="error"}[5m]) > 0.1
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "High error rate"
      description: "Error rate is {{ $value }} requests/second"

  - alert: DatabaseConnectionsHigh
    expr: pg_stat_activity_count > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "High number of database connections"
      description: "{{ $value }} active connections to PostgreSQL"

Performance Tuning

PostgreSQL Optimization

# postgresql.conf optimizations
shared_buffers = 2GB                    # 25% of RAM
effective_cache_size = 6GB              # 75% of RAM
maintenance_work_mem = 512MB
work_mem = 16MB
max_connections = 200
wal_buffers = 16MB
checkpoint_completion_target = 0.9
random_page_cost = 1.1                  # For SSDs
effective_io_concurrency = 200          # For SSDs

# Enable query logging for optimization
log_min_duration_statement = 1000       # Log queries > 1s
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on

Redis Optimization

# redis.conf
maxmemory 4gb
maxmemory-policy allkeys-lru
save 900 1
save 300 10
save 60 10000
tcp-keepalive 300
tcp-backlog 511

Application Tuning

#![allow(unused)]
fn main() {
// Server configuration for high performance
let config = ServerConfig {
    worker_threads: num_cpus::get(),
    max_connections: 1000,
    connection_pool_size: 20,
    batch_size: 200,
    batch_timeout: Duration::from_millis(5000),
    compression: true,
    buffer_size: 65536,
    ..Default::default()
};
}

Backup and Recovery

Database Backup

#!/bin/bash
# backup.sh
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DB_NAME="tracing"

# Create backup
pg_dump -h postgres -U tracing_user -d $DB_NAME | gzip > "$BACKUP_DIR/tracing_$TIMESTAMP.sql.gz"

# Upload to S3
aws s3 cp "$BACKUP_DIR/tracing_$TIMESTAMP.sql.gz" s3://your-backup-bucket/database/

# Cleanup old backups (keep 30 days)
find $BACKUP_DIR -name "tracing_*.sql.gz" -mtime +30 -delete

Automated Backup with CronJob

# backup-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: postgres-backup
            image: postgres:15
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
            command:
            - /bin/bash
            - -c
            - |
              pg_dump -h postgres -U tracing_user tracing | gzip > /backup/tracing_$(date +%Y%m%d_%H%M%S).sql.gz
              # Upload to cloud storage
              aws s3 cp /backup/tracing_*.sql.gz s3://backup-bucket/
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: backup-pvc
          restartPolicy: OnFailure

Disaster Recovery

#!/bin/bash
# restore.sh
BACKUP_FILE=$1

if [ -z "$BACKUP_FILE" ]; then
    echo "Usage: $0 <backup_file>"
    exit 1
fi

# Download backup from S3
aws s3 cp "s3://your-backup-bucket/database/$BACKUP_FILE" ./

# Restore database
gunzip -c "$BACKUP_FILE" | psql -h postgres -U tracing_user -d tracing

echo "Database restored from $BACKUP_FILE"

Scaling Strategies

Horizontal Scaling

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tracing-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tracing-server
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

Database Scaling

-- Read replicas configuration
-- Primary server
ALTER SYSTEM SET wal_level = replica;
ALTER SYSTEM SET max_wal_senders = 3;
ALTER SYSTEM SET max_replication_slots = 3;
SELECT pg_reload_conf();

-- Create replication slot
SELECT pg_create_physical_replication_slot('replica_1');

-- Replica server setup (PostgreSQL 12+: create an empty standby.signal file
-- instead of setting standby_mode, and put primary_conninfo in postgresql.auto.conf)
primary_conninfo = 'host=postgres-primary port=5432 user=replicator'

Maintenance Procedures

Rolling Updates

#!/bin/bash
# rolling-update.sh
kubectl set image deployment/tracing-server tracing-server=reflow/tracing-server:v2.0.0
kubectl rollout status deployment/tracing-server
kubectl rollout history deployment/tracing-server
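If the new image misbehaves, roll back to the previous revision:

# Roll back to the previous deployment revision
kubectl rollout undo deployment/tracing-server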

Database Maintenance

-- Regular maintenance tasks
VACUUM ANALYZE tracing.events;
VACUUM ANALYZE tracing.traces;
REINDEX INDEX CONCURRENTLY idx_events_timestamp;

-- Partition maintenance (create_monthly_partitions and drop_old_partitions
-- are helper functions you define yourself or get from a tool such as pg_partman)
SELECT create_monthly_partitions('tracing.events', '2024-01-01'::date);
SELECT drop_old_partitions('tracing.events', interval '90 days');

Log Rotation

# fluent-bit-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [INPUT]
        Name tail
        Path /var/log/containers/*tracing-server*.log
        Parser docker
        Tag tracing.*
    
    [OUTPUT]
        Name es
        Match tracing.*
        Host elasticsearch.logging.svc.cluster.local
        Port 9200
        Index tracing-logs
        Type _doc

Troubleshooting

Common Issues

High Memory Usage:

# Check memory usage
kubectl top pods
kubectl describe pod tracing-server-xxx

# Adjust memory limits
kubectl patch deployment tracing-server -p '{"spec":{"template":{"spec":{"containers":[{"name":"tracing-server","resources":{"limits":{"memory":"8Gi"}}}]}}}}'

Database Connection Issues:

-- Check active connections
SELECT count(*) FROM pg_stat_activity;

-- Kill long-running queries (excluding this session)
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE state = 'active'
  AND query_start < NOW() - INTERVAL '10 minutes'
  AND pid <> pg_backend_pid();

Performance Issues:

# Check metrics
curl http://tracing-server:9090/metrics | grep -E "latency|throughput|errors"

# Scale up
kubectl scale deployment tracing-server --replicas=10

This production deployment guide provides a comprehensive foundation for running Reflow's observability framework at scale with proper security, monitoring, and operational procedures.

Creating Actors

This guide covers how to create custom actors in Reflow using the correct implementation patterns. Learn everything from basic actors to advanced patterns with state management and error handling.

Creating Actors: Two Approaches

Reflow provides two ways to create actors:

  1. Actor Macro (Recommended): Use the #[actor] macro for simple, declarative actor creation
  2. Manual Implementation: Implement the Actor trait directly for maximum control

Using the Actor Macro

The #[actor] macro is the recommended way to create actors. It generates all the necessary boilerplate code including the Actor trait implementation, port management, and process creation.

Basic Actor

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use reflow_network::{
    actor::ActorContext,
    message::Message,
};
use actor_macro::actor;

#[actor(
    HelloActor,
    inports::<100>(input),
    outports::<50>(output)
)]
async fn hello_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    if let Some(Message::String(text)) = payload.get("input") {
        let response = format!("Hello, {}!", text);
        
        Ok([
            ("output".to_owned(), Message::string(response))
        ].into())
    } else {
        Err(anyhow::anyhow!("Expected string input"))
    }
}
}

Actor with Multiple Inputs

#![allow(unused)]
fn main() {
#[actor(
    GreeterActor,
    inports::<100>(name, age),
    outports::<50>(greeting),
    await_all_inports  // Wait for both inputs before processing
)]
async fn greeter_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    let name = match payload.get("name").expect("expected name") {
        Message::String(s) => s,
        _ => return Err(anyhow::anyhow!("Name must be a string")),
    };
    
    let age = match payload.get("age").expect("expected age") {
        Message::Integer(n) => *n,
        _ => return Err(anyhow::anyhow!("Age must be an integer")),
    };
    
    let greeting = format!("Hello {}, you are {} years old!", name, age);
    
    Ok([
        ("greeting".to_owned(), Message::string(greeting))
    ].into())
}
}

Stateful Actor

#![allow(unused)]
fn main() {
use reflow_network::actor::MemoryState;

#[actor(
    CounterActor,
    state(MemoryState),
    inports::<100>(increment, reset),
    outports::<50>(count, total)
)]
async fn counter_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    
    let mut state_guard = state.lock();
    let memory_state = state_guard
        .as_mut_any()
        .downcast_mut::<MemoryState>()
        .expect("Expected MemoryState");
    
    // Initialize state if needed
    if !memory_state.contains_key("count") {
        memory_state.insert("count", serde_json::json!(0));
        memory_state.insert("total", serde_json::json!(0));
    }
    
    let current_count = memory_state.get("count")
        .and_then(|v| v.as_i64())
        .unwrap_or(0);
    
    let current_total = memory_state.get("total")
        .and_then(|v| v.as_i64())
        .unwrap_or(0);
    
    let (new_count, new_total) = if payload.contains_key("reset") {
        // Reset counter
        (0, current_total)
    } else if let Some(Message::Integer(amount)) = payload.get("increment") {
        // Increment by specific amount
        let new_count = current_count + amount;
        (new_count, current_total + amount)
    } else {
        // Default increment by 1
        let new_count = current_count + 1;
        (new_count, current_total + 1)
    };
    
    // Update state
    memory_state.insert("count", serde_json::json!(new_count));
    memory_state.insert("total", serde_json::json!(new_total));
    
    println!("Counter: {} (total: {})", new_count, new_total);
    
    Ok([
        ("count".to_owned(), Message::Integer(new_count)),
        ("total".to_owned(), Message::Integer(new_total)),
    ].into())
}
}

Actor Macro Parameters

Port Definitions

#![allow(unused)]
fn main() {
// Basic ports (unbounded channels)
inports(A, B, C)
outports(X, Y)

// Ports with capacity (bounded channels)
inports::<100>(A, B)      // Input ports with capacity 100
outports::<50>(X, Y)      // Output ports with capacity 50
}
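As with bounded channels generally, a port with capacity applies backpressure: senders wait once the buffer is full, so a slow consumer cannot force unbounded memory growth. Unbounded ports never slow the sender but may queue messages without limit under load.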

State Management

#![allow(unused)]
fn main() {
// Use built-in MemoryState
state(MemoryState)

// Custom state types can also be used
// (must implement ActorState trait)
}

Input Synchronization

#![allow(unused)]
fn main() {
// Process inputs as they arrive (default)
#[actor(MyActor, inports(A, B), outports(C))]

// Wait for ALL inputs before processing
#[actor(MyActor, inports(A, B), outports(C), await_all_inports)]
}

Practical Examples

Data Processing Pipeline

#![allow(unused)]
fn main() {
// Sum Actor - adds numbers from multiple sources
#[actor(
    SumActor,
    inports::<100>(numbers),
    outports::<50>(sum, count)
)]
async fn sum_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    if let Some(Message::Array(numbers)) = payload.get("numbers") {
        let mut sum = 0i64;
        let mut count = 0usize;
        
        for num in numbers {
            if let Message::Integer(n) = num {
                sum += n;
                count += 1;
            }
        }
        
        println!("Sum Actor: {} numbers, sum = {}", count, sum);
        
        Ok([
            ("sum".to_owned(), Message::Integer(sum)),
            ("count".to_owned(), Message::Integer(count as i64)),
        ].into())
    } else {
        Err(anyhow::anyhow!("Expected array of numbers"))
    }
}

// Filter Actor - filters values based on condition
#[actor(
    FilterActor,
    inports::<100>(values, threshold),
    outports::<50>(passed, failed),
    await_all_inports
)]
async fn filter_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    let threshold = match payload.get("threshold").expect("expected threshold") {
        Message::Integer(t) => *t,
        _ => return Err(anyhow::anyhow!("Threshold must be integer")),
    };
    
    if let Some(Message::Array(values)) = payload.get("values") {
        let mut passed = Vec::new();
        let mut failed = Vec::new();
        
        for value in values {
            if let Message::Integer(n) = value {
                if *n >= threshold {
                    passed.push(value.clone());
                } else {
                    failed.push(value.clone());
                }
            }
        }
        
        println!("Filter Actor: {} passed, {} failed (threshold: {})", 
                passed.len(), failed.len(), threshold);
        
        Ok([
            ("passed".to_owned(), Message::Array(passed)),
            ("failed".to_owned(), Message::Array(failed)),
        ].into())
    } else {
        Err(anyhow::anyhow!("Expected array of values"))
    }
}
}

HTTP Client Actor

#![allow(unused)]
fn main() {
use reqwest;

#[actor(
    HttpClientActor,
    inports::<50>(request),
    outports::<25>(response, error)
)]
async fn http_client_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    // Parse request
    let request = match payload.get("request") {
        Some(Message::Object(obj)) => obj,
        _ => return Err(anyhow::anyhow!("Expected request object")),
    };
    
    let url = match request.get("url") {
        Some(Message::String(s)) => s,
        _ => return Err(anyhow::anyhow!("Missing URL in request")),
    };
    
    let method = request.get("method")
        .and_then(|m| if let Message::String(s) = m { Some(s.as_str()) } else { None })
        .unwrap_or("GET");
    
    // Make HTTP request
    let client = reqwest::Client::new();
    
    let result = match method {
        "GET" => {
            match client.get(url).send().await {
                Ok(response) => {
                    let status = response.status().as_u16();
                    let text = response.text().await.unwrap_or_default();
                    
                    let response_obj = [
                        ("status".to_owned(), Message::Integer(status as i64)),
                        ("body".to_owned(), Message::String(text)),
                        ("url".to_owned(), Message::String(url.clone())),
                    ].into();
                    
                    [("response".to_owned(), Message::Object(response_obj))].into()
                },
                Err(e) => {
                    let error_obj = [
                        ("message".to_owned(), Message::String(e.to_string())),
                        ("url".to_owned(), Message::String(url.clone())),
                    ].into();
                    
                    [("error".to_owned(), Message::Object(error_obj))].into()
                }
            }
        },
        "POST" => {
            let body = request.get("body")
                .and_then(|b| if let Message::String(s) = b { Some(s.as_str()) } else { None })
                .unwrap_or("");
            
            match client.post(url).body(body.to_string()).send().await {
                Ok(response) => {
                    let status = response.status().as_u16();
                    let text = response.text().await.unwrap_or_default();
                    
                    let response_obj = [
                        ("status".to_owned(), Message::Integer(status as i64)),
                        ("body".to_owned(), Message::String(text)),
                        ("url".to_owned(), Message::String(url.clone())),
                    ].into();
                    
                    [("response".to_owned(), Message::Object(response_obj))].into()
                },
                Err(e) => {
                    let error_obj = [
                        ("message".to_owned(), Message::String(e.to_string())),
                        ("url".to_owned(), Message::String(url.clone())),
                    ].into();
                    
                    [("error".to_owned(), Message::Object(error_obj))].into()
                }
            }
        },
        _ => {
            let error_obj = [
                ("message".to_owned(), Message::String(format!("Unsupported method: {}", method))),
            ].into();
            
            [("error".to_owned(), Message::Object(error_obj))].into()
        }
    };
    
    Ok(result)
}
}
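
The GET and POST arms above build identical response and error objects. If you extend the actor to more methods, a small shared helper keeps the match arms short; this helper is an illustrative refactoring, not part of the Reflow API:

#![allow(unused)]
fn main() {
// Convert a reqwest response into the Message::Object shape used above.
async fn response_to_message(url: &str, response: reqwest::Response) -> Message {
    let status = response.status().as_u16();
    let body = response.text().await.unwrap_or_default();
    
    Message::Object([
        ("status".to_owned(), Message::Integer(status as i64)),
        ("body".to_owned(), Message::String(body)),
        ("url".to_owned(), Message::String(url.to_string())),
    ].into())
}
}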

Batch Processing Actor

#![allow(unused)]
fn main() {
#[actor(
    BatchActor,
    state(MemoryState),
    inports::<200>(item, flush),
    outports::<50>(batch, count)
)]
async fn batch_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    
    let mut state_guard = state.lock();
    let memory_state = state_guard
        .as_mut_any()
        .downcast_mut::<MemoryState>()
        .expect("Expected MemoryState");
    
    // Initialize batch storage
    if !memory_state.contains_key("batch") {
        memory_state.insert("batch", serde_json::json!([]));
        memory_state.insert("batch_size", serde_json::json!(10)); // Configurable batch size
    }
    
    let batch_size = memory_state.get("batch_size")
        .and_then(|v| v.as_u64())
        .unwrap_or(10) as usize;
    
    let mut current_batch: Vec<serde_json::Value> = memory_state.get("batch")
        .and_then(|v| v.as_array())
        .cloned()
        .unwrap_or_default();
    
    // Handle flush command
    if payload.contains_key("flush") {
        if !current_batch.is_empty() {
            let batch_messages: Vec<Message> = current_batch
                .into_iter()
                .map(|v| Message::from(v))
                .collect();
            
            // Clear batch
            memory_state.insert("batch", serde_json::json!([]));
            
            let count = batch_messages.len();
            println!("Batch Actor: Flushing {} items", count);
            
            return Ok([
                ("batch".to_owned(), Message::Array(batch_messages)),
                ("count".to_owned(), Message::Integer(count as i64)),
            ].into());
        } else {
            return Ok(HashMap::new()); // No items to flush
        }
    }
    
    // Handle new item
    if let Some(item) = payload.get("item") {
        current_batch.push(serde_json::json!(item));
        
        // Check if batch is full
        if current_batch.len() >= batch_size {
            let batch_messages: Vec<Message> = current_batch
                .iter()
                .map(|v| Message::from(v.clone()))
                .collect();
            
            // Clear batch
            memory_state.insert("batch", serde_json::json!([]));
            
            let count = batch_messages.len();
            println!("Batch Actor: Full batch of {} items", count);
            
            Ok([
                ("batch".to_owned(), Message::Array(batch_messages)),
                ("count".to_owned(), Message::Integer(count as i64)),
            ].into())
        } else {
            // Update batch state
            memory_state.insert("batch", serde_json::json!(current_batch));
            
            // Return empty result (batch not ready yet)
            Ok(HashMap::new())
        }
    } else {
        Err(anyhow::anyhow!("Expected item or flush command"))
    }
}
}

Error Handling Patterns

Graceful Error Handling

#![allow(unused)]
fn main() {
#[actor(
    ValidatorActor,
    inports::<100>(data),
    outports::<50>(valid, invalid, error)
)]
async fn validator_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    match payload.get("data") {
        Some(Message::Integer(n)) if *n > 0 => {
            println!("Validator: Valid number {}", n);
            Ok([("valid".to_owned(), Message::Integer(*n))].into())
        },
        Some(Message::Integer(n)) => {
            println!("Validator: Invalid number {} (must be positive)", n);
            Ok([("invalid".to_owned(), Message::Integer(*n))].into())
        },
        Some(other) => {
            let error_msg = format!("Expected integer, got {:?}", other);
            println!("Validator: {}", error_msg);
            Ok([("error".to_owned(), Message::Error(error_msg))].into())
        },
        None => {
            Err(anyhow::anyhow!("Missing data field"))
        }
    }
}

// Retry Actor - implements retry logic with exponential backoff
#[actor(
    RetryActor,
    state(MemoryState),
    inports::<50>(task),
    outports::<25>(success, failure)
)]
async fn retry_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    
    let task = payload.get("task")
        .ok_or_else(|| anyhow::anyhow!("Missing task"))?;
    
    let max_retries: u32 = 3; // u32 so it can be used as the pow() exponent below
    let base_delay_ms: u64 = 100;
    
    for attempt in 1..=max_retries {
        match simulate_task_processing(task).await {
            Ok(result) => {
                println!("Retry Actor: Task succeeded on attempt {}", attempt);
                return Ok([("success".to_owned(), result)].into());
            },
            Err(e) => {
                if attempt < max_retries {
                    let delay = base_delay_ms * (2_u64.pow(attempt - 1));
                    println!("Retry Actor: Attempt {} failed, retrying in {}ms: {}", 
                            attempt, delay, e);
                    
                    tokio::time::sleep(tokio::time::Duration::from_millis(delay)).await;
                } else {
                    println!("Retry Actor: All {} attempts failed: {}", max_retries, e);
                    return Ok([
                        ("failure".to_owned(), Message::Error(format!("Failed after {} attempts: {}", max_retries, e)))
                    ].into());
                }
            }
        }
    }
    
    unreachable!()
}

async fn simulate_task_processing(task: &Message) -> Result<Message, anyhow::Error> {
    // Simulate processing that might fail
    use rand::Rng;
    let mut rng = rand::thread_rng();
    
    if rng.gen_bool(0.7) { // 70% success rate
        Ok(Message::String(format!("Processed: {:?}", task)))
    } else {
        Err(anyhow::anyhow!("Simulated task failure"))
    }
}
}
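
With base_delay_ms = 100 and max_retries = 3, the backoff above sleeps 100 ms after the first failure and 200 ms after the second; the third failure is terminal and is reported on the failure port instead of retrying.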

Using Actors in Networks

Registration and Instantiation

use reflow_network::{Network, NetworkConfig};
use reflow_network::connector::{Connector, ConnectionPoint, InitialPacket};

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    let mut network = Network::new(NetworkConfig::default());
    
    // Register actor types
    network.register_actor("hello_process", HelloActor::new())?;
    network.register_actor("counter_process", CounterActor::new())?;
    network.register_actor("validator_process", ValidatorActor::new())?;
    
    // Create actor instances
    network.add_node("hello1", "hello_process")?;
    network.add_node("counter1", "counter_process")?;
    network.add_node("validator1", "validator_process")?;
    
    // Connect actors
    network.add_connection(Connector {
        from: ConnectionPoint {
            actor: "hello1".to_owned(),
            port: "output".to_owned(),
            ..Default::default()
        },
        to: ConnectionPoint {
            actor: "validator1".to_owned(),
            port: "data".to_owned(),
            ..Default::default()
        },
    });
    
    // Send initial data
    network.add_initial(InitialPacket {
        to: ConnectionPoint {
            actor: "hello1".to_owned(),
            port: "input".to_owned(),
            initial_data: Some(Message::String("World".to_owned())),
        },
    });
    
    // Start the network
    network.start().await?;
    
    // Wait for processing
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
    
    Ok(())
}

Testing Actors

Unit Testing Actor Functions

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
    use super::*;
    use reflow_network::actor::{ActorContext, MemoryState, ActorLoad};
    use std::sync::Arc;
    use parking_lot::Mutex;
    
    fn create_test_context(payload: HashMap<String, Message>) -> ActorContext {
        let (tx, _rx) = flume::unbounded();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(MemoryState::default()));
        
        ActorContext::new(
            payload,
            (tx, _rx),
            state,
            HashMap::new(),
            Arc::new(Mutex::new(ActorLoad::new(0))),
        )
    }
    
    #[tokio::test]
    async fn test_hello_actor() {
        let payload = HashMap::from([
            ("input".to_string(), Message::String("Test".to_string()))
        ]);
        
        let context = create_test_context(payload);
        let result = hello_actor(context).await.unwrap();
        
        assert_eq!(
            result.get("output"),
            Some(&Message::String("Hello, Test!".to_string()))
        );
    }
    
    #[tokio::test]
    async fn test_counter_actor_increment() {
        let payload = HashMap::from([
            ("increment".to_string(), Message::Integer(5))
        ]);
        
        let context = create_test_context(payload);
        let result = counter_actor(context).await.unwrap();
        
        assert_eq!(result.get("count"), Some(&Message::Integer(5)));
        assert_eq!(result.get("total"), Some(&Message::Integer(5)));
    }
    
    #[tokio::test]
    async fn test_greeter_actor() {
        let payload = HashMap::from([
            ("name".to_string(), Message::String("Alice".to_string())),
            ("age".to_string(), Message::Integer(30)),
        ]);
        
        let context = create_test_context(payload);
        let result = greeter_actor(context).await.unwrap();
        
        assert_eq!(
            result.get("greeting"),
            Some(&Message::String("Hello Alice, you are 30 years old!".to_string()))
        );
    }
    
    #[tokio::test]
    async fn test_validator_actor_valid() {
        let payload = HashMap::from([
            ("data".to_string(), Message::Integer(42))
        ]);
        
        let context = create_test_context(payload);
        let result = validator_actor(context).await.unwrap();
        
        assert_eq!(result.get("valid"), Some(&Message::Integer(42)));
        assert!(!result.contains_key("invalid"));
        assert!(!result.contains_key("error"));
    }
    
    #[tokio::test]
    async fn test_validator_actor_invalid() {
        let payload = HashMap::from([
            ("data".to_string(), Message::Integer(-5))
        ]);
        
        let context = create_test_context(payload);
        let result = validator_actor(context).await.unwrap();
        
        assert_eq!(result.get("invalid"), Some(&Message::Integer(-5)));
        assert!(!result.contains_key("valid"));
    }
}
}

Best Practices

Actor Design Guidelines

  1. Single Responsibility: Each actor should have one clear purpose
  2. Idempotent Processing: Handle duplicate messages gracefully
  3. Error Propagation: Use both Result returns and dedicated error output ports (see the sketch after this list)
  4. Minimal State: Keep actor state small and well-defined
  5. Port Naming: Use descriptive port names
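
A minimal sketch of guideline 3, using illustrative names: recoverable problems go to a dedicated error outport so downstream actors can react, while unrecoverable ones (such as a wiring bug) surface through the Result return:

#![allow(unused)]
fn main() {
#[actor(
    SafeParserActor,
    inports::<100>(raw),
    outports::<50>(parsed, error)
)]
async fn safe_parser_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    match payload.get("raw") {
        // Recoverable: route bad input to the error port and keep running
        Some(Message::String(s)) => match s.parse::<i64>() {
            Ok(n) => Ok([("parsed".to_owned(), Message::Integer(n))].into()),
            Err(e) => Ok([("error".to_owned(), Message::Error(format!("parse failed: {}", e)))].into()),
        },
        // Unrecoverable: a missing input port indicates a wiring bug
        _ => Err(anyhow::anyhow!("Expected string on 'raw' port")),
    }
}
}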

Performance Tips

#![allow(unused)]
fn main() {
// Use appropriate channel capacities
inports::<1000>(high_volume_input)   // High throughput
inports::<10>(low_volume_input)      // Low throughput

// Batch processing for efficiency
#[actor(
    EfficientProcessor,
    inports::<500>(batch),
    outports::<100>(results)
)]
async fn efficient_processor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    if let Some(Message::Array(items)) = payload.get("batch") {
        // Process items in parallel
        use futures::stream::{self, StreamExt};
        
        let results: Vec<Message> = stream::iter(items.iter())
            .map(|item| async move {
                process_single_item(item).await
            })
            .buffer_unordered(10) // Process 10 items concurrently
            .collect()
            .await;
        
        Ok([("results".to_owned(), Message::Array(results))].into())
    } else {
        Err(anyhow::anyhow!("Expected batch of items"))
    }
}

async fn process_single_item(item: &Message) -> Message {
    // Simulate processing
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
    item.clone()
}
}

Memory Management

#![allow(unused)]
fn main() {
// Avoid cloning large data when possible
#[actor(
    MemoryEfficientActor,
    inports::<100>(data),
    outports::<50>(processed)
)]
async fn memory_efficient_actor(context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
    let payload = context.get_payload();
    
    // Process data in-place when possible
    if let Some(Message::Array(items)) = payload.get("data") {
        let count = items.len();
        
        // Instead of cloning all items, just extract what we need
        let summary = Message::Object([
            ("count".to_owned(), Message::Integer(count as i64)),
            ("first_item".to_owned(), items.first().cloned().unwrap_or(Message::None)),
            ("last_item".to_owned(), items.last().cloned().unwrap_or(Message::None)),
        ].into());
        
        Ok([("processed".to_owned(), summary)].into())
    } else {
        Err(anyhow::anyhow!("Expected array data"))
    }
}
}

Manual Actor Implementation

For maximum control, or when you run up against the macro's limitations, you can implement the Actor trait manually. This approach gives you complete control over the actor's behavior and lifecycle.

Basic Manual Actor

#![allow(unused)]
fn main() {
use reflow_network::actor::{Actor, ActorBehavior, ActorContext, Port, MemoryState, ActorLoad};
use reflow_network::message::Message;
use std::collections::HashMap;
use std::sync::Arc;
use parking_lot::Mutex;
use std::pin::Pin;
use std::future::Future;

pub struct ManualActor {
    inports: Port,
    outports: Port,
    name: String,
    load: Arc<Mutex<ActorLoad>>,
}

impl ManualActor {
    pub fn new(name: String) -> Self {
        Self {
            inports: flume::unbounded(),
            outports: flume::unbounded(),
            name,
            load: Arc::new(Mutex::new(ActorLoad::new(0))),
        }
    }
}

impl Actor for ManualActor {
    fn get_behavior(&self) -> ActorBehavior {
        let name = self.name.clone();
        
        Box::new(move |context: ActorContext| {
            let name = name.clone();
            
            Box::pin(async move {
                let payload = context.get_payload();
                
                if let Some(Message::String(text)) = payload.get("input") {
                    let response = format!("{}: Processing '{}'", name, text);
                    println!("{}", response);
                    
                    Ok([
                        ("output".to_owned(), Message::String(response))
                    ].into())
                } else {
                    Err(anyhow::anyhow!("Expected string input"))
                }
            })
        })
    }
    
    fn get_inports(&self) -> Port {
        self.inports.clone()
    }
    
    fn get_outports(&self) -> Port {
        self.outports.clone()
    }
    
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> {
        self.load.clone()
    }
    
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>> {
        let inports = self.get_inports();
        let behavior = self.get_behavior();
        let outports = self.get_outports();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(MemoryState::default()));
        let load_count = self.load_count();
        
        Box::pin(async move {
            use futures::stream::StreamExt;
            
            loop {
                if let Some(payload) = inports.1.stream().next().await {
                    // Increment load count
                    {
                        let mut load = load_count.lock();
                        load.inc();
                    }
                    
                    let context = ActorContext::new(
                        payload,
                        outports.clone(),
                        state.clone(),
                        HashMap::new(),
                        load_count.clone(),
                    );
                    
                    match behavior(context).await {
                        Ok(result) => {
                            if !result.is_empty() {
                                let _ = outports.0.send(result);
                            }
                        },
                        Err(e) => {
                            eprintln!("Error in actor behavior: {:?}", e);
                        }
                    }
                    
                    // Decrement load count
                    {
                        let mut load = load_count.lock();
                        load.dec();
                    }
                }
            }
        })
    }
}
}
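
A manually implemented actor plugs into a network exactly like a macro-generated one. A brief, illustrative sketch using the registration API from the "Using Actors in Networks" section above:

#![allow(unused)]
fn main() {
// Register the manual actor type, then create an instance of it
let mut network = Network::new(NetworkConfig::default());
network.register_actor("manual_process", ManualActor::new("worker-1".to_string()))?;
network.add_node("manual1", "manual_process")?;
}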

Stateful Manual Actor

#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct CustomState {
    pub counter: i64,
    pub last_message: String,
    pub timestamps: Vec<i64>,
}

impl reflow_network::actor::ActorState for CustomState {
    fn as_mut_any(&mut self) -> &mut dyn std::any::Any {
        self
    }
    
    fn as_any(&self) -> &dyn std::any::Any {
        self
    }
}

pub struct StatefulManualActor {
    inports: Port,
    outports: Port,
    initial_state: CustomState,
    load: Arc<Mutex<ActorLoad>>,
}

impl StatefulManualActor {
    pub fn new(initial_state: CustomState) -> Self {
        Self {
            inports: flume::unbounded(),
            outports: flume::unbounded(),
            initial_state,
            load: Arc::new(Mutex::new(ActorLoad::new(0))),
        }
    }
}

impl Actor for StatefulManualActor {
    fn get_behavior(&self) -> ActorBehavior {
        Box::new(|context: ActorContext| {
            Box::pin(async move {
                let payload = context.get_payload();
                let state = context.get_state();
                
                let mut state_guard = state.lock();
                let custom_state = state_guard
                    .as_mut_any()
                    .downcast_mut::<CustomState>()
                    .expect("Expected CustomState");
                
                // Update counter
                custom_state.counter += 1;
                
                // Record timestamp
                let now = chrono::Utc::now().timestamp_millis();
                custom_state.timestamps.push(now);
                
                // Keep only last 10 timestamps
                if custom_state.timestamps.len() > 10 {
                    custom_state.timestamps.remove(0);
                }
                
                // Process message
                if let Some(Message::String(text)) = payload.get("message") {
                    custom_state.last_message = text.clone();
                    
                    let response = format!(
                        "Processed message #{}: '{}' (last 5 timestamps: {:?})",
                        custom_state.counter,
                        text,
                        custom_state.timestamps.iter().rev().take(5).collect::<Vec<_>>()
                    );
                    
                    Ok([
                        ("response".to_owned(), Message::String(response)),
                        ("counter".to_owned(), Message::Integer(custom_state.counter)),
                    ].into())
                } else {
                    Err(anyhow::anyhow!("Expected message field"))
                }
            })
        })
    }
    
    fn get_inports(&self) -> Port {
        self.inports.clone()
    }
    
    fn get_outports(&self) -> Port {
        self.outports.clone()
    }
    
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> {
        self.load.clone()
    }
    
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>> {
        let inports = self.get_inports();
        let behavior = self.get_behavior();
        let outports = self.get_outports();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(self.initial_state.clone()));
        let load_count = self.load_count();
        
        Box::pin(async move {
            use futures::stream::StreamExt;
            
            loop {
                if let Some(payload) = inports.1.stream().next().await {
                    // Increment load count
                    {
                        let mut load = load_count.lock();
                        load.inc();
                    }
                    
                    let context = ActorContext::new(
                        payload,
                        outports.clone(),
                        state.clone(),
                        HashMap::new(),
                        load_count.clone(),
                    );
                    
                    match behavior(context).await {
                        Ok(result) => {
                            if !result.is_empty() {
                                let _ = outports.0.send(result);
                            }
                        },
                        Err(e) => {
                            eprintln!("Error in stateful actor behavior: {:?}", e);
                        }
                    }
                    
                    // Decrement load count
                    {
                        let mut load = load_count.lock();
                        load.dec();
                    }
                }
            }
        })
    }
}
}
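
Note that, unlike ManualActor, create_process here seeds the shared state from initial_state, so each spawned process starts from the caller-supplied snapshot rather than an empty MemoryState.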

Multi-Input Manual Actor

#![allow(unused)]
fn main() {
pub struct MultiInputActor {
    inports: Port,
    outports: Port,
    await_all_inputs: bool,
    input_ports: Vec<String>,
    load: Arc<Mutex<ActorLoad>>,
}

impl MultiInputActor {
    pub fn new(input_ports: Vec<String>, await_all_inputs: bool) -> Self {
        Self {
            inports: flume::bounded(100),
            outports: flume::bounded(50),
            await_all_inputs,
            input_ports,
            load: Arc::new(Mutex::new(ActorLoad::new(0))),
        }
    }
}

impl Actor for MultiInputActor {
    fn get_behavior(&self) -> ActorBehavior {
        Box::new(|context: ActorContext| {
            Box::pin(async move {
                let payload = context.get_payload();
                
                // Collect all available data
                let mut results = HashMap::new();
                let mut total_value = 0i64;
                let mut value_count = 0;
                
                for (port, message) in &payload {
                    if let Message::Integer(value) = message {
                        total_value += value;
                        value_count += 1;
                        
                        results.insert(
                            format!("processed_{}", port), 
                            Message::Integer(value * 2)
                        );
                    }
                }
                
                if value_count > 0 {
                    results.insert("sum".to_owned(), Message::Integer(total_value));
                    results.insert("average".to_owned(), Message::Integer(total_value / value_count));
                    results.insert("count".to_owned(), Message::Integer(value_count));
                }
                
                println!("MultiInput Actor: processed {} values, sum = {}", value_count, total_value);
                
                Ok(results)
            })
        })
    }
    
    fn get_inports(&self) -> Port {
        self.inports.clone()
    }
    
    fn get_outports(&self) -> Port {
        self.outports.clone()
    }
    
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> {
        self.load.clone()
    }
    
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>> {
        let inports = self.get_inports();
        let behavior = self.get_behavior();
        let outports = self.get_outports();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(MemoryState::default()));
        let load_count = self.load_count();
        let await_all_inputs = self.await_all_inputs;
        let input_ports_count = self.input_ports.len();
        
        Box::pin(async move {
            use futures::stream::StreamExt;
            let mut all_inputs: HashMap<String, Message> = HashMap::new();
            
            loop {
                if let Some(packet) = inports.1.stream().next().await {
                    // Increment load count
                    {
                        let mut load = load_count.lock();
                        load.inc();
                    }
                    
                    if await_all_inputs {
                        // Accumulate inputs until we have all expected ports
                        all_inputs.extend(packet);
                        
                        if all_inputs.len() >= input_ports_count {
                            let context = ActorContext::new(
                                all_inputs.clone(),
                                outports.clone(),
                                state.clone(),
                                HashMap::new(),
                                load_count.clone(),
                            );
                            
                            match behavior(context).await {
                                Ok(result) => {
                                    if !result.is_empty() {
                                        let _ = outports.0.send(result);
                                    }
                                },
                                Err(e) => {
                                    eprintln!("Error in multi-input actor behavior: {:?}", e);
                                }
                            }
                            
                            all_inputs.clear();
                        } else {
                            // Not all inputs have arrived yet: release this
                            // packet's load and keep waiting
                            {
                                let mut load = load_count.lock();
                                load.dec();
                            }
                            continue;
                        }
                    } else {
                        // Process immediately
                        let context = ActorContext::new(
                            packet,
                            outports.clone(),
                            state.clone(),
                            HashMap::new(),
                            load_count.clone(),
                        );
                        
                        match behavior(context).await {
                            Ok(result) => {
                                if !result.is_empty() {
                                    let _ = outports.0.send(result);
                                }
                            },
                            Err(e) => {
                                eprintln!("Error in multi-input actor behavior: {:?}", e);
                            }
                        }
                    }
                    
                    // Decrement load count
                    {
                        let mut load = load_count.lock();
                        load.dec();
                    }
                }
            }
        })
    }
}
}
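
Because all_inputs is keyed by port name, repeated packets on the same port overwrite one another, and the await_all_inputs branch fires only once a message has arrived on every distinct port.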

When to Use Manual Implementation

Use manual implementation when:

  1. Complex State Requirements: You need custom state types or complex state initialization
  2. Custom Port Logic: You need dynamic port creation or complex routing logic
  3. Advanced Error Handling: You need sophisticated error recovery or circuit breaker patterns
  4. Performance Optimization: You need fine-grained control over message processing
  5. Integration Requirements: You need to integrate with external systems in specific ways

Example: Circuit Breaker Actor

#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
enum CircuitState {
    Closed,   // Normal operation
    Open,     // Failing, rejecting requests
    HalfOpen, // Testing if service recovered
}

pub struct CircuitBreakerActor {
    inports: Port,
    outports: Port,
    failure_threshold: u32,
    timeout_ms: u64,
    load: Arc<Mutex<ActorLoad>>,
}

impl CircuitBreakerActor {
    pub fn new(failure_threshold: u32, timeout_ms: u64) -> Self {
        Self {
            inports: flume::unbounded(),
            outports: flume::unbounded(),
            failure_threshold,
            timeout_ms,
            load: Arc::new(Mutex::new(ActorLoad::new(0))),
        }
    }
}

impl Actor for CircuitBreakerActor {
    fn get_behavior(&self) -> ActorBehavior {
        let failure_threshold = self.failure_threshold;
        let timeout_ms = self.timeout_ms;
        
        Box::new(move |context: ActorContext| {
            Box::pin(async move {
                let payload = context.get_payload();
                let state = context.get_state();
                
                let mut state_guard = state.lock();
                let memory_state = state_guard
                    .as_mut_any()
                    .downcast_mut::<MemoryState>()
                    .expect("Expected MemoryState");
                
                // Initialize circuit breaker state
                if !memory_state.contains_key("circuit_state") {
                    memory_state.insert("circuit_state", serde_json::json!("Closed"));
                    memory_state.insert("failure_count", serde_json::json!(0));
                    memory_state.insert("last_failure_time", serde_json::json!(0));
                }
                
                let circuit_state_str = memory_state.get("circuit_state")
                    .and_then(|v| v.as_str())
                    .unwrap_or("Closed");
                
                let failure_count = memory_state.get("failure_count")
                    .and_then(|v| v.as_u64())
                    .unwrap_or(0) as u32;
                
                let last_failure_time = memory_state.get("last_failure_time")
                    .and_then(|v| v.as_i64())
                    .unwrap_or(0);
                
                let circuit_state = match circuit_state_str {
                    "Open" => CircuitState::Open,
                    "HalfOpen" => CircuitState::HalfOpen,
                    _ => CircuitState::Closed,
                };
                
                let now = chrono::Utc::now().timestamp_millis();
                
                match circuit_state {
                    CircuitState::Open => {
                        // Check if timeout has passed
                        if now - last_failure_time > timeout_ms as i64 {
                            memory_state.insert("circuit_state", serde_json::json!("HalfOpen"));
                            println!("Circuit breaker: Transitioning to HalfOpen");
                        } else {
                            return Ok([
                                ("rejected".to_owned(), 
                                 Message::Error("Circuit breaker is OPEN".to_string()))
                            ].into());
                        }
                    },
                    CircuitState::HalfOpen => {
                        // Process one request to test
                    },
                    CircuitState::Closed => {
                        // Normal operation
                    }
                }
                
                // Simulate processing the request
                if let Some(request) = payload.get("request") {
                    // Simulate success/failure (in real implementation, you'd call actual service)
                    let success = payload.get("simulate_success")
                        .and_then(|v| if let Message::Boolean(b) = v { Some(*b) } else { None })
                        .unwrap_or(true);
                    
                    if success {
                        // Success - reset failure count if in HalfOpen
                        if matches!(circuit_state, CircuitState::HalfOpen) {
                            memory_state.insert("circuit_state", serde_json::json!("Closed"));
                            memory_state.insert("failure_count", serde_json::json!(0));
                            println!("Circuit breaker: Transitioning to Closed");
                        }
                        
                        Ok([
                            ("success".to_owned(), Message::String("Request processed".to_string()))
                        ].into())
                    } else {
                        // Failure
                        let new_failure_count = failure_count + 1;
                        memory_state.insert("failure_count", serde_json::json!(new_failure_count));
                        memory_state.insert("last_failure_time", serde_json::json!(now));
                        
                        if new_failure_count >= failure_threshold {
                            memory_state.insert("circuit_state", serde_json::json!("Open"));
                            println!("Circuit breaker: Transitioning to Open");
                        }
                        
                        Ok([
                            ("failure".to_owned(), 
                             Message::Error(format!("Request failed (failures: {})", new_failure_count)))
                        ].into())
                    }
                } else {
                    Err(anyhow::anyhow!("Missing request"))
                }
            })
        })
    }
    
    fn get_inports(&self) -> Port { self.inports.clone() }
    fn get_outports(&self) -> Port { self.outports.clone() }
    fn load_count(&self) -> Arc<Mutex<ActorLoad>> { self.load.clone() }
    
    fn create_process(&self) -> Pin<Box<dyn Future<Output = ()> + 'static + Send>> {
        let inports = self.get_inports();
        let behavior = self.get_behavior();
        let outports = self.get_outports();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(MemoryState::default()));
        let load_count = self.load_count();
        
        Box::pin(async move {
            use futures::stream::StreamExt;
            
            loop {
                if let Some(payload) = inports.1.stream().next().await {
                    {
                        let mut load = load_count.lock();
                        load.inc();
                    }
                    
                    let context = ActorContext::new(
                        payload,
                        outports.clone(),
                        state.clone(),
                        HashMap::new(),
                        load_count.clone(),
                    );
                    
                    match behavior(context).await {
                        Ok(result) => {
                            if !result.is_empty() {
                                let _ = outports.0.send(result);
                            }
                        },
                        Err(e) => {
                            eprintln!("Error in circuit breaker: {:?}", e);
                        }
                    }
                    
                    {
                        let mut load = load_count.lock();
                        load.dec();
                    }
                }
            }
        })
    }
}
}
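
As a concrete reading of the logic above: with failure_threshold = 3 and timeout_ms = 5000, three consecutive failures move the circuit to Open, requests are rejected for the next five seconds, and the first request after the timeout runs as a HalfOpen probe that either closes the circuit again or re-opens it.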

Testing Manual Actors

#![allow(unused)]
fn main() {
#[cfg(test)]
mod manual_actor_tests {
    use super::*;
    
    #[tokio::test]
    async fn test_manual_actor() {
        let actor = ManualActor::new("TestActor".to_string());
        let behavior = actor.get_behavior();
        
        let payload = HashMap::from([
            ("input".to_string(), Message::String("test".to_string()))
        ]);
        
        let (tx, _rx) = flume::unbounded();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(MemoryState::default()));
        
        let context = ActorContext::new(
            payload,
            (tx, _rx),
            state,
            HashMap::new(),
            Arc::new(Mutex::new(ActorLoad::new(0))),
        );
        
        let result = behavior(context).await.unwrap();
        
        assert!(result.contains_key("output"));
        if let Some(Message::String(output)) = result.get("output") {
            assert!(output.contains("TestActor"));
            assert!(output.contains("test"));
        }
    }
    
    #[tokio::test]
    async fn test_stateful_manual_actor() {
        let initial_state = CustomState {
            counter: 0,
            last_message: String::new(),
            timestamps: Vec::new(),
        };
        
        let actor = StatefulManualActor::new(initial_state);
        let behavior = actor.get_behavior();
        
        let payload = HashMap::from([
            ("message".to_string(), Message::String("hello".to_string()))
        ]);
        
        let (tx, _rx) = flume::unbounded();
        let state: Arc<Mutex<dyn reflow_network::actor::ActorState>> = 
            Arc::new(Mutex::new(CustomState::default()));
        
        let context = ActorContext::new(
            payload,
            (tx, _rx),
            state,
            HashMap::new(),
            Arc::new(Mutex::new(ActorLoad::new(0))),
        );
        
        let result = behavior(context).await.unwrap();
        
        assert_eq!(result.get("counter"), Some(&Message::Integer(1)));
        assert!(result.contains_key("response"));
    }
}
}

Choosing Between Macro and Manual Implementation

Use Actor Macro When:

  • Simple, stateless processing
  • Standard input/output patterns
  • Rapid prototyping
  • Most common use cases

Use Manual Implementation When:

  • Complex state management
  • Custom error handling strategies
  • Performance-critical applications
  • Integration with external systems
  • Advanced patterns (circuit breakers, rate limiting, etc.)

ActorConfig System

The ActorConfig system provides a unified configuration framework for all actors in Reflow, enabling dynamic configuration, runtime parameter adjustment, and consistent actor behavior across different deployment environments.

Overview

ActorConfig replaces the previous ad-hoc configuration approach with a structured, type-safe system that supports:

  • Type-Safe Configuration: Strongly typed configuration parameters with validation
  • Dynamic Updates: Runtime configuration changes without actor restart
  • Environment Variables: Automatic environment variable injection
  • JSON/YAML Support: Flexible configuration file formats
  • Validation & Defaults: Built-in validation with sensible defaults
  • Metadata Integration: Rich metadata for configuration documentation

Basic Usage

Simple Actor Configuration

#![allow(unused)]
fn main() {
use reflow_network::actor::{Actor, ActorConfig, ActorContext};
use std::collections::HashMap;

#[derive(Debug)]
struct ProcessorActor {
    config: ActorConfig,
}

impl ProcessorActor {
    fn new() -> Self {
        Self {
            config: ActorConfig::default(),
        }
    }
}

impl Actor for ProcessorActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        // Extract configuration values
        let batch_size = config.get_number("batch_size").unwrap_or(10.0) as usize;
        let timeout_ms = config.get_number("timeout_ms").unwrap_or(5000.0) as u64;
        let enable_retry = config.get_boolean("enable_retry").unwrap_or(true);
        let processor_name = config.get_string("name").unwrap_or_else(|| "default_processor".to_string().into());
        
        Box::pin(async move {
            println!("Processor {} starting with batch_size={}, timeout={}ms, retry={}", 
                processor_name, batch_size, timeout_ms, enable_retry);
            
            // Actor implementation using configuration...
        })
    }
    
    // ... other Actor trait methods
}
}

Configuration from JSON

{
  "name": "data_processor",
  "batch_size": 50,
  "timeout_ms": 10000,
  "enable_retry": true,
  "processing_mode": "parallel",
  "max_retries": 3,
  "retry_delay_ms": 1000
}

#![allow(unused)]
fn main() {
// Load configuration from JSON
let config_json = r#"
{
  "name": "data_processor",
  "batch_size": 50,
  "timeout_ms": 10000,
  "enable_retry": true,
  "processing_mode": "parallel"
}
"#;

let config = ActorConfig::from_json(config_json)?;
let actor = ProcessorActor::new();

// Use configuration when creating actor process
let process = actor.create_process(config);
tokio::spawn(process);
}

Configuration Sources

Environment Variables

ActorConfig automatically reads from environment variables with configurable prefixes:

#![allow(unused)]
fn main() {
// Environment variables:
// PROCESSOR_BATCH_SIZE=100
// PROCESSOR_TIMEOUT_MS=15000
// PROCESSOR_ENABLE_RETRY=false

let config = ActorConfig::from_env("PROCESSOR")?;

// Access values
let batch_size = config.get_number("batch_size").unwrap(); // 100.0
let timeout = config.get_number("timeout_ms").unwrap();   // 15000.0
let retry = config.get_boolean("enable_retry").unwrap();  // false
}

Configuration Files

#![allow(unused)]
fn main() {
// From YAML file
let config = ActorConfig::from_yaml_file("configs/processor.yaml").await?;

// From JSON file
let config = ActorConfig::from_json_file("configs/processor.json").await?;

// From TOML file
let config = ActorConfig::from_toml_file("configs/processor.toml").await?;
}

Combined Sources with Precedence

#![allow(unused)]
fn main() {
// Build configuration with precedence: CLI args > env vars > config file > defaults
let config = ActorConfig::builder()
    .from_file("configs/defaults.yaml").await?
    .from_env("PROCESSOR")?
    .from_args(std::env::args())?
    .build()?;
}
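
For example, if defaults.yaml sets batch_size to 10 and the environment provides PROCESSOR_BATCH_SIZE=50, the built configuration reports a batch_size of 50; a matching command-line argument, if supplied, would override both.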

Configuration Schema and Validation

Defining Configuration Schema

#![allow(unused)]
fn main() {
use reflow_network::actor::{ActorConfigSchema, ConfigField, ConfigType};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct ProcessorConfigSchema {
    #[serde(default = "default_batch_size")]
    batch_size: u32,
    
    #[serde(default = "default_timeout")]
    timeout_ms: u64,
    
    #[serde(default)]
    enable_retry: bool,
    
    #[serde(default = "default_name")]
    name: String,
    
    processing_mode: ProcessingMode,
}

#[derive(Debug, Serialize, Deserialize)]
enum ProcessingMode {
    Sequential,
    Parallel,
    Batch,
}

fn default_batch_size() -> u32 { 10 }
fn default_timeout() -> u64 { 5000 }
fn default_name() -> String { "processor".to_string() }

impl ActorConfigSchema for ProcessorConfigSchema {
    fn schema() -> Vec<ConfigField> {
        vec![
            ConfigField {
                name: "batch_size".to_string(),
                config_type: ConfigType::Number,
                required: false,
                default_value: Some(serde_json::Value::Number(10.into())),
                description: Some("Number of items to process in each batch".to_string()),
                validation: Some("Must be between 1 and 1000".to_string()),
            },
            ConfigField {
                name: "timeout_ms".to_string(),
                config_type: ConfigType::Number,
                required: false,
                default_value: Some(serde_json::Value::Number(5000.into())),
                description: Some("Processing timeout in milliseconds".to_string()),
                validation: Some("Must be positive".to_string()),
            },
            ConfigField {
                name: "enable_retry".to_string(),
                config_type: ConfigType::Boolean,
                required: false,
                default_value: Some(serde_json::Value::Bool(false)),
                description: Some("Enable automatic retry on failure".to_string()),
                validation: None,
            },
            ConfigField {
                name: "name".to_string(),
                config_type: ConfigType::String,
                required: false,
                default_value: Some(serde_json::Value::String("processor".to_string())),
                description: Some("Actor instance name".to_string()),
                validation: Some("Must be non-empty alphanumeric".to_string()),
            },
            ConfigField {
                name: "processing_mode".to_string(),
                config_type: ConfigType::String,
                required: true,
                default_value: None,
                description: Some("Processing execution mode".to_string()),
                validation: Some("Must be one of: sequential, parallel, batch".to_string()),
            },
        ]
    }
    
    fn validate(&self) -> Result<(), String> {
        if self.batch_size == 0 || self.batch_size > 1000 {
            return Err("batch_size must be between 1 and 1000".to_string());
        }
        
        if self.timeout_ms == 0 {
            return Err("timeout_ms must be positive".to_string());
        }
        
        if self.name.is_empty() {
            return Err("name cannot be empty".to_string());
        }
        
        Ok(())
    }
}
}

Using Typed Configuration

#![allow(unused)]
fn main() {
use std::time::Duration;

impl Actor for ProcessorActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        // Parse and validate configuration against the schema; create_process
        // cannot return an error, so an invalid configuration panics here
        let typed_config: ProcessorConfigSchema = config.parse_typed()
            .expect("invalid processor configuration");
        
        // Configuration is now type-safe and validated
        let batch_size = typed_config.batch_size;
        let timeout = Duration::from_millis(typed_config.timeout_ms);
        let enable_retry = typed_config.enable_retry;
        let name = typed_config.name;
        let mode = typed_config.processing_mode;
        
        Box::pin(async move {
            match mode {
                ProcessingMode::Sequential => {
                    // Sequential processing logic
                },
                ProcessingMode::Parallel => {
                    // Parallel processing logic
                },
                ProcessingMode::Batch => {
                    // Batch processing logic
                },
            }
        })
    }
}
}

Dynamic Configuration Updates

Runtime Configuration Changes

#![allow(unused)]
fn main() {
use std::time::Duration;
use tokio::sync::watch;

struct DynamicProcessorActor {
    config_receiver: watch::Receiver<ActorConfig>,
}

impl DynamicProcessorActor {
    fn new(config_receiver: watch::Receiver<ActorConfig>) -> Self {
        Self { config_receiver }
    }
}

impl Actor for DynamicProcessorActor {
    fn create_process(&self, initial_config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        let mut config_receiver = self.config_receiver.clone();
        
        Box::pin(async move {
            let mut current_config = initial_config;
            
            loop {
                // Check for configuration updates
                if config_receiver.has_changed().unwrap_or(false) {
                    // borrow_and_update clears the changed flag so the
                    // update is applied only once
                    current_config = config_receiver.borrow_and_update().clone();
                    println!("Configuration updated: {:?}", current_config);
                    
                    // Apply new configuration
                    let batch_size = current_config.get_number("batch_size").unwrap_or(10.0) as usize;
                    println!("New batch size: {}", batch_size);
                }
                
                // Process with current configuration
                // ... actor logic ...
                
                tokio::time::sleep(Duration::from_millis(100)).await;
            }
        })
    }
}

// Update configuration at runtime
async fn update_actor_config() -> Result<(), Box<dyn std::error::Error>> {
    let (config_sender, config_receiver) = watch::channel(ActorConfig::default());
    
    let actor = DynamicProcessorActor::new(config_receiver);
    tokio::spawn(actor.create_process(ActorConfig::default()));
    
    // Update configuration after 5 seconds
    tokio::time::sleep(Duration::from_secs(5)).await;
    
    let new_config = ActorConfig::from_json(r#"
    {
        "batch_size": 100,
        "timeout_ms": 20000,
        "enable_retry": true
    }
    "#)?;
    
    config_sender.send(new_config)?;
    println!("Configuration updated!");
    
    Ok(())
}
}
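
As a push-based alternative to the 100 ms polling loop above, tokio's watch::Receiver::changed().await resolves only when a new value is published, letting the actor sleep between configuration updates instead of waking on a timer.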

Configuration in Networks and Graphs

Network-Level Configuration

#![allow(unused)]
fn main() {
use reflow_network::network::{Network, NetworkConfig};

// Configure network with global defaults
let network_config = NetworkConfig {
    default_actor_config: Some(ActorConfig::from_json(r#"
    {
        "default_timeout_ms": 10000,
        "enable_monitoring": true,
        "log_level": "info"
    }
    "#)?),
    ..Default::default() // remaining network options keep their defaults
};

let mut network = Network::new(network_config);

// Add actor with specific configuration
let actor_config = ActorConfig::from_json(r#"
{
    "batch_size": 50,
    "timeout_ms": 15000,
    "name": "data_processor_1"
}
"#)?;

network.add_node_with_config("processor1", "DataProcessorActor", Some(actor_config))?;
}

Graph-Level Configuration

{
  "caseSensitive": false,
  "properties": {
    "name": "data_processing_pipeline"
  },
  "processes": {
    "collector": {
      "component": "DataCollectorActor",
      "metadata": {
        "config": {
          "source_url": "https://api.example.com/data",
          "poll_interval_ms": 5000,
          "batch_size": 100
        }
      }
    },
    "processor": {
      "component": "DataProcessorActor",
      "metadata": {
        "config": {
          "processing_mode": "parallel",
          "worker_count": 4,
          "timeout_ms": 30000
        }
      }
    },
    "validator": {
      "component": "DataValidatorActor",
      "metadata": {
        "config": {
          "strict_validation": true,
          "schema_file": "./schemas/data.json"
        }
      }
    }
  },
  "connections": [
    {
      "from": { "nodeId": "collector", "portId": "Output" },
      "to": { "nodeId": "processor", "portId": "Input" }
    },
    {
      "from": { "nodeId": "processor", "portId": "Output" },
      "to": { "nodeId": "validator", "portId": "Input" }
    }
  ]
}

Loading Graph with Configurations

#![allow(unused)]
fn main() {
use reflow_network::graph::Graph;

// Load graph - configurations are automatically extracted from metadata
let graph = Graph::load_from_file("data_pipeline.graph.json").await?;

// Each actor will receive its specific configuration
// Network automatically extracts config from process metadata
let mut network = Network::new(NetworkConfig::default());
network.load_graph(graph).await?;
}

Environment-Specific Configurations

Development vs Production

#![allow(unused)]
fn main() {
// Development configuration
let dev_config = ActorConfig::from_json(r#"
{
    "log_level": "debug",
    "enable_profiling": true,
    "timeout_ms": 60000,
    "batch_size": 5
}
"#)?;

// Production configuration
let prod_config = ActorConfig::from_json(r#"
{
    "log_level": "warn",
    "enable_profiling": false,
    "timeout_ms": 10000,
    "batch_size": 100
}
"#)?;

// Select configuration based on environment
let config = match std::env::var("ENVIRONMENT").as_deref() {
    // Staging reuses the production profile
    Ok("production") | Ok("staging") => prod_config,
    _ => dev_config, // Default to the development profile
};
}

Configuration Profiles

#![allow(unused)]
fn main() {
// Base configuration
let base_config = ActorConfig::from_yaml_file("configs/base.yaml").await?;

// Environment-specific overrides
let env = std::env::var("ENVIRONMENT").unwrap_or_else(|_| "development".to_string());
let env_config_path = format!("configs/{}.yaml", env);

let final_config = if std::path::Path::new(&env_config_path).exists() {
    base_config.merge_with(ActorConfig::from_yaml_file(&env_config_path).await?)?
} else {
    base_config
};
}

Configuration Migration

Migrating from Direct HashMap

Before (Old Pattern):

#![allow(unused)]
fn main() {
// Old approach - direct HashMap usage
impl Actor for OldActor {
    fn set_config(&mut self, config: HashMap<String, serde_json::Value>) {
        self.batch_size = config.get("batch_size")
            .and_then(|v| v.as_f64())
            .unwrap_or(10.0) as usize;
        
        self.timeout = Duration::from_millis(
            config.get("timeout_ms")
                .and_then(|v| v.as_f64())
                .unwrap_or(5000.0) as u64
        );
    }
}
}

After (New Pattern):

#![allow(unused)]
fn main() {
// New approach - ActorConfig
impl Actor for NewActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        let batch_size = config.get_number("batch_size").unwrap_or(10.0) as usize;
        let timeout = Duration::from_millis(config.get_number("timeout_ms").unwrap_or(5000.0) as u64);
        
        Box::pin(async move {
            // Actor implementation with configuration
        })
    }
}
}

Migration Helper

#![allow(unused)]
fn main() {
// Helper function to migrate from old HashMap format
impl ActorConfig {
    pub fn from_legacy_hashmap(legacy: HashMap<String, serde_json::Value>) -> Self {
        let mut config = ActorConfig::default();
        
        for (key, value) in legacy {
            config.set(&key, value);
        }
        
        config
    }
}

// Usage in migration
let legacy_config = HashMap::from([
    ("batch_size".to_string(), serde_json::Value::Number(50.into())),
    ("timeout_ms".to_string(), serde_json::Value::Number(10000.into())),
]);

let actor_config = ActorConfig::from_legacy_hashmap(legacy_config);
}

Advanced Features

Conditional Configuration

#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Deserialize)]
struct ConditionalConfig {
    #[serde(default)]
    enable_cache: bool,
    
    #[serde(skip_serializing_if = "Option::is_none")]
    cache_size_mb: Option<u32>,
    
    #[serde(skip_serializing_if = "Option::is_none")]
    cache_ttl_seconds: Option<u64>,
}

impl ActorConfigSchema for ConditionalConfig {
    fn validate(&self) -> Result<(), String> {
        if self.enable_cache {
            if self.cache_size_mb.is_none() {
                return Err("cache_size_mb is required when cache is enabled".to_string());
            }
            if self.cache_ttl_seconds.is_none() {
                return Err("cache_ttl_seconds is required when cache is enabled".to_string());
            }
        }
        Ok(())
    }
}
}

Configuration Inheritance

#![allow(unused)]
fn main() {
// Base actor configuration
let base_config = ActorConfig::from_json(r#"
{
    "timeout_ms": 10000,
    "enable_logging": true,
    "log_level": "info"
}
"#)?;

// Specialized configuration inheriting from base
let specialized_config = base_config.extend_with(ActorConfig::from_json(r#"
{
    "batch_size": 50,
    "processing_mode": "parallel",
    "timeout_ms": 20000
}
"#)?)?;

// Result combines both configs with specialized values taking precedence
// timeout_ms: 20000 (overridden)
// enable_logging: true (inherited)
// log_level: "info" (inherited)  
// batch_size: 50 (added)
// processing_mode: "parallel" (added)
}

Secret Management

#![allow(unused)]
fn main() {
use reflow_network::actor::SecretResolver;

// Configuration with secret references
let config_with_secrets = ActorConfig::from_json(r#"
{
    "database_url": "${secret:DATABASE_URL}",
    "api_key": "${secret:API_KEY}",
    "batch_size": 100
}
"#)?;

// Resolve secrets from environment or secret store
// (`vault_client` is assumed to be constructed elsewhere)
let secret_resolver = SecretResolver::new()
    .with_env_prefix("SECRET_")
    .with_vault_client(vault_client);

let resolved_config = secret_resolver.resolve(config_with_secrets).await?;

// Secrets are now resolved:
// database_url: "postgresql://user:password@localhost/db"  
// api_key: "sk-1234567890abcdef"
// batch_size: 100
}

Testing with ActorConfig

Test Configuration Helpers

#![allow(unused)]
fn main() {
use reflow_network::actor::testing::TestActorConfig;

#[tokio::test]
async fn test_actor_with_config() {
    let test_config = TestActorConfig::builder()
        .with_number("batch_size", 10.0)
        .with_boolean("enable_retry", false)
        .with_string("name", "test_actor")
        .build();
    
    let actor = MyActor::new();
    let process = actor.create_process(test_config.into());
    
    // Test actor behavior with specific configuration
    // ...
}

#[tokio::test]
async fn test_actor_configuration_validation() {
    let invalid_config = ActorConfig::from_json(r#"
    {
        "batch_size": -1,
        "timeout_ms": 0
    }
    "#).unwrap();
    
    let schema = MyActorConfigSchema::default();
    assert!(schema.validate_config(&invalid_config).is_err());
}
}

Configuration Mocking

#![allow(unused)]
fn main() {
// Mock configuration for testing
struct MockConfigProvider {
    configs: HashMap<String, ActorConfig>,
}

impl MockConfigProvider {
    fn new() -> Self {
        Self {
            configs: HashMap::new(),
        }
    }
    
    fn add_config(&mut self, actor_id: &str, config: ActorConfig) {
        self.configs.insert(actor_id.to_string(), config);
    }
}

impl ConfigProvider for MockConfigProvider {
    async fn get_config(&self, actor_id: &str) -> Result<ActorConfig, ConfigError> {
        self.configs.get(actor_id)
            .cloned()
            .ok_or_else(|| ConfigError::NotFound(actor_id.to_string()))
    }
}
}

Best Practices

Configuration Organization

  1. Use Typed Schemas: Define strongly typed configuration schemas for validation (see the sketch after this list)
  2. Provide Sensible Defaults: Always provide reasonable default values
  3. Document Configuration: Include descriptions and validation rules
  4. Environment Separation: Use different configurations for different environments
  5. Secret Security: Never store secrets in plain text configuration files
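
As a minimal sketch of items 1 and 2 combined — the schema type and field names here are illustrative, but the ActorConfigSchema trait and the serde attributes follow the earlier examples in this guide:

#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};

// Illustrative schema: typed fields with serde-supplied defaults.
#[derive(Debug, Serialize, Deserialize)]
struct PipelineConfigSchema {
    #[serde(default = "default_batch_size")]
    batch_size: usize,
    
    #[serde(default = "default_timeout_ms")]
    timeout_ms: u64,
}

fn default_batch_size() -> usize { 50 }
fn default_timeout_ms() -> u64 { 10_000 }

impl ActorConfigSchema for PipelineConfigSchema {
    fn validate(&self) -> Result<(), String> {
        if self.batch_size == 0 {
            return Err("batch_size must be greater than zero".to_string());
        }
        Ok(())
    }
}
}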

Performance Considerations

#![allow(unused)]
fn main() {
// Cache parsed configuration for performance
use std::sync::Arc;
use once_cell::sync::OnceCell;

struct CachedConfigActor {
    cached_config: OnceCell<Arc<ProcessorConfigSchema>>,
}

impl Actor for CachedConfigActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        // Parse configuration once and cache it
        let parsed_config = self.cached_config.get_or_init(|| {
            Arc::new(config.parse_typed().expect("Invalid configuration"))
        }).clone();
        
        Box::pin(async move {
            // Use cached configuration
            let batch_size = parsed_config.batch_size;
            // ...
        })
    }
}
}

Error Handling

#![allow(unused)]
fn main() {
use reflow_network::actor::ConfigError;

impl Actor for RobustActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        Box::pin(async move {
            // Graceful configuration error handling
            let batch_size = match config.get_number("batch_size") {
                Some(size) if size > 0.0 => size as usize,
                Some(_) => {
                    eprintln!("Invalid batch_size, using default");
                    10
                },
                None => {
                    println!("No batch_size specified, using default");
                    10
                }
            };
            
            // Continue with actor logic
        })
    }
}
}

Next Steps

Continue with the next guide, Creating and Managing Graphs, to work with the core graph APIs.

Creating and Managing Graphs

This guide covers the core APIs for creating, modifying, and managing Reflow graphs.

Graph Creation

Basic Graph Creation

#![allow(unused)]
fn main() {
use reflow_network::graph::{Graph, PortType};
use std::collections::HashMap;
use serde_json::json;

// Create a new graph
let mut graph = Graph::new("MyWorkflow", false, None);

// Create with case sensitivity enabled
let mut case_sensitive_graph = Graph::new("CaseSensitive", true, None);

// Create with initial properties
let properties = HashMap::from([
    ("description".to_string(), json!("Data processing workflow")),
    ("version".to_string(), json!("1.0.0")),
    ("author".to_string(), json!("John Doe"))
]);
let mut graph_with_props = Graph::new("WorkflowV1", false, Some(properties));
}

Graph with History Tracking

#![allow(unused)]
fn main() {
// Create graph with unlimited history
let (mut graph, mut history) = Graph::with_history();

// Create graph with limited history (recommended for production)
let (mut graph, mut history) = Graph::with_history_and_limit(100);
}
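
The returned history handle records graph mutations for undo/redo. A minimal sketch of the intended flow — the undo and redo method names below are assumptions for illustration, not confirmed API:

#![allow(unused)]
fn main() {
let (mut graph, mut history) = Graph::with_history_and_limit(100);

graph.add_node("a", "NodeA", None);
graph.add_node("b", "NodeB", None);

// Assumed API: revert the most recent change, then re-apply it
history.undo(&mut graph);
history.redo(&mut graph);
}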

Node Management

Adding Nodes

#![allow(unused)]
fn main() {
// Basic node addition
graph.add_node("data_source", "FileReader", None);

// Node with position metadata
let metadata = HashMap::from([
    ("x".to_string(), json!(100)),
    ("y".to_string(), json!(200))
]);
graph.add_node("processor", "DataProcessor", Some(metadata));

// Node with comprehensive metadata
let rich_metadata = HashMap::from([
    ("x".to_string(), json!(300)),
    ("y".to_string(), json!(150)),
    ("label".to_string(), json!("CSV Parser")),
    ("color".to_string(), json!("#3498db")),
    ("estimated_time".to_string(), json!(2.5)),
    ("resources".to_string(), json!({
        "memory": 128,
        "cpu": 0.5
    })),
    ("configuration".to_string(), json!({
        "delimiter": ",",
        "has_header": true
    }))
]);
graph.add_node("csv_parser", "CSVParser", Some(rich_metadata));
}

Node Retrieval

#![allow(unused)]
fn main() {
// Get node reference
if let Some(node) = graph.get_node("processor") {
    println!("Node component: {}", node.component);
    if let Some(metadata) = &node.metadata {
        println!("Node metadata: {:?}", metadata);
    }
}

// Get mutable node reference
if let Some(node) = graph.get_node_mut("processor") {
    // Modify node directly (not recommended - use set_node_metadata instead)
}
}

Updating Node Metadata

#![allow(unused)]
fn main() {
// Update metadata (merges with existing)
let updates = HashMap::from([
    ("color".to_string(), json!("#e74c3c")),
    ("priority".to_string(), json!("high"))
]);
graph.set_node_metadata("processor", updates);

// Clear specific metadata field by setting to null
let clear_color = HashMap::from([
    ("color".to_string(), json!(null))
]);
graph.set_node_metadata("processor", clear_color);
}

Node Removal

#![allow(unused)]
fn main() {
// Remove node (automatically removes all connections)
graph.remove_node("old_processor");
}

Node Renaming

#![allow(unused)]
fn main() {
// Rename node (updates all references)
graph.rename_node("old_name", "new_name");
}

Connection Management

Creating Connections

#![allow(unused)]
fn main() {
// Basic connection
graph.add_connection("source", "output", "processor", "input", None);

// Connection with metadata
let conn_metadata = HashMap::from([
    ("weight".to_string(), json!(0.8)),
    ("priority".to_string(), json!("high")),
    ("buffer_size".to_string(), json!(1024))
]);
graph.add_connection("processor", "output", "sink", "input", Some(conn_metadata));
}

Connection Queries

#![allow(unused)]
fn main() {
// Get specific connection
if let Some(connection) = graph.get_connection("source", "output", "processor", "input") {
    println!("Connection metadata: {:?}", connection.metadata);
}

// Get all connections for a node
let incoming = graph.get_incoming_connections("processor");
for (source_node, source_port, connection) in incoming {
    println!("Input from {}:{}", source_node, source_port);
}

let outgoing = graph.get_outgoing_connections("processor");
for (target_node, target_port, connection) in outgoing {
    println!("Output to {}:{}", target_node, target_port);
}

// Get connections for specific port
let port_incoming = graph.get_incoming_connections_for_port("processor", "input");
let port_outgoing = graph.get_outgoing_connections_for_port("processor", "output");
}

Connection Analysis

#![allow(unused)]
fn main() {
// Check if nodes are connected
if graph.are_nodes_connected("source", "processor") {
    println!("Nodes are connected");
}

// Check specific port connections
if graph.are_ports_connected("source", "output", "processor", "input") {
    println!("Ports are connected");
}

// Get connection degrees
let (in_degree, out_degree) = graph.get_connection_degree("processor");
println!("Node has {} inputs and {} outputs", in_degree, out_degree);

// Get port-specific degrees
let (port_in, port_out) = graph.get_port_connection_degree("processor", "data");
}

Connection Updates

#![allow(unused)]
fn main() {
// Update connection metadata
let new_metadata = HashMap::from([
    ("bandwidth".to_string(), json!("high")),
    ("encrypted".to_string(), json!(true))
]);
graph.set_connection_metadata("source", "output", "processor", "input", new_metadata);
}

Connection Removal

#![allow(unused)]
fn main() {
// Remove specific connection
graph.remove_connection("source", "output", "processor", "input");

// Remove all connections for a node (called automatically when removing node)
graph.remove_node_connections("isolated_node");
}

Graph Ports (Inports/Outports)

Graph ports expose internal node ports as external interfaces, making subgraphs reusable.

Adding Input Ports

#![allow(unused)]
fn main() {
// Basic inport
graph.add_inport("data_input", "processor", "input", PortType::Any, None);

// Inport with metadata
let port_metadata = HashMap::from([
    ("description".to_string(), json!("Main data input stream")),
    ("required".to_string(), json!(true)),
    ("default_value".to_string(), json!(null))
]);
graph.add_inport("config", "processor", "config", PortType::Object("Config".to_string()), Some(port_metadata));
}

Adding Output Ports

#![allow(unused)]
fn main() {
// Basic outport
graph.add_outport("processed_data", "processor", "output", PortType::Object("ProcessedData".to_string()), None);

// Outport with metadata
let out_metadata = HashMap::from([
    ("description".to_string(), json!("Processed data stream")),
    ("format".to_string(), json!("json"))
]);
graph.add_outport("results", "processor", "result", PortType::Array(Box::new(PortType::Object("Result".to_string()))), Some(out_metadata));
}

Port Management

#![allow(unused)]
fn main() {
// Update port metadata (clone so the map can be reused for the outport)
let port_updates = HashMap::from([
    ("required".to_string(), json!(false)),
    ("deprecated".to_string(), json!(true))
]);
graph.set_inport_metadata("data_input", port_updates.clone());
graph.set_outport_metadata("results", port_updates);

// Rename ports
graph.rename_inport("old_input", "new_input");
graph.rename_outport("old_output", "new_output");

// Remove ports
graph.remove_inport("unused_input");
graph.remove_outport("unused_output");
}

Initial Information Packets (IIPs)

IIPs provide static data to nodes at startup.

Adding IIPs

#![allow(unused)]
fn main() {
// Basic IIP
graph.add_initial(
    json!("config.yaml"),
    "file_reader",
    "filename",
    None
);

// IIP with metadata
let iip_metadata = HashMap::from([
    ("source".to_string(), json!("configuration")),
    ("priority".to_string(), json!("high"))
]);
graph.add_initial(
    json!({"host": "localhost", "port": 8080}),
    "server",
    "config",
    Some(iip_metadata)
);

// IIP with array index
graph.add_initial_index(
    json!("file1.txt"),
    "multi_reader",
    "files",
    0,
    None
);
}

Graph-level IIPs

When using graph ports, you can add IIPs at the graph level:

#![allow(unused)]
fn main() {
// Add IIP to graph inport
graph.add_graph_initial(
    json!({"mode": "production"}),
    "config_input",  // Graph inport name
    None
);

// Add indexed IIP to graph inport
graph.add_graph_initial_index(
    json!("primary.db"),
    "database_files",  // Graph inport name
    0,
    None
);
}

Removing IIPs

#![allow(unused)]
fn main() {
// Remove node-level IIP
graph.remove_initial("file_reader", "filename");

// Remove graph-level IIP
graph.remove_graph_initial("config_input");
}

Node Groups

Groups provide logical organization of related nodes.

Creating Groups

#![allow(unused)]
fn main() {
// Create basic group
graph.add_group("data_processing", vec!["parser".to_string(), "validator".to_string(), "transformer".to_string()], None);

// Group with metadata
let group_metadata = HashMap::from([
    ("color".to_string(), json!("#2ecc71")),
    ("description".to_string(), json!("Data processing pipeline")),
    ("collapsed".to_string(), json!(false))
]);
graph.add_group("preprocessing", vec!["cleaner".to_string(), "normalizer".to_string()], Some(group_metadata));
}

Managing Group Membership

#![allow(unused)]
fn main() {
// Add node to existing group
graph.add_to_group("data_processing", "formatter");

// Remove node from group
graph.remove_from_group("data_processing", "formatter");
}

Group Metadata

#![allow(unused)]
fn main() {
// Update group metadata
let group_updates = HashMap::from([
    ("collapsed".to_string(), json!(true)),
    ("priority".to_string(), json!("high"))
]);
graph.set_group_metadata("data_processing", group_updates);
}

Removing Groups

#![allow(unused)]
fn main() {
// Remove entire group (nodes remain, just ungrouped)
graph.remove_group("old_group");
}

Graph Properties

Setting Properties

#![allow(unused)]
fn main() {
// Set multiple properties
let properties = HashMap::from([
    ("name".to_string(), json!("Updated Workflow")),
    ("version".to_string(), json!("2.0.0")),
    ("description".to_string(), json!("Enhanced data processing")),
    ("tags".to_string(), json!(["data", "processing", "etl"]))
]);
graph.set_properties(properties);
}

Getting Properties

#![allow(unused)]
fn main() {
// Properties are accessible via graph.properties field
if let Some(name) = graph.properties.get("name") {
    println!("Graph name: {}", name);
}
}

Event Handling

Subscribing to Events

#![allow(unused)]
fn main() {
use reflow_network::graph::GraphEvents;

// Get event receiver
let event_receiver = graph.event_channel.1.clone();

// Handle events in a loop
std::thread::spawn(move || {
    while let Ok(event) = event_receiver.recv() {
        match event {
            GraphEvents::AddNode(data) => {
                println!("Node added: {:?}", data);
            }
            GraphEvents::RemoveNode(data) => {
                println!("Node removed: {:?}", data);
            }
            GraphEvents::AddConnection(data) => {
                println!("Connection added: {:?}", data);
            }
            GraphEvents::RemoveConnection(data) => {
                println!("Connection removed: {:?}", data);
            }
            GraphEvents::ChangeNode(data) => {
                println!("Node changed: {:?}", data);
            }
            // ... handle other events
            _ => {}
        }
    }
});
}

Event Types Reference

Event               Triggered When                Data
AddNode             Node is added                 Node data
RemoveNode          Node is removed               Node data
RenameNode          Node is renamed               {old, new}
ChangeNode          Node metadata changes         {node, old_metadata, new_metadata}
AddConnection       Connection is added           Connection data
RemoveConnection    Connection is removed         Connection data
ChangeConnection    Connection metadata changes   {connection, old_metadata, new_metadata}
AddInitial          IIP is added                  IIP data
RemoveInitial       IIP is removed                IIP data
AddGroup            Group is created              Group data
RemoveGroup         Group is removed              Group data
RenameGroup         Group is renamed              {old, new}
ChangeGroup         Group metadata changes        {group, old_metadata, new_metadata}
AddInport           Inport is added               {id, port}
RemoveInport        Inport is removed             {id, port}
RenameInport        Inport is renamed             {old, new}
ChangeInport        Inport metadata changes       {name, port, old_metadata, new_metadata}
AddOutport          Outport is added              {id, port}
RemoveOutport       Outport is removed            {id, port}
RenameOutport       Outport is renamed            {old, new}
ChangeOutport       Outport metadata changes      {name, port, old_metadata, new_metadata}
ChangeProperties    Graph properties change       {new, before}
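
As a sketch of consuming one of these payloads — this assumes the event data arrives as a serde_json::Value shaped as in the table, which is an assumption rather than documented API:

#![allow(unused)]
fn main() {
use reflow_network::graph::GraphEvents;

fn describe_change(event: GraphEvents) {
    // Assumed: ChangeNode carries {node, old_metadata, new_metadata} as JSON
    if let GraphEvents::ChangeNode(data) = event {
        println!(
            "node {:?} metadata changed: {:?} -> {:?}",
            data.get("node"),
            data.get("old_metadata"),
            data.get("new_metadata")
        );
    }
}
}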

Serialization and Loading

Exporting Graphs

#![allow(unused)]
fn main() {
// Export to GraphExport format
let export = graph.export();

// Serialize to JSON
let json_string = serde_json::to_string_pretty(&export)?;
std::fs::write("workflow.json", json_string)?;
}

Loading Graphs

#![allow(unused)]
fn main() {
use reflow_network::graph::{Graph, GraphExport};

// Load from JSON
let json_content = std::fs::read_to_string("workflow.json")?;
let export: GraphExport = serde_json::from_str(&json_content)?;

// Create graph from export
let metadata = HashMap::from([
    ("loaded_at".to_string(), json!(chrono::Utc::now().to_rfc3339()))
]);
let loaded_graph = Graph::load(export, Some(metadata));
}

WebAssembly API

When using the graph system in a browser via WebAssembly:

JavaScript/TypeScript Usage

import { Graph, PortType } from 'reflow-network';

// Create graph
const graph = new Graph("WebWorkflow", false, {
    description: "Browser-based workflow"
});

// Add nodes
graph.addNode("input", "InputNode", { x: 0, y: 0 });
graph.addNode("output", "OutputNode", { x: 200, y: 0 });

// Add connections
graph.addConnection("input", "out", "output", "in", {});

// Subscribe to events
graph.subscribe((event) => {
    console.log("Graph event:", event);
    // Update UI based on event
    updateUI(event);
});

// Export for persistence
const exported = graph.toJSON();
localStorage.setItem('workflow', JSON.stringify(exported));

// Load saved workflow
const saved = JSON.parse(localStorage.getItem('workflow'));
const restoredGraph = Graph.load(saved, {});

Error Handling

Common Error Scenarios

#![allow(unused)]
fn main() {
use reflow_network::graph::GraphError;

// Handle node operations
match graph.add_node("duplicate", "TestNode", None) {
    Ok(_) => println!("Node added successfully"),
    Err(GraphError::DuplicateNode(id)) => println!("Node {} already exists", id),
    Err(e) => println!("Error: {}", e),
}

// Handle traversal errors
match graph.traverse_depth_first("nonexistent", |node| {
    println!("Visiting: {}", node.id);
}) {
    Ok(_) => println!("Traversal completed"),
    Err(GraphError::NodeNotFound(id)) => println!("Start node {} not found", id),
    Err(e) => println!("Traversal error: {}", e),
}
}

Error Types

  • NodeNotFound(String) - Referenced node doesn't exist
  • DuplicateNode(String) - Node with same ID already exists
  • InvalidConnection { from: String, to: String } - Connection cannot be created
  • CycleDetected - Operation would create a cycle (if validation enabled)
  • InvalidOperation(String) - Generic operation error
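
A sketch of handling each variant exhaustively, assuming the list above is complete:

#![allow(unused)]
fn main() {
use reflow_network::graph::GraphError;

fn report(err: GraphError) {
    match err {
        GraphError::NodeNotFound(id) => eprintln!("Missing node: {}", id),
        GraphError::DuplicateNode(id) => eprintln!("Duplicate node: {}", id),
        GraphError::InvalidConnection { from, to } => {
            eprintln!("Cannot connect {} -> {}", from, to)
        }
        GraphError::CycleDetected => eprintln!("Operation would create a cycle"),
        GraphError::InvalidOperation(msg) => eprintln!("Invalid operation: {}", msg),
    }
}
}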

Best Practices

Performance Tips

  1. Batch Operations: Group related changes to minimize events

    #![allow(unused)]
    fn main() {
    // Instead of individual operations, batch them
    graph.add_node("n1", "Node1", None);
    graph.add_node("n2", "Node2", None);
    graph.add_connection("n1", "out", "n2", "in", None);
    }
  2. Use Appropriate Data Structures: Store frequently accessed metadata efficiently

    #![allow(unused)]
    fn main() {
    // Good: structured metadata
    let metadata = HashMap::from([
        ("config".to_string(), json!({
            "retries": 3,
            "timeout": 30
        }))
    ]);
    
    // Avoid: flat key-value for complex data
    }
  3. Validate Incrementally: Use targeted validation instead of full graph validation

    #![allow(unused)]
    fn main() {
    // Check specific aspects instead of full validation
    if let Some(cycle) = graph.detect_cycles() {
        // Handle cycle
    }
    }

Memory Management

  1. Limit History: Use bounded history for production systems

    #![allow(unused)]
    fn main() {
    let (graph, history) = Graph::with_history_and_limit(50);
    }
  2. Clean Up Events: Ensure event listeners are properly disposed

    #![allow(unused)]
    fn main() {
    // Store receiver handle to drop when done
    let receiver = graph.event_channel.1.clone();
    // ... use receiver
    drop(receiver); // Clean up
    }
  3. Efficient Metadata: Avoid storing large objects in metadata

    #![allow(unused)]
    fn main() {
    // Good: reference to external data
    let metadata = HashMap::from([
        ("data_ref".to_string(), json!("storage://large-dataset-id"))
    ]);
    
    // Avoid: embedding large data
    }

Next Steps

Continue with the next guide, Graph Analysis and Validation, to validate and optimize your graphs.

Graph Analysis and Validation

Reflow's graph system provides extensive analysis capabilities for validation, performance optimization, and structural insights. This guide covers all analysis features available in the graph system.

Flow Validation

Comprehensive Validation

The validate_flow method performs a complete analysis of graph integrity:

#![allow(unused)]
fn main() {
use reflow_network::graph::{FlowValidation, PortMismatch};

// Perform full validation
let validation = graph.validate_flow()?;

// Check for issues
if !validation.cycles.is_empty() {
    for cycle in validation.cycles {
        println!("Cycle detected: {:?}", cycle);
    }
}

if !validation.orphaned_nodes.is_empty() {
    println!("Orphaned nodes: {:?}", validation.orphaned_nodes);
}

if !validation.port_mismatches.is_empty() {
    for mismatch in validation.port_mismatches {
        println!("Port type mismatch: {}", mismatch);
    }
}
}

Validation Results Structure

#![allow(unused)]
fn main() {
pub struct FlowValidation {
    pub cycles: Vec<Vec<String>>,           // Detected cycles
    pub orphaned_nodes: Vec<String>,        // Disconnected nodes
    pub port_mismatches: Vec<PortMismatch>, // Type incompatibilities
}

pub struct PortMismatch {
    pub from_node: String,
    pub from_port: String,
    pub from_type: PortType,
    pub to_node: String,
    pub to_port: String,
    pub to_type: PortType,
    pub reason: String,
}
}

Cycle Detection

Basic Cycle Detection

#![allow(unused)]
fn main() {
// Detect first cycle found
if let Some(cycle) = graph.detect_cycles() {
    println!("Cycle path: {:?}", cycle);
    // cycle is Vec<String> showing the path of the cycle
}

// Check if specific node is in a cycle
if graph.is_node_in_cycle("suspicious_node") {
    println!("Node is part of a cycle");
}
}

Comprehensive Cycle Analysis

#![allow(unused)]
fn main() {
use reflow_network::graph::CycleAnalysis;

let cycle_analysis = graph.analyze_cycles();

println!("Total cycles found: {}", cycle_analysis.total_cycles);
println!("Cycle lengths: {:?}", cycle_analysis.cycle_lengths);
println!("Nodes involved in cycles: {:?}", cycle_analysis.nodes_in_cycles);

if let Some(longest) = cycle_analysis.longest_cycle {
    println!("Longest cycle: {:?} (length: {})", longest, longest.len());
}

if let Some(shortest) = cycle_analysis.shortest_cycle {
    println!("Shortest cycle: {:?} (length: {})", shortest, shortest.len());
}
}

All Cycles Detection

#![allow(unused)]
fn main() {
// Find all cycles in the graph
let all_cycles = graph.detect_all_cycles();

for (i, cycle) in all_cycles.iter().enumerate() {
    println!("Cycle {}: {:?}", i + 1, cycle);
}
}

Orphaned Node Analysis

Basic Orphaned Node Detection

#![allow(unused)]
fn main() {
// Find all orphaned nodes
let orphaned = graph.find_orphaned_nodes();

for node in orphaned {
    println!("Orphaned node: {}", node);
}
}

Detailed Orphaned Analysis

#![allow(unused)]
fn main() {
use reflow_network::graph::OrphanedNodeAnalysis;

let orphan_analysis = graph.analyze_orphaned_nodes();

println!("Total orphaned nodes: {}", orphan_analysis.total_orphaned);

println!("Completely isolated nodes:");
for node in orphan_analysis.completely_isolated {
    println!("  - {}", node);
}

println!("Unreachable nodes (have connections but no path from entry points):");
for node in orphan_analysis.unreachable {
    println!("  - {}", node);
}

println!("Disconnected groups:");
for (i, group) in orphan_analysis.disconnected_groups.iter().enumerate() {
    println!("  Group {}: {:?}", i + 1, group);
}
}

Port Type Validation

Port Compatibility Checking

#![allow(unused)]
fn main() {
// Validate all port types in the graph
let port_mismatches = graph.validate_port_types();

for mismatch in port_mismatches {
    println!("Port mismatch: {}:{} -> {}:{}",
        mismatch.from_node, mismatch.from_port,
        mismatch.to_node, mismatch.to_port
    );
    println!("  Types: {:?} -> {:?}", mismatch.from_type, mismatch.to_type);
    println!("  Reason: {}", mismatch.reason);
}
}

Custom Type Compatibility

The graph system includes built-in type compatibility rules:

#![allow(unused)]
fn main() {
// Built-in compatibility rules:
// Any ↔ Any type (always compatible)
// Integer → Float (automatic promotion)
// T → Stream (streaming any type)
// T → Option<T> (wrapping in option)
// Array<T> → Array<U> (if T → U)

// Example of compatible connections:
graph.add_connection("int_source", "out", "float_processor", "in", None);     // Integer → Float ✓
graph.add_connection("data_source", "out", "stream_processor", "in", None);   // Any → Stream ✓
graph.add_connection("value", "out", "optional_sink", "in", None);            // T → Option<T> ✓
}
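
As a sketch, the rules above could be expressed as a recursive check over PortType. Only the Any, Object, and Array variants appear elsewhere in this guide; the Integer, Float, Stream, and Option variants below are assumed from the comments:

#![allow(unused)]
fn main() {
// Hypothetical mirror of the built-in compatibility rules.
fn is_compatible(from: &PortType, to: &PortType) -> bool {
    match (from, to) {
        // Any ↔ Any type (always compatible)
        (PortType::Any, _) | (_, PortType::Any) => true,
        // Integer → Float (automatic promotion)
        (PortType::Integer, PortType::Float) => true,
        // T → Stream (streaming any type)
        (_, PortType::Stream(_)) => true,
        // T → Option<T> (wrapping in option)
        (t, PortType::Option(inner)) => is_compatible(t, inner),
        // Array<T> → Array<U> (if T → U)
        (PortType::Array(a), PortType::Array(b)) => is_compatible(a, b),
        // Otherwise require identical types
        (a, b) => a == b,
    }
}
}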

Performance Analysis

Parallelism Analysis

#![allow(unused)]
fn main() {
use reflow_network::graph::{ParallelismAnalysis, PipelineStage};

let parallelism = graph.analyze_parallelism();

println!("Maximum parallelism: {}", parallelism.max_parallelism);

// Parallel branches that can execute simultaneously
println!("Parallel branches:");
for (i, branch) in parallelism.parallel_branches.iter().enumerate() {
    println!("  Branch {}: {:?}", i + 1, branch.nodes);
    println!("    Entry points: {:?}", branch.entry_points);
    println!("    Exit points: {:?}", branch.exit_points);
}

// Pipeline stages for sequential execution
println!("Pipeline stages:");
for stage in parallelism.pipeline_stages {
    println!("  Stage {}: {:?}", stage.level, stage.nodes);
}
}

Bottleneck Detection

#![allow(unused)]
fn main() {
use reflow_network::graph::Bottleneck;

let bottlenecks = graph.detect_bottlenecks();

for bottleneck in bottlenecks {
    match bottleneck {
        Bottleneck::HighDegree(node) => {
            let (in_deg, out_deg) = graph.get_connection_degree(&node);
            println!("High-degree bottleneck: {} ({} in, {} out)", node, in_deg, out_deg);
        }
        Bottleneck::SequentialChain(chain) => {
            println!("Sequential chain (could be parallelized): {:?}", chain);
        }
    }
}
}

High-Degree Node Analysis

#![allow(unused)]
fn main() {
// Find nodes with unusually high connection counts
let high_degree_nodes = graph.find_high_degree_nodes();

for node in high_degree_nodes {
    let (incoming, outgoing) = graph.get_connection_degree(&node);
    let total_degree = incoming + outgoing;
    
    println!("High-degree node: {} (total degree: {})", node, total_degree);
    println!("  Incoming: {}, Outgoing: {}", incoming, outgoing);
    
    // Analyze connected nodes
    let connected = graph.get_connected_nodes(&node);
    println!("  Connected to {} other nodes", connected.len());
}
}

Sequential Chain Analysis

#![allow(unused)]
fn main() {
// Find chains that could potentially be parallelized
let sequential_chains = graph.find_sequential_chains();

for (i, chain) in sequential_chains.iter().enumerate() {
    println!("Sequential chain {}: {:?}", i + 1, chain);
    println!("  Length: {} nodes", chain.len());
    
    // Analyze chain characteristics
    if chain.len() >= 5 {
        println!("  ⚠️  Long chain - consider breaking into parallel segments");
    }
}
}

Data Flow Analysis

Flow Path Tracing

#![allow(unused)]
fn main() {
use reflow_network::graph::{DataFlowPath, DataTransform};

// Trace data flow from a starting node
let flow_paths = graph.trace_data_flow("input_node")?;

for (i, path) in flow_paths.iter().enumerate() {
    println!("Flow path {}:", i + 1);
    println!("  Nodes: {:?}", path.nodes);
    
    println!("  Transformations:");
    for transform in &path.transforms {
        println!("    {} -> {} ({}: {} -> {})", 
            transform.node, 
            transform.operation,
            transform.node,
            transform.input_type, 
            transform.output_type
        );
    }
}
}

Execution Path Analysis

#![allow(unused)]
fn main() {
use reflow_network::graph::ExecutionPath;

// Find all possible execution paths
let execution_paths = graph.find_execution_paths();

for (i, path) in execution_paths.iter().enumerate() {
    println!("Execution path {}:", i + 1);
    println!("  Nodes: {:?}", path.nodes);
    println!("  Estimated time: {:.2}s", path.estimated_time);
    println!("  Resource requirements: {:?}", path.resource_requirements);
    
    // Check for parallel execution markers
    if path.resource_requirements.contains_key("parallel_branches") {
        let branches = path.resource_requirements["parallel_branches"];
        println!("  ⚡ Contains {} parallel branches", branches);
    }
    
    if path.resource_requirements.contains_key("contains_cycle") {
        println!("  ⚠️  Path contains cycles");
    }
}
}

Resource Requirements Analysis

#![allow(unused)]
fn main() {
// Analyze resource requirements for the entire graph
let resource_analysis = graph.analyze_resource_requirements();

println!("Graph resource requirements:");
for (resource, requirement) in resource_analysis {
    match resource.as_str() {
        "memory" => println!("  Memory: {:.1} MB", requirement),
        "cpu" => println!("  CPU cores: {:.1}", requirement),
        "disk" => println!("  Disk space: {:.1} GB", requirement),
        "network" => println!("  Network bandwidth: {:.1} Mbps", requirement),
        _ => println!("  {}: {:.2}", resource, requirement),
    }
}
}

Runtime Analysis

Comprehensive Runtime Analysis

#![allow(unused)]
fn main() {
use reflow_network::graph::{EnhancedGraphAnalysis, OptimizationSuggestion};

let runtime_analysis = graph.analyze_for_runtime();

println!("=== Runtime Analysis ===");
println!("Estimated execution time: {:.2}s", runtime_analysis.estimated_execution_time);
println!("Resource requirements: {:?}", runtime_analysis.resource_requirements);

// Parallelism opportunities
println!("\nParallelism analysis:");
println!("  Max parallelism: {}", runtime_analysis.parallelism.max_parallelism);
println!("  Parallel branches: {}", runtime_analysis.parallelism.parallel_branches.len());
println!("  Pipeline stages: {}", runtime_analysis.parallelism.pipeline_stages.len());

// Optimization suggestions
println!("\nOptimization suggestions:");
for suggestion in runtime_analysis.optimization_suggestions {
    match suggestion {
        OptimizationSuggestion::ParallelizableChain { nodes } => {
            println!("  ⚡ Parallelize chain: {:?}", nodes);
        }
        OptimizationSuggestion::RedundantNode { node, reason } => {
            println!("  🗑️  Remove redundant node '{}': {}", node, reason);
        }
        OptimizationSuggestion::ResourceBottleneck { resource, severity } => {
            println!("  ⚠️  Resource bottleneck in '{}': {:.1}% usage", resource, severity * 100.0);
        }
        OptimizationSuggestion::DataTypeOptimization { from, to, suggestion } => {
            println!("  🔧 Optimize types {} → {}: {}", from, to, suggestion);
        }
    }
}

// Performance bottlenecks
println!("\nPerformance bottlenecks:");
for bottleneck in runtime_analysis.performance_bottlenecks {
    match bottleneck {
        Bottleneck::HighDegree(node) => {
            println!("  🔥 High-degree node: {}", node);
        }
        Bottleneck::SequentialChain(chain) => {
            println!("  🐌 Sequential bottleneck: {:?}", chain);
        }
    }
}
}

Subgraph Analysis

Extracting Subgraphs

#![allow(unused)]
fn main() {
use reflow_network::graph::{Subgraph, SubgraphAnalysis};

// Get reachable subgraph from a node
if let Some(subgraph) = graph.get_reachable_subgraph("start_node") {
    println!("Subgraph from 'start_node':");
    println!("  Nodes: {:?}", subgraph.nodes);
    println!("  Entry points: {:?}", subgraph.entry_points);
    println!("  Exit points: {:?}", subgraph.exit_points);
    println!("  Internal connections: {}", subgraph.internal_connections.len());
    
    // Analyze subgraph characteristics
    let analysis = graph.analyze_subgraph(&subgraph);
    println!("  Analysis:");
    println!("    Node count: {}", analysis.node_count);
    println!("    Connection count: {}", analysis.connection_count);
    println!("    Max depth: {}", analysis.max_depth);
    println!("    Is cyclic: {}", analysis.is_cyclic);
    println!("    Branching factor: {:.2}", analysis.branching_factor);
}
}

Independent Subgraph Detection

#![allow(unused)]
fn main() {
// Find all independent subgraphs
let subgraphs = graph.find_subgraphs();

println!("Found {} independent subgraphs:", subgraphs.len());
for (i, subgraph) in subgraphs.iter().enumerate() {
    println!("  Subgraph {}: {} nodes", i + 1, subgraph.nodes.len());
    
    let analysis = graph.analyze_subgraph(subgraph);
    if analysis.is_cyclic {
        println!("    ⚠️  Contains cycles");
    }
    
    if subgraph.entry_points.len() > 1 {
        println!("    ⚡ Multiple entry points - potential for parallel input");
    }
    
    if subgraph.exit_points.len() > 1 {
        println!("    📊 Multiple exit points - produces multiple outputs");
    }
}
}

Graph Traversal Analysis

Traversal with Analysis

#![allow(unused)]
fn main() {
// Depth-first traversal with custom analysis.
// Note: the visitor callback does not report traversal depth, so a
// counter incremented on every visit is a visit index, not the depth.
let mut visited_order = Vec::new();

graph.traverse_depth_first("start_node", |node| {
    visited_order.push(node.id.clone());
    println!("Visiting {} (visit #{})", node.id, visited_order.len());
    
    // Analyze node characteristics
    if let Some(metadata) = &node.metadata {
        if let Some(estimated_time) = metadata.get("estimated_time") {
            println!("  Estimated processing time: {:?}", estimated_time);
        }
    }
})?;

println!("Traversal completed:");
println!("  Visit order: {:?}", visited_order);
}

Breadth-First Layer Analysis

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// Breadth-first traversal to analyze layers
let mut layers: HashMap<usize, Vec<String>> = HashMap::new();
let mut current_layer = 0;

graph.traverse_breadth_first("start_node", |node| {
    // In a real implementation, you'd track depth
    layers.entry(current_layer)
        .or_insert_with(Vec::new)
        .push(node.id.clone());
    
    println!("Layer {}: {}", current_layer, node.id);
})?;

// Analyze layer characteristics
for (layer, nodes) in layers {
    println!("Layer {} has {} nodes: {:?}", layer, nodes.len(), nodes);
    
    if nodes.len() > 1 {
        println!("  ⚡ Layer {} can be parallelized", layer);
    }
}
}

Custom Analysis Functions

Building Custom Analyzers

#![allow(unused)]
fn main() {
// Custom analyzer for finding critical paths
fn find_critical_path(graph: &Graph, start: &str, end: &str) -> Option<Vec<String>> {
    let mut longest_path = Vec::new();
    let mut max_weight = 0.0;
    
    // Use path tracing to find all paths
    if let Ok(paths) = graph.trace_data_flow(start) {
        for path in paths {
            if path.nodes.last() == Some(&end.to_string()) {
                // Calculate path weight based on estimated times
                let mut path_weight = 0.0;
                
                for node_id in &path.nodes {
                    if let Some(node) = graph.get_node(node_id) {
                        if let Some(metadata) = &node.metadata {
                            if let Some(time) = metadata.get("estimated_time") {
                                if let Some(t) = time.as_f64() {
                                    path_weight += t;
                                }
                            }
                        }
                    }
                }
                
                if path_weight > max_weight {
                    max_weight = path_weight;
                    longest_path = path.nodes;
                }
            }
        }
    }
    
    if longest_path.is_empty() {
        None
    } else {
        Some(longest_path)
    }
}

// Usage
if let Some(critical_path) = find_critical_path(&graph, "input", "output") {
    println!("Critical path: {:?}", critical_path);
}
}

Performance Metrics Collection

#![allow(unused)]
fn main() {
use std::time::Instant;

// Benchmark graph operations
fn benchmark_graph_operations(graph: &Graph) {
    let start = Instant::now();
    
    // Benchmark cycle detection
    let cycle_start = Instant::now();
    let _cycles = graph.detect_all_cycles();
    let cycle_time = cycle_start.elapsed();
    
    // Benchmark validation
    let validation_start = Instant::now();
    let _validation = graph.validate_flow();
    let validation_time = validation_start.elapsed();
    
    // Benchmark parallelism analysis
    let parallelism_start = Instant::now();
    let _parallelism = graph.analyze_parallelism();
    let parallelism_time = parallelism_start.elapsed();
    
    let total_time = start.elapsed();
    
    println!("=== Performance Metrics ===");
    println!("Graph size: {} nodes, {} connections", 
        graph.nodes.len(), 
        graph.connections.len()
    );
    println!("Cycle detection: {:?}", cycle_time);
    println!("Flow validation: {:?}", validation_time);
    println!("Parallelism analysis: {:?}", parallelism_time);
    println!("Total analysis time: {:?}", total_time);
}
}

Analysis Best Practices

Incremental Analysis

For large graphs, perform incremental analysis:

#![allow(unused)]
fn main() {
// Instead of full validation on every change
let full_validation = graph.validate_flow()?; // Expensive

// Use targeted analysis
if let Some(cycle) = graph.detect_cycles() {
    // Handle cycles specifically
}

// Check only connections of recently changed nodes
// (`recently_modified_nodes` is assumed to be tracked by the caller)
let node_issues = graph.find_orphaned_nodes()
    .into_iter()
    .filter(|n| recently_modified_nodes.contains(n))
    .collect::<Vec<_>>();
}

Caching Analysis Results

#![allow(unused)]
fn main() {
use std::cell::RefCell;

struct CachedAnalyzer {
    graph: Graph,
    cached_validation: RefCell<Option<FlowValidation>>,
    validation_dirty: RefCell<bool>,
}

impl CachedAnalyzer {
    fn get_validation(&self) -> Result<FlowValidation, GraphError> {
        if *self.validation_dirty.borrow() {
            let validation = self.graph.validate_flow()?;
            *self.cached_validation.borrow_mut() = Some(validation.clone());
            *self.validation_dirty.borrow_mut() = false;
            Ok(validation)
        } else {
            Ok(self.cached_validation.borrow().clone().unwrap())
        }
    }
    
    fn invalidate_cache(&self) {
        *self.validation_dirty.borrow_mut() = true;
    }
}
}

Parallel Analysis

For very large graphs, consider parallel analysis:

#![allow(unused)]
fn main() {
use std::thread;

// Analyze different aspects in parallel
let graph_clone = graph.clone();
let cycle_handle = thread::spawn(move || {
    graph_clone.detect_all_cycles()
});

let graph_clone2 = graph.clone();
let orphan_handle = thread::spawn(move || {
    graph_clone2.analyze_orphaned_nodes()
});

let graph_clone3 = graph.clone();
let parallelism_handle = thread::spawn(move || {
    graph_clone3.analyze_parallelism()
});

// Collect results
let cycles = cycle_handle.join().unwrap();
let orphan_analysis = orphan_handle.join().unwrap();
let parallelism_analysis = parallelism_handle.join().unwrap();

println!("Parallel analysis completed:");
println!("  Cycles: {}", cycles.len());
println!("  Orphaned: {}", orphan_analysis.total_orphaned);
println!("  Max parallelism: {}", parallelism_analysis.max_parallelism);
}

Analysis Error Handling

Robust Error Handling

#![allow(unused)]
fn main() {
use reflow_network::graph::GraphError;

fn safe_analysis(graph: &Graph) -> Result<(), Box<dyn std::error::Error>> {
    // Validate graph structure first
    match graph.validate_flow() {
        Ok(validation) => {
            if !validation.cycles.is_empty() {
                println!("⚠️  Cycles detected - some analyses may not be reliable");
            }
        }
        Err(e) => {
            eprintln!("Validation failed: {}", e);
            return Err(Box::new(e));
        }
    }
    
    // Perform safe traversal
    match graph.traverse_depth_first("start", |node| {
        println!("Processing: {}", node.id);
    }) {
        Ok(_) => println!("Traversal completed successfully"),
        Err(GraphError::NodeNotFound(node)) => {
            eprintln!("Start node '{}' not found", node);
        }
        Err(e) => {
            eprintln!("Traversal error: {}", e);
            return Err(Box::new(e));
        }
    }
    
    Ok(())
}
}

Integration with Visual Editors

Real-time Analysis Updates

#![allow(unused)]
fn main() {
// Update UI based on analysis results
// (`GraphEditor` is a stand-in for your editor's UI handle)
fn update_editor_with_analysis(graph: &Graph, ui: &mut GraphEditor) {
    // Highlight cycles
    if let Some(cycle) = graph.detect_cycles() {
        for node in cycle {
            ui.highlight_node(&node, "error");
        }
    }
    
    // Show bottlenecks
    let bottlenecks = graph.detect_bottlenecks();
    for bottleneck in bottlenecks {
        match bottleneck {
            Bottleneck::HighDegree(node) => {
                ui.highlight_node(&node, "bottleneck");
            }
            Bottleneck::SequentialChain(chain) => {
                ui.highlight_chain(&chain, "optimization-opportunity");
            }
        }
    }
    
    // Show parallel opportunities
    let parallelism = graph.analyze_parallelism();
    for branch in parallelism.parallel_branches {
        ui.group_nodes(&branch.nodes, "parallel-group");
    }
}
}

Next Steps

Continue with the next guide, Graph Layout System, to position nodes automatically or manually.

Graph Layout System

Reflow's layout system provides both intelligent automatic layout and manual positioning for graph nodes, supporting multiple layout algorithms, custom layout plugins, and integration with visual editors.

Automatic Layout

Basic Auto-Layout

#![allow(unused)]
fn main() {
use reflow_network::graph::Position;

// Calculate optimal positions using default algorithm
let positions = graph.calculate_layout();

for (node_id, position) in positions {
    println!("Node {}: x={:.1}, y={:.1}", node_id, position.x, position.y);
}

// Apply calculated layout to graph metadata
graph.auto_layout()?;
}

Layout Algorithms

The system supports multiple layout algorithms optimized for different graph types:

#![allow(unused)]
fn main() {
use reflow_network::graph::{LayoutAlgorithm, LayoutConfig};

// Hierarchical layout for DAGs (default)
let hierarchical_config = LayoutConfig {
    algorithm: LayoutAlgorithm::Hierarchical,
    node_spacing: 120.0,
    layer_spacing: 80.0,
    edge_spacing: 40.0,
    ..Default::default()
};

let positions = graph.calculate_layout_with_config(&hierarchical_config);

// Force-directed layout for general graphs
let force_config = LayoutConfig {
    algorithm: LayoutAlgorithm::ForceDirected,
    iterations: 100,
    spring_strength: 0.5,
    repulsion_strength: 1000.0,
    ..Default::default()
};

let positions = graph.calculate_layout_with_config(&force_config);

// Grid layout for structured workflows
let grid_config = LayoutConfig {
    algorithm: LayoutAlgorithm::Grid,
    grid_size: 150.0,
    columns: 5,
    align_to_grid: true,
    ..Default::default()
};

let positions = graph.calculate_layout_with_config(&grid_config);
}

Hierarchical Layout

Best for directed acyclic graphs (DAGs) and workflow diagrams:

#![allow(unused)]
fn main() {
use reflow_network::graph::{HierarchicalConfig, LayoutDirection, EdgeRouting};

let hierarchical = HierarchicalConfig {
    direction: LayoutDirection::TopToBottom,
    layer_spacing: 100.0,
    node_spacing: 80.0,
    edge_routing: EdgeRouting::Orthogonal,
    minimize_crossings: true,
    balance_nodes: true,
};

let positions = graph.hierarchical_layout(&hierarchical);

// Apply with automatic layer detection
graph.auto_layout_hierarchical()?;
}

Force-Directed Layout

Ideal for general graphs with cycles and complex interconnections:

#![allow(unused)]
fn main() {
use reflow_network::graph::ForceDirectedConfig;

let force_config = ForceDirectedConfig {
    iterations: 150,
    cooling_factor: 0.95,
    initial_temperature: 100.0,
    spring_strength: 0.3,
    spring_length: 100.0,
    repulsion_strength: 800.0,
    gravity_strength: 0.1,
    node_charge: -30.0,
};

let positions = graph.force_directed_layout(&force_config);
}

Organic Layout

Creates natural, flowing layouts:

#![allow(unused)]
fn main() {
use reflow_network::graph::OrganicConfig;

let organic_config = OrganicConfig {
    preferred_edge_length: 120.0,
    edge_length_cost_factor: 0.0001,
    node_distribution_cost_factor: 20000.0,
    edge_crossing_cost_factor: 6000.0,
    edge_distance_cost_factor: 15000.0,
    border_line_cost_factor: 100.0,
    max_iterations: 200,
};

let positions = graph.organic_layout(&organic_config);
}

Manual Positioning

Setting Node Positions

#![allow(unused)]
fn main() {
use reflow_network::graph::{Anchor, Position};

// Set specific position
graph.set_node_position("input_node", 0.0, 0.0)?;
graph.set_node_position("processor", 200.0, 100.0)?;
graph.set_node_position("output_node", 400.0, 0.0)?;

// Set position with custom anchor point
let position = Position {
    x: 150.0,
    y: 75.0,
    anchor: Some(Anchor { x: 0.5, y: 0.5 }), // Center anchor
};
graph.set_node_position_with_anchor("centered_node", position)?;
}

Position Metadata Structure

Positions are stored in node metadata following this convention:

#![allow(unused)]
fn main() {
use serde_json::json;
use std::collections::HashMap;

// Standard position metadata
let position_metadata = HashMap::from([
    ("x".to_string(), json!(100)),
    ("y".to_string(), json!(150)),
    ("width".to_string(), json!(120)),
    ("height".to_string(), json!(80)),
    ("anchor".to_string(), json!({
        "x": 0.5,  // Horizontal anchor (0.0 = left, 0.5 = center, 1.0 = right)
        "y": 0.5   // Vertical anchor (0.0 = top, 0.5 = middle, 1.0 = bottom)
    }))
]);

graph.set_node_metadata("positioned_node", position_metadata);
}
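
Assuming the anchor specifies which point of the node's bounding box the (x, y) coordinates refer to, the rendered top-left corner follows from simple arithmetic (a sketch, not a library API):

#![allow(unused)]
fn main() {
// top-left = (x - anchor_x * width, y - anchor_y * height)
fn top_left(x: f64, y: f64, w: f64, h: f64, anchor: (f64, f64)) -> (f64, f64) {
    (x - anchor.0 * w, y - anchor.1 * h)
}

// For the metadata above: x=100, y=150, 120x80, center anchor (0.5, 0.5)
assert_eq!(top_left(100.0, 150.0, 120.0, 80.0, (0.5, 0.5)), (40.0, 110.0));
}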

Retrieving Positions

#![allow(unused)]
fn main() {
use reflow_network::graph::BoundingBox;

// Get position for a specific node
if let Some(position) = graph.get_node_position("processor") {
    println!("Node position: ({}, {})", position.x, position.y);
}

// Get all node positions
let all_positions = graph.get_all_positions();
for (node_id, position) in all_positions {
    println!("{}: ({:.1}, {:.1})", node_id, position.x, position.y);
}

// Get positions within a bounding box
let bbox = BoundingBox {
    min_x: 0.0,
    min_y: 0.0,
    max_x: 500.0,
    max_y: 300.0,
};
let nodes_in_area = graph.get_nodes_in_area(bbox);
}

Layout Constraints

Alignment Constraints

#![allow(unused)]
fn main() {
use reflow_network::graph::{AlignmentConstraint, ConstraintType};

// Horizontal alignment
let horizontal_alignment = AlignmentConstraint {
    nodes: vec!["node1".to_string(), "node2".to_string(), "node3".to_string()],
    constraint_type: ConstraintType::HorizontalAlignment,
    offset: 0.0,
};

// Vertical alignment
let vertical_alignment = AlignmentConstraint {
    nodes: vec!["input1".to_string(), "input2".to_string()],
    constraint_type: ConstraintType::VerticalAlignment,
    offset: 50.0, // 50 pixels apart
};

// Apply constraints during layout
let config = LayoutConfig {
    algorithm: LayoutAlgorithm::Hierarchical,
    constraints: vec![horizontal_alignment, vertical_alignment],
    ..Default::default()
};

graph.apply_layout_with_constraints(&config)?;
}

Distance Constraints

#![allow(unused)]
fn main() {
use reflow_network::graph::{DistanceConstraint, DistanceType};

// Minimum distance constraint
let min_distance = DistanceConstraint {
    from_node: "source".to_string(),
    to_node: "sink".to_string(),
    distance_type: DistanceType::Minimum,
    distance: 200.0,
};

// Maximum distance constraint
let max_distance = DistanceConstraint {
    from_node: "processor1".to_string(),
    to_node: "processor2".to_string(),
    distance_type: DistanceType::Maximum,
    distance: 300.0,
};

// Fixed distance constraint
let fixed_distance = DistanceConstraint {
    from_node: "controller".to_string(),
    to_node: "display".to_string(),
    distance_type: DistanceType::Fixed,
    distance: 150.0,
};
}

Boundary Constraints

#![allow(unused)]
fn main() {
use reflow_network::graph::BoundaryConstraint;

// Keep nodes within bounds
let boundary = BoundaryConstraint {
    min_x: 0.0,
    min_y: 0.0,
    max_x: 1000.0,
    max_y: 600.0,
    enforce_during_layout: true,
};

// Apply boundary constraint
graph.set_layout_boundary(boundary);
}

Layout Optimization

Minimize Edge Crossings

#![allow(unused)]
fn main() {
// Optimize layout to reduce edge crossings
let optimized_positions = graph.minimize_edge_crossings()?;

// Apply optimization with maximum iterations
let crossings_config = EdgeCrossingConfig {
    max_iterations: 50,
    improvement_threshold: 0.01,
    use_barycenter_heuristic: true,
};

graph.optimize_edge_crossings(&crossings_config)?;
}

Edge Bundling

#![allow(unused)]
fn main() {
use reflow_network::graph::EdgeBundling;

// Enable edge bundling for cleaner layouts
let bundling_config = EdgeBundling {
    enable: true,
    strength: 0.8,
    step_size: 0.1,
    iterations: 60,
    min_distance: 10.0,
};

graph.apply_edge_bundling(&bundling_config)?;
}

Compact Layout

#![allow(unused)]
fn main() {
// Create compact layout by minimizing overall area
let compact_config = CompactLayoutConfig {
    preserve_aspect_ratio: true,
    min_node_spacing: 20.0,
    pack_components: true,
};

graph.create_compact_layout(&compact_config)?;
}

Layer-Based Layout

Automatic Layer Detection

#![allow(unused)]
fn main() {
use reflow_network::graph::{LayerAnalysis, LayerDirection};

// Detect natural layers in the graph
let layer_analysis = graph.analyze_layers();

println!("Detected {} layers:", layer_analysis.layers.len());
for (level, nodes) in layer_analysis.layers.iter().enumerate() {
    println!("  Layer {}: {:?}", level, nodes);
}

// Apply layer-based layout
let layer_config = LayerLayoutConfig {
    direction: LayerDirection::LeftToRight,
    layer_spacing: 150.0,
    node_spacing: 100.0,
    center_nodes_in_layer: true,
};

graph.apply_layer_layout(&layer_config)?;
}

Manual Layer Assignment

#![allow(unused)]
fn main() {
// Manually assign nodes to layers
let layer_assignments = HashMap::from([
    ("input1".to_string(), 0),
    ("input2".to_string(), 0),
    ("processor1".to_string(), 1),
    ("processor2".to_string(), 1),
    ("output".to_string(), 2),
]);

graph.set_layer_assignments(layer_assignments);
graph.apply_layer_layout(&layer_config)?;
}

Group-Based Layout

Layout Node Groups

#![allow(unused)]
fn main() {
use reflow_network::graph::GroupLayoutConfig;

// Layout nodes within groups
let group_config = GroupLayoutConfig {
    group_spacing: 200.0,
    internal_spacing: 50.0,
    group_padding: 20.0,
    layout_algorithm: LayoutAlgorithm::Grid,
};

// Apply group-aware layout
graph.layout_groups(&group_config)?;

// Layout specific group
graph.layout_group("data_processing", &group_config)?;
}

Group Boundaries

#![allow(unused)]
fn main() {
// Calculate group boundaries
let group_bounds = graph.calculate_group_bounds("data_processing");
if let Some(bounds) = group_bounds {
    println!("Group bounds: ({}, {}) to ({}, {})", 
        bounds.min_x, bounds.min_y, bounds.max_x, bounds.max_y);
}

// Set custom group boundary
let custom_bounds = BoundingBox {
    min_x: 100.0,
    min_y: 50.0,
    max_x: 400.0,
    max_y: 250.0,
};
graph.set_group_bounds("data_processing", custom_bounds);
}

Advanced Layout Features

Multi-Level Layout

For very large graphs, use multi-level layout:

#![allow(unused)]
fn main() {
use reflow_network::graph::MultiLevelConfig;

let multilevel_config = MultiLevelConfig {
    coarsening_factor: 0.7,
    max_levels: 5,
    uncoarsening_iterations: 10,
    finest_level_iterations: 20,
};

let positions = graph.multilevel_layout(&multilevel_config)?;
}

Incremental Layout

Update layout incrementally when nodes are added/removed:

#![allow(unused)]
fn main() {
// Add node with incremental layout update
graph.add_node("new_processor", "DataProcessor", None);
graph.add_connection("source", "out", "new_processor", "in", None);

// Update layout incrementally
let incremental_config = IncrementalLayoutConfig {
    stabilization_iterations: 10,
    affected_nodes_only: true,
    preserve_existing_positions: true,
};

graph.incremental_layout_update(&incremental_config)?;
}

Layout Animation Support

#![allow(unused)]
fn main() {
use reflow_network::graph::{LayoutAnimation, AnimationFrame};

// Generate animation frames for smooth transitions
let from_positions = graph.get_all_positions();
let to_positions = graph.calculate_layout();

let animation = LayoutAnimation::new(from_positions, to_positions, 30); // 30 frames

// Get animation frames
for (frame_idx, frame) in animation.frames().enumerate() {
    println!("Frame {}: {} position updates", frame_idx, frame.positions.len());
    
    // Apply frame in the editor UI
    for (node_id, position) in frame.positions {
        // Update UI node position (`ui` is a placeholder editor handle)
        ui.set_node_position(&node_id, position.x, position.y);
    }
}
}

Layout Quality Metrics

Measuring Layout Quality

#![allow(unused)]
fn main() {
use reflow_network::graph::{LayoutMetrics, LayoutQuality};

let metrics = graph.calculate_layout_metrics();

println!("Layout Quality Metrics:");
println!("  Edge crossings: {}", metrics.edge_crossings);
println!("  Average edge length: {:.2}", metrics.average_edge_length);
println!("  Node distribution score: {:.2}", metrics.node_distribution_score);
println!("  Aspect ratio: {:.2}", metrics.aspect_ratio);
println!("  Overall score: {:.2}", metrics.overall_quality_score);

// Detailed metrics
println!("\nDetailed Metrics:");
println!("  Minimum edge length: {:.2}", metrics.min_edge_length);
println!("  Maximum edge length: {:.2}", metrics.max_edge_length);
println!("  Edge length variance: {:.2}", metrics.edge_length_variance);
println!("  Node overlap count: {}", metrics.node_overlaps);
println!("  Angular resolution: {:.2}°", metrics.angular_resolution);
}

Layout Comparison

#![allow(unused)]
fn main() {
// Compare different layout algorithms
let algorithms = vec![
    LayoutAlgorithm::Hierarchical,
    LayoutAlgorithm::ForceDirected,
    LayoutAlgorithm::Organic,
];

let mut best_layout = None;
let mut best_score = 0.0;

for algorithm in algorithms {
    let config = LayoutConfig {
        algorithm: algorithm.clone(),
        ..Default::default()
    };
    
    let positions = graph.calculate_layout_with_config(&config);
    graph.apply_positions(positions);
    
    let metrics = graph.calculate_layout_metrics();
    let score = metrics.overall_quality_score;
    
    println!("{:?}: score {:.2}", algorithm, score);
    
    if score > best_score {
        best_score = score;
        best_layout = Some(algorithm);
    }
}

if let Some(best) = best_layout {
    println!("Best layout algorithm: {:?} (score: {:.2})", best, best_score);
}
}

Custom Layout Algorithms

Implementing Custom Layout

#![allow(unused)]
fn main() {
use reflow_network::graph::{CustomLayout, LayoutContext};

struct CircularLayout {
    radius: f64,
    start_angle: f64,
}

impl CustomLayout for CircularLayout {
    fn calculate_positions(&self, context: &LayoutContext) -> HashMap<String, Position> {
        let mut positions = HashMap::new();
        let node_count = context.nodes.len();
        let angle_step = 2.0 * std::f64::consts::PI / node_count as f64;
        
        for (i, node_id) in context.nodes.iter().enumerate() {
            let angle = self.start_angle + i as f64 * angle_step;
            let x = self.radius * angle.cos();
            let y = self.radius * angle.sin();
            
            positions.insert(node_id.clone(), Position { x, y, anchor: None });
        }
        
        positions
    }
}

// Use custom layout
let circular = CircularLayout {
    radius: 200.0,
    start_angle: 0.0,
};

let positions = graph.apply_custom_layout(&circular)?;
}

Layout Plugins

#![allow(unused)]
fn main() {
// Register layout plugin
graph.register_layout_plugin("spiral", Box::new(SpiralLayout::new()));

// Use registered plugin
let config = LayoutConfig {
    algorithm: LayoutAlgorithm::Custom("spiral".to_string()),
    ..Default::default()
};

graph.calculate_layout_with_config(&config);
}

Layout Events

Listening to Layout Changes

#![allow(unused)]
fn main() {
use reflow_network::graph::LayoutEvents;

// Subscribe to layout events
let layout_receiver = graph.layout_event_channel.1.clone();

std::thread::spawn(move || {
    while let Ok(event) = layout_receiver.recv() {
        match event {
            LayoutEvents::LayoutStarted { algorithm } => {
                println!("Layout started: {:?}", algorithm);
            }
            LayoutEvents::LayoutCompleted { algorithm, duration } => {
                println!("Layout completed: {:?} in {:?}", algorithm, duration);
            }
            LayoutEvents::NodePositionChanged { node_id, old_pos, new_pos } => {
                println!("Node {} moved: ({:.1}, {:.1}) -> ({:.1}, {:.1})", 
                    node_id, old_pos.x, old_pos.y, new_pos.x, new_pos.y);
            }
            LayoutEvents::LayoutProgress { progress } => {
                println!("Layout progress: {:.1}%", progress * 100.0);
            }
        }
    }
});
}

WebAssembly Layout API

JavaScript Integration

import { Graph, LayoutAlgorithm } from 'reflow-network';

const graph = new Graph("LayoutDemo", false, {});

// Add nodes and connections
graph.addNode("input", "InputNode", {});
graph.addNode("processor", "ProcessorNode", {});
graph.addNode("output", "OutputNode", {});
graph.addConnection("input", "out", "processor", "in", {});
graph.addConnection("processor", "out", "output", "in", {});

// Apply automatic layout
const positions = graph.calculateLayout({
    algorithm: LayoutAlgorithm.Hierarchical,
    nodeSpacing: 120,
    layerSpacing: 80
});

// Update UI with calculated positions
for (const [nodeId, position] of positions) {
    const nodeElement = document.getElementById(nodeId);
    nodeElement.style.left = `${position.x}px`;
    nodeElement.style.top = `${position.y}px`;
}

// Manual positioning
graph.setNodePosition("processor", 200, 100);

// Listen for layout events
graph.onLayoutChange((event) => {
    if (event.type === 'position_changed') {
        updateNodeElement(event.nodeId, event.newPosition);
    }
});

Layout Best Practices

Performance Optimization

  1. Use appropriate algorithms: Choose the right algorithm for your graph type
  2. Limit iterations: Set reasonable iteration limits for force-directed layouts
  3. Cache layouts: Store calculated positions to avoid recalculation
  4. Incremental updates: Use incremental layout for small changes
#![allow(unused)]
fn main() {
// Good: Incremental update for small changes
graph.add_node("new_node", "Component", None);
graph.incremental_layout_update(&incremental_config)?;

// Avoid: Full recalculation for small changes
graph.auto_layout()?; // Expensive for large graphs
}

Visual Quality

  1. Minimize crossings: Use algorithms that reduce edge crossings
  2. Consistent spacing: Maintain uniform spacing between nodes
  3. Respect hierarchy: Use hierarchical layout for workflow graphs
  4. Group related nodes: Use group layouts for related components
#![allow(unused)]
fn main() {
// Good: Group-aware layout
let group_config = GroupLayoutConfig {
    group_spacing: 200.0,
    internal_spacing: 50.0,
    group_padding: 20.0,
    layout_algorithm: LayoutAlgorithm::Grid,
};
graph.layout_groups(&group_config)?;
}

User Experience

  1. Smooth transitions: Use animation between layout changes
  2. Preserve user positioning: Respect manually positioned nodes
  3. Provide layout options: Allow users to choose layout algorithms
  4. Show progress: Display progress for long-running layout calculations
#![allow(unused)]
fn main() {
// Preserve manual positions
let manual_positions = graph.get_manually_positioned_nodes();
let config = LayoutConfig {
    preserve_positions: manual_positions,
    ..Default::default()
};
}

Troubleshooting

Common Layout Issues

  1. Overlapping nodes: Increase node spacing or use different algorithm
  2. Poor aspect ratio: Adjust layout bounds or use compact layout
  3. Too many crossings: Use hierarchical layout or enable crossing minimization
  4. Unstable force layout: Reduce spring strength or increase damping
#![allow(unused)]
fn main() {
// Fix overlapping nodes
let config = LayoutConfig {
    node_spacing: 150.0, // Increase spacing
    collision_detection: true,
    ..Default::default()
};

// Fix unstable force layout
let force_config = ForceDirectedConfig {
    spring_strength: 0.1, // Reduce from default 0.3
    damping: 0.8,         // Add damping
    ..Default::default()
};
}

Next Steps

Advanced Graph Features

This guide covers advanced features of Reflow's graph system, including history management, subgraph operations, optimization techniques, and performance tuning.

History Management

Basic History Operations

#![allow(unused)]
fn main() {
use reflow_network::graph::{Graph, GraphHistory};

// Create graph with history tracking
let (mut graph, mut history) = Graph::with_history();

// Make some changes
graph.add_node("input", "InputNode", None);
graph.add_node("output", "OutputNode", None);
graph.add_connection("input", "out", "output", "in", None);

// Undo the last operation (log before the operation value is consumed)
if let Some(operation) = history.undo() {
    println!("Undoing: {:?}", operation);
    history.apply_inverse(&mut graph, operation)?;
}

// Redo the operation
if let Some(operation) = history.redo() {
    println!("Redoing: {:?}", operation);
    history.apply_operation(&mut graph, operation)?;
}
}

Advanced History Configuration

#![allow(unused)]
fn main() {
use reflow_network::graph::{HistoryConfig, HistoryLimit};

// Create history with custom configuration
let history_config = HistoryConfig {
    limit: HistoryLimit::Operations(100),  // Limit to 100 operations
    compress_threshold: 50,                // Compress after 50 operations
    auto_cleanup: true,                    // Clean up old entries automatically
    track_metadata_changes: true,          // Track metadata changes
};

let (mut graph, mut history) = Graph::with_history_config(history_config);

// Alternative: Limit by memory usage
let memory_config = HistoryConfig {
    limit: HistoryLimit::Memory(10 * 1024 * 1024), // 10 MB limit
    ..Default::default()
};
}

History Compression

#![allow(unused)]
fn main() {
// Manually compress history
history.compress()?;

// Get compression statistics
let stats = history.compression_stats();
println!("Compressed {} operations into {} snapshots", 
    stats.original_operations, stats.compressed_snapshots);
println!("Memory saved: {:.1} MB", stats.memory_saved / 1024.0 / 1024.0);

// Force full compression
history.force_compress_all()?;
}

History Snapshots

#![allow(unused)]
fn main() {
use reflow_network::graph::Snapshot;

// Create named snapshot
let snapshot_id = history.create_snapshot("before_major_changes")?;

// Make changes...
graph.add_node("processor1", "DataProcessor", None);
graph.add_node("processor2", "DataProcessor", None);

// Restore to snapshot
history.restore_snapshot(&mut graph, &snapshot_id)?;

// List all snapshots
let snapshots = history.list_snapshots();
for snapshot in snapshots {
    println!("Snapshot: {} (created: {})", snapshot.name, snapshot.timestamp);
}

// Delete old snapshots
history.delete_snapshot("old_snapshot")?;
}

Branching History

#![allow(unused)]
fn main() {
use reflow_network::graph::HistoryBranch;

// Create branch from current state
let branch_id = history.create_branch("experimental_feature")?;

// Switch to branch
history.switch_branch(&mut graph, &branch_id)?;

// Make experimental changes
graph.add_node("experimental", "ExperimentalNode", None);

// Switch back to main branch
history.switch_branch(&mut graph, "main")?;

// Merge branch if satisfied with changes
history.merge_branch(&mut graph, &branch_id, "main")?;
}

History Events

#![allow(unused)]
fn main() {
use reflow_network::graph::HistoryEvents;

// Subscribe to history events
let history_receiver = history.event_channel().1.clone();

std::thread::spawn(move || {
    while let Ok(event) = history_receiver.recv() {
        match event {
            HistoryEvents::OperationAdded { operation, index } => {
                println!("Added operation {}: {:?}", index, operation);
            }
            HistoryEvents::Undo { operation } => {
                println!("Undid operation: {:?}", operation);
            }
            HistoryEvents::Redo { operation } => {
                println!("Redid operation: {:?}", operation);
            }
            HistoryEvents::SnapshotCreated { name, timestamp } => {
                println!("Created snapshot '{}' at {}", name, timestamp);
            }
            HistoryEvents::HistoryCompressed { before_size, after_size } => {
                println!("Compressed history: {} -> {} operations", before_size, after_size);
            }
        }
    }
});
}

Subgraph Operations

Creating Subgraphs

#![allow(unused)]
fn main() {
use reflow_network::graph::{Subgraph, SubgraphConfig};

// Extract subgraph by node selection
let selected_nodes = vec!["processor1", "processor2", "connector"];
let subgraph = graph.extract_subgraph(&selected_nodes)?;

println!("Extracted subgraph:");
println!("  Nodes: {:?}", subgraph.nodes);
println!("  Internal connections: {}", subgraph.internal_connections.len());
println!("  External connections: {}", subgraph.external_connections.len());

// Create subgraph with configuration
let config = SubgraphConfig {
    include_metadata: true,
    preserve_external_connections: true,
    auto_add_ports: true,
};

let configured_subgraph = graph.extract_subgraph_with_config(&selected_nodes, &config)?;
}

Subgraph Analysis

#![allow(unused)]
fn main() {
use reflow_network::graph::SubgraphAnalysis;

let analysis = graph.analyze_subgraph(&subgraph);

println!("Subgraph Analysis:");
println!("  Node count: {}", analysis.node_count);
println!("  Connection count: {}", analysis.connection_count);
println!("  Max depth: {}", analysis.max_depth);
println!("  Is cyclic: {}", analysis.is_cyclic);
println!("  Branching factor: {:.2}", analysis.branching_factor);
println!("  Complexity score: {:.2}", analysis.complexity_score);

// Detailed connectivity analysis
println!("  Entry points: {:?}", analysis.entry_points);
println!("  Exit points: {:?}", analysis.exit_points);
println!("  Internal clusters: {}", analysis.internal_clusters);
}

Subgraph Operations

#![allow(unused)]
fn main() {
// Clone subgraph
let cloned_subgraph = subgraph.clone();

// Merge subgraphs
let merged = Subgraph::merge(vec![subgraph1, subgraph2, subgraph3])?;

// Subtract subgraph (remove nodes)
let remainder = graph.subtract_subgraph(&subgraph)?;

// Replace subgraph with optimized version
let optimized = optimize_subgraph(&subgraph)?;
graph.replace_subgraph(&subgraph, &optimized)?;
}

Subgraph Templates

#![allow(unused)]
fn main() {
use reflow_network::graph::{SubgraphTemplate, TemplateParameter};

// Create reusable subgraph template
let template = SubgraphTemplate {
    name: "data_processing_pipeline".to_string(),
    description: "Standard data processing pipeline".to_string(),
    nodes: subgraph.nodes.clone(),
    connections: subgraph.internal_connections.clone(),
    parameters: vec![
        TemplateParameter {
            name: "buffer_size".to_string(),
            param_type: "integer".to_string(),
            default_value: Some(json!(1024)),
            description: "Buffer size for data processing".to_string(),
        }
    ],
};

// Instantiate template with parameters
let instance_params = HashMap::from([
    ("buffer_size".to_string(), json!(2048))
]);

let instance = template.instantiate("pipeline_1", instance_params)?;
graph.add_subgraph_instance(instance)?;
}

Graph Optimization

Automatic Optimization

#![allow(unused)]
fn main() {
use reflow_network::graph::{OptimizationConfig, OptimizationLevel};

// Basic optimization
let optimized_graph = graph.optimize()?;

// Advanced optimization with configuration
let optimization_config = OptimizationConfig {
    level: OptimizationLevel::Aggressive,
    remove_redundant_nodes: true,
    merge_compatible_nodes: true,
    optimize_connection_paths: true,
    reorder_for_cache_locality: true,
    minimize_communication_cost: true,
};

let optimized = graph.optimize_with_config(&optimization_config)?;

// Apply optimizations in-place
graph.apply_optimizations(&optimization_config)?;
}

Redundancy Elimination

#![allow(unused)]
fn main() {
use reflow_network::graph::{RedundancyAnalysis, RedundancyType};

// Find redundant nodes
let redundancy = graph.analyze_redundancy();

println!("Redundancy Analysis:");
for redundant in redundancy.redundant_nodes {
    println!("  Node '{}': {}", redundant.node, redundant.reason);
    
    match redundant.redundancy_type {
        RedundancyType::DuplicateFunction => {
            println!("    Can be merged with: {:?}", redundant.merge_candidates);
        }
        RedundancyType::NoOperation => {
            println!("    Performs no operation - can be removed");
        }
        RedundancyType::BypassableTransform => {
            println!("    Transform can be bypassed");
        }
    }
}

// Automatically remove redundant nodes
graph.remove_redundant_nodes()?;
}

Node Fusion

#![allow(unused)]
fn main() {
use reflow_network::graph::FusionCandidate;

// Find nodes that can be fused together
let fusion_candidates = graph.find_fusion_candidates();

for candidate in fusion_candidates {
    println!("Fusion opportunity: {:?}", candidate.nodes);
    println!("  Estimated speedup: {:.1}x", candidate.estimated_speedup);
    println!("  Memory savings: {:.1} MB", candidate.memory_savings);
    
    // Apply fusion if beneficial
    if candidate.estimated_speedup > 1.5 {
        graph.fuse_nodes(&candidate.nodes, &candidate.fusion_strategy)?;
    }
}
}

Connection Optimization

#![allow(unused)]
fn main() {
use reflow_network::graph::ConnectionOptimization;

// Optimize connection routing
let connection_opt = ConnectionOptimization {
    minimize_wire_length: true,
    reduce_crossings: true,
    bundle_parallel_connections: true,
    use_hierarchical_routing: true,
};

graph.optimize_connections(&connection_opt)?;

// Find and eliminate unnecessary intermediate nodes
let bypass_candidates = graph.find_bypass_candidates();
for candidate in bypass_candidates {
    if candidate.is_safe_to_bypass() {
        graph.bypass_node(&candidate.node)?;
    }
}
}

Performance Tuning

Memory Optimization

#![allow(unused)]
fn main() {
use reflow_network::graph::{MemoryConfig, MemoryOptimization};

// Configure memory usage
let memory_config = MemoryConfig {
    node_pool_size: 1000,
    connection_pool_size: 5000,
    metadata_cache_size: 10 * 1024 * 1024, // 10 MB
    enable_lazy_loading: true,
    compress_metadata: true,
};

graph.configure_memory(&memory_config)?;

// Apply memory optimizations
let memory_opt = MemoryOptimization {
    compact_node_storage: true,
    use_interned_strings: true,
    enable_copy_on_write: true,
    garbage_collect_threshold: 0.8,
};

graph.apply_memory_optimization(&memory_opt)?;
}

Index Optimization

#![allow(unused)]
fn main() {
use reflow_network::graph::{IndexConfig, IndexType};

// Optimize internal indices for better performance
let index_config = IndexConfig {
    connection_index_type: IndexType::HashMap, // Fast lookups
    node_index_type: IndexType::BTreeMap,      // Ordered iteration
    spatial_index_enabled: true,               // For layout operations
    cache_frequently_accessed: true,
};

graph.rebuild_indices(&index_config)?;

// Enable adaptive indexing
graph.enable_adaptive_indexing(true);
}

Parallel Processing Setup

#![allow(unused)]
fn main() {
use reflow_network::graph::{ParallelConfig, ThreadingModel};

// Configure parallel processing
let parallel_config = ParallelConfig {
    max_threads: num_cpus::get(),
    threading_model: ThreadingModel::WorkStealing,
    enable_parallel_analysis: true,
    parallel_layout_threshold: 100, // Use parallel layout for >100 nodes
    chunk_size: 50,
};

graph.configure_parallel_processing(&parallel_config)?;

// Enable parallel operations
graph.enable_parallel_operations(true);
}

Benchmarking and Profiling

#![allow(unused)]
fn main() {
use reflow_network::graph::{Benchmark, ProfileConfig, OutputFormat};
use std::time::Instant;

// Benchmark graph operations
let benchmark = Benchmark::new(&graph);

let results = benchmark.run_full_suite()?;
println!("Benchmark Results:");
println!("  Node addition: {:.2}μs", results.node_addition_time.as_micros());
println!("  Connection creation: {:.2}μs", results.connection_time.as_micros());
println!("  Cycle detection: {:.2}ms", results.cycle_detection_time.as_millis());
println!("  Layout calculation: {:.2}ms", results.layout_time.as_millis());
println!("  Validation: {:.2}ms", results.validation_time.as_millis());

// Profile specific operations
let profile_config = ProfileConfig {
    sample_rate: 1000, // Sample every 1000 operations
    track_memory: true,
    track_time: true,
    output_format: OutputFormat::Json,
};

let profiler = graph.create_profiler(&profile_config)?;
profiler.start();

// Perform operations...
graph.add_node("test", "TestNode", None);
// ... more operations

let profile_results = profiler.stop_and_collect();
profile_results.save_to_file("graph_profile.json")?;
}

Large Graph Handling

Streaming Operations

#![allow(unused)]
fn main() {
use reflow_network::graph::{StreamingConfig, GraphStream};

// Handle very large graphs with streaming
let streaming_config = StreamingConfig {
    chunk_size: 1000,
    memory_limit: 100 * 1024 * 1024, // 100 MB
    enable_disk_spillover: true,
    compression_level: 6,
};

let graph_stream = GraphStream::new(streaming_config);

// Process graph in chunks
for chunk in graph_stream.process_in_chunks(&large_graph) {
    let chunk_result = process_graph_chunk(chunk)?;
    graph_stream.accumulate_result(chunk_result);
}

let final_result = graph_stream.finalize()?;
}

Lazy Loading

#![allow(unused)]
fn main() {
use reflow_network::graph::LazyGraph;

// Create lazy-loading graph for very large datasets
let lazy_graph = LazyGraph::from_file("massive_graph.json")?;

// Nodes and connections are loaded on demand
if let Some(node) = lazy_graph.get_node("some_node")? {
    // Node is loaded into memory only when accessed
    println!("Node component: {}", node.component);
}

// Preload specific subgraphs for better performance
lazy_graph.preload_subgraph(&["critical_node_1", "critical_node_2"])?;
}

Distributed Graph Processing

#![allow(unused)]
fn main() {
use reflow_network::graph::{DistributedGraph, NodePartition};

// Partition graph across multiple nodes
let partitions = graph.create_partitions(4)?; // 4 partitions

for (i, partition) in partitions.iter().enumerate() {
    println!("Partition {}: {} nodes", i, partition.nodes.len());
    
    // Deploy partition to worker node
    let worker_id = format!("worker_{}", i);
    deploy_partition_to_worker(&worker_id, partition)?;
}

// Coordinate distributed operations
let distributed_graph = DistributedGraph::new(partitions);
let distributed_result = distributed_graph.execute_distributed_analysis().await?;
}

Advanced Analysis

Machine Learning Integration

#![allow(unused)]
fn main() {
use reflow_network::graph::{MLFeatures, GraphEmbedding, EmbeddingConfig};

// Extract features for machine learning
let features = graph.extract_ml_features();

println!("Graph ML Features:");
println!("  Node features: {} dimensions", features.node_features.len());
println!("  Edge features: {} dimensions", features.edge_features.len());
println!("  Global features: {} dimensions", features.global_features.len());

// Generate graph embeddings
let embedding_config = EmbeddingConfig {
    embedding_size: 128,
    walk_length: 10,
    num_walks: 100,
    context_size: 5,
};

let embeddings = graph.generate_embeddings(&embedding_config)?;

// Use embeddings for similarity analysis
let similar_nodes = embeddings.find_similar_nodes("reference_node", 5)?;
for (node, similarity) in similar_nodes {
    println!("Similar node: {} (similarity: {:.3})", node, similarity);
}
}

Pattern Mining

#![allow(unused)]
fn main() {
use reflow_network::graph::{PatternMiner, FrequentPattern};

// Mine frequent subgraph patterns
let miner = PatternMiner::new();
let patterns = miner.mine_frequent_patterns(&graph, 0.1)?; // 10% minimum support

for pattern in patterns {
    println!("Frequent pattern (support: {:.1}%):", pattern.support * 100.0);
    println!("  Nodes: {:?}", pattern.nodes);
    println!("  Connections: {:?}", pattern.connections);
    
    // Find all instances of this pattern
    let instances = graph.find_pattern_instances(&pattern)?;
    println!("  Found in {} locations", instances.len());
}
}

Anomaly Detection

#![allow(unused)]
fn main() {
use reflow_network::graph::{AnomalyDetector, AnomalyType};

// Detect structural anomalies
let detector = AnomalyDetector::new();
let anomalies = detector.detect_anomalies(&graph)?;

for anomaly in anomalies {
    match anomaly.anomaly_type {
        AnomalyType::UnusualDegree => {
            println!("Node '{}' has unusual connectivity: {} connections", 
                anomaly.node, anomaly.score);
        }
        AnomalyType::IsolatedCluster => {
            println!("Isolated cluster detected around node '{}'", anomaly.node);
        }
        AnomalyType::UnexpectedPattern => {
            println!("Unexpected pattern at node '{}' (novelty: {:.2})", 
                anomaly.node, anomaly.score);
        }
    }
}
}

Graph Transformation

Rule-Based Transformations

#![allow(unused)]
fn main() {
use reflow_network::graph::{TransformationRule, RuleEngine, GraphPattern, GraphReplacement};

// Define transformation rules
let rule = TransformationRule {
    name: "optimize_serial_processors".to_string(),
    pattern: GraphPattern::parse("A -> B -> C where A.type == B.type == 'Processor'")?,
    replacement: GraphReplacement::parse("A+B+C -> OptimizedProcessor")?,
    condition: |nodes| {
        // Custom condition logic
        nodes.iter().all(|n| n.metadata.get("parallelizable") == Some(&json!(true)))
    },
};

// Apply transformation rules
let mut rule_engine = RuleEngine::new();
rule_engine.add_rule(rule);

let transformed_graph = rule_engine.apply_rules(&graph)?;
}

Graph Morphing

#![allow(unused)]
fn main() {
use reflow_network::graph::{MorphingConfig, MorphingStrategy};

// Gradually transform graph structure
let morphing_config = MorphingConfig {
    strategy: MorphingStrategy::Gradual,
    steps: 10,
    preserve_semantics: true,
    target_layout: Some(target_positions),
};

let morphing_steps = graph.create_morphing_sequence(&target_graph, &morphing_config)?;

for (step, intermediate_graph) in morphing_steps.enumerate() {
    println!("Morphing step {}/{}", step + 1, morphing_config.steps);
    
    // Apply intermediate graph state
    apply_graph_state(&intermediate_graph);
    
    // Optional: pause for animation
    std::thread::sleep(std::time::Duration::from_millis(100));
}
}

Custom Extensions

Plugin System

#![allow(unused)]
fn main() {
use reflow_network::graph::{GraphPlugin, PluginConfig};

// Create custom graph plugin
struct MyCustomPlugin {
    config: PluginConfig,
}

impl GraphPlugin for MyCustomPlugin {
    fn initialize(&mut self, graph: &mut Graph) -> Result<(), GraphError> {
        // Plugin initialization logic
        println!("Initializing custom plugin for graph: {}", graph.name);
        Ok(())
    }
    
    fn on_node_added(&mut self, graph: &Graph, node: &GraphNode) {
        // Custom logic when nodes are added
        println!("Plugin: Node added: {}", node.id);
    }
    
    fn on_connection_added(&mut self, graph: &Graph, connection: &GraphConnection) {
        // Custom logic when connections are added
        println!("Plugin: Connection added");
    }
    
    fn custom_analysis(&self, graph: &Graph) -> CustomAnalysisResult {
        // Custom analysis implementation
        CustomAnalysisResult::new()
    }
}

// Register and use plugin
graph.register_plugin("my_plugin", Box::new(MyCustomPlugin { config: PluginConfig::default() }))?;
graph.enable_plugin("my_plugin")?;

// Call custom analysis
let custom_result = graph.call_plugin_analysis("my_plugin")?;
}

Event Hooks

#![allow(unused)]
fn main() {
use reflow_network::graph::{EventHook, HookPriority};

// Create custom event hook
let custom_hook = EventHook::new()
    .on_node_added(|graph, node| {
        println!("Custom hook: Node {} added to graph {}", node.id, graph.name);
    })
    .on_connection_added(|graph, connection| {
        println!("Custom hook: Connection added");
    })
    .with_priority(HookPriority::High);

// Register hook
graph.register_hook("custom_logger", custom_hook)?;

// Temporary hooks for specific operations
graph.with_temporary_hook("validation_hook", |graph| {
    // This hook only applies during this operation
    let validation = graph.validate_flow()?;
    Ok(validation)
})?;
}

Error Recovery

Automatic Error Recovery

#![allow(unused)]
fn main() {
use reflow_network::graph::{ErrorRecovery, RecoveryStrategy};

// Configure automatic error recovery
let recovery_config = ErrorRecovery {
    strategy: RecoveryStrategy::Rollback,
    max_retries: 3,
    backup_frequency: 10, // Create backup every 10 operations
    auto_fix_common_issues: true,
};

graph.configure_error_recovery(&recovery_config)?;

// Operations are automatically protected
match graph.add_connection("nonexistent", "out", "target", "in", None) {
    Err(e) => {
        // Graph automatically attempts recovery
        println!("Error occurred but graph recovered: {}", e);
    }
    Ok(_) => println!("Operation succeeded"),
}
}

Manual Recovery Operations

#![allow(unused)]
fn main() {
// Create manual checkpoint
let checkpoint = graph.create_checkpoint("before_risky_operation")?;

// Perform risky operations
// Perform risky operations
match risky_graph_operation(&mut graph) {
    Ok(result) => {
        // Success - commit changes
        graph.commit_checkpoint(&checkpoint)?;
        println!("Operation succeeded: {:?}", result);
    }
    Err(e) => {
        // Failure - roll back to the checkpoint
        graph.rollback_to_checkpoint(&checkpoint)?;
        eprintln!("Operation failed and was rolled back: {}", e);
    }
}
}

Integration Patterns

Event Sourcing

#![allow(unused)]
fn main() {
use reflow_network::graph::{EventStore, GraphEvent};

// Set up event sourcing
let event_store = EventStore::new("graph_events.log")?;
graph.enable_event_sourcing(&event_store)?;

// All graph changes are automatically logged
graph.add_node("event_sourced", "EventNode", None);
// Event is automatically persisted

// Replay events to reconstruct graph state
let events = event_store.read_events_from(timestamp)?;
let reconstructed_graph = Graph::replay_events(events)?;
}

CQRS Pattern

#![allow(unused)]
fn main() {
use reflow_network::graph::{CommandHandler, QueryHandler};

// Separate command and query responsibilities
// (handlers live in separate scopes so the mutable and shared borrows don't overlap)
{
    let command_handler = CommandHandler::new(&mut graph);

    // Commands modify state
    command_handler.execute(AddNodeCommand {
        id: "cmd_node".to_string(),
        component: "CommandNode".to_string(),
        metadata: None,
    })?;
}

let query_handler = QueryHandler::new(&graph);

// Queries read state (potentially from optimized read models)
let node_info = query_handler.get_node_info("cmd_node")?;
let analysis = query_handler.analyze_connectivity("cmd_node")?;
}

Best Practices Summary

Performance Best Practices

  1. Use appropriate data structures: Choose indices based on access patterns
  2. Enable lazy loading: For large graphs, load data on demand
  3. Configure memory limits: Prevent memory exhaustion
  4. Use parallel processing: Enable for CPU-intensive operations
  5. Cache analysis results: Store expensive computations (see the sketch below)
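
To illustrate point 5, here is a minimal caching sketch built only from calls shown earlier in this guide (calculate_layout, apply_positions); the graph_dirty flag is assumed application-side state, not a library field:

#![allow(unused)]
fn main() {
use std::collections::HashMap;

// Recompute the layout only when the graph has actually changed.
// `graph_dirty` is an assumed application-side flag, not part of the API.
let mut graph_dirty = true;
let mut cached_positions = HashMap::new();

if graph_dirty {
    cached_positions = graph.calculate_layout();
    graph_dirty = false;
}

// Subsequent frames reuse the cached result instead of recalculating
graph.apply_positions(cached_positions.clone());
}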

Scalability Best Practices

  1. Partition large graphs: Distribute across multiple nodes
  2. Stream large operations: Process data in chunks
  3. Use compression: Reduce memory and storage requirements
  4. Implement backpressure: Control data flow rates (see the sketch below)
  5. Monitor resource usage: Track memory and CPU consumption
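
For point 4, a small backpressure sketch using a bounded standard-library channel: the producer blocks whenever the buffer is full, which caps how many chunks are in flight at once. The node_ids vector and the chunk handling are placeholders, not reflow_network APIs:

#![allow(unused)]
fn main() {
use std::sync::mpsc::sync_channel;
use std::thread;

// Placeholder input; in practice these would come from the graph
let node_ids: Vec<String> = vec![];

// Bounded channel: at most 4 chunks buffered at any time
let (tx, rx) = sync_channel::<Vec<String>>(4);

let consumer = thread::spawn(move || {
    for chunk in rx {
        // Stand-in for real chunk processing
        println!("Processing {} nodes", chunk.len());
    }
});

for chunk in node_ids.chunks(1000) {
    // `send` blocks while the buffer is full, throttling the producer
    tx.send(chunk.to_vec()).unwrap();
}

drop(tx); // close the channel so the consumer loop terminates
consumer.join().unwrap();
}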

Maintainability Best Practices

  1. Use version control: Track graph schema changes
  2. Implement proper error handling: Handle edge cases gracefully
  3. Document custom extensions: Maintain clear plugin documentation
  4. Use consistent naming: Follow naming conventions
  5. Implement comprehensive testing: Test all graph operations (see the sketch below)
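
For point 5, a testing sketch that exercises the basic operations used throughout this guide (add_node, add_connection, export). The Graph::new arguments mirror the JavaScript constructor shown earlier and may differ in the actual Rust API:

#[cfg(test)]
mod tests {
    use super::*;
    use std::collections::HashMap;

    #[test]
    fn add_and_connect_nodes() {
        // Assumed constructor shape; adjust to the real Graph API
        let mut graph = Graph::new("test_graph", false, HashMap::new());

        graph.add_node("a", "Producer", None);
        graph.add_node("b", "Consumer", None);
        graph.add_connection("a", "out", "b", "in", None);

        // The exported process map should contain both nodes
        let export = graph.export();
        assert!(export.processes.contains_key("a"));
        assert!(export.processes.contains_key("b"));
    }
}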

Next Steps

Workspace Discovery

Learn how to automatically discover and load graph files in multi-graph workspaces.

Overview

Workspace discovery enables:

  • Automatic graph discovery: Find all *.graph.json and *.graph.yaml files recursively
  • Folder-based namespacing: Use directory structure as natural namespaces
  • Clean instantiation: Load discovered graphs into memory with proper isolation
  • Rich metadata: Inject discovery information and workspace context
  • Flexible configuration: Control discovery patterns and exclusions

Basic Discovery

Simple Workspace Discovery

Discover all graphs in a workspace directory:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::{WorkspaceDiscovery, WorkspaceConfig};

// Basic workspace discovery
let config = WorkspaceConfig::default();
let discovery = WorkspaceDiscovery::new(config);

// Discover all graphs in current directory
let workspace = discovery.discover_workspace().await?;

println!("🎉 Discovered {} graphs across {} namespaces", 
    workspace.graphs.len(), 
    workspace.namespaces.len()
);

// Print discovered graphs
for graph_meta in &workspace.graphs {
    let graph_name = graph_meta.graph.properties
        .get("name")
        .and_then(|v| v.as_str())
        .unwrap_or("unnamed");
    
    println!("📈 Graph: {} (namespace: {})", 
        graph_name,
        graph_meta.discovered_namespace
    );
}
}

Custom Discovery Configuration

Configure discovery behavior for your needs:

#![allow(unused)]
fn main() {
use std::path::PathBuf;

let workspace_config = WorkspaceConfig {
    root_path: PathBuf::from("./my_workspace"),
    graph_patterns: vec![
        "**/*.graph.json".to_string(),
        "**/*.graph.yaml".to_string(),
        "**/*.graph.yml".to_string(),
    ],
    excluded_paths: vec![
        "**/node_modules/**".to_string(),
        "**/target/**".to_string(),
        "**/.git/**".to_string(),
        "**/test/**".to_string(),
        "**/.*/**".to_string(),
    ],
    max_depth: Some(8),
    namespace_strategy: NamespaceStrategy::FolderStructure,
};

let discovery = WorkspaceDiscovery::new(workspace_config);
let workspace = discovery.discover_workspace().await?;
}

Namespace Strategies

1. Folder Structure (Default)

Use directory structure as hierarchical namespaces:

#![allow(unused)]
fn main() {
let config = WorkspaceConfig {
    namespace_strategy: NamespaceStrategy::FolderStructure,
    ..Default::default()
};

// Example structure:
// data/ingestion/collector.graph.json    → namespace: "data/ingestion"
// data/processing/transformer.graph.json → namespace: "data/processing"  
// ml/training/trainer.graph.json         → namespace: "ml/training"
// ml/inference/predictor.graph.json      → namespace: "ml/inference"
}

2. Flattened Namespace

Put all graphs in the root namespace:

#![allow(unused)]
fn main() {
let config = WorkspaceConfig {
    namespace_strategy: NamespaceStrategy::Flatten,
    ..Default::default()
};

// All graphs get namespace: "" (root)
}

3. File-Based Prefixes

Use filename prefixes as namespaces:

#![allow(unused)]
fn main() {
let config = WorkspaceConfig {
    namespace_strategy: NamespaceStrategy::FileBasedPrefix,
    ..Default::default()
};

// Examples:
// ml_trainer.graph.json     → namespace: "ml"
// data_processor.graph.json → namespace: "data"
// auth_service.graph.json   → namespace: "auth"
}

4. Custom Namespace Functions

Define custom namespacing logic:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::NamespaceStrategy;

// Semantic-based namespacing
let config = WorkspaceConfig {
    namespace_strategy: NamespaceStrategy::custom(
        "semantic_based",
        Some(serde_json::json!({
            "rules": {
                "ml": ["model", "train", "predict"],
                "data": ["ingest", "process", "transform"],
                "api": ["service", "endpoint", "rest"]
            }
        }))
    )?,
    ..Default::default()
};

// Graphs are organized by semantic content
}

Discovery Results

Workspace Collection Structure

The discovery process returns a comprehensive workspace collection:

#![allow(unused)]
fn main() {
#[derive(Debug)]
pub struct WorkspaceCollection {
    pub graphs: Vec<GraphWithMetadata>,
    pub namespaces: HashMap<String, NamespaceInfo>,
    pub dependency_analysis: DependencyAnalysis,
    pub workspace_root: PathBuf,
}

// Access discovered information
let workspace = discovery.discover_workspace().await?;

// Individual graphs with metadata
for graph_meta in &workspace.graphs {
    println!("Graph: {}", graph_meta.file_info.graph_name);
    println!("  Path: {}", graph_meta.file_info.path.display());
    println!("  Namespace: {}", graph_meta.discovered_namespace);
    println!("  Size: {} bytes", graph_meta.file_info.size_bytes);
    println!("  Modified: {:?}", graph_meta.file_info.modified);
}

// Namespace organization
for (namespace, info) in &workspace.namespaces {
    println!("📁 Namespace: {} ({} graphs)", namespace, info.graph_count);
    for graph_name in &info.graphs {
        println!("  📈 {}", graph_name);
    }
}
}

Graph Metadata Enhancement

Discovery automatically enhances graphs with workspace metadata:

#![allow(unused)]
fn main() {
// Original graph properties are preserved and enhanced
let enhanced_graph = &workspace.graphs[0].graph;

// Injected workspace metadata
let workspace_namespace = enhanced_graph.properties
    .get("workspace_namespace")
    .and_then(|v| v.as_str());

let workspace_path = enhanced_graph.properties
    .get("workspace_path")
    .and_then(|v| v.as_str());

let discovery_timestamp = enhanced_graph.properties
    .get("discovery_timestamp")
    .and_then(|v| v.as_str());

println!("Discovered at: {}", discovery_timestamp.unwrap_or("unknown"));
}

Advanced Discovery

Filtered Discovery

Discover specific types of graphs:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::DiscoveryFilter;

let filter = DiscoveryFilter {
    name_patterns: vec!["*processor*".to_string(), "*trainer*".to_string()],
    capability_requirements: vec!["ml_training".to_string(), "data_processing".to_string()],
    min_file_size: Some(1024), // At least 1KB
    max_file_age_days: Some(30), // Modified within 30 days
};

let filtered_workspace = discovery.discover_workspace_with_filter(filter).await?;
}

Incremental Discovery

Update workspace with only changed files:

#![allow(unused)]
fn main() {
// Initial discovery
let workspace = discovery.discover_workspace().await?;

// Later, discover only changes
let changes = discovery.discover_changes_since(&workspace).await?;

println!("📊 Changes since last discovery:");
println!("  Added: {} graphs", changes.added.len());
println!("  Modified: {} graphs", changes.modified.len());
println!("  Removed: {} graphs", changes.removed.len());

// Apply changes to workspace
let updated_workspace = discovery.apply_changes(workspace, changes).await?;
}

Parallel Discovery

Speed up discovery with parallel processing:

#![allow(unused)]
fn main() {
let config = WorkspaceConfig {
    parallel_discovery: true,
    max_concurrent_loads: 8,
    ..Default::default()
};

let discovery = WorkspaceDiscovery::new(config);

// Discovery happens in parallel across multiple threads
let workspace = discovery.discover_workspace().await?;
}

Dependency Analysis

Automatic Dependency Detection

Discovery analyzes dependencies between graphs:

#![allow(unused)]
fn main() {
let workspace = discovery.discover_workspace().await?;
let analysis = &workspace.dependency_analysis;

// View dependency relationships
for dep in &analysis.dependencies {
    println!("🔗 {} depends on {} ({})", 
        dep.dependent_graph,
        dep.dependency_graph,
        if dep.required { "required" } else { "optional" }
    );
}

// Check for circular dependencies
if analysis.has_circular_dependencies() {
    println!("⚠️  Circular dependencies detected!");
    for cycle in analysis.get_circular_dependencies() {
        println!("  🔄 {}", cycle.join(" → "));
    }
}
}

Interface Analysis

Analyze provided and required interfaces:

#![allow(unused)]
fn main() {
// Graphs that provide interfaces
for interface in &analysis.provided_interfaces {
    println!("📤 {} provides interface: {} ({})", 
        interface.graph_name,
        interface.interface_name,
        interface.interface_definition.description.as_ref().unwrap_or(&"No description".to_string())
    );
}

// Graphs that require interfaces
for interface in &analysis.required_interfaces {
    println!("📥 {} requires interface: {} ({})", 
        interface.graph_name,
        interface.interface_name,
        interface.interface_definition.description.as_ref().unwrap_or(&"No description".to_string())
    );
}

// Find interface compatibility
let compatibility_report = analysis.analyze_interface_compatibility();
for incompatibility in compatibility_report.mismatches {
    println!("❌ Interface mismatch: {} → {}", 
        incompatibility.provider,
        incompatibility.consumer
    );
}
}

Error Handling

Discovery Errors

Handle common discovery issues:

#![allow(unused)]
fn main() {
match discovery.discover_workspace().await {
    Ok(workspace) => {
        println!("✅ Discovery successful: {} graphs", workspace.graphs.len());
    },
    Err(e) => {
        match e {
            DiscoveryError::GlobError(pattern_err) => {
                eprintln!("❌ Invalid glob pattern: {}", pattern_err);
            },
            DiscoveryError::LoadError(path, reason) => {
                eprintln!("❌ Failed to load {}: {}", path.display(), reason);
            },
            DiscoveryError::UnsupportedFormat(path) => {
                eprintln!("❌ Unsupported file format: {}", path.display());
            },
            DiscoveryError::IoError(io_err) => {
                eprintln!("❌ IO error during discovery: {}", io_err);
            },
            _ => {
                eprintln!("❌ Discovery failed: {}", e);
            }
        }
    }
}
}

Resilient Discovery

Continue discovery even when some files fail to load:

#![allow(unused)]
fn main() {
let config = WorkspaceConfig {
    continue_on_load_error: true,
    max_load_errors: 5,
    ..Default::default()
};

let discovery = WorkspaceDiscovery::new(config);
let result = discovery.discover_workspace().await?;

// Check for partial failures
if !result.load_errors.is_empty() {
    println!("⚠️  {} files failed to load:", result.load_errors.len());
    for error in &result.load_errors {
        println!("  ❌ {}: {}", error.path.display(), error.reason);
    }
}
}

Performance Optimization

Caching Discovery Results

Cache discovery results to speed up subsequent runs:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::DiscoveryCache;

let cache = DiscoveryCache::new("./workspace_cache");
let discovery = WorkspaceDiscovery::with_cache(config, cache);

// First run: Full discovery and cache
let workspace = discovery.discover_workspace().await?;

// Subsequent runs: Load from cache if nothing changed
let cached_workspace = discovery.discover_workspace().await?; // Much faster!
}

Memory Management

Configure memory usage for large workspaces:

#![allow(unused)]
fn main() {
let config = WorkspaceConfig {
    lazy_load_graphs: true,        // Load graph content on demand
    max_memory_usage_mb: 512,      // Limit memory usage
    graph_content_cache_size: 100, // Cache up to 100 graph contents
    ..Default::default()
};
}

Progress Monitoring

Monitor discovery progress for large workspaces:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::DiscoveryProgress;

let (discovery, mut progress_rx) = WorkspaceDiscovery::with_progress(config);

// Start discovery in background
let workspace_future = discovery.discover_workspace();

// Monitor progress
tokio::spawn(async move {
    while let Some(progress) = progress_rx.recv().await {
        match progress {
            DiscoveryProgress::FilesFound(count) => {
                println!("📁 Found {} graph files", count);
            },
            DiscoveryProgress::LoadingFile(path) => {
                println!("📈 Loading {}", path.display());
            },
            DiscoveryProgress::NamespaceCreated(namespace, graph_count) => {
                println!("📂 Namespace '{}' with {} graphs", namespace, graph_count);
            },
            DiscoveryProgress::Complete(total_graphs) => {
                println!("✅ Discovery complete: {} graphs", total_graphs);
                break;
            }
        }
    }
});

// Wait for completion
let workspace = workspace_future.await?;
}

Integration Examples

Example Workspace Structure

my_workspace/
├── data/
│   ├── ingestion/
│   │   ├── api_collector.graph.json      → namespace: data/ingestion
│   │   └── file_reader.graph.yaml        → namespace: data/ingestion
│   ├── processing/
│   │   ├── cleaner.graph.json            → namespace: data/processing
│   │   ├── transformer.graph.json        → namespace: data/processing
│   │   └── validator.graph.yaml          → namespace: data/processing
│   └── storage/
│       ├── database_writer.graph.json    → namespace: data/storage
│       └── cache_manager.graph.yaml      → namespace: data/storage
├── ml/
│   ├── training/
│   │   ├── model_trainer.graph.json      → namespace: ml/training
│   │   └── feature_engineer.graph.yaml   → namespace: ml/training
│   ├── inference/
│   │   ├── predictor.graph.json          → namespace: ml/inference
│   │   └── batch_scorer.graph.json       → namespace: ml/inference
│   └── evaluation/
│       └── model_evaluator.graph.yaml    → namespace: ml/evaluation
├── monitoring/
│   ├── metrics.graph.json                → namespace: monitoring
│   ├── alerts.graph.yaml                 → namespace: monitoring
│   └── dashboard.graph.json              → namespace: monitoring
└── shared/
    ├── logging.graph.yaml                 → namespace: shared
    ├── auth.graph.json                    → namespace: shared
    └── config.graph.json                  → namespace: shared

Complete Discovery Example

use reflow_network::multi_graph::workspace::*;
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure discovery
    let config = WorkspaceConfig {
        root_path: PathBuf::from("./my_workspace"),
        graph_patterns: vec![
            "**/*.graph.json".to_string(),
            "**/*.graph.yaml".to_string(),
        ],
        excluded_paths: vec![
            "**/test/**".to_string(),
            "**/.*/**".to_string(),
        ],
        max_depth: Some(6),
        namespace_strategy: NamespaceStrategy::FolderStructure,
    };
    
    // Perform discovery
    println!("🔍 Starting workspace discovery...");
    let discovery = WorkspaceDiscovery::new(config);
    let workspace = discovery.discover_workspace().await?;
    
    // Print results
    println!("\n📊 Discovery Results");
    println!("================");
    println!("📁 Workspace root: {}", workspace.workspace_root.display());
    println!("🎯 Total graphs: {}", workspace.graphs.len());
    println!("📂 Namespaces: {}", workspace.namespaces.len());
    
    // Show namespace breakdown
    println!("\n📂 Namespace Organization:");
    for (namespace, info) in &workspace.namespaces {
        println!("  📁 {} ({} graphs)", namespace, info.graphs.len());
        for graph_name in &info.graphs {
            println!("    📈 {}", graph_name);
        }
    }
    
    // Show dependencies
    println!("\n🔗 Dependencies:");
    for dep in &workspace.dependency_analysis.dependencies {
        println!("  {} → {} ({})", 
            dep.dependent_graph,
            dep.dependency_graph,
            if dep.required { "required" } else { "optional" }
        );
    }
    
    println!("\n✅ Workspace discovery completed successfully!");
    
    Ok(())
}

Best Practices

1. Organize by Function

Good: Functional organization

data/
  ingestion/     # Data collection graphs
  processing/    # Data transformation graphs
  storage/       # Data persistence graphs
ml/
  training/      # ML training graphs
  inference/     # ML prediction graphs
  evaluation/    # ML validation graphs

2. Consistent Naming

Good: Descriptive, consistent names

api_data_collector.graph.json
stream_data_processor.graph.json
ml_model_trainer.graph.json
postgres_storage_writer.graph.json

Avoid: Generic names

collector.graph.json
processor.graph.json
trainer.graph.json
writer.graph.json

3. Graph Documentation

Include metadata in graph files for better discovery:

{
  "properties": {
    "name": "data_processor",
    "description": "Processes incoming data streams with validation and transformation",
    "version": "1.2.0",
    "tags": ["data", "processing", "validation"],
    "capabilities": ["stream_processing", "data_validation"],
    "dependencies": ["data_collector"]
  }
}

Next Steps

Graph Composition

Learn how to compose multiple discovered graphs into unified workflows.

Overview

Graph composition allows you to:

  • Combine multiple graphs: Merge discovered graphs into a single executable network
  • Create cross-graph connections: Connect processes across different graph namespaces
  • Resolve dependencies: Handle inter-graph dependencies automatically
  • Share resources: Create shared processes accessible by multiple graphs
  • Build unified workflows: Transform modular graphs into cohesive pipelines

Basic Composition

Using GraphComposer

The primary tool for composing graphs:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{GraphComposer, GraphComposition, GraphSource};

// Create composer
let mut composer = GraphComposer::new();

// Define composition
let composition = GraphComposition {
    sources: vec![
        GraphSource::JsonFile("data/collector.graph.json".to_string()),
        GraphSource::JsonFile("data/processor.graph.json".to_string()),
        GraphSource::JsonFile("ml/trainer.graph.json".to_string()),
    ],
    connections: vec![
        // Cross-graph connections defined here
    ],
    shared_resources: vec![
        // Shared processes defined here
    ],
    properties: HashMap::from([
        ("name".to_string(), serde_json::json!("composed_workflow")),
        ("version".to_string(), serde_json::json!("1.0.0")),
    ]),
    case_sensitive: Some(false),
    metadata: None,
};

// Compose into unified graph
let composed_graph = composer.compose_graphs(composition).await?;
}

From Workspace Discovery

Compose directly from discovered workspaces:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::workspace::{WorkspaceDiscovery, WorkspaceConfig};

// Discover workspace
let discovery = WorkspaceDiscovery::new(WorkspaceConfig::default());
let workspace = discovery.discover_workspace().await?;

// Convert to composition sources
let sources: Vec<GraphSource> = workspace.graphs
    .into_iter()
    .map(|g| GraphSource::GraphExport(g.graph))
    .collect();

let composition = GraphComposition {
    sources,
    connections: vec![], // Will be populated
    shared_resources: vec![],
    properties: HashMap::from([
        ("name".to_string(), serde_json::json!("workspace_composition")),
    ]),
    case_sensitive: Some(false),
    metadata: None,
};

let composed_graph = composer.compose_graphs(composition).await?;
}

Cross-Graph Connections

Manual Connection Definition

Create explicit connections between graphs:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{CompositionConnection, CompositionEndpoint};

let composition = GraphComposition {
    sources: vec![
        GraphSource::JsonFile("data/collector.graph.json".to_string()),
        GraphSource::JsonFile("ml/trainer.graph.json".to_string()),
    ],
    connections: vec![
        CompositionConnection {
            from: CompositionEndpoint {
                process: "data/collector".to_string(),  // Namespaced process name
                port: "Output".to_string(),
                index: None,
            },
            to: CompositionEndpoint {
                process: "ml/feature_engineer".to_string(),
                port: "Input".to_string(),
                index: None,
            },
            metadata: Some(HashMap::from([
                ("description".to_string(), serde_json::json!("Data pipeline to ML training")),
            ])),
        },
    ],
    // ... rest of composition
};
}

Using Connection Builder

Programmatically build connections:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::GraphConnectionBuilder;

// First, discover workspace to get graph information
let workspace = discovery.discover_workspace().await?;

// Collect composition sources before the builder takes ownership of the workspace
let sources: Vec<GraphSource> = workspace.graphs.iter()
    .map(|g| GraphSource::GraphExport(g.graph.clone()))
    .collect();

// Create connection builder
let mut connection_builder = GraphConnectionBuilder::new(workspace);

// Build connections using fluent API
connection_builder
    .connect(
        "collector",       // from graph
        "data_collector", // from process
        "Output",         // from port
        "processor",      // to graph
        "data_cleaner",   // to process
        "Input"           // to port
    )?
    .connect(
        "processor",
        "data_transformer",
        "Output",
        "trainer",
        "feature_engineer",
        "Input"
    )?;

// Get connections for composition
let connections = connection_builder.build();

let composition = GraphComposition {
    sources,
    connections,
    // ... rest of composition
};
}

Interface-Based Connections

Connect using declared interfaces:

#![allow(unused)]
fn main() {
// Connect using interface definitions from graphs
connection_builder
    .connect_interface(
        "processor",           // from graph
        "clean_data_output",   // from interface
        "trainer",             // to graph
        "training_data_input"  // to interface
    )?
    .connect_interface(
        "trainer",
        "model_output",
        "predictor",
        "model_input"
    )?;

let connections = connection_builder.build();
}

Shared Resources

Defining Shared Processes

Create processes accessible by multiple graphs:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::SharedResource;

let composition = GraphComposition {
    sources: vec![
        // Multiple graphs that need logging
        GraphSource::JsonFile("data/processor.graph.json".to_string()),
        GraphSource::JsonFile("ml/trainer.graph.json".to_string()),
        GraphSource::JsonFile("api/service.graph.json".to_string()),
    ],
    shared_resources: vec![
        SharedResource {
            name: "shared_logger".to_string(),
            component: "LoggerActor".to_string(),
            metadata: Some(HashMap::from([
                ("log_level".to_string(), serde_json::json!("info")),
                ("output_file".to_string(), serde_json::json!("workflow.log")),
            ])),
        },
        SharedResource {
            name: "config_manager".to_string(),
            component: "ConfigManagerActor".to_string(),
            metadata: Some(HashMap::from([
                ("config_file".to_string(), serde_json::json!("config.yaml")),
            ])),
        },
    ],
    connections: vec![
        // Connect graphs to shared resources
        CompositionConnection {
            from: CompositionEndpoint {
                process: "data/processor".to_string(),
                port: "LogOutput".to_string(),
                index: None,
            },
            to: CompositionEndpoint {
                process: "shared_logger".to_string(),
                port: "Input".to_string(),
                index: None,
            },
            metadata: None,
        },
        // More connections to shared logger...
    ],
    // ... rest of composition
};
}

Resource Sharing Patterns

Common patterns for shared resources:

#![allow(unused)]
fn main() {
// 1. Centralized Logging
let shared_logging = SharedResource {
    name: "central_logger".to_string(),
    component: "CentralLoggerActor".to_string(),
    metadata: Some(HashMap::from([
        ("aggregation".to_string(), serde_json::json!(true)),
        ("format".to_string(), serde_json::json!("json")),
    ])),
};

// 2. Configuration Management
let config_service = SharedResource {
    name: "config_service".to_string(),
    component: "ConfigServiceActor".to_string(),
    metadata: Some(HashMap::from([
        ("watch_changes".to_string(), serde_json::json!(true)),
    ])),
};

// 3. Metrics Collection
let metrics_collector = SharedResource {
    name: "metrics_collector".to_string(),
    component: "MetricsCollectorActor".to_string(),
    metadata: Some(HashMap::from([
        ("export_interval".to_string(), serde_json::json!(30)),
        ("export_format".to_string(), serde_json::json!("prometheus")),
    ])),
};

// 4. Authentication Service
let auth_service = SharedResource {
    name: "auth_service".to_string(),
    component: "AuthServiceActor".to_string(),
    metadata: Some(HashMap::from([
        ("token_expiry".to_string(), serde_json::json!(3600)),
        ("jwt_secret".to_string(), serde_json::json!("${JWT_SECRET}")),
    ])),
};
}

Namespace Management

Automatic Namespacing

Graphs are automatically namespaced during composition:

#![allow(unused)]
fn main() {
// Original process names in individual graphs:
// collector.graph.json: "data_collector"
// processor.graph.json: "data_processor"  
// trainer.graph.json: "model_trainer"

// After composition with namespace prefixes:
// "data/data_collector"     (from collector graph in data/ folder)
// "data/data_processor"     (from processor graph in data/ folder)
// "ml/model_trainer"        (from trainer graph in ml/ folder)

// Access in composed graph:
let composed_export = composed_graph.export();
assert!(composed_export.processes.contains_key("data/data_collector"));
assert!(composed_export.processes.contains_key("ml/model_trainer"));
}

Custom Namespace Mapping

Control how namespaces are applied:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{CollisionResolution, NamespaceMapping, NamespaceStrategy};

let namespace_mapping = NamespaceMapping {
    graph_mappings: HashMap::from([
        ("collector".to_string(), "ingestion".to_string()),
        ("processor".to_string(), "processing".to_string()),
        ("trainer".to_string(), "machine_learning".to_string()),
    ]),
    strategy: NamespaceStrategy::CustomMapping,
    collision_resolution: CollisionResolution::Prefix,
};

let composer = GraphComposer::with_namespace_mapping(namespace_mapping);
}

Advanced Composition

Conditional Composition

Compose graphs based on conditions:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{Condition, ConditionalComposition, ConditionalSource};

let conditional_composition = ConditionalComposition {
    base_sources: vec![
        GraphSource::JsonFile("core/processor.graph.json".to_string()),
    ],
    conditional_sources: vec![
        ConditionalSource {
            condition: Condition::EnvironmentVariable("ENABLE_ML".to_string()),
            sources: vec![
                GraphSource::JsonFile("ml/trainer.graph.json".to_string()),
                GraphSource::JsonFile("ml/predictor.graph.json".to_string()),
            ],
        },
        ConditionalSource {
            condition: Condition::ConfigValue("features.analytics".to_string()),
            sources: vec![
                GraphSource::JsonFile("analytics/collector.graph.json".to_string()),
            ],
        },
    ],
};

let composed_graph = composer.compose_conditional(conditional_composition).await?;
}

Templated Composition

Use templates for dynamic composition:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::CompositionTemplate;

let template = CompositionTemplate {
    template_file: "templates/data_pipeline.yaml".to_string(),
    parameters: HashMap::from([
        ("input_source".to_string(), serde_json::json!("kafka")),
        ("output_destination".to_string(), serde_json::json!("postgres")),
        ("enable_validation".to_string(), serde_json::json!(true)),
    ]),
};

let composition = composer.render_template(template).await?;
let composed_graph = composer.compose_graphs(composition).await?;
}

Layered Composition

Build compositions in layers:

#![allow(unused)]
fn main() {
// Base layer: Core functionality
let base_composition = GraphComposition {
    sources: vec![
        GraphSource::JsonFile("core/base.graph.json".to_string()),
    ],
    // ... base configuration
};

// Feature layer: Additional features
let feature_layer = GraphComposition {
    sources: vec![
        GraphSource::JsonFile("features/analytics.graph.json".to_string()),
        GraphSource::JsonFile("features/monitoring.graph.json".to_string()),
    ],
    // ... feature connections
};

// Environment layer: Environment-specific configuration
let env_layer = GraphComposition {
    sources: vec![
        GraphSource::JsonFile("env/production.graph.json".to_string()),
    ],
    // ... environment-specific resources
};

// Compose layers
let base_graph = composer.compose_graphs(base_composition).await?;
let feature_graph = composer.compose_layers(base_graph, feature_layer).await?;
let final_graph = composer.compose_layers(feature_graph, env_layer).await?;
}

Validation and Testing

Composition Validation

Validate composed graphs before execution:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::CompositionValidator;

let validator = CompositionValidator::new();

// Validate composition structure
let validation_result = validator.validate_composition(&composition).await?;

if !validation_result.is_valid() {
    println!("❌ Composition validation failed:");
    for error in &validation_result.errors {
        println!("  - {}", error);
    }
    for warning in &validation_result.warnings {
        println!("  ⚠️  {}", warning);
    }
} else {
    println!("✅ Composition validation passed");
}
}

Testing Composed Graphs

Test the composed graph before deployment:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{CompositionTester, TestScenario};
use reflow_network::message::Message;

let tester = CompositionTester::new();

// Create test scenarios
let test_scenarios = vec![
    TestScenario {
        name: "data_flow_test".to_string(),
        inputs: HashMap::from([
            ("data/collector".to_string(), vec![
                Message::String("test_data".to_string())
            ]),
        ]),
        expected_outputs: HashMap::from([
            ("ml/predictor".to_string(), vec![
                Message::Object(serde_json::json!({"prediction": 0.95}))
            ]),
        ]),
        timeout_ms: 5000,
    },
];

// Run tests
let test_results = tester.run_tests(&composed_graph, test_scenarios).await?;

for result in &test_results {
    if result.passed {
        println!("✅ Test '{}' passed", result.scenario_name);
    } else {
        println!("❌ Test '{}' failed: {}", result.scenario_name, result.error.as_ref().unwrap());
    }
}
}

Performance Optimization

Lazy Loading

Only load necessary graphs:

#![allow(unused)]
fn main() {
let config = CompositionConfig {
    lazy_loading: true,
    load_on_demand: true,
    cache_loaded_graphs: true,
    max_concurrent_loads: 4,
};

let composer = GraphComposer::with_config(config);
}

Parallel Composition

Compose large numbers of graphs in parallel:

#![allow(unused)]
fn main() {
let config = CompositionConfig {
    parallel_composition: true,
    max_parallel_graphs: 8,
    composition_timeout_ms: 30000,
};

let composer = GraphComposer::with_config(config);
}

Memory Management

Control memory usage during composition:

#![allow(unused)]
fn main() {
let config = CompositionConfig {
    max_memory_usage_mb: 1024,
    cleanup_intermediate_results: true,
    stream_large_graphs: true,
};

let composer = GraphComposer::with_config(config);
}

Real-World Examples

Data Processing Pipeline

#![allow(unused)]
fn main() {
// Compose a complete data processing pipeline
async fn create_data_pipeline() -> Result<Graph, CompositionError> {
    let mut composer = GraphComposer::new();
    
    let composition = GraphComposition {
        sources: vec![
            GraphSource::JsonFile("ingestion/api_collector.graph.json".to_string()),
            GraphSource::JsonFile("processing/data_cleaner.graph.json".to_string()),
            GraphSource::JsonFile("processing/transformer.graph.json".to_string()),
            GraphSource::JsonFile("storage/database_writer.graph.json".to_string()),
        ],
        connections: vec![
            CompositionConnection {
                from: CompositionEndpoint {
                    process: "ingestion/api_collector".to_string(),
                    port: "RawData".to_string(),
                    index: None,
                },
                to: CompositionEndpoint {
                    process: "processing/data_cleaner".to_string(),
                    port: "Input".to_string(),
                    index: None,
                },
                metadata: None,
            },
            CompositionConnection {
                from: CompositionEndpoint {
                    process: "processing/data_cleaner".to_string(),
                    port: "CleanedData".to_string(),
                    index: None,
                },
                to: CompositionEndpoint {
                    process: "processing/transformer".to_string(),
                    port: "Input".to_string(),
                    index: None,
                },
                metadata: None,
            },
            CompositionConnection {
                from: CompositionEndpoint {
                    process: "processing/transformer".to_string(),
                    port: "TransformedData".to_string(),
                    index: None,
                },
                to: CompositionEndpoint {
                    process: "storage/database_writer".to_string(),
                    port: "Input".to_string(),
                    index: None,
                },
                metadata: None,
            },
        ],
        shared_resources: vec![
            SharedResource {
                name: "logger".to_string(),
                component: "LoggerActor".to_string(),
                metadata: Some(HashMap::from([
                    ("level".to_string(), serde_json::json!("info")),
                ])),
            },
        ],
        properties: HashMap::from([
            ("name".to_string(), serde_json::json!("data_processing_pipeline")),
            ("version".to_string(), serde_json::json!("1.0.0")),
        ]),
        case_sensitive: Some(false),
        metadata: None,
    };
    
    composer.compose_graphs(composition).await
}
}

ML Training Pipeline

#![allow(unused)]
fn main() {
// Compose ML training and inference pipeline
async fn create_ml_pipeline() -> Result<Graph, CompositionError> {
    let workspace = WorkspaceDiscovery::new(WorkspaceConfig {
        root_path: PathBuf::from("./ml_workspace"),
        ..Default::default()
    }).discover_workspace().await?;
    
    // Collect graph sources up front: `GraphConnectionBuilder::new` takes
    // ownership of the workspace (assumes `GraphExport: Clone`).
    let sources: Vec<GraphSource> = workspace.graphs.iter()
        .map(|g| GraphSource::GraphExport(g.graph.clone()))
        .collect();
    
    let mut connection_builder = GraphConnectionBuilder::new(workspace);
    
    // Build ML pipeline connections
    connection_builder
        .connect_interface(
            "data_preprocessor",
            "processed_data_output",
            "feature_engineer",
            "raw_data_input"
        )?
        .connect_interface(
            "feature_engineer",
            "features_output",
            "model_trainer",
            "training_data_input"
        )?
        .connect_interface(
            "model_trainer",
            "trained_model_output",
            "model_evaluator",
            "model_input"
        )?
        .connect_interface(
            "model_trainer",
            "trained_model_output",
            "inference_service",
            "model_input"
        )?;
    
    let connections = connection_builder.build();
    
    let composition = GraphComposition {
        sources,
        connections,
        shared_resources: vec![
            SharedResource {
                name: "model_registry".to_string(),
                component: "ModelRegistryActor".to_string(),
                metadata: Some(HashMap::from([
                    ("storage_backend".to_string(), serde_json::json!("s3")),
                ])),
            },
            SharedResource {
                name: "metrics_tracker".to_string(),
                component: "MetricsTrackerActor".to_string(),
                metadata: Some(HashMap::from([
                    ("export_interval".to_string(), serde_json::json!(60)),
                ])),
            },
        ],
        properties: HashMap::from([
            ("name".to_string(), serde_json::json!("ml_training_pipeline")),
            ("description".to_string(), serde_json::json!("Complete ML training and inference pipeline")),
        ]),
        case_sensitive: Some(false),
        metadata: None,
    };
    
    let mut composer = GraphComposer::new();
    composer.compose_graphs(composition).await
}
}

Best Practices

1. Plan Your Composition

  • Design graph boundaries thoughtfully
  • Keep related functionality together
  • Plan for reusability across compositions

2. Use Clear Naming

#![allow(unused)]
fn main() {
// Good: descriptive endpoint names
let endpoint = CompositionEndpoint {
    process: "data_ingestion/api_collector".to_string(),
    port: "ValidatedApiData".to_string(),
    index: None,
};

// Avoid: generic names
let endpoint = CompositionEndpoint {
    process: "graph1/proc1".to_string(),
    port: "Output".to_string(),
    index: None,
};
}

3. Document Connections

#![allow(unused)]
fn main() {
let connection = CompositionConnection {
    from: CompositionEndpoint { /* ... */ },
    to: CompositionEndpoint { /* ... */ },
    metadata: Some(HashMap::from([
        ("description".to_string(), serde_json::json!("Cleaned data flows to ML feature engineering")),
        ("data_type".to_string(), serde_json::json!("CleanedDataRecord")),
        ("expected_rate".to_string(), serde_json::json!("1000 records/minute")),
    ])),
};
}

4. Validate Early and Often

#![allow(unused)]
fn main() {
// Validate before composing
let validation_result = validator.validate_composition(&composition).await?;
assert!(validation_result.is_valid());

// Test after composing
let test_results = tester.run_tests(&composed_graph, test_scenarios).await?;
assert!(test_results.iter().all(|r| r.passed));
}

5. Use Shared Resources Wisely

  • Share stateless services (logging, config)
  • Be cautious with stateful shared resources
  • Consider resource contention and bottlenecks

Dependency Resolution

Learn how to handle complex dependencies between graphs in multi-graph compositions.

Overview

Dependency resolution in multi-graph systems involves:

  • Automatic dependency detection: Analyze graph dependencies from metadata
  • Topological ordering: Ensure graphs are loaded in dependency order
  • Circular dependency detection: Identify and resolve circular dependencies
  • Version constraints: Handle version compatibility between dependent graphs
  • Interface matching: Verify compatible interfaces between graphs
  • Missing dependency handling: Graceful handling of unresolved dependencies

Basic Dependency Resolution

Dependency Resolver

The core component for handling graph dependencies:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{DependencyResolver, DependencyError};

let resolver = DependencyResolver::new();

// Load graphs with dependencies
let graphs = vec![
    graph_export_a,  // depends on graph_b
    graph_export_b,  // no dependencies
    graph_export_c,  // depends on graph_a and graph_b
];

// Resolve dependency order
let ordered_graphs = resolver.resolve_dependencies(&graphs)?;

// Graphs are now ordered: [graph_b, graph_a, graph_c]
for graph in &ordered_graphs {
    let name = graph.properties.get("name").and_then(|v| v.as_str()).unwrap_or("unnamed");
    println!("Loading graph: {}", name);
}
}
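
Under the hood, producing this order is a topological sort over the declared dependencies. The following self-contained sketch of Kahn's algorithm over plain graph names (illustrative only, not Reflow's actual implementation) shows the idea, including how leftover nodes reveal a cycle:

#![allow(unused)]
fn main() {
use std::collections::{HashMap, VecDeque};

// Kahn's algorithm: order graphs so that dependencies come out first.
// `deps` maps a graph name to the names it depends on.
fn topological_order(deps: &HashMap<&str, Vec<&str>>) -> Result<Vec<String>, String> {
    let mut in_degree: HashMap<&str, usize> = HashMap::new();      // unresolved deps per graph
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new(); // reverse edges

    for (&graph, graph_deps) in deps {
        in_degree.entry(graph).or_insert(0);
        for &dep in graph_deps {
            in_degree.entry(dep).or_insert(0);
            *in_degree.entry(graph).or_insert(0) += 1;
            dependents.entry(dep).or_default().push(graph);
        }
    }

    // Seed the queue with graphs that have no dependencies
    let mut ready: VecDeque<&str> = in_degree.iter()
        .filter(|&(_, &d)| d == 0)
        .map(|(&g, _)| g)
        .collect();

    let mut ordered = Vec::new();
    while let Some(graph) = ready.pop_front() {
        ordered.push(graph.to_string());
        for &dependent in dependents.get(graph).into_iter().flatten() {
            let d = in_degree.get_mut(dependent).unwrap();
            *d -= 1;
            if *d == 0 { ready.push_back(dependent); }
        }
    }

    // Any graph left with a nonzero in-degree sits on a cycle
    if ordered.len() < in_degree.len() {
        return Err("circular dependency detected".to_string());
    }
    Ok(ordered)
}

let deps = HashMap::from([
    ("graph_a", vec!["graph_b"]),
    ("graph_b", vec![]),
    ("graph_c", vec!["graph_a", "graph_b"]),
]);
assert_eq!(topological_order(&deps).unwrap()[0], "graph_b");
}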

Dependency Declaration

Declare dependencies in graph metadata:

{
  "properties": {
    "name": "ml_trainer",
    "version": "1.2.0",
    "dependencies": [
      "data_processor",
      "feature_engineer"
    ]
  },
  "graph_dependencies": [
    {
      "graph_name": "data_processor",
      "namespace": "data/processing",
      "version_constraint": ">=1.0.0",
      "required": true,
      "description": "Requires processed data for training"
    },
    {
      "graph_name": "feature_engineer",
      "namespace": "ml/features",
      "version_constraint": "^2.1.0",
      "required": true,
      "description": "Requires feature engineering pipeline"
    }
  ]
}
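
If you need to read this metadata yourself, the dependency entries deserialize naturally with serde. A minimal sketch whose field names mirror the JSON above (the real types live in reflow_network; this assumes serde with the derive feature plus serde_json):

#![allow(unused)]
fn main() {
use serde::Deserialize;

// Mirrors the `graph_dependencies` entries shown above (sketch only).
#[derive(Debug, Deserialize)]
struct GraphDependencyDecl {
    graph_name: String,
    namespace: Option<String>,
    version_constraint: Option<String>,
    required: bool,
    description: Option<String>,
}

let raw = r#"{
    "graph_name": "data_processor",
    "namespace": "data/processing",
    "version_constraint": ">=1.0.0",
    "required": true,
    "description": "Requires processed data for training"
}"#;

let dep: GraphDependencyDecl = serde_json::from_str(raw).unwrap();
assert_eq!(dep.graph_name, "data_processor");
}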

Advanced Dependency Resolution

Version Constraints

Handle version compatibility:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{VersionConstraint, VersionResolver};

// Define version constraints
let constraints = vec![
    VersionConstraint {
        graph_name: "data_processor".to_string(),
        constraint: ">=1.0.0".to_string(),
        required: true,
    },
    VersionConstraint {
        graph_name: "ml_core".to_string(),
        constraint: "^2.0.0".to_string(),  // Compatible with 2.x.x
        required: true,
    },
    VersionConstraint {
        graph_name: "analytics".to_string(),
        constraint: "~1.5.0".to_string(),  // Compatible with 1.5.x
        required: false,
    },
];

let version_resolver = VersionResolver::new();
let resolution_result = version_resolver.resolve_versions(&graphs, &constraints)?;

if resolution_result.has_conflicts() {
    println!("❌ Version conflicts detected:");
    for conflict in &resolution_result.conflicts {
        println!("  {} requires {} but {} is available", 
            conflict.dependent, conflict.required_version, conflict.available_version);
    }
} else {
    println!("✅ All version constraints satisfied");
}
}
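
The ^ and ~ operators follow standard semver semantics: ^2.0.0 accepts any 2.x.x, while ~1.5.0 accepts only 1.5.x. A standalone illustration using the semver crate (independent of Reflow's VersionResolver):

#![allow(unused)]
fn main() {
use semver::{Version, VersionReq};

// `^2.0.0` accepts any 2.x.x release
let caret = VersionReq::parse("^2.0.0").unwrap();
assert!(caret.matches(&Version::parse("2.9.3").unwrap()));
assert!(!caret.matches(&Version::parse("3.0.0").unwrap()));

// `~1.5.0` accepts only 1.5.x patch releases
let tilde = VersionReq::parse("~1.5.0").unwrap();
assert!(tilde.matches(&Version::parse("1.5.7").unwrap()));
assert!(!tilde.matches(&Version::parse("1.6.0").unwrap()));

// `>=1.0.0` accepts anything from 1.0.0 upward
let gte = VersionReq::parse(">=1.0.0").unwrap();
assert!(gte.matches(&Version::parse("1.0.0").unwrap()));
}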

Interface Compatibility

Verify interface compatibility between dependent graphs:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{InterfaceResolver, InterfaceCompatibility};

let interface_resolver = InterfaceResolver::new();

// Analyze interface compatibility
let compatibility_result = interface_resolver.analyze_compatibility(&ordered_graphs)?;

for incompatibility in &compatibility_result.incompatibilities {
    match incompatibility.severity {
        Severity::Error => {
            println!("❌ Interface incompatibility: {} → {}", 
                incompatibility.provider, incompatibility.consumer);
            println!("   Expected: {}", incompatibility.expected_signature);
            println!("   Actual: {}", incompatibility.actual_signature);
        },
        Severity::Warning => {
            println!("⚠️  Interface warning: {} → {}", 
                incompatibility.provider, incompatibility.consumer);
            println!("   {}", incompatibility.description);
        },
    }
}
}

Conditional Dependencies

Handle dependencies that are only required under certain conditions:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{ConditionalDependency, DependencyCondition, ResolutionContext};

// Define conditional dependencies in graph metadata
let conditional_deps = vec![
    ConditionalDependency {
        graph_name: "ml_trainer".to_string(),
        condition: DependencyCondition::EnvironmentVariable("ENABLE_ML".to_string()),
        version_constraint: Some(">=2.0.0".to_string()),
        required: true,
    },
    ConditionalDependency {
        graph_name: "analytics_dashboard".to_string(),
        condition: DependencyCondition::ConfigValue("features.analytics".to_string()),
        version_constraint: None,
        required: false,
    },
];

// Resolve conditional dependencies
let resolution_context = ResolutionContext {
    environment_variables: HashMap::from([
        ("ENABLE_ML".to_string(), "true".to_string()),
    ]),
    config_values: HashMap::from([
        ("features.analytics".to_string(), serde_json::json!(true)),
    ]),
};

let resolved_deps = resolver.resolve_conditional_dependencies(
    &conditional_deps, 
    &resolution_context
)?;
}

Circular Dependency Detection

Identifying Cycles

Detect and report circular dependencies:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::CircularDependencyDetector;

let cycle_detector = CircularDependencyDetector::new();
let cycle_result = cycle_detector.detect_cycles(&graphs)?;

if cycle_result.has_cycles() {
    println!("❌ Circular dependencies detected:");
    for cycle in &cycle_result.cycles {
        println!("  🔄 {}", cycle.join(" → "));
        
        // Suggest resolution strategies
        let suggestions = cycle_detector.suggest_resolutions(cycle)?;
        for suggestion in suggestions {
            println!("    💡 {}", suggestion);
        }
    }
} else {
    println!("✅ No circular dependencies found");
}
}

Cycle Resolution Strategies

Automatic strategies for resolving circular dependencies:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{CycleResolutionStrategy, DependencyBreaker};

let cycle_breaker = DependencyBreaker::new();

// Strategy 1: Optional dependency promotion
let resolution1 = cycle_breaker.resolve_by_optional_promotion(&cycle)?;

// Strategy 2: Interface extraction
let resolution2 = cycle_breaker.resolve_by_interface_extraction(&cycle)?;

// Strategy 3: Dependency inversion
let resolution3 = cycle_breaker.resolve_by_dependency_inversion(&cycle)?;

// Apply the best resolution strategy
let best_resolution = cycle_breaker.select_best_resolution(vec![
    resolution1, resolution2, resolution3
])?;

println!("🔧 Applying resolution: {}", best_resolution.description);
let resolved_graphs = cycle_breaker.apply_resolution(&graphs, &best_resolution)?;
}

Missing Dependency Handling

Graceful Degradation

Handle missing dependencies gracefully:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{DegradationAction, DegradationStrategy, MissingDependencyHandler};

let mut missing_handler = MissingDependencyHandler::new();

// Configure degradation strategies
let strategies = HashMap::from([
    ("optional_dependencies".to_string(), DegradationStrategy::Skip),
    ("required_dependencies".to_string(), DegradationStrategy::Fail),
    ("soft_dependencies".to_string(), DegradationStrategy::Substitute),
]);

missing_handler.configure_strategies(strategies);

// Handle missing dependencies (missing_deps comes from an earlier detection pass)
let resolution_result = missing_handler.handle_missing_dependencies(
    &graphs,
    &missing_deps
)?;

for action in &resolution_result.actions {
    match action {
        DegradationAction::Skipped(graph_name) => {
            println!("⏭️  Skipped optional dependency: {}", graph_name);
        },
        DegradationAction::Substituted(original, substitute) => {
            println!("🔄 Substituted {} with {}", original, substitute);
        },
        DegradationAction::Failed(graph_name, reason) => {
            println!("❌ Failed to resolve required dependency: {} ({})", graph_name, reason);
        },
    }
}
}

Dependency Substitution

Provide alternatives for missing dependencies:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{CompatibilityLevel, DependencySubstitution};

let substitutions = vec![
    DependencySubstitution {
        original: "premium_ml_engine".to_string(),
        substitute: "basic_ml_engine".to_string(),
        compatibility_level: CompatibilityLevel::Partial,
        feature_differences: vec![
            "Advanced model optimization not available".to_string(),
            "Reduced prediction accuracy".to_string(),
        ],
    },
    DependencySubstitution {
        original: "enterprise_analytics".to_string(),
        substitute: "community_analytics".to_string(),
        compatibility_level: CompatibilityLevel::Full,
        feature_differences: vec![],
    },
];

missing_handler.register_substitutions(substitutions);

// Apply substitutions during resolution
let result = missing_handler.resolve_with_substitutions(&graphs)?;
}

Dependency Analysis and Reporting

Dependency Graph Visualization

Generate dependency graphs for analysis:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::{DependencyAnalyzer, DependencyGraph};

let analyzer = DependencyAnalyzer::new();

// Generate dependency graph
let dep_graph = analyzer.build_dependency_graph(&graphs)?;

// Export to various formats
dep_graph.export_to_dot("dependencies.dot")?;         // Graphviz DOT
dep_graph.export_to_json("dependencies.json")?;       // JSON format
dep_graph.export_to_mermaid("dependencies.md")?;      // Mermaid diagram

// Analyze graph properties
let analysis = analyzer.analyze_dependency_structure(&dep_graph)?;

println!("📊 Dependency Analysis:");
println!("  Graphs: {}", analysis.total_graphs);
println!("  Dependencies: {}", analysis.total_dependencies);
println!("  Max depth: {}", analysis.max_dependency_depth);
println!("  Strongly connected components: {}", analysis.scc_count);
}

Impact Analysis

Analyze the impact of dependency changes:

#![allow(unused)]
fn main() {
use reflow_network::multi_graph::ImpactAnalyzer;

let impact_analyzer = ImpactAnalyzer::new();

// Analyze impact of changing a graph
let impact = impact_analyzer.analyze_change_impact(
    &dep_graph,
    "data_processor",  // Graph being changed
    "2.0.0"            // New version
)?;

println!("🎯 Impact Analysis for data_processor v2.0.0:");
println!("  Directly affected graphs: {}", impact.direct_dependents.len());
println!("  Transitively affected graphs: {}", impact.transitive_dependents.len());
println!("  Breaking changes detected: {}", impact.breaking_changes.len());

for change in &impact.breaking_changes {
    println!("  ⚠️  {}: {}", change.affected_graph, change.description);
}
}

Real-World Examples

Data Processing Pipeline Dependencies

#![allow(unused)]
fn main() {
// Example: Complex data processing pipeline with dependencies
async fn resolve_data_pipeline_dependencies() -> Result<Vec<GraphExport>, DependencyError> {
    let graphs = vec![
        // Base data collector (no dependencies)
        load_graph("data/ingestion/api_collector.graph.json").await?,
        
        // Data processor (depends on collector)
        load_graph("data/processing/cleaner.graph.json").await?,
        
        // Feature engineer (depends on processor)
        load_graph("ml/features/engineer.graph.json").await?,
        
        // ML trainer (depends on feature engineer)
        load_graph("ml/training/trainer.graph.json").await?,
        
        // Model validator (depends on trainer)
        load_graph("ml/validation/validator.graph.json").await?,
        
        // Inference service (depends on trainer, but not validator)
        load_graph("ml/inference/predictor.graph.json").await?,
        
        // Analytics dashboard (depends on multiple components)
        load_graph("analytics/dashboard.graph.json").await?,
    ];
    
    let resolver = DependencyResolver::new();
    let ordered_graphs = resolver.resolve_dependencies(&graphs)?;
    
    // Result order: collector → cleaner → engineer → trainer → [validator, predictor] → dashboard
    
    Ok(ordered_graphs)
}
}

ML Pipeline with Version Constraints

#![allow(unused)]
fn main() {
// Example: ML pipeline with strict version requirements
async fn resolve_ml_pipeline_with_versions() -> Result<Vec<GraphExport>, DependencyError> {
    let graphs = load_ml_graphs().await?;
    
    let version_constraints = vec![
        VersionConstraint {
            graph_name: "tensorflow_runtime".to_string(),
            constraint: ">=2.8.0".to_string(),
            required: true,
        },
        VersionConstraint {
            graph_name: "data_validator".to_string(),
            constraint: "^1.5.0".to_string(),
            required: true,
        },
        VersionConstraint {
            graph_name: "model_optimizer".to_string(),
            constraint: "~2.1.0".to_string(),
            required: false,
        },
    ];
    
    let resolver = DependencyResolver::with_version_constraints(version_constraints);
    
    // Resolve dependencies with version checking
    let resolution_result = resolver.resolve_with_versions(&graphs)?;
    
    if resolution_result.has_conflicts() {
        // Handle version conflicts
        for conflict in &resolution_result.conflicts {
            eprintln!("Version conflict: {} requires {} but {} is available",
                conflict.dependent, conflict.required_version, conflict.available_version);
        }
        return Err(DependencyError::VersionConflict(resolution_result.conflicts));
    }
    
    Ok(resolution_result.ordered_graphs)
}
}

Handling Optional Dependencies

#![allow(unused)]
fn main() {
// Example: System with optional features and dependencies
async fn resolve_with_optional_features() -> Result<Vec<GraphExport>, DependencyError> {
    let base_graphs = load_core_graphs().await?;
    let optional_graphs = load_optional_graphs().await?;
    
    let mut resolver = DependencyResolver::new();
    
    // Configure optional dependency handling
    let config = DependencyResolutionConfig {
        allow_missing_optional: true,
        substitute_missing: true,
        fail_on_missing_required: true,
    };
    
    resolver.configure(config);
    
    // Define substitutions for missing optional dependencies
    let substitutions = vec![
        DependencySubstitution {
            original: "premium_feature_a".to_string(),
            substitute: "basic_feature_a".to_string(),
            compatibility_level: CompatibilityLevel::Partial,
            feature_differences: vec![
                "Advanced analytics not available".to_string(),
            ],
        },
    ];
    
    resolver.register_substitutions(substitutions);
    
    // Resolve with graceful handling of missing optional dependencies
    let all_graphs = [base_graphs, optional_graphs].concat();
    let resolution_result = resolver.resolve_with_graceful_degradation(&all_graphs)?;
    
    // Report what was included/excluded
    for action in &resolution_result.degradation_actions {
        match action {
            DegradationAction::Skipped(graph) => {
                println!("⏭️  Skipped optional feature: {}", graph);
            },
            DegradationAction::Substituted(original, substitute) => {
                println!("🔄 Using {} instead of {}", substitute, original);
            },
            // Required-dependency failures are returned as an Err by the resolver
            DegradationAction::Failed(..) => {},
        }
    }
    
    Ok(resolution_result.ordered_graphs)
}
}

Testing Dependency Resolution

Unit Testing Dependencies

Test dependency resolution logic:

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
    use super::*;
    
    #[tokio::test]
    async fn test_simple_dependency_resolution() {
        let graph_a = create_test_graph("graph_a", vec![]);
        let graph_b = create_test_graph("graph_b", vec!["graph_a"]);
        let graph_c = create_test_graph("graph_c", vec!["graph_b"]);
        
        let graphs = vec![graph_c, graph_a, graph_b]; // Intentionally unordered
        
        let resolver = DependencyResolver::new();
        let ordered = resolver.resolve_dependencies(&graphs).unwrap();
        
        assert_eq!(get_graph_name(&ordered[0]), "graph_a");
        assert_eq!(get_graph_name(&ordered[1]), "graph_b");
        assert_eq!(get_graph_name(&ordered[2]), "graph_c");
    }
    
    #[tokio::test]
    async fn test_circular_dependency_detection() {
        let graph_a = create_test_graph("graph_a", vec!["graph_b"]);
        let graph_b = create_test_graph("graph_b", vec!["graph_c"]);
        let graph_c = create_test_graph("graph_c", vec!["graph_a"]);
        
        let graphs = vec![graph_a, graph_b, graph_c];
        
        let resolver = DependencyResolver::new();
        let result = resolver.resolve_dependencies(&graphs);
        
        assert!(matches!(result, Err(DependencyError::CircularDependency(_))));
    }
    
    #[tokio::test]
    async fn test_version_constraint_validation() {
        let graph_a = create_test_graph_with_version("graph_a", "1.0.0", vec![]);
        let graph_b = create_test_graph_with_version("graph_b", "2.0.0", vec![
            ("graph_a", ">=1.5.0")
        ]);
        
        let graphs = vec![graph_a, graph_b];
        
        let resolver = DependencyResolver::new();
        let result = resolver.resolve_dependencies(&graphs);
        
        assert!(matches!(result, Err(DependencyError::VersionConflict(_))));
    }
}
}
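
These tests lean on small helpers such as create_test_graph and get_graph_name. One possible sketch, assuming GraphExport implements Default and exposes the properties map used in earlier examples:

#![allow(unused)]
fn main() {
// Sketch of the helpers used in the tests above. Assumes `GraphExport: Default`
// and the `properties` map seen in earlier examples; adjust to the real type.
fn create_test_graph(name: &str, dependencies: Vec<&str>) -> GraphExport {
    let mut graph = GraphExport::default();
    graph.properties.insert("name".to_string(), serde_json::json!(name));
    graph.properties.insert("dependencies".to_string(), serde_json::json!(dependencies));
    graph
}

fn get_graph_name(graph: &GraphExport) -> &str {
    graph.properties
        .get("name")
        .and_then(|v| v.as_str())
        .unwrap_or("unnamed")
}
}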

Integration Testing

Test complete dependency resolution workflows:

#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_complete_workspace_dependency_resolution() {
    let workspace_path = "test_workspace";
    setup_test_workspace(workspace_path).await;
    
    let discovery = WorkspaceDiscovery::new(WorkspaceConfig {
        root_path: PathBuf::from(workspace_path),
        ..Default::default()
    });
    
    let workspace = discovery.discover_workspace().await.unwrap();
    
    let resolver = DependencyResolver::new();
    let ordered_graphs = resolver.resolve_dependencies(&workspace.graphs).unwrap();
    
    // Verify correct ordering
    verify_dependency_order(&ordered_graphs);
    
    // Verify all required dependencies are satisfied
    verify_all_dependencies_satisfied(&ordered_graphs);
    
    cleanup_test_workspace(workspace_path).await;
}
}

Best Practices

1. Explicit Dependency Declaration

Always declare dependencies explicitly in graph metadata:

{
  "properties": {
    "name": "my_graph",
    "dependencies": ["required_graph_1", "required_graph_2"]
  },
  "graph_dependencies": [
    {
      "graph_name": "required_graph_1",
      "version_constraint": ">=1.0.0",
      "required": true,
      "description": "Provides core data processing functionality"
    }
  ]
}

2. Use Semantic Versioning

Follow semantic versioning for graph versions:

{
  "properties": {
    "version": "2.1.3"  // MAJOR.MINOR.PATCH
  },
  "graph_dependencies": [
    {
      "graph_name": "data_processor",
      "version_constraint": "^2.0.0"  // Compatible with 2.x.x
    }
  ]
}

3. Design for Loose Coupling

Minimize dependencies between graphs:

#![allow(unused)]
fn main() {
// Good: Minimal, well-defined dependencies
let graph_deps = vec![
    GraphDependency {
        graph_name: "core_processor".to_string(),
        required: true,
        // Only depends on stable core functionality
    },
];

// Avoid: Tight coupling with many dependencies
let graph_deps = vec![
    // Too many dependencies make the graph fragile
    GraphDependency { graph_name: "helper1".to_string(), required: true },
    GraphDependency { graph_name: "helper2".to_string(), required: true },
    GraphDependency { graph_name: "helper3".to_string(), required: true },
    GraphDependency { graph_name: "helper4".to_string(), required: true },
];
}

4. Test Dependency Changes

Always test the impact of dependency changes:

#![allow(unused)]
fn main() {
// Before making changes, analyze impact
let impact = analyzer.analyze_change_impact(&dep_graph, "my_graph", "2.0.0")?;

if impact.has_breaking_changes() {
    println!("⚠️  Breaking changes detected - review carefully");
    for change in &impact.breaking_changes {
        println!("  - {}", change.description);
    }
}
}

5. Document Dependencies

Document why dependencies exist and what they provide:

{
  "graph_dependencies": [
    {
      "graph_name": "ml_core",
      "version_constraint": ">=2.0.0",
      "required": true,
      "description": "Provides tensor operations and model training infrastructure required for neural network training"
    }
  ]
}

Setting Up Distributed Networks

This guide covers how to set up and configure distributed Reflow networks for cross-network actor communication.

Overview

Distributed networks allow multiple Reflow instances to communicate with each other, enabling:

  • Cross-network workflows: Actors in different networks can send messages to each other
  • Resource sharing: Share computational resources across multiple machines
  • Scalability: Scale workflows beyond a single machine's capabilities
  • Fault tolerance: Continue operation even if some network nodes fail

Basic Setup

1. Server Network Configuration

First, set up a server network that will accept connections:

#![allow(unused)]
fn main() {
use reflow_network::distributed_network::{DistributedNetwork, DistributedConfig};
use reflow_network::network::NetworkConfig;

let server_config = DistributedConfig {
    network_id: "main_server".to_string(),
    instance_id: "server_001".to_string(),
    bind_address: "0.0.0.0".to_string(),
    bind_port: 8080,
    discovery_endpoints: vec![],
    auth_token: Some("secure_token".to_string()),
    max_connections: 100,
    heartbeat_interval_ms: 30000,
    local_network_config: NetworkConfig::default(),
};

let mut server_network = DistributedNetwork::new(server_config).await?;
}

2. Client Network Configuration

Set up a client network that connects to the server:

#![allow(unused)]
fn main() {
let client_config = DistributedConfig {
    network_id: "client_worker".to_string(),
    instance_id: "client_001".to_string(),
    bind_address: "127.0.0.1".to_string(),
    bind_port: 8081,
    discovery_endpoints: vec!["http://discovery.example.com:3000".to_string()],
    auth_token: Some("secure_token".to_string()),
    max_connections: 10,
    heartbeat_interval_ms: 30000,
    local_network_config: NetworkConfig::default(),
};

let mut client_network = DistributedNetwork::new(client_config).await?;
}

3. Start Networks

Start both networks and establish connection:

#![allow(unused)]
fn main() {
// Start server first
server_network.start().await?;
println!("✅ Server network started on port 8080");

// Start client
client_network.start().await?;
println!("✅ Client network started on port 8081");

// Connect client to server
client_network.connect_to_network("127.0.0.1:8080").await?;
println!("🔗 Client connected to server");
}

Configuration Options

DistributedConfig Fields

| Field                 | Type           | Description                          | Example                    |
|-----------------------|----------------|--------------------------------------|----------------------------|
| network_id            | String         | Unique identifier for this network   | "data_processing_cluster"  |
| instance_id           | String         | Unique identifier for this instance  | "worker_001"               |
| bind_address          | String         | IP address to bind the server to     | "0.0.0.0" or "127.0.0.1"   |
| bind_port             | u16            | Port number for the server           | 8080                       |
| discovery_endpoints   | Vec<String>    | URLs of discovery services           | ["http://discovery:3000"]  |
| auth_token            | Option<String> | Authentication token                 | Some("secret_token")       |
| max_connections       | usize          | Maximum concurrent connections       | 100                        |
| heartbeat_interval_ms | u64            | Heartbeat interval in milliseconds   | 30000                      |
| local_network_config  | NetworkConfig  | Local network configuration          | NetworkConfig::default()   |

Security Configuration

#![allow(unused)]
fn main() {
let secure_config = DistributedConfig {
    // ... other fields
    auth_token: Some("your_secure_token_here".to_string()),
    max_connections: 50, // Limit connections for security
    heartbeat_interval_ms: 15000, // More frequent heartbeats
};
}

High-Performance Configuration

#![allow(unused)]
fn main() {
let performance_config = DistributedConfig {
    // ... other fields
    max_connections: 1000,
    heartbeat_interval_ms: 60000, // Less frequent heartbeats
    local_network_config: NetworkConfig {
        max_buffer_size: 1024 * 1024, // 1MB buffer
        enable_compression: true,
        // ... other performance settings
    },
};
}

Network Topologies

Star Topology (Hub and Spoke)

#![allow(unused)]
fn main() {
// Central hub
let hub_config = DistributedConfig {
    network_id: "central_hub".to_string(),
    bind_port: 8080,
    max_connections: 100,
    // ... other fields
};

// Multiple spokes connect to hub
let spoke_configs = vec![
    ("data_processor", 8081),
    ("ml_trainer", 8082),
    ("analytics", 8083),
];

for (name, port) in spoke_configs {
    let spoke_config = DistributedConfig {
        network_id: name.to_string(),
        bind_port: port,
        discovery_endpoints: vec!["http://hub:8080".to_string()],
        // ... other fields
    };
}
}

Mesh Topology (Peer-to-Peer)

#![allow(unused)]
fn main() {
// Each node connects to multiple others
let mesh_discovery = vec![
    "http://node1:8080".to_string(),
    "http://node2:8081".to_string(),
    "http://node3:8082".to_string(),
];

let node_config = DistributedConfig {
    network_id: "mesh_node_1".to_string(),
    discovery_endpoints: mesh_discovery,
    // ... other fields
};
}

Discovery Service Integration

Using External Discovery Service

#![allow(unused)]
fn main() {
let config_with_discovery = DistributedConfig {
    network_id: "auto_discovery_client".to_string(),
    discovery_endpoints: vec![
        "http://consul.service.consul:8500".to_string(),
        "http://etcd.cluster.local:2379".to_string(),
    ],
    // ... other fields
};
}

Built-in Discovery

#![allow(unused)]
fn main() {
// Server acts as discovery endpoint for others
let discovery_server_config = DistributedConfig {
    network_id: "discovery_server".to_string(),
    bind_port: 8080,
    discovery_endpoints: vec![], // Empty - this is the discovery server
    // ... other fields
};

// Clients use server for discovery
let discovery_client_config = DistributedConfig {
    network_id: "discovery_client".to_string(),
    discovery_endpoints: vec!["http://discovery_server:8080".to_string()],
    // ... other fields
};
}

Error Handling

Connection Errors

#![allow(unused)]
fn main() {
match client_network.connect_to_network("127.0.0.1:8080").await {
    Ok(_) => println!("✅ Connected successfully"),
    Err(e) => {
        eprintln!("❌ Connection failed: {}", e);
        // Implement retry logic
        tokio::time::sleep(Duration::from_secs(5)).await;
        // Retry connection...
    }
}
}
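
A minimal retry helper with exponential backoff might look like this sketch, which reuses connect_to_network from above:

#![allow(unused)]
fn main() {
use std::time::Duration;

// Sketch: retry the connection with exponential backoff, capped at 30s.
async fn connect_with_retry(
    network: &mut DistributedNetwork,
    address: &str,
    max_attempts: u32,
) -> Result<(), anyhow::Error> {
    let mut delay = Duration::from_secs(1);
    for attempt in 1..=max_attempts {
        match network.connect_to_network(address).await {
            Ok(_) => {
                println!("✅ Connected on attempt {}", attempt);
                return Ok(());
            }
            Err(e) => {
                eprintln!("Attempt {} failed: {}; retrying in {:?}", attempt, e, delay);
                tokio::time::sleep(delay).await;
                delay = (delay * 2).min(Duration::from_secs(30)); // cap the backoff
            }
        }
    }
    Err(anyhow::anyhow!("giving up on {} after {} attempts", address, max_attempts))
}
}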

Network Startup Errors

#![allow(unused)]
fn main() {
match server_network.start().await {
    Ok(_) => println!("✅ Network started"),
    Err(e) => {
        eprintln!("❌ Failed to start network: {}", e);
        match e.to_string().as_str() {
            s if s.contains("Address already in use") => {
                eprintln!("Port {} is already in use", server_config.bind_port);
                // Try different port
            },
            s if s.contains("Permission denied") => {
                eprintln!("Permission denied - try running as administrator or use port > 1024");
            },
            _ => eprintln!("Unknown error: {}", e),
        }
    }
}
}

Monitoring and Diagnostics

Network Status

#![allow(unused)]
fn main() {
// Check network configuration
let config = server_network.get_config();
println!("Network ID: {}", config.network_id);
println!("Listening on: {}:{}", config.bind_address, config.bind_port);

// Monitor connections (if available in future API)
// let connections = server_network.get_active_connections().await?;
// println!("Active connections: {}", connections.len());
}

Health Checks

#![allow(unused)]
fn main() {
// Implement a health check helper
async fn health_check(network: &DistributedNetwork) -> bool {
    // The network is healthy if a ping round-trips successfully
    network.ping_network("target_network").await.is_ok()
}
}
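
Such a check can run on a timer. A sketch using tokio::time::interval around the health_check helper above:

#![allow(unused)]
fn main() {
use std::time::Duration;

// Sketch: run the health check above every 30 seconds.
async fn monitor_health(network: DistributedNetwork) {
    let mut interval = tokio::time::interval(Duration::from_secs(30));
    loop {
        interval.tick().await;
        if !health_check(&network).await {
            eprintln!("⚠️  Health check failed for target_network");
            // Trigger reconnection or alerting here
        }
    }
}
}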

Best Practices

1. Network Naming

#![allow(unused)]
fn main() {
// Good: descriptive, hierarchical names
let network_id = "company.department.service";
let network_id = "prod.ml.training";
let network_id = "dev.data.processing";

// Avoid: generic or conflicting names
let network_id = "network";
let network_id = "server";
let network_id = "client";
}

2. Security

#![allow(unused)]
fn main() {
// Use strong authentication tokens
let auth_token = generate_secure_token(); // use a proper token generator

let secure_config = DistributedConfig {
    auth_token: Some(auth_token),
    // Limit connections based on expected load
    max_connections: calculate_expected_connections(),
    // Pick a heartbeat interval appropriate to the environment
    heartbeat_interval_ms: match environment {
        Environment::Local => 10000,     // Fast for development
        Environment::LAN => 30000,       // Normal for LAN
        Environment::WAN => 60000,       // Slower for WAN
    },
    // ... other fields
};
}

3. Resource Management

#![allow(unused)]
fn main() {
// Proper shutdown sequence
async fn shutdown_gracefully(mut network: DistributedNetwork) -> Result<(), anyhow::Error> {
    // Stop accepting new connections
    network.stop_accepting_connections().await?;
    
    // Wait for existing operations to complete
    tokio::time::sleep(Duration::from_secs(5)).await;
    
    // Shutdown network
    network.shutdown().await?;
    
    Ok(())
}
}

4. Development vs Production

#![allow(unused)]
fn main() {
use std::env;

// Development configuration
let dev_config = DistributedConfig {
    bind_address: "127.0.0.1".to_string(), // Local only
    heartbeat_interval_ms: 10000,          // Fast heartbeats
    max_connections: 10,                   // Low limit
    auth_token: None,                      // No auth for dev
    // ...
};

// Production configuration
let prod_config = DistributedConfig {
    bind_address: "0.0.0.0".to_string(),   // Accept external connections
    heartbeat_interval_ms: 30000,          // Balanced heartbeats
    max_connections: 1000,                 // Higher limit
    auth_token: Some(env::var("AUTH_TOKEN")?), // Required auth
    // ...
};
}

Troubleshooting

Common Issues

  1. Port Already in Use

    # Check what's using the port
    lsof -i :8080
    # Use different port or kill conflicting process
    
  2. Connection Refused

    #![allow(unused)]
    fn main() {
    // Check firewall settings
    // Verify correct IP/port combination
    // Ensure server is started before client connects
    }
  3. Authentication Failures

    #![allow(unused)]
    fn main() {
    // Verify auth_token matches between networks
    // Check token is not None when required
    }
  4. High Memory Usage

    #![allow(unused)]
    fn main() {
    // Reduce max_connections
    // Increase heartbeat_interval_ms
    // Monitor for connection leaks
    }

Remote Actors

Learn how to register, manage, and interact with remote actors across distributed networks.

Overview

Remote actors allow you to use actors from other Reflow networks as if they were local. This enables:

  • Cross-network workflows: Chain actors across multiple networks
  • Resource distribution: Use specialized actors on different machines
  • Load balancing: Distribute work across multiple network instances
  • Service isolation: Keep different services in separate networks

Basic Remote Actor Usage

1. Register Remote Actors

After establishing a network connection, register remote actors:

#![allow(unused)]
fn main() {
use reflow_network::distributed_network::DistributedNetwork;

// Assume networks are already connected
let mut client_network = DistributedNetwork::new(client_config).await?;
client_network.start().await?;
client_network.connect_to_network("server:8080").await?;

// Register a remote actor
client_network.register_remote_actor(
    "data_processor",      // Remote actor ID
    "server_network"       // Remote network ID
).await?;

println!("✅ Remote actor registered as proxy");
}

2. Use Remote Actors in Workflows

Remote actors appear as proxy actors in your local network:

#![allow(unused)]
fn main() {
// Get local network reference
let local_network = client_network.get_local_network();
let mut network = local_network.write();

// Add remote actor to workflow (appears as local)
network.add_node("remote_processor", "data_processor@server_network", None)?;

// Create workflow with local and remote actors
network.add_node("local_generator", "data_generator", None)?;

// Connect local actor to remote actor
network.add_connection(Connector {
    from: ConnectionPoint {
        actor: "local_generator".to_string(),
        port: "Output".to_string(),
        ..Default::default()
    },
    to: ConnectionPoint {
        actor: "remote_processor".to_string(),  // This is the remote actor!
        port: "Input".to_string(),
        ..Default::default()
    },
})?;
}

3. Send Messages to Remote Actors

Send messages directly to remote actors:

#![allow(unused)]
fn main() {
use reflow_network::message::Message;

// Send message to remote actor
client_network.send_to_remote_actor(
    "server_network",      // Target network
    "data_processor",      // Target actor
    "Input",               // Target port
    Message::String("Hello from client!".to_string().into())
).await?;
}

Advanced Registration

Register with Custom Local Names

Avoid naming conflicts by using custom local names:

#![allow(unused)]
fn main() {
// Register with custom alias to avoid conflicts
let local_alias = client_network.register_remote_actor_with_alias(
    "server_data_processor",  // Custom local name
    "data_processor",         // Remote actor name
    "server_network"          // Remote network
).await?;

println!("Remote actor available as: {}", local_alias);
}

Batch Registration

Register multiple remote actors at once:

#![allow(unused)]
fn main() {
let remote_actors = vec![
    ("data_processor", "server_network"),
    ("validator", "server_network"),
    ("transformer", "processing_network"),
];

for (actor_id, network_id) in remote_actors {
    match client_network.register_remote_actor(actor_id, network_id).await {
        Ok(_) => println!("✅ Registered {}/{}", network_id, actor_id),
        Err(e) => eprintln!("❌ Failed to register {}/{}: {}", network_id, actor_id, e),
    }
}
}

Conditional Registration

Register actors based on availability:

#![allow(unused)]
fn main() {
// Check if network is available before registering
if client_network.is_network_connected("server_network").await {
    client_network.register_remote_actor("data_processor", "server_network").await?;
} else {
    eprintln!("Server network not available, using local fallback");
    // Use local actor instead
}
}

Remote Actor Lifecycle

1. Registration Process

#![allow(unused)]
fn main() {
// 1. Network connection must be established first
client_network.connect_to_network("server:8080").await?;

// 2. Register creates a local proxy actor
client_network.register_remote_actor("remote_actor", "server_network").await?;

// 3. Proxy actor is added to local network
let local_network = client_network.get_local_network();
let network = local_network.read();
assert!(network.has_actor("remote_actor@server_network"));
}

2. Message Flow

sequenceDiagram
    participant LA as Local Actor
    participant P as Proxy Actor
    participant B as Network Bridge
    participant RN as Remote Network
    participant RA as Remote Actor

    LA->>P: Send Message
    P->>B: Forward Message
    B->>RN: Network Transport
    RN->>RA: Deliver Message
    RA->>RN: Response (if any)
    RN->>B: Network Transport
    B->>P: Forward Response
    P->>LA: Deliver Response

3. Cleanup and Unregistration

#![allow(unused)]
fn main() {
// Unregister remote actor when no longer needed
client_network.unregister_remote_actor("data_processor@server_network").await?;

// Or unregister all actors from a network
client_network.unregister_all_from_network("server_network").await?;
}

Error Handling

Registration Errors

#![allow(unused)]
fn main() {
match client_network.register_remote_actor("processor", "server").await {
    Ok(_) => println!("✅ Registration successful"),
    Err(e) => {
        match e.to_string().as_str() {
            s if s.contains("Network not connected") => {
                eprintln!("❌ Must connect to network first");
                // Establish connection and retry
            },
            s if s.contains("Actor not found") => {
                eprintln!("❌ Remote actor 'processor' doesn't exist");
                // Check available actors or use different name
            },
            s if s.contains("Name conflict") => {
                eprintln!("❌ Actor name conflicts with local actor");
                // Use different alias or handle conflict
            },
            _ => eprintln!("❌ Registration failed: {}", e),
        }
    }
}
}

Message Delivery Errors

#![allow(unused)]
fn main() {
match client_network.send_to_remote_actor("server", "processor", "Input", message).await {
    Ok(_) => println!("✅ Message sent"),
    Err(e) => {
        match e.to_string().as_str() {
            s if s.contains("Network disconnected") => {
                eprintln!("❌ Connection lost, attempting reconnection...");
                // Implement reconnection logic
            },
            s if s.contains("Actor not available") => {
                eprintln!("❌ Remote actor is not responding");
                // Use fallback actor or retry later
            },
            s if s.contains("Timeout") => {
                eprintln!("❌ Message delivery timed out");
                // Implement retry logic
            },
            _ => eprintln!("❌ Message delivery failed: {}", e),
        }
    }
}
}

Performance Considerations

Connection Pooling

#![allow(unused)]
fn main() {
// Configure connection pooling for better performance
let config = DistributedConfig {
    max_connections: 50,           // Pool multiple connections
    heartbeat_interval_ms: 30000,  // Balance between responsiveness and overhead
    // ... other settings
};
}

Message Batching

#![allow(unused)]
fn main() {
// Send multiple messages efficiently
let messages = vec![
    ("Input", Message::String("msg1".to_string().into())),
    ("Input", Message::String("msg2".to_string().into())),
    ("Input", Message::String("msg3".to_string().into())),
];

// Batch send (if supported by future API)
// client_network.send_batch_to_remote_actor("server", "processor", messages).await?;
}
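
Until a batch API lands, you can approximate batching by looping over the single-message call shown earlier (a sketch):

#![allow(unused)]
fn main() {
// Sketch: send a "batch" of messages via the existing single-message API.
async fn send_all(
    network: &DistributedNetwork,
    messages: Vec<(&str, Message)>,
) -> Result<(), anyhow::Error> {
    for (port, message) in messages {
        network.send_to_remote_actor("server", "processor", port, message).await?;
    }
    Ok(())
}
}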

Caching and Local Fallbacks

#![allow(unused)]
fn main() {
// Implement local caching for remote actor responses
struct CachedRemoteActor {
    network: Arc<DistributedNetwork>,
    cache: Arc<RwLock<HashMap<String, Message>>>,
    fallback_actor: Option<String>,
}

impl CachedRemoteActor {
    async fn send_with_fallback(&self, message: Message) -> Result<Message, anyhow::Error> {
        // Try the remote actor first (assumes this call returns the actor's response)
        match self.network.send_to_remote_actor("server", "processor", "Input", message.clone()).await {
            Ok(response) => Ok(response),
            Err(_) => {
                // Fall back to local actor if available
                if let Some(fallback) = &self.fallback_actor {
                    println!("⚠️  Using local fallback actor: {}", fallback);
                    // Send to local actor instead
                    self.send_to_local_actor(fallback, message).await
                } else {
                    Err(anyhow::anyhow!("Remote actor unavailable and no fallback configured"))
                }
            }
        }
    }
}
}

Remote Actor Discovery

Automatic Discovery

#![allow(unused)]
fn main() {
// Discover all available actors on a remote network
let available_actors = client_network.discover_remote_actors("server_network").await?;

for actor_info in available_actors {
    println!("Available actor: {} (capabilities: {:?})", 
        actor_info.name, 
        actor_info.capabilities
    );
    
    // Register if it matches our needs
    if actor_info.capabilities.contains(&"data_processing".to_string()) {
        client_network.register_remote_actor(&actor_info.name, "server_network").await?;
    }
}
}

Selective Registration by Capability

#![allow(unused)]
fn main() {
// Register only actors with specific capabilities
let required_capabilities = vec!["ml_training", "gpu_compute"];

let actors = client_network.discover_remote_actors("ml_cluster").await?;
for actor in actors {
    let has_required_caps = required_capabilities.iter()
        .any(|cap| actor.capabilities.contains(&cap.to_string()));
    
    if has_required_caps {
        client_network.register_remote_actor(&actor.name, "ml_cluster").await?;
        println!("✅ Registered ML actor: {}", actor.name);
    }
}
}

Monitoring Remote Actors

Health Checking

#![allow(unused)]
fn main() {
// Check if remote actor is responsive
async fn check_remote_actor_health(
    network: &DistributedNetwork,
    network_id: &str,
    actor_id: &str
) -> bool {
    match network.ping_remote_actor(network_id, actor_id).await {
        Ok(_) => {
            println!("✅ Remote actor {}/{} is healthy", network_id, actor_id);
            true
        },
        Err(e) => {
            eprintln!("❌ Remote actor {}/{} health check failed: {}", network_id, actor_id, e);
            false
        }
    }
}
}

Performance Monitoring

#![allow(unused)]
fn main() {
// Monitor remote actor performance
struct RemoteActorMetrics {
    actor_id: String,
    network_id: String,
    total_messages: u64,
    successful_messages: u64,
    average_latency_ms: f64,
    last_response_time: Option<chrono::DateTime<chrono::Utc>>,
}

impl RemoteActorMetrics {
    async fn record_message_sent(&mut self) {
        self.total_messages += 1;
        // Record timing for latency calculation
    }
    
    async fn record_response_received(&mut self, latency: Duration) {
        self.successful_messages += 1;
        self.last_response_time = Some(chrono::Utc::now());
        
        // Exponentially smoothed average (alpha = 0.5), not a true mean
        let latency_ms = latency.as_millis() as f64;
        self.average_latency_ms = (self.average_latency_ms + latency_ms) / 2.0;
    }
    
    fn success_rate(&self) -> f64 {
        if self.total_messages == 0 {
            0.0
        } else {
            (self.successful_messages as f64) / (self.total_messages as f64)
        }
    }
}
}
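
One possible way to feed these metrics is a hypothetical wrapper that times each call to the send API shown earlier:

#![allow(unused)]
fn main() {
use std::time::Instant;

// Hypothetical wrapper: time a remote send and record it in the metrics.
async fn timed_send(
    network: &DistributedNetwork,
    metrics: &mut RemoteActorMetrics,
    message: Message,
) -> Result<(), anyhow::Error> {
    metrics.record_message_sent().await;
    let start = Instant::now();
    network
        .send_to_remote_actor(&metrics.network_id, &metrics.actor_id, "Input", message)
        .await?;
    metrics.record_response_received(start.elapsed()).await;
    Ok(())
}
}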

Best Practices

1. Network Design

#![allow(unused)]
fn main() {
// Good: organize actors by function and location
let actor_ref = "auth_service@auth_cluster";
let actor_ref = "data_processor@processing_cluster";
let actor_ref = "ml_trainer@gpu_cluster";

// Avoid: generic names that don't indicate purpose
let actor_ref = "actor1@server";
let actor_ref = "service@network";
}

2. Error Resilience

#![allow(unused)]
fn main() {
// Implement circuit breaker pattern for remote actors
struct CircuitBreaker {
    failure_count: u32,
    failure_threshold: u32,
    timeout_duration: Duration,
    last_failure_time: Option<Instant>,
    state: CircuitState,
}

enum CircuitState {
    Closed,   // Normal operation
    Open,     // Failing, don't try
    HalfOpen, // Testing if service recovered
}

impl CircuitBreaker {
    async fn call_remote_actor(&mut self, network: &DistributedNetwork) -> Result<Message, anyhow::Error> {
        match self.state {
            CircuitState::Open => {
                if self.should_attempt_reset() {
                    self.state = CircuitState::HalfOpen;
                } else {
                    return Err(anyhow::anyhow!("Circuit breaker is open"));
                }
            },
            _ => {}
        }
        
        match network.send_to_remote_actor("server", "actor", "Input", Message::String("test".to_string().into())).await {
            Ok(response) => {
                self.on_success();
                Ok(response)
            },
            Err(e) => {
                self.on_failure();
                Err(e)
            }
        }
    }
}
}
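
The `should_attempt_reset`, `on_success`, and `on_failure` helpers are not shown above; one plausible implementation of the state transitions, as a sketch:

#![allow(unused)]
fn main() {
impl CircuitBreaker {
    fn should_attempt_reset(&self) -> bool {
        // Re-test the remote actor once the timeout has elapsed
        self.last_failure_time
            .map(|t| t.elapsed() >= self.timeout_duration)
            .unwrap_or(true)
    }
    
    fn on_success(&mut self) {
        // Any success closes the circuit and clears the failure count
        self.failure_count = 0;
        self.state = CircuitState::Closed;
    }
    
    fn on_failure(&mut self) {
        self.failure_count += 1;
        self.last_failure_time = Some(Instant::now());
        if self.failure_count >= self.failure_threshold {
            self.state = CircuitState::Open;
        }
    }
}
}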

3. Resource Management

#![allow(unused)]
fn main() {
// Properly clean up remote actor registrations
async fn cleanup_remote_actors(network: &mut DistributedNetwork) -> Result<(), anyhow::Error> {
    // Get list of registered remote actors
    let remote_actors = network.list_remote_actors().await?;
    
    // Unregister all remote actors
    for (actor_id, network_id) in remote_actors {
        network.unregister_remote_actor(&format!("{}@{}", actor_id, network_id)).await?;
        println!("🧹 Unregistered remote actor: {}@{}", actor_id, network_id);
    }
    
    Ok(())
}
}

Troubleshooting

Common Issues

  1. Remote Actor Not Found

    #![allow(unused)]
    fn main() {
    // Verify actor exists on remote network
    let actors = client_network.list_actors_on_network("server_network").await?;
    println!("Available actors: {:?}", actors);
    }
  2. Registration Fails

    #![allow(unused)]
    fn main() {
    // Check network connection status
    if !client_network.is_connected_to("server_network").await {
        client_network.connect_to_network("server:8080").await?;
    }
    }
  3. Messages Not Delivered

    #![allow(unused)]
    fn main() {
    // Check message serialization
    let message = Message::String("test".to_string().into());
    match serde_json::to_string(&message) {
        Ok(_) => println!("✅ Message is serializable"),
        Err(e) => eprintln!("❌ Message serialization failed: {}", e),
    }
    }
  4. High Latency

    #![allow(unused)]
    fn main() {
    // Monitor network latency
    let start = Instant::now();
    client_network.ping_network("server_network").await?;
    let latency = start.elapsed();
    println!("Network latency: {:?}", latency);
    }

Next Steps

Discovery & Registration

Learn how to use network discovery services and automatic actor registration in distributed Reflow networks.

Overview

Discovery and registration services enable:

  • Automatic network discovery: Find available networks without manual configuration
  • Service registration: Advertise your network's capabilities to others
  • Dynamic actor discovery: Automatically find and register remote actors
  • Health monitoring: Track network and actor availability
  • Load balancing: Distribute connections across available instances

Discovery Service Types

1. Built-in Discovery

Use Reflow's built-in discovery where one network acts as a registry:

#![allow(unused)]
fn main() {
use reflow_network::distributed_network::{DistributedNetwork, DistributedConfig};

// Discovery server (registry)
let registry_config = DistributedConfig {
    network_id: "discovery_registry".to_string(),
    instance_id: "registry_001".to_string(),
    bind_address: "0.0.0.0".to_string(),
    bind_port: 8090,
    discovery_endpoints: vec![], // Empty - this IS the discovery server
    // ... other config
};

let mut registry_network = DistributedNetwork::new(registry_config).await?;
registry_network.start().await?;
println!("🔍 Discovery registry started on port 8090");
}

2. Client Networks Using Registry

#![allow(unused)]
fn main() {
// Client networks connect to registry for discovery
let client_config = DistributedConfig {
    network_id: "worker_network".to_string(),
    instance_id: "worker_001".to_string(),
    bind_address: "127.0.0.1".to_string(),
    bind_port: 8091,
    discovery_endpoints: vec!["http://registry:8090".to_string()],
    // ... other config
};

let mut client_network = DistributedNetwork::new(client_config).await?;
client_network.start().await?;
}

3. External Discovery Services

Integrate with external service discovery systems:

#![allow(unused)]
fn main() {
// Using Consul
let consul_config = DistributedConfig {
    network_id: "consul_client".to_string(),
    discovery_endpoints: vec![
        "http://consul.service.consul:8500/v1/agent/services".to_string()
    ],
    // ... other config
};

// Using etcd
let etcd_config = DistributedConfig {
    network_id: "etcd_client".to_string(),
    discovery_endpoints: vec![
        "http://etcd.cluster.local:2379/v2/keys/reflow/services".to_string()
    ],
    // ... other config
};

// Using Kubernetes DNS
let k8s_config = DistributedConfig {
    network_id: "k8s_service".to_string(),
    discovery_endpoints: vec![
        "http://reflow-discovery.default.svc.cluster.local:8080".to_string()
    ],
    // ... other config
};
}

Network Registration

Basic Registration

Networks automatically register themselves when started:

#![allow(unused)]
fn main() {
let network_config = DistributedConfig {
    network_id: "ml_processing_cluster".to_string(),
    instance_id: "gpu_worker_001".to_string(),
    bind_address: "0.0.0.0".to_string(),
    bind_port: 8080,
    discovery_endpoints: vec!["http://discovery:8090".to_string()],
    // ... other config
};

let mut network = DistributedNetwork::new(network_config).await?;

// Registration happens automatically on start
network.start().await?;
// Network is now registered with discovery service
}

Registration with Metadata

Include additional metadata during registration:

#![allow(unused)]
fn main() {
// Register with capabilities and metadata
let registration_metadata = serde_json::json!({
    "capabilities": ["ml_training", "gpu_compute", "data_processing"],
    "resources": {
        "cpu_cores": 32,
        "gpu_count": 4,
        "memory_gb": 128
    },
    "version": "1.2.0",
    "tags": ["ml", "gpu", "production"],
    "health_check_url": "http://worker:8080/health"
});

// This metadata is included in registration (implementation detail)
// The discovery service can use this for intelligent routing
}

Manual Registration Control

Control registration timing and behavior:

#![allow(unused)]
fn main() {
// Start the network with auto-registration disabled
// (see enable_auto_registration in DiscoveryConfig below)
let mut network = DistributedNetwork::new(config).await?;
network.start().await?;

// Perform initialization
setup_local_actors(&mut network).await?;
run_health_checks(&network).await?;

// Register manually when ready
network.register_with_discovery().await?;
println!("✅ Network registered and ready for connections");
}

Network Discovery

Discover Available Networks

Find networks that are currently available:

#![allow(unused)]
fn main() {
// Discover all available networks
let discovered_networks = client_network.discover_networks().await?;

for network_info in discovered_networks {
    println!("🌐 Found network: {} ({})", 
        network_info.network_id, 
        network_info.endpoint
    );
    println!("   Capabilities: {:?}", network_info.capabilities);
    println!("   Last seen: {}", network_info.last_seen);
}
}
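
The exact shape of NetworkInfo depends on the discovery implementation; the examples in this guide assume roughly the following fields and helpers (a sketch, not the definitive type):

#![allow(unused)]
fn main() {
// Assumed shape of NetworkInfo as used throughout this guide
#[derive(Debug, Clone)]
pub struct NetworkInfo {
    pub network_id: String,
    pub endpoint: String,
    pub capabilities: Vec<String>,
    pub last_seen: chrono::DateTime<chrono::Utc>,
    pub cpu_usage: Option<f64>, // Percent load, if the network reports it
}

impl NetworkInfo {
    pub fn is_healthy(&self) -> bool {
        // Simple staleness check; real implementations may probe the endpoint
        chrono::Utc::now() - self.last_seen < chrono::Duration::seconds(60)
    }
}
}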

Filtered Discovery

Find networks with specific capabilities:

#![allow(unused)]
fn main() {
// Discover networks with ML capabilities
let ml_networks = client_network.discover_networks_with_capability("ml_training").await?;

for network in ml_networks {
    println!("🧠 ML Network: {} at {}", network.network_id, network.endpoint);
    
    // Connect to ML networks
    client_network.connect_to_network(&network.endpoint).await?;
}
}

Discovery by Tags

Find networks using tag-based filtering:

#![allow(unused)]
fn main() {
// Discover production GPU networks
let gpu_networks = client_network.discover_networks_by_tags(vec!["gpu", "production"]).await?;

for network in gpu_networks {
    if network.is_healthy() {
        client_network.connect_to_network(&network.endpoint).await?;
        println!("✅ Connected to GPU network: {}", network.network_id);
    }
}
}

Automatic Actor Discovery

Discover Actors on Connected Networks

Once connected to a network, discover its available actors:

#![allow(unused)]
fn main() {
// Connect to a network first
client_network.connect_to_network("ml_cluster:8080").await?;

// Discover actors on that network
let actors = client_network.discover_actors_on_network("ml_cluster").await?;

for actor in actors {
    println!("🎭 Actor: {} ({})", actor.name, actor.component_type);
    println!("   Capabilities: {:?}", actor.capabilities);
    println!("   Ports: in={:?}, out={:?}", actor.inports, actor.outports);
}
}

Automatic Registration

Register all discovered actors automatically:

#![allow(unused)]
fn main() {
// Discover and register all compatible actors
let discovered_actors = client_network.discover_actors_on_network("data_cluster").await?;

for actor in discovered_actors {
    // Only register actors we can use
    if actor.capabilities.contains(&"data_processing".to_string()) {
        match client_network.register_remote_actor(&actor.name, "data_cluster").await {
            Ok(_) => println!("✅ Registered actor: {}", actor.name),
            Err(e) => eprintln!("❌ Failed to register {}: {}", actor.name, e),
        }
    }
}
}

Selective Auto-Registration

Register actors based on complex criteria:

#![allow(unused)]
fn main() {
async fn smart_actor_registration(
    network: &mut DistributedNetwork,
    remote_network_id: &str
) -> Result<Vec<String>, anyhow::Error> {
    let actors = network.discover_actors_on_network(remote_network_id).await?;
    let mut registered_actors = Vec::new();
    
    for actor in actors {
        // Complex registration logic
        let should_register = match actor.component_type.as_str() {
            "DataProcessorActor" => {
                // Only register if we don't have local data processors
                !network.has_local_actor_of_type("DataProcessorActor").await
            },
            "MLTrainerActor" => {
                // Only register GPU trainers
                actor.capabilities.contains(&"gpu_compute".to_string())
            },
            "DatabaseActor" => {
                // Register if it's a different database type than our local ones
                let local_dbs = network.get_local_database_types().await;
                !local_dbs.contains(&actor.get_database_type())
            },
            _ => false, // Don't auto-register unknown types
        };
        
        if should_register {
            let alias = network.register_remote_actor(&actor.name, remote_network_id).await?;
            // Log before pushing; push moves `alias`
            println!("🤖 Smart-registered: {} as {}", actor.name, alias);
            registered_actors.push(alias);
        }
    }
    
    Ok(registered_actors)
}
}

Health Monitoring

Network Health Checks

Monitor the health of discovered networks:

#![allow(unused)]
fn main() {
// Periodic health monitoring
async fn monitor_network_health(network: &DistributedNetwork) -> Result<(), anyhow::Error> {
    let mut interval = tokio::time::interval(Duration::from_secs(30));
    
    loop {
        interval.tick().await;
        
        let connected_networks = network.get_connected_networks().await;
        for network_id in connected_networks {
            match network.ping_network(&network_id).await {
                Ok(latency) => {
                    println!("✅ Network {} healthy ({}ms)", network_id, latency.as_millis());
                },
                Err(e) => {
                    eprintln!("❌ Network {} unhealthy: {}", network_id, e);
                    
                    // Attempt reconnection
                    if let Ok(network_info) = network.get_network_info(&network_id).await {
                        match network.reconnect_to_network(&network_info.endpoint).await {
                            Ok(_) => println!("🔄 Reconnected to {}", network_id),
                            Err(e) => eprintln!("🔌 Reconnection failed: {}", e),
                        }
                    }
                }
            }
        }
    }
}
}

Actor Health Monitoring

Monitor remote actor availability:

#![allow(unused)]
fn main() {
async fn monitor_remote_actors(network: &DistributedNetwork) -> Result<(), anyhow::Error> {
    let remote_actors = network.list_registered_remote_actors().await;
    
    for (actor_alias, actor_ref) in remote_actors {
        match network.ping_remote_actor(&actor_ref.network_id, &actor_ref.actor_id).await {
            Ok(_) => {
                println!("✅ Remote actor {} is responsive", actor_alias);
            },
            Err(e) => {
                eprintln!("❌ Remote actor {} is unresponsive: {}", actor_alias, e);
                
                // Try to re-register the actor
                match network.refresh_remote_actor(&actor_alias).await {
                    Ok(_) => println!("🔄 Refreshed remote actor: {}", actor_alias),
                    Err(e) => {
                        eprintln!("🚫 Failed to refresh {}: {}", actor_alias, e);
                        // Consider removing the actor or marking it as unavailable
                    }
                }
            }
        }
    }
    
    Ok(())
}
}

Load Balancing and Failover

Discover Multiple Instances

Find multiple instances of the same service:

#![allow(unused)]
fn main() {
// Find all instances of a specific service type
let data_processors = client_network.discover_networks_with_capability("data_processing").await?;

println!("Found {} data processing networks:", data_processors.len());
for (i, network) in data_processors.iter().enumerate() {
    println!("  {}. {} at {} (load: {}%)", 
        i + 1, 
        network.network_id, 
        network.endpoint,
        network.cpu_usage.unwrap_or(0.0)
    );
}
}

Load-Balanced Registration

Register actors from multiple networks for load balancing:

#![allow(unused)]
fn main() {
// Register the same actor type from multiple networks
let processing_networks = vec!["cluster_1", "cluster_2", "cluster_3"];

for (i, network_id) in processing_networks.iter().enumerate() {
    if client_network.is_network_available(network_id).await {
        let alias = format!("data_processor_{}", i + 1);
        client_network.register_remote_actor_with_alias(
            &alias,
            "data_processor", 
            network_id
        ).await?;
        println!("⚖️  Registered load-balanced actor: {}", alias);
    }
}
}
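
Once several instances are registered, traffic can be rotated across them. A minimal round-robin dispatcher, sketched under the assumption that each instance exposes a "data_processor" actor reachable via send_to_remote_actor and that the target list is non-empty:

#![allow(unused)]
fn main() {
struct RoundRobin {
    networks: Vec<String>, // e.g. ["cluster_1", "cluster_2", "cluster_3"]
    next: usize,
}

impl RoundRobin {
    async fn dispatch(
        &mut self,
        network: &DistributedNetwork,
        message: Message,
    ) -> Result<Message, anyhow::Error> {
        // Pick the next instance and advance the cursor
        let target = self.networks[self.next % self.networks.len()].clone();
        self.next = self.next.wrapping_add(1);
        
        network
            .send_to_remote_actor(&target, "data_processor", "Input", message)
            .await
    }
}
}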

Failover Registration

Implement failover with primary and backup actors:

#![allow(unused)]
fn main() {
struct FailoverActorRegistry {
    network: Arc<DistributedNetwork>,
    primary_actors: HashMap<String, String>,    // service -> primary actor alias
    backup_actors: HashMap<String, Vec<String>>, // service -> backup actor aliases
}

impl FailoverActorRegistry {
    async fn register_with_failover(&mut self, 
        service_name: &str, 
        actor_type: &str
    ) -> Result<(), anyhow::Error> {
        let networks = self.network.discover_networks_with_capability(actor_type).await?;
        
        if networks.is_empty() {
            return Err(anyhow::anyhow!("No networks found with capability: {}", actor_type));
        }
        
        // Primary: Use the network with lowest load
        let primary_network = networks.iter()
            .min_by(|a, b| a.cpu_usage.partial_cmp(&b.cpu_usage).unwrap())
            .unwrap();
        
        let primary_alias = format!("{}_primary", service_name);
        self.network.register_remote_actor_with_alias(
            &primary_alias,
            actor_type,
            &primary_network.network_id
        ).await?;
        
        self.primary_actors.insert(service_name.to_string(), primary_alias);
        
        // Backups: Register from the remaining networks (excluding the primary)
        let primary_id = primary_network.network_id.clone();
        let mut backup_aliases = Vec::new();
        for (i, network) in networks.iter().filter(|n| n.network_id != primary_id).enumerate() {
            let backup_alias = format!("{}_backup_{}", service_name, i + 1);
            self.network.register_remote_actor_with_alias(
                &backup_alias,
                actor_type,
                &network.network_id
            ).await?;
            backup_aliases.push(backup_alias);
        }
        
        println!("🛡️  Registered failover service: {} with {} backups", 
            service_name, backup_aliases.len());
        
        self.backup_actors.insert(service_name.to_string(), backup_aliases);
        
        Ok(())
    }
    
    async fn handle_primary_failure(&self, service_name: &str) -> Result<String, anyhow::Error> {
        if let Some(backups) = self.backup_actors.get(service_name) {
            if let Some(first_backup) = backups.first() {
                // Promote first backup to primary
                println!("🔄 Promoting backup to primary for service: {}", service_name);
                return Ok(first_backup.clone());
            }
        }
        Err(anyhow::anyhow!("No backup available for service: {}", service_name))
    }
}
}

Configuration Management

Discovery Service Configuration

Configure discovery service behavior:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
pub struct DiscoveryConfig {
    pub refresh_interval_ms: u64,
    pub health_check_interval_ms: u64,
    pub max_discovery_retries: u32,
    pub discovery_timeout_ms: u64,
    pub enable_auto_registration: bool,
    pub registration_metadata: serde_json::Value,
}

impl Default for DiscoveryConfig {
    fn default() -> Self {
        DiscoveryConfig {
            refresh_interval_ms: 30000,      // 30 seconds
            health_check_interval_ms: 15000, // 15 seconds
            max_discovery_retries: 3,
            discovery_timeout_ms: 5000,      // 5 seconds
            enable_auto_registration: true,
            registration_metadata: serde_json::json!({
                "version": "1.0.0",
                "capabilities": []
            }),
        }
    }
}
}

Environment-Specific Discovery

Configure discovery for different environments:

#![allow(unused)]
fn main() {
fn create_discovery_config(environment: &str) -> DiscoveryConfig {
    match environment {
        "development" => DiscoveryConfig {
            refresh_interval_ms: 10000,  // Faster refresh for dev
            health_check_interval_ms: 5000,
            discovery_timeout_ms: 2000,  // Shorter timeout
            enable_auto_registration: true,
            registration_metadata: serde_json::json!({
                "environment": "development",
                "auto_discovery": true
            }),
            ..Default::default()
        },
        "production" => DiscoveryConfig {
            refresh_interval_ms: 60000,  // Slower refresh for prod
            health_check_interval_ms: 30000,
            discovery_timeout_ms: 10000, // Longer timeout
            enable_auto_registration: false, // Manual control
            registration_metadata: serde_json::json!({
                "environment": "production",
                "manual_registration": true
            }),
            ..Default::default()
        },
        _ => DiscoveryConfig::default(),
    }
}
}

Error Handling

Discovery Errors

Handle common discovery and registration errors:

#![allow(unused)]
fn main() {
async fn robust_discovery(network: &DistributedNetwork) -> Result<Vec<NetworkInfo>, anyhow::Error> {
    let mut retries = 3;
    let mut last_error = None;
    
    while retries > 0 {
        match network.discover_networks().await {
            Ok(networks) => {
                if networks.is_empty() {
                    println!("⚠️  No networks discovered, retrying...");
                } else {
                    return Ok(networks);
                }
            },
            Err(e) => {
                eprintln!("❌ Discovery attempt failed: {}", e);
                last_error = Some(e);
                
                // Wait before retry
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
        }
        
        retries -= 1;
    }
    
    Err(last_error.unwrap_or_else(|| anyhow::anyhow!("Discovery failed after retries")))
}
}

Registration Conflicts

Handle registration conflicts gracefully:

#![allow(unused)]
fn main() {
async fn safe_actor_registration(
    network: &mut DistributedNetwork,
    actor_name: &str,
    remote_network: &str
) -> Result<String, anyhow::Error> {
    match network.register_remote_actor(actor_name, remote_network).await {
        Ok(alias) => Ok(alias),
        Err(e) if e.to_string().contains("name conflict") => {
            // Try with numbered suffix
            for i in 1..=10 {
                let attempt_name = format!("{}_{}", actor_name, i);
                match network.register_remote_actor_with_alias(
                    &attempt_name, 
                    actor_name, 
                    remote_network
                ).await {
                    Ok(alias) => {
                        println!("✅ Registered with conflict resolution: {}", alias);
                        return Ok(alias);
                    },
                    Err(_) => continue,
                }
            }
            Err(anyhow::anyhow!("Could not resolve naming conflict for: {}", actor_name))
        },
        Err(e) => Err(e),
    }
}
}

Best Practices

1. Discovery Strategy

#![allow(unused)]
fn main() {
// Good: Use hierarchical discovery with fallbacks
let discovery_endpoints = vec![
    "http://local-discovery:8090",      // Local first
    "http://regional-discovery:8090",   // Regional second
    "http://global-discovery:8090",     // Global fallback
];

// Configure discovery timeouts appropriately
let config = DiscoveryConfig {
    discovery_timeout_ms: 5000,    // 5 seconds max
    max_discovery_retries: 3,      // Try 3 times
    refresh_interval_ms: 30000,    // Refresh every 30s
    ..Default::default()
};
}

2. Health Monitoring

#![allow(unused)]
fn main() {
// Implement comprehensive health monitoring
async fn comprehensive_health_check(network: &DistributedNetwork) -> HealthStatus {
    let mut status = HealthStatus::new();
    
    // Check discovery service connectivity
    status.discovery_healthy = network.ping_discovery_service().await.is_ok();
    
    // Check connected networks
    let networks = network.get_connected_networks().await;
    for network_id in networks {
        let network_healthy = network.ping_network(&network_id).await.is_ok();
        status.network_health.insert(network_id, network_healthy);
    }
    
    // Check remote actors
    let actors = network.list_registered_remote_actors().await;
    for (alias, actor_ref) in actors {
        let actor_healthy = network.ping_remote_actor(
            &actor_ref.network_id, 
            &actor_ref.actor_id
        ).await.is_ok();
        status.actor_health.insert(alias, actor_healthy);
    }
    
    status
}
}
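
HealthStatus is not part of the API shown above; a simple aggregate matching its usage could look like this sketch:

#![allow(unused)]
fn main() {
use std::collections::HashMap;

#[derive(Debug, Default)]
struct HealthStatus {
    discovery_healthy: bool,
    network_health: HashMap<String, bool>,
    actor_health: HashMap<String, bool>,
}

impl HealthStatus {
    fn new() -> Self {
        Self::default()
    }
    
    // True only if discovery, every network, and every actor check out
    fn all_healthy(&self) -> bool {
        self.discovery_healthy
            && self.network_health.values().all(|h| *h)
            && self.actor_health.values().all(|h| *h)
    }
}
}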

3. Resource Cleanup

#![allow(unused)]
fn main() {
// Proper cleanup on shutdown
async fn graceful_shutdown(mut network: DistributedNetwork) -> Result<(), anyhow::Error> {
    // Stop discovery refresh
    network.stop_discovery_refresh().await?;
    
    // Unregister from discovery service
    network.unregister_from_discovery().await?;
    
    // Clean up remote actor registrations
    let remote_actors = network.list_registered_remote_actors().await;
    for (alias, _) in remote_actors {
        network.unregister_remote_actor(&alias).await?;
    }
    
    // Disconnect from all networks
    let connected = network.get_connected_networks().await;
    for network_id in connected {
        network.disconnect_from_network(&network_id).await?;
    }
    
    // Finally shutdown the network
    network.shutdown().await?;
    
    Ok(())
}
}

Integration Examples

Docker Swarm Integration

# docker-compose.yml
version: '3.8'
services:
  reflow-discovery:
    image: reflow:latest
    command: --mode discovery --port 8090
    ports:
      - "8090:8090"
    deploy:
      replicas: 1
      
  reflow-worker:
    image: reflow:latest
    command: --mode worker --discovery http://reflow-discovery:8090
    deploy:
      replicas: 3
    depends_on:
      - reflow-discovery

Kubernetes Integration

# reflow-discovery-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: reflow-discovery
spec:
  selector:
    app: reflow-discovery
  ports:
    - port: 8090
      targetPort: 8090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reflow-discovery
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reflow-discovery
  template:
    metadata:
      labels:
        app: reflow-discovery
    spec:
      containers:
      - name: reflow
        image: reflow:latest
        args: ["--mode", "discovery", "--port", "8090"]
        ports:
        - containerPort: 8090

Next Steps

Conflict Resolution

Learn how to handle actor name conflicts in distributed Reflow networks.

Overview

Name conflicts occur when multiple networks have actors with identical names. This guide covers:

  • Understanding conflict types: Different scenarios that cause conflicts
  • Resolution strategies: Automatic and manual approaches to resolve conflicts
  • Prevention techniques: Best practices to avoid conflicts
  • Hierarchical namespacing: Advanced organization patterns

Conflict Types

1. Local-Remote Conflicts

Conflict between a local actor and a remote actor with the same name:

#![allow(unused)]
fn main() {
// Local network has "data_processor"
network.register_local_actor("data_processor", DataProcessorActor::new())?;

// Trying to register remote actor with same name
match client_network.register_remote_actor("data_processor", "server_network").await {
    Err(e) if e.to_string().contains("name conflict") => {
        println!("❌ Conflict: local 'data_processor' vs remote 'data_processor'");
    },
    _ => {}
}
}

2. Remote-Remote Conflicts

Multiple remote networks have actors with the same name:

#![allow(unused)]
fn main() {
// Both networks have "authentication_service"
client_network.register_remote_actor("authentication_service", "primary_auth").await?;

// This will conflict:
match client_network.register_remote_actor("authentication_service", "backup_auth").await {
    Err(e) => println!("❌ Conflict: primary_auth/authentication_service vs backup_auth/authentication_service"),
    _ => {}
}
}

3. Alias Conflicts

Custom aliases that conflict with existing names:

#![allow(unused)]
fn main() {
// Register with alias that conflicts with local actor
match client_network.register_remote_actor_with_alias(
    "local_actor_name",  // This alias conflicts!
    "remote_actor",
    "remote_network"
).await {
    Err(e) => println!("❌ Alias conflicts with existing local actor"),
    _ => {}
}
}

Resolution Strategies

1. Automatic Aliasing

Let the system automatically generate unique aliases:

#![allow(unused)]
fn main() {
use reflow_network::distributed_network::ConflictResolutionStrategy;

// Automatic resolution with numbered suffixes
let alias = client_network.register_remote_actor_with_strategy(
    "data_processor",
    "server_network", 
    ConflictResolutionStrategy::AutoAlias
).await?;

// Results in aliases like:
// - "data_processor" (if no conflict)
// - "data_processor_1" (first conflict)
// - "data_processor_2" (second conflict)

println!("✅ Registered as: {}", alias);
}

2. Network Prefixing

Prefix remote actors with their network name:

#![allow(unused)]
fn main() {
let alias = client_network.register_remote_actor_with_strategy(
    "data_processor",
    "server_network",
    ConflictResolutionStrategy::NetworkPrefix
).await?;

// Results in: "server_network_data_processor"
println!("✅ Network-prefixed actor: {}", alias);
}

3. Fully Qualified Names

Use complete network::actor notation:

#![allow(unused)]
fn main() {
let alias = client_network.register_remote_actor_with_strategy(
    "data_processor", 
    "server_network",
    ConflictResolutionStrategy::FullyQualified
).await?;

// Results in: "server_network::data_processor"
println!("✅ Fully qualified actor: {}", alias);
}

4. Manual Aliases

Provide explicit custom aliases:

#![allow(unused)]
fn main() {
let alias = client_network.register_remote_actor_with_strategy(
    "data_processor",
    "server_network", 
    ConflictResolutionStrategy::ManualAlias("server_data_proc".to_string())
).await?;

// Results in: "server_data_proc"
println!("✅ Custom alias: {}", alias);
}

5. Fail on Conflict

Explicitly handle conflicts in application code:

#![allow(unused)]
fn main() {
match client_network.register_remote_actor_with_strategy(
    "data_processor",
    "server_network",
    ConflictResolutionStrategy::Fail
).await {
    Ok(alias) => println!("✅ No conflict, registered as: {}", alias),
    Err(e) => {
        println!("❌ Registration failed due to conflict: {}", e);
        // Handle conflict manually
        handle_naming_conflict(&mut client_network, "data_processor", "server_network").await?;
    }
}
}
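
handle_naming_conflict is left to the application; one possible fallback, sketched here, is to retry with a fully qualified name, which cannot collide with a plain alias:

#![allow(unused)]
fn main() {
async fn handle_naming_conflict(
    network: &mut DistributedNetwork,
    actor_name: &str,
    remote_network: &str,
) -> Result<String, anyhow::Error> {
    // "network::actor" notation sidesteps the conflicting flat name
    network.register_remote_actor_with_strategy(
        actor_name,
        remote_network,
        ConflictResolutionStrategy::FullyQualified,
    ).await
}
}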

Advanced Conflict Resolution

Intelligent Conflict Detection

Detect and analyze conflicts before registration:

#![allow(unused)]
fn main() {
async fn analyze_potential_conflicts(
    network: &DistributedNetwork,
    actor_name: &str,
    remote_network_id: &str
) -> Result<ConflictAnalysis, anyhow::Error> {
    let mut analysis = ConflictAnalysis::new();
    
    // Check local conflicts
    if network.has_local_actor(actor_name).await {
        analysis.local_conflicts.push(LocalConflict {
            actor_name: actor_name.to_string(),
            actor_type: network.get_local_actor_type(actor_name).await?,
        });
    }
    
    // Check remote conflicts
    let remote_actors = network.list_registered_remote_actors().await;
    for (alias, actor_ref) in remote_actors {
        if alias == actor_name {
            analysis.remote_conflicts.push(RemoteConflict {
                alias,
                actor_ref,
            });
        }
    }
    
    // Suggest resolutions
    analysis.suggested_resolutions = suggest_resolutions(&analysis, actor_name, remote_network_id);
    
    Ok(analysis)
}

#[derive(Debug)]
struct ConflictAnalysis {
    local_conflicts: Vec<LocalConflict>,
    remote_conflicts: Vec<RemoteConflict>,
    suggested_resolutions: Vec<SuggestedResolution>,
}

#[derive(Debug)]
struct SuggestedResolution {
    strategy: ConflictResolutionStrategy,
    resulting_alias: String,
    confidence: f32,
    description: String,
}
}
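
LocalConflict, RemoteConflict, and the ConflictAnalysis helpers used above (and in the planning example later) are not shown. Plausible definitions, assuming the RemoteActorRef type returned by list_registered_remote_actors:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct LocalConflict {
    actor_name: String,
    actor_type: String,
}

#[derive(Debug)]
struct RemoteConflict {
    alias: String,
    actor_ref: RemoteActorRef,
}

impl ConflictAnalysis {
    fn new() -> Self {
        ConflictAnalysis {
            local_conflicts: Vec::new(),
            remote_conflicts: Vec::new(),
            suggested_resolutions: Vec::new(),
        }
    }
    
    fn has_conflicts(&self) -> bool {
        !self.local_conflicts.is_empty() || !self.remote_conflicts.is_empty()
    }
    
    // Pick the suggestion with the highest confidence, if any
    fn best_resolution(&self) -> Option<&SuggestedResolution> {
        self.suggested_resolutions
            .iter()
            .max_by(|a, b| a.confidence.partial_cmp(&b.confidence).unwrap())
    }
}
}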

Multi-Network Batch Registration

Handle conflicts when registering actors from multiple networks:

#![allow(unused)]
fn main() {
async fn batch_register_with_conflict_resolution(
    network: &mut DistributedNetwork,
    registrations: Vec<(String, String)>  // (actor_name, network_id)
) -> Result<BatchRegistrationResult, anyhow::Error> {
    let mut results = BatchRegistrationResult::new();
    let mut name_usage = HashMap::new();
    
    // Analyze all potential conflicts first
    for (actor_name, network_id) in &registrations {
        name_usage.entry(actor_name.clone())
            .or_insert_with(Vec::new)
            .push(network_id.clone());
    }
    
    // Register with conflict resolution
    for (actor_name, network_id) in registrations {
        let strategy = if name_usage[&actor_name].len() > 1 {
            // Multiple networks have same actor name
            ConflictResolutionStrategy::NetworkPrefix
        } else if network.has_local_actor(&actor_name).await {
            // Conflicts with local actor
            ConflictResolutionStrategy::FullyQualified
        } else {
            // No conflicts expected
            ConflictResolutionStrategy::AutoAlias
        };
        
        // Pass a clone so the strategy can still be recorded in the result below
        match network.register_remote_actor_with_strategy(&actor_name, &network_id, strategy.clone()).await {
            Ok(alias) => {
                results.successful.push(SuccessfulRegistration {
                    actor_name: actor_name.clone(),
                    network_id: network_id.clone(),
                    alias,
                    strategy_used: strategy,
                });
            },
            Err(e) => {
                results.failed.push(FailedRegistration {
                    actor_name,
                    network_id,
                    error: e.to_string(),
                });
            }
        }
    }
    
    Ok(results)
}
}
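
BatchRegistrationResult and its entry types are application-defined; a matching sketch, assuming ConflictResolutionStrategy derives Debug:

#![allow(unused)]
fn main() {
#[derive(Debug, Default)]
struct BatchRegistrationResult {
    successful: Vec<SuccessfulRegistration>,
    failed: Vec<FailedRegistration>,
}

impl BatchRegistrationResult {
    fn new() -> Self {
        Self::default()
    }
}

#[derive(Debug)]
struct SuccessfulRegistration {
    actor_name: String,
    network_id: String,
    alias: String,
    strategy_used: ConflictResolutionStrategy,
}

#[derive(Debug)]
struct FailedRegistration {
    actor_name: String,
    network_id: String,
    error: String,
}
}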

Context-Aware Resolution

Choose resolution strategies based on actor types and usage patterns:

#![allow(unused)]
fn main() {
async fn smart_conflict_resolution(
    network: &mut DistributedNetwork,
    actor_name: &str,
    network_id: &str,
    actor_metadata: &ActorMetadata
) -> Result<String, anyhow::Error> {
    // Analyze actor characteristics
    let strategy = match actor_metadata.actor_type.as_str() {
        "DatabaseActor" => {
            // For databases, use descriptive prefixes
            let db_type = actor_metadata.get_database_type().unwrap_or("db");
            ConflictResolutionStrategy::ManualAlias(
                format!("{}_{}", db_type, actor_name)
            )
        },
        "MLTrainerActor" => {
            // For ML trainers, include model type
            let model_type = actor_metadata.get_model_type().unwrap_or("model");
            ConflictResolutionStrategy::ManualAlias(
                format!("{}_trainer_{}", model_type, network_id)
            )
        },
        "AuthenticationActor" => {
            // For auth services, indicate primary/backup
            let is_primary = actor_metadata.is_primary_service().unwrap_or(false);
            let role = if is_primary { "primary" } else { "backup" };
            ConflictResolutionStrategy::ManualAlias(
                format!("auth_{}_{}", role, network_id)
            )
        },
        _ => {
            // Default strategy for other types
            if network.has_local_actor(actor_name).await {
                ConflictResolutionStrategy::NetworkPrefix
            } else {
                ConflictResolutionStrategy::AutoAlias
            }
        }
    };
    
    network.register_remote_actor_with_strategy(actor_name, network_id, strategy).await
}
}
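
ActorMetadata and its accessors are assumptions of this example; a shape consistent with the calls above, with the accessors reading optional hints out of free-form JSON properties:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
pub struct ActorMetadata {
    pub actor_type: String,
    pub capabilities: Vec<String>,
    pub properties: serde_json::Value,
}

impl ActorMetadata {
    pub fn get_database_type(&self) -> Option<&str> {
        self.properties["database_type"].as_str()
    }
    
    pub fn get_model_type(&self) -> Option<&str> {
        self.properties["model_type"].as_str()
    }
    
    pub fn is_primary_service(&self) -> Option<bool> {
        self.properties["is_primary"].as_bool()
    }
}
}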

Hierarchical Namespacing

Subgraph Organization

Organize remote actors in hierarchical namespaces:

#![allow(unused)]
fn main() {
// Instead of flat aliases, use hierarchical organization
let mount_config = SubgraphMountConfig {
    mount_point: "server".to_string(),
    network_id: "server_network".to_string(),
    include_patterns: vec!["*".to_string()],
    exclude_patterns: vec!["internal_*".to_string()],
};

// Mount entire network as subgraph
network.mount_network_as_subgraph(mount_config).await?;

// Actors are now accessible as:
// - "server/data_processor"
// - "server/validator" 
// - "server/transformer"

// Use in workflows
network.add_node("remote_proc", "server/data_processor", None)?;
}

Nested Organization

Create deeply nested hierarchies for complex setups:

#![allow(unused)]
fn main() {
// Mount multiple networks with organized structure
let mount_configs = vec![
    SubgraphMountConfig {
        mount_point: "ml/training".to_string(),
        network_id: "ml_training_cluster".to_string(),
        // ...
    },
    SubgraphMountConfig {
        mount_point: "ml/inference".to_string(), 
        network_id: "ml_inference_cluster".to_string(),
        // ...
    },
    SubgraphMountConfig {
        mount_point: "data/processing".to_string(),
        network_id: "data_processing_cluster".to_string(),
        // ...
    },
];

for config in mount_configs {
    network.mount_network_as_subgraph(config).await?;
}

// Result: Clean hierarchical structure
// ml/training/model_trainer
// ml/training/feature_engineer
// ml/inference/predictor
// ml/inference/batch_scorer
// data/processing/cleaner
// data/processing/transformer
}

Conflict Prevention

1. Naming Conventions

Establish clear naming conventions to prevent conflicts:

#![allow(unused)]
fn main() {
// Good: Descriptive, domain-specific names
let good_names = [
    "user_authentication_service",
    "payment_data_processor",
    "ml_model_trainer_gpu",
    "postgres_connection_pool",
];

// Avoid: Generic names likely to conflict
let bad_names = ["processor", "handler", "service", "actor", "worker"];
}

2. Network-Aware Registration

Include network identity in actor names during registration:

#![allow(unused)]
fn main() {
// Register with network context
async fn register_with_network_context(
    network: &mut DistributedNetwork,
    actor_name: &str,
    remote_network_id: &str
) -> Result<String, anyhow::Error> {
    // Auto-generate context-aware names
    let network_context = remote_network_id.split('_').next().unwrap_or(remote_network_id);
    let contextual_name = format!("{}_{}", network_context, actor_name);
    
    network.register_remote_actor_with_alias(
        &contextual_name,
        actor_name,
        remote_network_id
    ).await
}
}

3. Capability-Based Naming

Name actors based on their capabilities rather than generic terms:

#![allow(unused)]
fn main() {
// Analyze actor capabilities and suggest names
async fn suggest_capability_based_name(
    actor_metadata: &ActorMetadata
) -> String {
    let capabilities = &actor_metadata.capabilities;
    
    // Borrow as &str to avoid referencing a temporary String
    let primary_capability = capabilities.first().map(String::as_str).unwrap_or("generic");
    let secondary_capability = capabilities.get(1);
    
    match (primary_capability, secondary_capability) {
        ("ml_training", Some(sec)) if sec.contains("gpu") => "gpu_ml_trainer".to_string(),
        ("data_processing", Some(sec)) if sec.contains("stream") => "stream_data_processor".to_string(),
        ("database", Some(sec)) => format!("{}_database", sec),
        (primary, _) => format!("{}_service", primary),
    }
}
}

Error Handling

Conflict Resolution Errors

Handle errors during conflict resolution:

#![allow(unused)]
fn main() {
async fn handle_conflict_resolution_error(
    error: &anyhow::Error,
    actor_name: &str,
    network_id: &str
) -> ConflictResolutionAction {
    if error.to_string().contains("maximum retries exceeded") {
        ConflictResolutionAction::UseFullyQualified
    } else if error.to_string().contains("invalid alias") {
        ConflictResolutionAction::GenerateAlternative
    } else if error.to_string().contains("network disconnected") {
        ConflictResolutionAction::RetryLater
    } else {
        ConflictResolutionAction::FailRegistration
    }
}

enum ConflictResolutionAction {
    UseFullyQualified,
    GenerateAlternative,
    RetryLater,
    FailRegistration,
}
}
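
A caller can then branch on the returned action; a minimal sketch:

#![allow(unused)]
fn main() {
async fn register_with_error_handling(
    network: &mut DistributedNetwork,
    actor_name: &str,
    network_id: &str,
) -> Result<Option<String>, anyhow::Error> {
    match network.register_remote_actor(actor_name, network_id).await {
        Ok(alias) => Ok(Some(alias)),
        Err(e) => match handle_conflict_resolution_error(&e, actor_name, network_id).await {
            ConflictResolutionAction::UseFullyQualified => {
                let alias = network.register_remote_actor_with_strategy(
                    actor_name,
                    network_id,
                    ConflictResolutionStrategy::FullyQualified,
                ).await?;
                Ok(Some(alias))
            },
            ConflictResolutionAction::RetryLater => Ok(None), // Caller schedules a retry
            _ => Err(e),
        },
    }
}
}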

Registration Rollback

Implement rollback for failed batch registrations:

#![allow(unused)]
fn main() {
async fn register_with_rollback(
    network: &mut DistributedNetwork,
    registrations: Vec<(String, String)>
) -> Result<Vec<String>, anyhow::Error> {
    let mut successful_aliases = Vec::new();
    
    for (actor_name, network_id) in registrations {
        match network.register_remote_actor(&actor_name, &network_id).await {
            Ok(alias) => {
                successful_aliases.push(alias);
            },
            Err(e) => {
                // Rollback previous registrations
                for alias in &successful_aliases {
                    if let Err(rollback_err) = network.unregister_remote_actor(alias).await {
                        eprintln!("⚠️  Rollback failed for {}: {}", alias, rollback_err);
                    }
                }
                return Err(e);
            }
        }
    }
    
    Ok(successful_aliases)
}
}

Best Practices

1. Proactive Conflict Analysis

#![allow(unused)]
fn main() {
// Analyze potential conflicts before registration
async fn plan_registrations(
    network: &DistributedNetwork,
    planned_registrations: &[(String, String)]
) -> RegistrationPlan {
    let mut plan = RegistrationPlan::new();
    
    for (actor_name, network_id) in planned_registrations {
        let analysis = analyze_potential_conflicts(network, actor_name, network_id).await.unwrap();
        
        if analysis.has_conflicts() {
            plan.add_with_resolution(actor_name, network_id, analysis.best_resolution());
        } else {
            plan.add_direct(actor_name, network_id);
        }
    }
    
    plan
}
}
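
RegistrationPlan is application-defined; a sketch that matches the usage above, assuming ConflictResolutionStrategy derives Clone and Debug and reusing the SuggestedResolution type from the conflict analysis example:

#![allow(unused)]
fn main() {
#[derive(Debug, Default)]
struct RegistrationPlan {
    entries: Vec<PlannedRegistration>,
}

#[derive(Debug)]
struct PlannedRegistration {
    actor_name: String,
    network_id: String,
    resolution: Option<ConflictResolutionStrategy>, // None = register directly
}

impl RegistrationPlan {
    fn new() -> Self {
        Self::default()
    }
    
    fn add_direct(&mut self, actor_name: &str, network_id: &str) {
        self.entries.push(PlannedRegistration {
            actor_name: actor_name.to_string(),
            network_id: network_id.to_string(),
            resolution: None,
        });
    }
    
    fn add_with_resolution(
        &mut self,
        actor_name: &str,
        network_id: &str,
        resolution: Option<&SuggestedResolution>,
    ) {
        self.entries.push(PlannedRegistration {
            actor_name: actor_name.to_string(),
            network_id: network_id.to_string(),
            resolution: resolution.map(|r| r.strategy.clone()),
        });
    }
}
}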

2. Documentation and Tracking

#![allow(unused)]
fn main() {
// Track and document conflict resolutions
struct ConflictResolutionLog {
    entries: Vec<ConflictLogEntry>,
}

// Debug is required for the {:#?} report below
// (assumes ConflictResolutionStrategy also derives Debug)
#[derive(Debug)]
struct ConflictLogEntry {
    timestamp: chrono::DateTime<chrono::Utc>,
    original_name: String,
    resolved_alias: String,
    strategy_used: ConflictResolutionStrategy,
    reason: String,
}

impl ConflictResolutionLog {
    fn log_resolution(&mut self, 
        original_name: String, 
        resolved_alias: String, 
        strategy: ConflictResolutionStrategy,
        reason: String
    ) {
        self.entries.push(ConflictLogEntry {
            timestamp: chrono::Utc::now(),
            original_name,
            resolved_alias,
            strategy_used: strategy,
            reason,
        });
    }
    
    fn generate_report(&self) -> String {
        // Generate human-readable conflict resolution report
        format!("Conflict Resolution Report\n{:#?}", self.entries)
    }
}
}

3. Testing Conflict Scenarios

#![allow(unused)]
fn main() {
#[cfg(test)]
mod conflict_tests {
    use super::*;
    
    #[tokio::test]
    async fn test_local_remote_conflict_resolution() {
        let mut network = create_test_network().await;
        
        // Register local actor
        network.register_local_actor("processor", TestActor::new()).unwrap();
        
        // Try to register remote actor with same name
        let alias = network.register_remote_actor_with_strategy(
            "processor",
            "remote_network",
            ConflictResolutionStrategy::AutoAlias
        ).await.unwrap();
        
        assert_eq!(alias, "processor_1");
    }
    
    #[tokio::test] 
    async fn test_multiple_remote_conflicts() {
        let mut network = create_test_network().await;
        
        // Register multiple remote actors with same name
        let alias1 = network.register_remote_actor("auth", "network1").await.unwrap();
        let alias2 = network.register_remote_actor_with_strategy(
            "auth", 
            "network2",
            ConflictResolutionStrategy::NetworkPrefix
        ).await.unwrap();
        
        assert_eq!(alias1, "auth");
        assert_eq!(alias2, "network2_auth");
    }
}
}

Next Steps

WebAssembly API - Getting Started

Complete guide to using Reflow's WebAssembly bindings for browser-based workflow automation.

Overview

Reflow's WebAssembly (WASM) bindings provide a complete JavaScript interface for running actor-based workflows in web browsers. The API maintains the same conceptual model as the native Rust implementation while offering browser-friendly interfaces.

Core Architecture

┌─────────────────────────────────────────────────────┐
│                 Browser Application                 │
├─────────────────────────────────────────────────────┤
│ JavaScript Actor Classes                            │
│ ├─ MyActor.run(context)                            │
│ ├─ AnotherActor.run(context)                       │
│ └─ CustomActor.run(context)                        │
├─────────────────────────────────────────────────────┤
│ Browser JavaScript Bindings                        │
│ ├─ Graph, GraphNetwork, GraphHistory               │
│ ├─ Network, MemoryState, ActorRunContext           │
│ └─ BrowserActorContext, JsBrowserActor             │
├─────────────────────────────────────────────────────┤
│ WebAssembly Runtime                                 │
│ ├─ Rust Actor System (compiled to WASM)           │
│ ├─ Graph Management & Validation                   │
│ └─ Network Execution Engine                        │
└─────────────────────────────────────────────────────┘

Quick Start

1. Installation & Setup

# Clone the repository
git clone https://github.com/offbit-ai/reflow
cd reflow

# Build WASM bindings
cd crates/reflow_network
wasm-pack build --target web --out-dir pkg

2. Basic HTML Setup

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Reflow WASM Example</title>
</head>
<body>
    <h1>Reflow WebAssembly Example</h1>
    <button id="runWorkflow">Run Workflow</button>
    <pre id="output"></pre>

    <script type="module">
        import init, { 
            Graph,
            GraphNetwork,
            MemoryState,
            init_panic_hook 
        } from './pkg/reflow_network.js';

        // Initialize WASM
        await init();
        init_panic_hook();

        console.log('✅ Reflow WASM initialized successfully!');
    </script>
</body>
</html>

3. Your First Actor

class HelloWorldActor {
    constructor() {
        this.inports = ["input"];
        this.outports = ["output"];
        this.state = null; // Managed by WASM
        this.config = { greeting: "Hello" };
    }

    /**
     * Actor execution method
     * @param {ActorRunContext} context - Execution context
     */
    run(context) {
        // Get input data
        const input = context.input.input;
        
        // Access state
        const count = context.state.get('count') || 0;
        context.state.set('count', count + 1);
        
        // Process and send output
        const greeting = `${this.config.greeting}, ${input}! (execution #${count + 1})`;
        context.send({ output: greeting });
        
        console.log(`HelloWorldActor: ${greeting}`);
    }
}

4. Create and Run a Graph

async function createAndRunWorkflow() {
    // Create a graph
    const graph = new Graph("HelloWorkflow", true, {
        description: "A simple greeting workflow",
        version: "1.0.0"
    });

    // Add nodes
    graph.addNode("greeter", "HelloWorldActor", {
        x: 100, y: 100,
        description: "Greets the input"
    });

    // Add initial data
    graph.addInitial("World", "greeter", "input", {
        description: "Initial greeting target"
    });

    // Create network
    const network = new GraphNetwork(graph);
    
    // Register actor
    network.registerActor("HelloWorldActor", new HelloWorldActor());
    
    // Start and run
    await network.start();
    
    console.log("🚀 Workflow started!");
}

// Run the workflow
document.getElementById('runWorkflow').addEventListener('click', createAndRunWorkflow);

Core API Classes

Graph

The Graph class represents a workflow definition with nodes, connections, and metadata.

// Create a new graph
const graph = new Graph(name, caseSensitive, properties);

// Basic operations
graph.addNode(nodeId, actorType, metadata);
graph.removeNode(nodeId);
graph.addConnection(fromNode, fromPort, toNode, toPort, metadata);
graph.removeConnection(fromNode, fromPort, toNode, toPort);

// Graph-level ports
graph.addInport(publicName, nodeId, portId, metadata);
graph.addOutport(publicName, nodeId, portId, metadata);

// Initial data
graph.addInitial(data, nodeId, portId, metadata);

// Export/Import
const graphData = graph.toJSON();
const loadedGraph = Graph.load(graphData, metadata);

GraphNetwork

The GraphNetwork class executes graphs with registered actors.

// Create from graph
const network = new GraphNetwork(graph);

// Register actors
network.registerActor("ActorType", new ActorImplementation());

// Network lifecycle
await network.start();
network.shutdown();

// Monitoring
network.next((event) => {
    console.log("Network event:", event);
});

// Direct execution
const result = await network.executeActor("nodeId", inputData);

MemoryState

The MemoryState class provides persistent state management across actor executions.

// Create state
const state = new MemoryState();

// Basic operations
state.set(key, value);
const value = state.get(key);
const exists = state.has(key);
state.remove(key);
state.clear();

// Bulk operations
const allData = state.getAll();
state.setAll(dataObject);

// Utilities
const size = state.size();
const keys = state.keys();
const values = state.values();

ActorRunContext

The ActorRunContext provides actors with access to inputs, state, and output channels.

class MyActor {
    run(context) {
        // Access inputs
        const inputData = context.input.portName;
        
        // State management
        context.state.set('key', 'value');
        const value = context.state.get('key');
        
        // Send outputs
        context.send({
            outputPort: resultData
        });
        
        // Access configuration
        const config = this.config;
    }
}

Event System

Network Events

Monitor network execution with the event system:

network.next((event) => {
    switch (event._type) {
        case "FlowTrace":
            console.log(`Data flow: ${event.from.actorId}:${event.from.port} → ${event.to.actorId}:${event.to.port}`);
            console.log("Data:", event.from.data);
            break;
            
        case "ActorStarted":
            console.log(`Actor started: ${event.actorId}`);
            break;
            
        case "ActorStopped":
            console.log(`Actor stopped: ${event.actorId}`);
            break;
            
        case "NetworkStarted":
            console.log("Network execution started");
            break;
            
        case "NetworkStopped":
            console.log("Network execution stopped");
            break;
            
        case "ProcessError":
            console.error(`Error in ${event.actorId}:`, event.error);
            break;
            
        default:
            console.log("Other event:", event);
    }
});

Graph Events

Monitor graph modifications:

graph.subscribe((event) => {
    switch (event.type) {
        case "nodeAdded":
            console.log(`Node added: ${event.nodeId}`);
            break;
            
        case "nodeRemoved":
            console.log(`Node removed: ${event.nodeId}`);
            break;
            
        case "connectionAdded":
            console.log(`Connection: ${event.from} → ${event.to}`);
            break;
            
        case "connectionRemoved":
            console.log(`Connection removed: ${event.from} → ${event.to}`);
            break;
    }
});

Advanced Features

Graph History with Undo/Redo

// Create graph with history support
const [graph, history] = Graph.withHistoryAndLimit(50);

// Make changes
graph.addNode("processor", "ProcessorActor", { x: 200, y: 100 });
graph.addConnection("input", "output", "processor", "input");

// Update history
history.processEvents(graph);

// Check state
const state = history.getState();
console.log("Can undo:", state.can_undo);
console.log("Can redo:", state.can_redo);
console.log("Undo stack size:", state.undo_size);

// Perform operations
if (state.can_undo) {
    history.undo(graph);
}

if (history.getState().can_redo) {
    history.redo(graph);
}

// Clear history
history.clear();

Direct Actor Execution

Test individual nodes without wiring up a full workflow:

// Execute a registered actor directly by node ID
const result = await network.executeActor("nodeId", {
    input: "test data",
    config: { mode: "debug" }
});

console.log("Direct execution result:", result);

Batch Graph Operations

Efficiently modify graphs with multiple operations:

// Batch multiple operations
const operations = [
    () => graph.addNode("node1", "Actor1", { x: 100, y: 100 }),
    () => graph.addNode("node2", "Actor2", { x: 200, y: 100 }),
    () => graph.addConnection("node1", "output", "node2", "input"),
    () => graph.addInitial("start", "node1", "trigger")
];

// Execute all operations
operations.forEach(op => op());

// Process all changes at once
history.processEvents(graph);

Data Types and Serialization

Supported Data Types

The WASM bridge supports these JavaScript types:

// Primitive types
const stringData = "Hello World";
const numberData = 42;
const booleanData = true;
const nullData = null;

// Objects and arrays
const objectData = {
    id: 123,
    name: "Example",
    tags: ["tag1", "tag2"],
    metadata: {
        created: new Date().toISOString(),
        version: "1.0"
    }
};

const arrayData = [1, 2, 3, "mixed", { nested: true }];

// Send through actor context
context.send({
    output: {
        primitive: numberData,
        object: objectData,
        array: arrayData
    }
});

Serialization Best Practices

// ✅ Good: Structured data
const goodData = {
    type: "sensor_reading",
    value: 23.5,
    timestamp: Date.now(),
    metadata: {
        sensor_id: "temp_01",
        location: "warehouse_a"
    }
};

// ❌ Avoid: Large JSON strings
const badData = JSON.stringify(largeObject);

// ✅ Good: Split large data
const chunkedData = {
    chunk_id: 1,
    total_chunks: 5,
    data: partialData
};

Error Handling

Comprehensive Error Handling

try {
    // Initialize WASM
    await init();
    init_panic_hook();
    
    // Create and start network
    const network = new GraphNetwork(graph);
    network.registerActor("MyActor", new MyActor());
    await network.start();
    
} catch (error) {
    console.error("Error during initialization:", error);
    
    // Handle specific error types
    if (error.message.includes("WASM")) {
        alert("Failed to load WebAssembly. Please check browser compatibility.");
    } else if (error.message.includes("Actor")) {
        alert("Actor registration failed. Please check actor implementation.");
    } else {
        alert("Unexpected error. Please refresh the page.");
    }
}

// Network-level error handling
network.next((event) => {
    if (event._type === "ProcessError") {
        console.error(`Actor ${event.actorId} failed:`, event.error);
        
        // Implement recovery logic
        handleActorError(event.actorId, event.error);
    }
});

function handleActorError(actorId, error) {
    // Log error details
    console.error(`Processing error in ${actorId}:`, error);
    
    // Attempt recovery
    if (error.includes("timeout")) {
        // Restart actor or increase timeout
    } else if (error.includes("validation")) {
        // Fix input data and retry
    }
}

Actor Error Handling

class RobustActor {
    run(context) {
        try {
            // Main processing logic
            const input = context.input.input;
            const result = this.processData(input);
            context.send({ output: result });
            
        } catch (error) {
            console.error(`Error in ${this.constructor.name}:`, error);
            
            // Send error information
            context.send({
                error: {
                    message: error.message,
                    timestamp: Date.now(),
                    input: context.input
                }
            });
        }
    }
    
    processData(input) {
        // Validate input
        if (!input || typeof input !== 'object') {
            throw new Error("Invalid input: expected object");
        }
        
        // Process with error handling
        return {
            processed: true,
            data: input,
            timestamp: Date.now()
        };
    }
}

Performance Optimization

Memory Management

// Clean up resources properly
function cleanup() {
    // Shutdown network
    if (network) {
        network.shutdown();
    }
    
    // Clear state
    if (state) {
        state.clear();
    }
    
    // Remove event listeners
    if (unsubscribe) {
        unsubscribe();
    }
}

// Set up cleanup on page unload
window.addEventListener('beforeunload', cleanup);

Efficient State Usage

class OptimizedActor {
    run(context) {
        // Read state once
        const state = context.state.getAll();
        
        // Modify locally
        state.counter = (state.counter || 0) + 1;
        state.lastUpdate = Date.now();
        
        // Write back once
        context.state.setAll(state);
        
        // Process and send output
        context.send({ output: state.counter });
    }
}

Batch Processing

class BatchProcessor {
    constructor() {
        this.inports = ["input"];
        this.outports = ["output"];
        this.config = { batchSize: 10 };
    }
    
    run(context) {
        // Accumulate inputs
        const batch = context.state.get('batch') || [];
        batch.push(context.input.input);
        
        if (batch.length >= this.config.batchSize) {
            // Process entire batch
            const results = batch.map(item => this.processItem(item));
            
            // Send batch results
            context.send({ output: results });
            
            // Clear batch
            context.state.set('batch', []);
        } else {
            // Store for next execution
            context.state.set('batch', batch);
        }
    }
    
    processItem(item) {
        return { processed: item, timestamp: Date.now() };
    }
}

Development Tools

Debug Mode

// Enable debug logging
function enableDebugMode(network) {
    let eventCount = 0;
    
    network.next((event) => {
        eventCount++;
        console.group(`Event #${eventCount}: ${event._type}`);
        console.log("Full event:", event);
        
        if (event._type === "FlowTrace") {
            console.log(`From: ${event.from.actorId}:${event.from.port}`);
            console.log(`To: ${event.to.actorId}:${event.to.port}`);
            console.log("Data:", event.from.data);
        }
        
        console.groupEnd();
    });
    
    // Network information
    console.log("Registered actors:", network.getActorNames());
    console.log("Active actors:", network.getActiveActors());
    console.log("Total actor count:", network.getActorCount());
}

Graph Inspection

function inspectGraph(graph) {
    const data = graph.toJSON();
    
    console.group("Graph Inspection");
    console.log("Graph name:", data.properties?.name || "Unnamed");
    console.log("Case sensitive:", data.caseSensitive);
    console.log("Processes:", Object.keys(data.processes || {}));
    console.log("Connections:", data.connections?.length || 0);
    console.log("Inports:", Object.keys(data.inports || {}));
    console.log("Outports:", Object.keys(data.outports || {}));
    console.log("Initial data:", data.initializers?.length || 0);
    console.log("Full structure:", data);
    console.groupEnd();
}

Next Steps

The WebAssembly API provides a powerful foundation for building browser-based workflow applications. Start with the examples above and explore the detailed API documentation for advanced usage patterns.

Browser Actors Guide

Complete guide to creating and managing actors in browser environments using Reflow's WebAssembly bindings.

Overview

Browser actors in Reflow follow the same conceptual model as native Rust actors but use a JavaScript interface optimized for web environments. They support stateful processing, real-time event handling, and seamless integration with web APIs.

Actor Lifecycle in Browser

┌─────────────────────────────────────────────────────┐
│                Actor Lifecycle                      │
├─────────────────────────────────────────────────────┤
│ 1. Construction                                     │
│    ├─ new MyActor()                                │
│    ├─ Define inports/outports                      │
│    └─ Initialize configuration                     │
├─────────────────────────────────────────────────────┤
│ 2. Registration                                     │
│    ├─ network.registerActor("MyActor", instance)   │
│    └─ WASM bridge creates wrapper                  │
├─────────────────────────────────────────────────────┤
│ 3. Execution                                        │
│    ├─ run(context) called with inputs              │
│    ├─ Access state through context.state           │
│    ├─ Process data with JavaScript logic           │
│    └─ Send outputs via context.send()              │
├─────────────────────────────────────────────────────┤
│ 4. State Persistence                               │
│    ├─ State stored in WASM memory                  │
│    └─ Survives across multiple executions          │
└─────────────────────────────────────────────────────┘

Basic Actor Structure

Minimal Actor

class MinimalActor {
    constructor() {
        // Required: Define input and output ports
        this.inports = ["input"];
        this.outports = ["output"];
        
        // Optional: Actor configuration
        this.config = {};
        
        // State is managed by WASM bridge
        this.state = null;
    }

    /**
     * Main execution method called by the runtime
     * @param {ActorRunContext} context - Execution context
     */
    run(context) {
        // Get input data
        const input = context.input.input;
        
        // Simple processing
        const output = `Processed: ${input}`;
        
        // Send result
        context.send({ output });
    }
}

Stateful Actor

class CounterActor {
    constructor() {
        this.inports = ["increment", "reset"];
        this.outports = ["count", "status"];
        this.config = { 
            step: 1,
            maxCount: 100 
        };
    }

    run(context) {
        // Get current count from persistent state
        let count = context.state.get('count') || 0;
        
        // Handle different input ports
        if (context.input.increment !== undefined) {
            count += this.config.step;
            
            // Check bounds
            if (count >= this.config.maxCount) {
                count = this.config.maxCount;
                context.send({ 
                    status: "Maximum count reached" 
                });
            }
            
            // Update state
            context.state.set('count', count);
            
            // Send current count
            context.send({ count });
        }
        
        if (context.input.reset !== undefined) {
            count = 0;
            context.state.set('count', count);
            context.send({ 
                count,
                status: "Counter reset"
            });
        }
    }
}

Configurable Actor

class ConfigurableProcessor {
    constructor() {
        this.inports = ["data", "config"];
        this.outports = ["processed", "error"];
        
        // Default configuration
        this.config = {
            mode: "transform",
            batchSize: 1,
            timeout: 5000,
            filters: [],
            outputFormat: "json"
        };
    }

    run(context) {
        // Update configuration if provided
        if (context.input.config) {
            this.updateConfig(context.input.config);
        }
        
        // Process data
        if (context.input.data) {
            try {
                const result = this.processData(context.input.data, context);
                context.send({ processed: result });
            } catch (error) {
                context.send({ 
                    error: {
                        message: error.message,
                        input: context.input.data,
                        timestamp: Date.now()
                    }
                });
            }
        }
    }
    
    updateConfig(newConfig) {
        // Merge with existing configuration
        this.config = { ...this.config, ...newConfig };
        console.log("Updated configuration:", this.config);
    }
    
    processData(data, context) {
        switch (this.config.mode) {
            case "transform":
                return this.transformData(data);
            case "filter":
                return this.filterData(data);
            case "aggregate":
                return this.aggregateData(data, context);
            default:
                throw new Error(`Unknown processing mode: ${this.config.mode}`);
        }
    }
    
    transformData(data) {
        return {
            transformed: true,
            original: data,
            timestamp: Date.now(),
            format: this.config.outputFormat
        };
    }
    
    filterData(data) {
        if (!Array.isArray(data)) {
            data = [data];
        }
        
        return data.filter(item => {
            return this.config.filters.every(filter => 
                this.applyFilter(item, filter)
            );
        });
    }
    
    aggregateData(data, context) {
        // Get previous aggregated data from state
        const previous = context.state.get('aggregated') || [];
        const combined = previous.concat(Array.isArray(data) ? data : [data]);
        
        // Keep only recent data based on batchSize
        const recent = combined.slice(-this.config.batchSize);
        context.state.set('aggregated', recent);
        
        return {
            count: recent.length,
            sum: recent.reduce((acc, val) => acc + (typeof val === 'number' ? val : 0), 0),
            average: recent.length > 0 ? recent.reduce((acc, val) => acc + (typeof val === 'number' ? val : 0), 0) / recent.length : 0,
            latest: recent[recent.length - 1]
        };
    }
    
    applyFilter(item, filter) {
        // Simple filter implementation
        if (filter.field && filter.value) {
            return item[filter.field] === filter.value;
        }
        return true;
    }
}

Advanced Actor Patterns

Asynchronous Web API Actor

class WebAPIActor {
    constructor() {
        this.inports = ["url", "config"];
        this.outports = ["data", "error"];
        this.config = {
            method: "GET",
            timeout: 10000,
            retries: 3
        };
    }

    async run(context) {
        const url = context.input.url;
        const config = { ...this.config, ...context.input.config };
        
        if (!url) {
            context.send({ error: "URL is required" });
            return;
        }

        try {
            const data = await this.fetchWithRetry(url, config);
            context.send({ data });
        } catch (error) {
            context.send({ 
                error: {
                    message: error.message,
                    url,
                    timestamp: Date.now()
                }
            });
        }
    }

    async fetchWithRetry(url, config) {
        let lastError;
        
        for (let attempt = 1; attempt <= config.retries; attempt++) {
            try {
                const controller = new AbortController();
                const timeoutId = setTimeout(() => controller.abort(), config.timeout);
                
                const response = await fetch(url, {
                    method: config.method,
                    headers: config.headers,
                    body: config.body,
                    signal: controller.signal
                });
                
                clearTimeout(timeoutId);
                
                if (!response.ok) {
                    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
                }
                
                return await response.json();
                
            } catch (error) {
                lastError = error;
                
                if (attempt < config.retries) {
                    // Exponential backoff
                    const delay = Math.pow(2, attempt) * 1000;
                    await new Promise(resolve => setTimeout(resolve, delay));
                }
            }
        }
        
        throw lastError;
    }
}

Timer Actor

class TimerActor {
    constructor() {
        this.inports = ["start", "stop", "interval"];
        this.outports = ["tick", "status"];
        this.config = { defaultInterval: 1000 };
        
        // Store timer reference
        this.timerId = null;
    }

    run(context) {
        if (context.input.start !== undefined) {
            this.startTimer(context);
        }
        
        if (context.input.stop !== undefined) {
            this.stopTimer(context);
        }
        
        if (context.input.interval !== undefined) {
            this.updateInterval(context.input.interval, context);
        }
    }
    
    startTimer(context) {
        // Stop existing timer if running
        this.stopTimer(context, false);
        
        const interval = context.state.get('interval') || this.config.defaultInterval;
        let tickCount = context.state.get('tickCount') || 0;
        
        this.timerId = setInterval(() => {
            tickCount++;
            context.state.set('tickCount', tickCount);
            
            // Send tick event
            context.send({
                tick: {
                    count: tickCount,
                    timestamp: Date.now(),
                    interval: interval
                }
            });
        }, interval);
        
        context.state.set('running', true);
        context.send({ status: `Timer started with ${interval}ms interval` });
    }
    
    stopTimer(context, sendStatus = true) {
        if (this.timerId) {
            clearInterval(this.timerId);
            this.timerId = null;
        }
        
        context.state.set('running', false);
        
        if (sendStatus) {
            const tickCount = context.state.get('tickCount') || 0;
            context.send({ 
                status: `Timer stopped after ${tickCount} ticks` 
            });
        }
    }
    
    updateInterval(newInterval, context) {
        context.state.set('interval', newInterval);
        
        // Restart timer with new interval if currently running
        if (context.state.get('running')) {
            this.startTimer(context);
        }
    }
}

File Reader Actor (Browser)

class FileReaderActor {
    constructor() {
        this.inports = ["file", "options"];
        this.outports = ["content", "progress", "error"];
        this.config = {
            readAs: "text", // "text", "dataURL", "arrayBuffer"
            encoding: "utf-8",
            chunkSize: 64 * 1024 // 64KB chunks for progress
        };
    }

    run(context) {
        const file = context.input.file;
        const options = { ...this.config, ...context.input.options };
        
        if (!file || !(file instanceof File)) {
            context.send({ error: "Valid File object required" });
            return;
        }

        this.readFile(file, options, context);
    }

    readFile(file, options, context) {
        const reader = new FileReader();
        
        // Track progress
        reader.onprogress = (event) => {
            if (event.lengthComputable) {
                const progress = (event.loaded / event.total) * 100;
                context.send({ 
                    progress: {
                        loaded: event.loaded,
                        total: event.total,
                        percentage: progress
                    }
                });
            }
        };
        
        reader.onload = (event) => {
            const result = event.target.result;
            context.send({
                content: {
                    data: result,
                    filename: file.name,
                    size: file.size,
                    type: file.type,
                    lastModified: file.lastModified,
                    readAs: options.readAs
                }
            });
        };
        
        reader.onerror = (event) => {
            context.send({
                error: {
                    message: "Failed to read file",
                    filename: file.name,
                    error: event.target.error
                }
            });
        };

        // Choose reading method based on options
        switch (options.readAs) {
            case "text":
                reader.readAsText(file, options.encoding);
                break;
            case "dataURL":
                reader.readAsDataURL(file);
                break;
            case "arrayBuffer":
                reader.readAsArrayBuffer(file);
                break;
            default:
                context.send({ error: `Unsupported read method: ${options.readAs}` });
        }
    }
}

State Management Patterns

Complex State Actor

class StatefulProcessor {
    constructor() {
        this.inports = ["data", "command"];
        this.outports = ["result", "state", "error"];
        this.config = {};
    }

    run(context) {
        // Handle commands
        if (context.input.command) {
            this.handleCommand(context.input.command, context);
        }
        
        // Process data
        if (context.input.data) {
            this.processData(context.input.data, context);
        }
    }
    
    handleCommand(command, context) {
        switch (command.action) {
            case "get_state":
                context.send({ 
                    state: context.state.getAll() 
                });
                break;
                
            case "set_state":
                if (command.data) {
                    context.state.setAll(command.data);
                    context.send({ 
                        result: "State updated successfully" 
                    });
                }
                break;
                
            case "clear_state":
                context.state.clear();
                context.send({ 
                    result: "State cleared" 
                });
                break;
                
            case "get_stats":
                this.sendStatistics(context);
                break;
                
            default:
                context.send({ 
                    error: `Unknown command: ${command.action}` 
                });
        }
    }
    
    processData(data, context) {
        // Update processing statistics
        const stats = context.state.get('stats') || {
            processedCount: 0,
            totalSize: 0,
            lastProcessed: null,
            errors: 0
        };
        
        try {
            // Simulate processing
            const processed = this.transform(data);
            
            // Update statistics
            stats.processedCount++;
            stats.totalSize += JSON.stringify(data).length;
            stats.lastProcessed = Date.now();
            
            context.state.set('stats', stats);
            context.send({ result: processed });
            
        } catch (error) {
            stats.errors++;
            context.state.set('stats', stats);
            
            context.send({
                error: {
                    message: error.message,
                    data: data,
                    timestamp: Date.now()
                }
            });
        }
    }
    
    transform(data) {
        return {
            original: data,
            transformed: Array.isArray(data) ? data.map(x => x * 2) : data,
            timestamp: Date.now()
        };
    }
    
    sendStatistics(context) {
        const stats = context.state.get('stats') || {};
        const stateSize = context.state.size();
        
        context.send({
            state: {
                statistics: stats,
                stateSize: stateSize,
                stateKeys: context.state.keys(),
                uptime: Date.now() - (stats.firstProcessed || Date.now())
            }
        });
    }
}

Cache Actor

class CacheActor {
    constructor() {
        this.inports = ["get", "set", "delete", "clear"];
        this.outports = ["value", "status", "stats"];
        this.config = {
            maxSize: 100,
            ttlMs: 300000 // 5 minutes
        };
    }

    run(context) {
        if (context.input.get) {
            this.getValue(context.input.get, context);
        }
        
        if (context.input.set) {
            this.setValue(context.input.set, context);
        }
        
        if (context.input.delete) {
            this.deleteValue(context.input.delete, context);
        }
        
        if (context.input.clear) {
            this.clearCache(context);
        }
    }
    
    getValue(request, context) {
        const cache = context.state.get('cache') || {};
        const entry = cache[request.key];
        
        if (!entry) {
            context.send({ 
                value: { 
                    key: request.key, 
                    found: false 
                } 
            });
            return;
        }
        
        // Check TTL
        if (entry.expires && Date.now() > entry.expires) {
            delete cache[request.key];
            context.state.set('cache', cache);
            
            context.send({ 
                value: { 
                    key: request.key, 
                    found: false, 
                    expired: true 
                } 
            });
            return;
        }
        
        // Update access time
        entry.lastAccessed = Date.now();
        context.state.set('cache', cache);
        
        context.send({
            value: {
                key: request.key,
                value: entry.value,
                found: true,
                created: entry.created,
                lastAccessed: entry.lastAccessed
            }
        });
    }
    
    setValue(request, context) {
        const cache = context.state.get('cache') || {};
        
        // Enforce size limit
        const keys = Object.keys(cache);
        if (keys.length >= this.config.maxSize && !cache[request.key]) {
            // Remove oldest entry
            const oldest = keys.reduce((min, key) => 
                (!min || cache[key].lastAccessed < cache[min].lastAccessed) ? key : min
            );
            delete cache[oldest];
        }
        
        // Set new value
        const now = Date.now();
        cache[request.key] = {
            value: request.value,
            created: now,
            lastAccessed: now,
            expires: request.ttl ? now + request.ttl : now + this.config.ttlMs
        };
        
        context.state.set('cache', cache);
        
        context.send({
            status: {
                operation: "set",
                key: request.key,
                success: true,
                cacheSize: Object.keys(cache).length
            }
        });
    }
    
    deleteValue(request, context) {
        const cache = context.state.get('cache') || {};
        const existed = cache[request.key] !== undefined;
        
        delete cache[request.key];
        context.state.set('cache', cache);
        
        context.send({
            status: {
                operation: "delete",
                key: request.key,
                existed: existed,
                cacheSize: Object.keys(cache).length
            }
        });
    }
    
    clearCache(context) {
        const cache = context.state.get('cache') || {};
        const count = Object.keys(cache).length;
        
        context.state.set('cache', {});
        
        context.send({
            status: {
                operation: "clear",
                clearedCount: count,
                cacheSize: 0
            }
        });
    }
}

Integration with Browser APIs

Geolocation Actor

class GeolocationActor {
    constructor() {
        this.inports = ["getCurrentPosition", "watchPosition", "clearWatch"];
        this.outports = ["position", "error"];
        this.config = {
            enableHighAccuracy: false,
            timeout: 10000,
            maximumAge: 600000 // 10 minutes
        };
        
        this.watchId = null;
    }

    run(context) {
        if (!navigator.geolocation) {
            context.send({ error: "Geolocation is not supported" });
            return;
        }
        
        if (context.input.getCurrentPosition) {
            this.getCurrentPosition(context);
        }
        
        if (context.input.watchPosition) {
            this.startWatching(context);
        }
        
        if (context.input.clearWatch) {
            this.stopWatching(context);
        }
    }
    
    getCurrentPosition(context) {
        const options = { ...this.config, ...context.input.getCurrentPosition };
        
        navigator.geolocation.getCurrentPosition(
            (position) => {
                context.send({
                    position: {
                        latitude: position.coords.latitude,
                        longitude: position.coords.longitude,
                        accuracy: position.coords.accuracy,
                        altitude: position.coords.altitude,
                        heading: position.coords.heading,
                        speed: position.coords.speed,
                        timestamp: position.timestamp
                    }
                });
            },
            (error) => {
                context.send({
                    error: {
                        code: error.code,
                        message: error.message,
                        timestamp: Date.now()
                    }
                });
            },
            options
        );
    }
    
    startWatching(context) {
        this.stopWatching(context, false);
        
        const options = { ...this.config, ...context.input.watchPosition };
        
        this.watchId = navigator.geolocation.watchPosition(
            (position) => {
                context.send({
                    position: {
                        latitude: position.coords.latitude,
                        longitude: position.coords.longitude,
                        accuracy: position.coords.accuracy,
                        altitude: position.coords.altitude,
                        heading: position.coords.heading,
                        speed: position.coords.speed,
                        timestamp: position.timestamp,
                        isWatching: true
                    }
                });
            },
            (error) => {
                context.send({
                    error: {
                        code: error.code,
                        message: error.message,
                        timestamp: Date.now(),
                        isWatching: true
                    }
                });
            },
            options
        );
        
        context.state.set('watching', true);
    }
    
    stopWatching(context, sendConfirmation = true) {
        if (this.watchId !== null) {
            navigator.geolocation.clearWatch(this.watchId);
            this.watchId = null;
        }
        
        context.state.set('watching', false);
        
        if (sendConfirmation) {
            context.send({
                position: {
                    message: "Stopped watching position",
                    timestamp: Date.now(),
                    isWatching: false
                }
            });
        }
    }
}

Testing and Debugging Actors

Test Helper Functions

// Actor testing utilities
class ActorTester {
    constructor(ActorClass) {
        this.ActorClass = ActorClass;
        this.actor = new ActorClass();
        this.mockState = new Map();
        this.outputs = [];
    }
    
    // Create a mock context for testing
    createMockContext(inputs) {
        const self = this;
        
        return {
            input: inputs,
            state: {
                get: (key) => self.mockState.get(key),
                set: (key, value) => self.mockState.set(key, value),
                has: (key) => self.mockState.has(key),
                remove: (key) => self.mockState.delete(key),
                clear: () => self.mockState.clear(),
                getAll: () => Object.fromEntries(self.mockState),
                setAll: (obj) => {
                    self.mockState.clear();
                    Object.entries(obj).forEach(([k, v]) => self.mockState.set(k, v));
                },
                size: () => self.mockState.size,
                keys: () => Array.from(self.mockState.keys()),
                values: () => Array.from(self.mockState.values())
            },
            send: (outputs) => {
                self.outputs.push({
                    timestamp: Date.now(),
                    outputs: outputs
                });
            }
        };
    }
    
    // Test actor with given inputs
    test(inputs, expectedOutputs) {
        this.outputs = [];
        const context = this.createMockContext(inputs);
        
        // Run the actor
        const result = this.actor.run(context);
        
        // Handle async actors
        if (result instanceof Promise) {
            return result.then(() => this.verifyOutputs(expectedOutputs));
        } else {
            return this.verifyOutputs(expectedOutputs);
        }
    }
    
    verifyOutputs(expectedOutputs) {
        const results = {
            passed: true,
            outputs: this.outputs,
            state: Object.fromEntries(this.mockState),
            errors: []
        };
        
        if (expectedOutputs) {
            // Simple verification - can be enhanced
            if (this.outputs.length !== expectedOutputs.length) {
                results.passed = false;
                results.errors.push(`Expected ${expectedOutputs.length} outputs, got ${this.outputs.length}`);
            }
        }
        
        return results;
    }
}

// Example usage
async function testCounterActor() {
    const tester = new ActorTester(CounterActor);
    
    // Test increment
    const result1 = await tester.test({ increment: 1 });
    console.log("Increment test:", result1);
    
    // Test reset
    const result2 = await tester.test({ reset: true });
    console.log("Reset test:", result2);
}

Debug Actor Wrapper

class DebugActorWrapper {
    constructor(actor, name) {
        this.actor = actor;
        this.name = name || actor.constructor.name;
        this.executionCount = 0;
        this.totalExecutionTime = 0;
    }
    
    get inports() { return this.actor.inports; }
    get outports() { return this.actor.outports; }
    get config() { return this.actor.config; }
    set config(value) { this.actor.config = value; }
    
    run(context) {
        this.executionCount++;
        const startTime = performance.now();
        
        console.group(`🎭 ${this.name} #${this.executionCount}`);
        console.log("Inputs:", context.input);
        console.log("State before:", context.state.getAll());
        
        // Wrap the send method to log outputs
        const originalSend = context.send;
        context.send = (outputs) => {
            console.log("Outputs:", outputs);
            originalSend(outputs);
        };
        
        try {
            const result = this.actor.run(context);
            
            const endTime = performance.now();
            const executionTime = endTime - startTime;
            this.totalExecutionTime += executionTime;
            
            console.log("State after:", context.state.getAll());
            console.log(`Execution time: ${executionTime.toFixed(2)}ms`);
            console.log(`Average time: ${(this.totalExecutionTime / this.executionCount).toFixed(2)}ms`);
            console.groupEnd();
            
            return result;
            
        } catch (error) {
            console.error("Actor error:", error);
            console.groupEnd();
            throw error;
        }
    }
}

// Usage
const debugCounter = new DebugActorWrapper(new CounterActor(), "MyCounter");
network.registerActor("CounterActor", debugCounter);

Performance Optimization

Efficient Actor Patterns

// ✅ Good: Minimal state operations
class EfficientActor {
    run(context) {
        // Read state once
        const state = context.state.getAll();
        
        // Modify locally
        state.counter = (state.counter || 0) + 1;
        state.lastUpdate = Date.now();
        
        // Write once
        context.state.setAll(state);
        
        context.send({ output: state.counter });
    }
}

// ❌ Avoid: Multiple state operations
class InefficientActor {
    run(context) {
        // Multiple gets/sets are slower
        const counter = context.state.get('counter') || 0;
        context.state.set('counter', counter + 1);
        
        const lastUpdate = Date.now();
        context.state.set('lastUpdate', lastUpdate);
        
        context.send({ output: counter + 1 });
    }
}

// ✅ Good: Batch processing
class BatchActor {
    constructor() {
        this.inports = ["input"];
        this.outports = ["output"];
        this.config = { batchSize: 10 };
    }
    
    run(context) {
        const batch = context.state.get('batch') || [];
        batch.push(context.input.input);
        
        if (batch.length >= this.config.batchSize) {
            // Process entire batch at once
            const results = this.processBatch(batch);
            context.send({ output: results });
            context.state.set('batch', []);
        } else {
            context.state.set('batch', batch);
        }
    }
    
    processBatch(items) {
        return items.map(item => ({ processed: item, timestamp: Date.now() }));
    }
}

Next Steps

Browser actors provide a powerful way to create interactive, stateful workflows that run entirely in the browser. Use the patterns and examples above to build robust, performant actor-based applications.

Zeal IDE Integration

Reflow connects to Zeal IDE via the ZIP (Zeal Integration Protocol) for template registration, real-time event streaming, and trace session submission. This connection is established by the ZipSession module in reflow_server.

Architecture

graph LR
    subgraph "Reflow Node"
        ENG[ExecutionEngine]
        EB[EventBridge]
        TC[TraceCollector]
        ZIP[ZipSession]
    end

    subgraph "Zeal IDE"
        TR[Template Registry]
        WS[/ws/zip WebSocket]
        TA[TracesAPI]
    end

    ENG -->|EngineEvent| EB
    EB -->|forward| TC
    EB -->|forward| ZIP

    ZIP -->|register templates| TR
    ZIP -->|ZIP events| WS
    TC -->|trace sessions| TA

Configuration

The ZIP session is activated when zeal_url is set in ServerConfig:

let config = ServerConfig {
    zeal_url: Some("http://localhost:3000".to_string()),
    namespace: "reflow".to_string(),
    node_id: "reflow-abc12345".to_string(),
    // ...
};

Or via environment/config file:

{
  "zeal_url": "http://localhost:3000",
  "namespace": "reflow",
  "node_id": "reflow-node-1"
}

When zeal_url is None, the server runs in headless mode with no Zeal connection.

ZIP Session Lifecycle

1. Template Registration

On startup, the session registers all available actor templates with Zeal:

POST /api/templates/register
{
  "namespace": "reflow",
  "templates": [
    {
      "id": "tpl_http_request",
      "title": "http request",
      "category": "reflow",
      "icon": "cpu",
      "runtime": { "executor": "reflow", "version": "0.1.0" }
    },
    // ... 10 native templates + 6,697 API actor templates
  ]
}

Native actors are registered with basic metadata. API actors include additional fields:

  • subcategory — service name (e.g., "Slack")
  • icon — brand icon
  • required_env_vars — authentication keys needed
  • ports — typed input/output port declarations
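
A registered API actor entry therefore looks roughly like this (values are illustrative, patterned on the metadata fields described below under API Service Actors):

{
  "id": "api_slack_send_message",
  "title": "Send Message",
  "category": "api",
  "subcategory": "Slack",
  "icon": "slack",
  "required_env_vars": ["SLACK_API_KEY"],
  "ports": { ... },
  "runtime": { "executor": "reflow", "version": "0.1.0" }
}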

2. WebSocket Connection

After registration, the session opens a WebSocket to Zeal's /ws/zip endpoint:

ws://localhost:3000/ws/zip

The URL is derived from the HTTP URL by replacing http(s):// with ws(s):// and appending the ZIP WebSocket path.
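
A minimal sketch of that rewrite (the helper name is illustrative; ZipSession performs the equivalent internally):

// Illustrative only; not the actual ZipSession function.
fn zip_ws_url(zeal_url: &str) -> String {
    let base = if let Some(rest) = zeal_url.strip_prefix("https://") {
        format!("wss://{rest}")
    } else if let Some(rest) = zeal_url.strip_prefix("http://") {
        format!("ws://{rest}")
    } else {
        zeal_url.to_owned()
    };
    // Append the ZIP WebSocket path.
    format!("{}/ws/zip", base.trim_end_matches('/'))
}

// zip_ws_url("http://localhost:3000") == "ws://localhost:3000/ws/zip"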

3. Event Streaming

During workflow execution, the EventBridge forwards EngineEvents to the ZipSession, which translates them into ZipExecutionEvents and sends them as JSON text frames:

EngineEventType | ZipExecutionEvent
Started | ExecutionStarted
NodeExecuting | NodeExecuting
ActorCompleted | NodeCompleted (with duration, output_size)
ActorFailed | NodeFailed (with error details)
Completed | ExecutionCompleted (with summary stats)
Failed | ExecutionFailed (with error)

Events like MessageSent and NetworkIdle have no direct ZIP mapping and are silently dropped.

4. Shutdown

The session shuts down gracefully when shutdown() is called, which notifies the event loop via a tokio::sync::Notify.
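
The signalling pattern behind this, as a standalone sketch (the channel and event types are assumptions, not the actual ZipSession internals):

use std::sync::Arc;
use tokio::sync::{mpsc, Notify};

#[tokio::main]
async fn main() {
    let (tx, mut events) = mpsc::channel::<String>(16);
    let shutdown = Arc::new(Notify::new());
    let signal = shutdown.clone();

    let session = tokio::spawn(async move {
        loop {
            tokio::select! {
                // shutdown() fires the Notify; the loop exits gracefully.
                _ = signal.notified() => break,
                Some(event) = events.recv() => {
                    // Translate to a ZipExecutionEvent and send a text frame.
                    println!("forwarding {event}");
                }
            }
        }
    });

    tx.send("NodeExecuting".into()).await.unwrap();
    shutdown.notify_one(); // what shutdown() effectively does
    session.await.unwrap();
}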

TraceCollector

In addition to real-time WebSocket events, the TraceCollector submits detailed per-node trace data to Zeal's TracesAPI over HTTP:

Session Lifecycle

  1. Begin — POST /api/traces/sessions creates a trace session for the execution
  2. Submit Events — POST /api/traces/sessions/{id}/events submits batched TraceEvents (batch size: 50)
  3. Complete — POST /api/traces/sessions/{id}/complete finalizes with a SessionSummary
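
The batching in step 2 amounts to a buffer that flushes every 50 events. A generic sketch (the real TraceCollector posts each full batch to the events endpoint):

const BATCH_SIZE: usize = 50;

/// Generic batch buffer: push events, receive a full batch back when it fills.
struct Batcher<T> {
    pending: Vec<T>,
}

impl<T> Batcher<T> {
    fn new() -> Self {
        Self { pending: Vec::with_capacity(BATCH_SIZE) }
    }

    /// Returns Some(batch) once BATCH_SIZE events have accumulated.
    fn push(&mut self, event: T) -> Option<Vec<T>> {
        self.pending.push(event);
        (self.pending.len() >= BATCH_SIZE)
            .then(|| std::mem::take(&mut self.pending))
    }
}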

Trace Events

Each EngineEvent is translated into a TraceEvent with:

  • timestamp — event time
  • node_id — actor/node identifier
  • event_type — Input, Output, or Error
  • data — TraceData with size, data type, and optional preview
  • duration — processing time (for completed nodes)
  • error — error details (for failed nodes)
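
Putting those fields together, a completed node's trace event would serialize to something like this (shape inferred from the field list above; exact names may differ):

{
  "timestamp": 1710300000000,
  "node_id": "call",
  "event_type": "Output",
  "data": {
    "size": 1024,
    "data_type": "object",
    "preview": "{\"status\":200}"
  },
  "duration": 42
}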

Session Summary

On completion, a summary is submitted:

SessionSummary {
    total_nodes: 10,
    successful_nodes: 8,
    failed_nodes: 2,
    total_duration: 1500,        // ms
    total_data_processed: 45000, // bytes
}

EventBridge

The EventBridge is the glue between the execution engine and the observability consumers. One bridge task is spawned per execution:

// In REST API handler after starting execution
if let Some(bridge) = &state.event_bridge {
    bridge.attach(workflow_id, execution_id, event_rx);
}

The bridge:

  1. Begins a trace session via TraceCollector
  2. Drains the engine's flume::Receiver<EngineEvent> channel
  3. Forwards each event to both TraceCollector (for HTTP traces) and ZipSession (for WebSocket)
  4. Tracks terminal state (success/failure)
  5. Completes the trace session when the channel closes
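
Sketched as a loop (the closures stand in for TraceCollector and ZipSession handles, whose real method names are not shown here):

use flume::Receiver;

enum EngineEvent { Started, ActorCompleted, Failed }

// Minimal sketch of the per-execution bridge task under assumed shapes.
async fn bridge_task(
    events: Receiver<EngineEvent>,
    mut trace: impl FnMut(&EngineEvent), // HTTP trace path
    mut zip: impl FnMut(&EngineEvent),   // WebSocket path
) -> bool {
    let mut failed = false;
    // recv_async errors once the engine drops its sender; that is the
    // signal to complete the trace session.
    while let Ok(event) = events.recv_async().await {
        if matches!(event, EngineEvent::Failed) {
            failed = true;
        }
        trace(&event);
        zip(&event);
    }
    !failed // terminal state for the session summary
}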

REST API

The Reflow server exposes an HTTP and WebSocket API for headless workflow execution. This is the primary interface for clients that don't use the Zeal IDE.

Built with Axum, the API provides workflow execution, status monitoring, Zeal format conversion, and real-time WebSocket streaming.

Base URL

http://{bind_address}:{port}

Default: http://0.0.0.0:8080

Endpoints

Health Check

GET /health

Returns server health status.

Response:

{
  "success": true,
  "data": "Server is healthy",
  "timestamp": 1710300000000
}

Start Workflow

POST /workflows

Starts a workflow execution from a Reflow graph JSON.

Request Body:

{
  "graph_json": {
    "processes": {},
    "connections": [],
    "inports": {},
    "outports": {},
    "groups": [],
    "properties": {}
  },
  "input": {},
  "metadata": {
    "execution_id": "exec-001",
    "workflow_id": "workflow-001",
    "source": "api",
    "webhook_url": null,
    "enable_tracing": true
  }
}

Response:

{
  "success": true,
  "data": {
    "execution_id": "exec-001",
    "status": "started",
    "message": "Workflow started in background"
  },
  "timestamp": 1710300000000
}

When an EventBridge is configured (Zeal connection active), execution events are automatically forwarded to TraceCollector and ZipSession.

Get Execution Status

GET /workflows/{execution_id}

Returns the current state of an execution.

Response:

{
  "success": true,
  "data": {
    "id": "exec-001",
    "status": "completed",
    "result": { ... }
  },
  "timestamp": 1710300000000
}

Status values: queued, running, completed, failed, cancelled

Cancel Workflow

POST /workflows/{execution_id}/cancel

Cancels a running workflow execution.

Response:

{
  "success": true,
  "data": {
    "execution_id": "exec-001",
    "status": "cancelled"
  },
  "timestamp": 1710300000000
}

Execute Zeal Workflow

POST /zeal/workflows

Accepts a Zeal-format workflow, converts it to Reflow graph format, and executes it.

Request Body:

{
  "workflow": {
    "id": "wf-001",
    "name": "My Workflow",
    "graphs": [
      {
        "nodes": [...],
        "connections": [...]
      }
    ]
  },
  "input": {}
}

Response:

{
  "success": true,
  "data": {
    "execution_id": "auto-generated-id",
    "status": "started",
    "message": "Zeal workflow converted and started"
  },
  "timestamp": 1710300000000
}

Convert Zeal Workflow

POST /zeal/convert

Converts a Zeal workflow to Reflow graph format without executing it. Useful for inspection and debugging.

Request Body: A ZealWorkflow JSON object.

Response:

{
  "success": true,
  "data": {
    "reflow_graph": { ... },
    "required_actors": ["tpl_http_request", "tpl_if_branch"],
    "conversion_metadata": {
      "source_workflow_id": "wf-001",
      "source_workflow_name": "My Workflow",
      "node_count": 5,
      "connection_count": 4
    }
  },
  "timestamp": 1710300000000
}

WebSocket API

GET /ws

Upgrades to a WebSocket connection for real-time workflow interaction.

Message Format

All messages are JSON with a type field:

{ "type": "message_type", ... }

Start Workflow (WebSocket)

{
  "type": "start_workflow",
  "data": {
    "graph_json": { ... },
    "input": { ... },
    "metadata": {
      "execution_id": "exec-001",
      "workflow_id": "workflow-001",
      "source": "websocket"
    }
  }
}

Response:

{
  "type": "workflow_started",
  "success": true,
  "execution_id": "exec-001"
}

Subscribe to Events

{
  "type": "subscribe_workflow",
  "execution_id": "exec-001"
}

Acknowledgement:

{
  "type": "subscription_ack",
  "success": true,
  "execution_id": "exec-001"
}

Event stream (per network event):

{
  "type": "network_event",
  "execution_id": "exec-001",
  "event": { ... },
  "timestamp": 1710300000000
}

Cancel Workflow (WebSocket)

{
  "type": "cancel_workflow",
  "execution_id": "exec-001"
}

Response:

{
  "type": "workflow_cancelled",
  "success": true,
  "execution_id": "exec-001"
}

Server Configuration

pub struct ServerConfig {
    pub port: u16,                              // default: 8080
    pub bind_address: String,                   // default: "0.0.0.0"
    pub max_connections: usize,                 // default: 1000
    pub cors_enabled: bool,                     // default: true
    pub rate_limit_requests_per_minute: usize,  // default: 100
    pub zeal_url: Option<String>,               // Zeal IDE URL (enables ZIP)
    pub namespace: String,                      // default: "reflow"
    pub node_id: String,                        // default: "reflow-{uuid8}"
}

CORS is enabled with a permissive policy by default via tower_http::cors::CorsLayer.

Error Responses

All error responses follow the ApiResponse format:

{
  "success": false,
  "error": "Error description",
  "timestamp": 1710300000000
}

HTTP status codes:

  • 400 — Bad request (invalid workflow format)
  • 404 — Execution not found
  • 500 — Internal server error

Standard Component Library

Reflow's standard library provides native actor implementations exposed through reflow_components. These are the templates discoverable via get_actor_for_template(template_id) and get_template_mapping().

If you are building an application, depend on reflow_rt — it re-exports the catalog as reflow_rt::components and owns the feature gates (gpu, av-core, ml, camera-native, video-encode, window-events, browser-events, api_services, …).

Script execution (JavaScript, Python, SQL, etc.) is handled by dynASB via ComponentSpec::Script — this crate only contains native actors.

Registry usage

use reflow_rt::prelude::*;

let actor = get_actor_for_template("tpl_http_request")
    .expect("template registered");

net.register_actor_arc("tpl_http_request", actor)?;
net.add_node("call", "tpl_http_request", Some(/* config */))?;

get_template_mapping() returns HashMap<String, String> of template ID → actor struct name for tools and editors.
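
For example, a tool can dump the catalog (assuming get_template_mapping is re-exported from the prelude like get_actor_for_template):

use reflow_rt::prelude::*;

// Print every template ID and the actor struct that backs it.
for (template_id, actor_name) in get_template_mapping() {
    println!("{template_id} -> {actor_name}");
}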

Feature gates

Feature | What it enables
av-core | Audio / DSP actors (biquad, compressor, FFT, gain, spectrum, etc.)
gpu | wgpu-backed rendering: scene render, SDF ray march, shader graph, post-processing
window-events | tpl_*_input and tpl_window_event
browser-events / browser | Browser automation actors
camera-native | Native camera capture (tpl_camera_capture)
video-encode | Native H.264 video encoding (tpl_video_encoder)
ml | CV preprocess, inference boundary, decode actors, taskpacks
api_services | ~6,700 generated API actors across ~90 third-party services

Complete template catalog

The tables below are organized by the sections in registry.rs. Feature-gated actors are noted in their section heading — they are only resolvable when the matching feature is enabled.

The API-services catalog (api_* templates) is not listed here because of its size; see api-actors.md for the full list.

For the media / ML pipeline stack see ml-stack.md and media-actors.md.


Asset DB

Template ID | Actor | Purpose
tpl_asset_store | AssetStoreActor | asset store
tpl_asset_load | AssetLoadActor | asset load
tpl_asset_query | AssetQueryActor | asset query

Scene Systems (ECS — read/write AssetDB components)

Template ID | Actor | Purpose
tpl_scene_physics | ScenePhysicsSystemActor | scene physics
tpl_scene_camera | SceneCameraSystemActor | scene camera
tpl_scene_light_collector | SceneLightCollectorActor | scene light collector
tpl_scene_material | SceneMaterialSystemActor | scene material
tpl_scene_billboard | SceneBillboardSystemActor | scene billboard
tpl_scene_skybox | SceneSkyboxSystemActor | scene skybox
tpl_scene_weather | SceneWeatherSystemActor | scene weather

Universal Systems (motion design, interactive animation, design engineering)

Template ID | Actor | Purpose
tpl_tween_system | TweenSystemActor | tween system
tpl_timeline_system | TimelineSystemActor | timeline system
tpl_state_machine_system | StateMachineSystemActor | state machine system
tpl_behavior_system | BehaviorSystemActor | behavior system
tpl_layout_sync | LayoutSyncSystemActor | layout sync
tpl_text_render | TextRenderSystemActor | text render
tpl_text_sdf | TextSdfSystemActor | text sdf

Integration

Template ID | Actor | Purpose
tpl_http_request | HttpRequestActor | http request
tpl_browser_screencast | BrowserScreencastActor | browser screencast

Flow Control

Template ID | Actor | Purpose
tpl_fsm | FsmActor | fsm
tpl_hit_test | HitTestActor | hit test
tpl_signal | SignalActor | signal
tpl_subscriber | SubscriberActor | subscriber
tpl_if_branch | ConditionalBranchActor | if branch
tpl_switch | SwitchCaseActor | switch
tpl_loop | LoopActor | loop

Scene

Template ID | Actor | Purpose
tpl_component | ComponentNodeActor | component
tpl_prefab | PrefabActor | prefab
tpl_instance | InstanceActor | instance
tpl_scene_graph | SceneGraphActor | scene graph
tpl_terrain | TerrainActor | terrain

Input Events (feature-gated)

Template ID | Actor | Purpose
tpl_keyboard_input | KeyboardInputActor | keyboard input
tpl_mouse_input | MouseInputActor | mouse input
tpl_gamepad_input | GamepadInputActor | gamepad input
tpl_touch_input | TouchInputActor | touch input
tpl_window_event | WindowEventActor | window event

Triggers

Template ID | Actor | Purpose
tpl_interval_trigger | IntervalTriggerActor | interval trigger
tpl_cron_trigger | CronTriggerActor | cron trigger

Server

Template ID | Actor | Purpose
tpl_server_request | ServerRequestActor | server request
tpl_server_response | ServerResponseActor | server response

Flow Utilities

Template ID | Actor | Purpose
tpl_map | MapActor | map
tpl_filter | FilterActor | filter
tpl_reduce | ReduceActor | reduce
tpl_merge | MergeActor | merge
tpl_split | SplitActor | split
tpl_delay | DelayActor | delay
tpl_gate | GateActor | gate
tpl_collect | CollectActor | collect
tpl_passthrough | PassthroughActor | passthrough

Data Processing

Template ID | Actor | Purpose
tpl_data_emit | DataEmitActor | data emit
tpl_data_transformer | DataTransformActor | data transformer
tpl_data_operations | DataOperationsActor | data operations
tpl_generator | GeneratorActor | generator

Logic

Template ID | Actor | Purpose
tpl_rules_engine | RulesEngineActor | rules engine

Media

Template ID | Actor | Purpose
tpl_image_input | ImageInputActor | image input
tpl_audio_input | AudioInputActor | audio input
tpl_video_input | VideoInputActor | video input
tpl_camera_capture | CameraCaptureActor | camera capture

Math

Template ID | Actor | Purpose
tpl_math_add | MathAddActor | math add
tpl_math_subtract | MathSubtractActor | math subtract
tpl_math_multiply | MathMultiplyActor | math multiply
tpl_math_divide | MathDivideActor | math divide
tpl_math_modulo | MathModuloActor | math modulo
tpl_math_power | MathPowerActor | math power
tpl_math_sqrt | MathSqrtActor | math sqrt
tpl_math_absolute | MathAbsoluteActor | math absolute
tpl_math_clamp | MathClampActor | math clamp
tpl_math_min_max | MathMinMaxActor | math min max
tpl_math_round | MathRoundActor | math round
tpl_math_random | MathRandomActor | math random
tpl_math_average | MathAverageActor | math average
tpl_math_sum | MathSumActor | math sum
tpl_math_statistics | MathStatisticsActor | math statistics
tpl_math_expression | MathExpressionActor | math expression

Vector3

Template ID | Actor | Purpose
tpl_vec3 | Vec3Actor | vec3
tpl_vec3_add | Vec3AddActor | vec3 add
tpl_vec3_subtract | Vec3SubtractActor | vec3 subtract
tpl_vec3_scale | Vec3ScaleActor | vec3 scale
tpl_vec3_dot | Vec3DotActor | vec3 dot
tpl_vec3_cross | Vec3CrossActor | vec3 cross
tpl_vec3_normalize | Vec3NormalizeActor | vec3 normalize
tpl_vec3_length | Vec3LengthActor | vec3 length
tpl_vec3_distance | Vec3DistanceActor | vec3 distance
tpl_vec3_lerp | Vec3LerpActor | vec3 lerp
tpl_vec3_reflect | Vec3ReflectActor | vec3 reflect

Matrix4

Template ID | Actor | Purpose
tpl_mat4_identity | Mat4IdentityActor | mat4 identity
tpl_mat4_multiply | Mat4MultiplyActor | mat4 multiply
tpl_mat4_transform | Mat4TransformActor | mat4 transform
tpl_mat4_translate | Mat4TranslateActor | mat4 translate
tpl_mat4_scale | Mat4ScaleActor | mat4 scale
tpl_mat4_rotate_x | Mat4RotateXActor | mat4 rotate x
tpl_mat4_rotate_y | Mat4RotateYActor | mat4 rotate y
tpl_mat4_rotate_z | Mat4RotateZActor | mat4 rotate z
tpl_mat4_look_at | Mat4LookAtActor | mat4 look at
tpl_mat4_perspective | Mat4PerspectiveActor | mat4 perspective

Quaternion

Template ID | Actor | Purpose
tpl_quat_from_euler | QuatFromEulerActor | quat from euler
tpl_quat_multiply | QuatMultiplyActor | quat multiply
tpl_quat_slerp | QuatSlerpActor | quat slerp
tpl_quat_rotate_vec3 | QuatRotateVec3Actor | quat rotate vec3

Procedural

Template ID | Actor | Purpose
tpl_noise_generator | NoiseGeneratorActor | noise generator

Procedural / Heightmap

Template ID | Actor | Purpose
tpl_image_to_heightmap | ImageToHeightmapActor | image to heightmap
tpl_heightmap_to_image | HeightmapToImageActor | heightmap to image
tpl_heightmap_to_mesh | HeightmapToMeshActor | heightmap to mesh
tpl_voronoi | VoronoiActor | voronoi
tpl_lsystem | LSystemActor | lsystem
tpl_particle_emitter | ParticleEmitterActor | particle emitter
tpl_triplanar_texture | TriplanarTextureActor | triplanar texture
tpl_mesh_combine | MeshCombineActor | mesh combine
tpl_tube_mesh | TubeMeshActor | tube mesh
tpl_vertex_color | VertexColorActor | vertex color
tpl_uv_texture | UVTextureActor | uv texture

Text / Utilities

Template ID | Actor | Purpose
tpl_json_parser | JsonParserActor | json parser
tpl_regex_matcher | RegexMatcherActor | regex matcher
tpl_date_time | DateTimeActor | date time

Image Codecs

Template ID | Actor | Purpose
tpl_image_decode | ImageDecodeActor | image decode
tpl_image_encode | ImageEncodeActor | image encode

File I/O

Template ID | Actor | Purpose
tpl_file_load | FileLoadActor | file load
tpl_file_save | FileSaveActor | file save

Stream Display

Template ID | Actor | Purpose
tpl_image_stream_display | ImageStreamDisplayActor | image stream display
tpl_audio_stream_display | AudioStreamDisplayActor | audio stream display

Stream Operations

Template ID | Actor | Purpose
tpl_bytes_to_stream | BytesToStreamActor | bytes to stream
tpl_stream_to_bytes | StreamToBytesActor | stream to bytes
tpl_stream_tee | StreamTeeActor | stream tee
tpl_stream_buffer | StreamBufferActor | stream buffer
tpl_stream_throttle | StreamThrottleActor | stream throttle
tpl_stream_stats | StreamStatsActor | stream stats

Image DSP

Template ID | Actor | Purpose
tpl_grayscale_filter | GrayscaleFilterActor | grayscale filter
tpl_brightness_contrast | BrightnessContrastActor | brightness contrast
tpl_chroma_key | ChromaKeyActor | chroma key

Audio DSP

Template ID | Actor | Purpose
tpl_audio_gain | AudioGainActor | audio gain
tpl_biquad_filter | BiquadFilterActor | biquad filter
tpl_compressor | CompressorActor | compressor
tpl_audio_normalize | AudioNormalizeActor | audio normalize
tpl_noise_gate | NoiseGateActor | noise gate
tpl_de_esser | DeEsserActor | de esser
tpl_audio_spectrum | AudioSpectrumActor | audio spectrum
tpl_silence_detect | SilenceDetectActor | silence detect

Audio DSP (continued)

Template ID | Actor | Purpose
tpl_equalizer | EqualizerActor | equalizer
tpl_limiter | LimiterActor | limiter
tpl_dc_offset | DCOffsetActor | dc offset
tpl_envelope_follower | EnvelopeFollowerActor | envelope follower
tpl_crossover | CrossoverActor | crossover
tpl_peak_detect | PeakDetectActor | peak detect
tpl_ifft | IFFTActor | ifft
tpl_convolve | ConvolveActor | convolve
tpl_noise_reduction | NoiseReductionActor | noise reduction
tpl_pitch_shift | PitchShiftActor | pitch shift
tpl_time_stretch | TimeStretchActor | time stretch
tpl_correlator | CorrelatorActor | correlator

Image DSP (continued)

Template ID | Actor | Purpose
tpl_image_resize | ImageResizeActor | image resize

SDF (always available — pure IR composition)

Template ID | Actor | Purpose
tpl_sdf_sphere | SdfSphereActor | sdf sphere
tpl_sdf_box | SdfBoxActor | sdf box
tpl_sdf_round_box | SdfRoundBoxActor | sdf round box
tpl_sdf_ellipsoid | SdfEllipsoidActor | sdf ellipsoid
tpl_sdf_round_box_shell | SdfRoundBoxShellActor | sdf round box shell
tpl_sdf_cylinder | SdfCylinderActor | sdf cylinder
tpl_sdf_torus | SdfTorusActor | sdf torus
tpl_sdf_capsule | SdfCapsuleActor | sdf capsule
tpl_sdf_cone | SdfConeActor | sdf cone
tpl_sdf_tapered_capsule | SdfTaperedCapsuleActor | sdf tapered capsule
tpl_sdf_tube_path | SdfTubePathActor | sdf tube path
tpl_sdf_plane | SdfPlaneActor | sdf plane
tpl_sdf_inf_repeat | SdfInfRepeatActor | sdf inf repeat
tpl_sdf_puddle | SdfPuddleActor | sdf puddle
tpl_sdf_union | SdfUnionActor | sdf union
tpl_sdf_intersection | SdfIntersectionActor | sdf intersection
tpl_sdf_difference | SdfDifferenceActor | sdf difference
tpl_sdf_smooth_union | SdfSmoothUnionActor | sdf smooth union
tpl_sdf_smooth_intersection | SdfSmoothIntersectionActor | sdf smooth intersection
tpl_sdf_smooth_difference | SdfSmoothDifferenceActor | sdf smooth difference
tpl_sdf_stamp_compose | SdfStampComposeActor | sdf stamp compose
tpl_sdf_translate | SdfTranslateActor | sdf translate
tpl_sdf_rotate | SdfRotateActor | sdf rotate
tpl_sdf_scale | SdfScaleActor | sdf scale
tpl_sdf_twist | SdfTwistActor | sdf twist
tpl_sdf_bend | SdfBendActor | sdf bend
tpl_sdf_round | SdfRoundActor | sdf round
tpl_sdf_shell | SdfShellActor | sdf shell
tpl_sdf_mirror | SdfMirrorActor | sdf mirror
tpl_sdf_repeat | SdfRepeatActor | sdf repeat
tpl_sdf_displace | SdfDisplaceActor | sdf displace
tpl_sdf_material | SdfMaterialActor | sdf material
tpl_sdf_shade_slot | SdfShadeSlotActor | sdf shade slot
tpl_sdf_scene | SdfSceneActor | sdf scene

SDF path (always available — pure IR composition)

Template ID | Actor | Purpose
tpl_sdf_path | SdfPathActor | sdf path

GPU compute (requires wgpu)

Template ID | Actor | Purpose
tpl_sdf_live_render | SdfLiveRenderActor | sdf live render
tpl_sdf_render | SdfRenderActor | sdf render
tpl_sdf_marching_cubes | SdfMarchingCubesActor | sdf marching cubes
tpl_mesh_to_sdf | MeshToSdfActor | mesh to sdf
tpl_scene_render | SceneRenderActor | scene render
tpl_gpu_2d_render | Gpu2DRenderActor | gpu 2d render
tpl_font_load | FontLoadActor | font load
tpl_glyph_atlas | GlyphAtlasActor | glyph atlas

Post-processing

Template ID | Actor | Purpose
tpl_tone_map | ToneMapActor | tone map
tpl_bloom | BloomPostProcessActor | bloom
tpl_ssao | SSAOActor | ssao
tpl_shadow_map | ShadowMapActor | shadow map

Shader Graph (node-based materials)

Template ID | Actor | Purpose
tpl_shader_compiler | ShaderCompilerActor | shader compiler
tpl_shader_principled_bsdf | ShaderPrincipledBsdfActor | shader principled bsdf
tpl_shader_material_output | ShaderMaterialOutputActor | shader material output
tpl_shader_const_float | ShaderConstFloatActor | shader const float
tpl_shader_const_color | ShaderConstColorActor | shader const color
tpl_shader_texcoord | ShaderTexCoordActor | shader texcoord
tpl_shader_position | ShaderPositionInputActor | shader position
tpl_shader_normal | ShaderNormalInputActor | shader normal
tpl_shader_time | ShaderTimeInputActor | shader time
tpl_shader_vertex_color | ShaderVertexColorActor | shader vertex color
tpl_shader_image_texture | ShaderImageTextureActor | shader image texture
tpl_shader_noise_texture | ShaderNoiseTextureActor | shader noise texture
tpl_shader_checker_texture | ShaderCheckerTextureActor | shader checker texture
tpl_shader_math | ShaderMathActor | shader math
tpl_shader_color_mix | ShaderColorMixActor | shader color mix
tpl_shader_color_ramp | ShaderColorRampActor | shader color ramp
tpl_shader_fresnel | ShaderFresnelActor | shader fresnel
tpl_shader_normal_map | ShaderNormalMapActor | shader normal map
tpl_shader_bump_map | ShaderBumpMapActor | shader bump map
tpl_shader_mapping | ShaderMappingActor | shader mapping
tpl_shader_separate_xyz | ShaderSeparateXYZActor | shader separate xyz
tpl_shader_combine_xyz | ShaderCombineXYZActor | shader combine xyz
tpl_shader_clamp | ShaderClampActor | shader clamp
tpl_shader_map_range | ShaderMapRangeActor | shader map range
tpl_shader_voronoi_texture | ShaderVoronoiTextureActor | shader voronoi texture
tpl_shader_gradient_texture | ShaderGradientTextureActor | shader gradient texture
tpl_shader_brick_texture | ShaderBrickTextureActor | shader brick texture
tpl_shader_musgrave_texture | ShaderMusgraveTextureActor | shader musgrave texture
tpl_shader_wave_texture | ShaderWaveTextureActor | shader wave texture

Animation

Template ID | Actor | Purpose
tpl_skeleton | SkeletonActor | skeleton
tpl_animation_clip | AnimationClipActor | animation clip
tpl_skin_bind | SkinBindActor | skin bind
tpl_animation_sampler | AnimationSamplerActor | animation sampler
tpl_skinning | SkinningActor | skinning
tpl_animation_time | AnimationTimeActor | animation time
tpl_animation_mixer | AnimationMixerActor | animation mixer
tpl_keyframe | KeyframeActor | keyframe
tpl_animation_timeline | AnimationTimelineActor | animation timeline
tpl_sprite_animation | SpriteAnimationActor | sprite animation
tpl_animation_blend_tree | AnimationBlendTreeActor | animation blend tree
tpl_animation_fsm | AnimationFsmActor | animation fsm
tpl_ik_solver | IKSolverActor | ik solver
tpl_root_motion | RootMotionActor | root motion
tpl_animation_layer | AnimationLayerActor | animation layer
tpl_morph_target | MorphTargetActor | morph target
tpl_animation_event | AnimationEventActor | animation event
tpl_character_controller | CharacterControllerActor | character controller

Video

Template ID | Actor | Purpose
tpl_frame_buffer | ? | frame buffer
tpl_render_frame_collector | RenderFrameCollectorActor | render frame collector
tpl_video_encoder | VideoEncoderActor | video encoder

Mesh export

Template ID | Actor | Purpose
tpl_obj_export | ObjExportActor | obj export
tpl_stl_export | StlExportActor | stl export
tpl_gltf_export | GltfExportActor | gltf export

Model/scene import

| Template ID | Actor | Purpose |
|---|---|---|
| tpl_stl_import | StlImportActor | stl import |
| tpl_obj_import | ObjImportActor | obj import |
| tpl_gltf_import | GltfImportActor | gltf import |
| tpl_mesh_import | MeshImportActor | mesh import |
| tpl_scene_import | SceneImportActor | scene import |
| tpl_fbx_import | FbxImportActor | fbx import |

2D Vector Graphics

| Template ID | Actor | Purpose |
|---|---|---|
| tpl_shape_2d | Shape2DActor | shape 2d |
| tpl_vector_rasterize | VectorRasterizeActor | vector rasterize |
| tpl_gaussian_blur | GaussianBlurActor | gaussian blur |
| tpl_blend_mode | BlendModeActor | blend mode |
| tpl_canvas_2d | Canvas2DActor | canvas 2d |
| tpl_background | BackgroundActor | background |

Media / ML stack (feature-gated; mock-first inference boundary)

| Template ID | Actor | Purpose |
|---|---|---|
| tpl_cv_image_to_tensor | ImageToTensorActor | cv image to tensor |
| tpl_cv_resize_letterbox | ResizeLetterboxActor | cv resize letterbox |
| tpl_cv_video_stream_to_frames | VideoStreamToFramesActor | cv video stream to frames |
| tpl_cv_normalize_tensor | NormalizeTensorActor | cv normalize tensor |
| tpl_cv_tensor_crop_roi | TensorCropRoiActor | cv tensor crop roi |
| tpl_cv_detection_to_roi | DetectionToRoiActor | cv detection to roi |
| tpl_cv_temporal_smoother | TemporalSmootherActor | cv temporal smoother |
| tpl_ml_load_model | LoadModelActor | ml load model |
| tpl_ml_run_inference | RunInferenceActor | ml run inference |
| tpl_ml_decode_detections | DecodeDetectionsActor | ml decode detections |
| tpl_ml_decode_landmarks | DecodeLandmarksActor | ml decode landmarks |
| tpl_ml_packet_probe | PacketProbeActor | ml packet probe |

API Service Actors

Reflow includes 6,697 pre-generated actor templates spanning 88 API services. These actors are code-generated from OpenAPI specifications and provide native workflow nodes for interacting with third-party APIs.

Overview

API actors are gated behind the api_services feature in reflow_rt, which forwards to reflow_components/api_services. Each actor maps to a single API endpoint and is registered with Zeal as a template with:

  • A template ID (e.g., api_slack_send_message)
  • A human-readable title (e.g., "Send Message")
  • Input/output port declarations
  • Required environment variables for authentication
  • A brand icon and service category

Architecture

Code Generation

API actors are generated by the api_schema_gen tool which:

  1. Discovers API services from OpenAPI specifications
  2. Extracts endpoints, parameters, request/response schemas
  3. Generates Rust actor implementations with typed ports
  4. Outputs a registry module that maps template IDs to actor instances
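
In effect, the generated registry is one large match from template ID to constructor. A minimal sketch of its shape follows; the actor type names are hypothetical examples, and the real module is code-generated with one arm per endpoint actor:

pub fn get_api_actor_for_template(template_id: &str) -> Option<Arc<dyn Actor>> {
    // Hypothetical arms; the generated module covers all 6,697 templates.
    match template_id {
        "api_slack_send_message" => Some(Arc::new(SlackSendMessageActor::new())),
        "api_github_list_issues" => Some(Arc::new(GithubListIssuesActor::new())),
        _ => None,
    }
}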

Runtime Resolution

// Template resolution falls through from native actors to API actors
match template_id {
    "tpl_http_request" => Some(Arc::new(HttpRequestActor::new())),
    // ... native actors ...

    // Fall through to generated API actors
    #[cfg(feature = "api_services")]
    other => crate::api::api_registry::get_api_actor_for_template(other),
}

Template Metadata

Each API actor provides metadata for Zeal template registration:

pub struct ApiTemplateInfo {
    pub template_id: &'static str,    // e.g., "api_slack_send_message"
    pub title: &'static str,          // e.g., "Send Message"
    pub category: &'static str,       // e.g., "api"
    pub subcategory: &'static str,    // e.g., "Slack"
    pub description: &'static str,    // Endpoint description
    pub icon: &'static str,           // Brand icon name
    pub env_var: &'static str,        // e.g., "SLACK_API_KEY"
    pub inports: &'static [&'static str],
    pub outports: &'static [&'static str],
}
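
For orientation, a metadata entry for the Slack example referenced in the field comments might look like this. The ID, title, category, subcategory, and env var follow the comments above; the description, icon, and port names are illustrative guesses:

// Illustrative value only: description, icon, and ports are hypothetical.
const SLACK_SEND_MESSAGE: ApiTemplateInfo = ApiTemplateInfo {
    template_id: "api_slack_send_message",
    title: "Send Message",
    category: "api",
    subcategory: "Slack",
    description: "Send a message to a Slack channel",
    icon: "slack",
    env_var: "SLACK_API_KEY",
    inports: &["channel", "text"],
    outports: &["response", "error"],
};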

ZIP Template Registration

When connected to Zeal, all API actor templates are registered alongside native templates:

// In ZipSession::register_templates()
let api_infos = reflow_components::get_api_template_infos();
for info in api_infos {
    templates.push(NodeTemplate {
        id: info.template_id.to_string(),
        title: info.title.to_string(),
        category: info.category.to_string(),
        subcategory: Some(info.subcategory.to_string()),
        icon: info.icon.to_string(),
        runtime: Some(RuntimeRequirements {
            executor: "reflow".to_string(),
            required_env_vars: Some(vec![info.env_var.to_string()]),
            // ...
        }),
        // ...
    });
}

This registers all 6,697 actors with Zeal in a single batch request.

Services

The 88 supported API services include (non-exhaustive):

| Service | Category | Env Var |
|---|---|---|
| Slack | Communication | SLACK_API_KEY |
| GitHub | Development | GITHUB_TOKEN |
| Stripe | Payments | STRIPE_API_KEY |
| Twilio | Communication | TWILIO_API_KEY |
| SendGrid | Email | SENDGRID_API_KEY |
| AWS S3 | Cloud Storage | AWS_ACCESS_KEY |
| Google Sheets | Productivity | GOOGLE_API_KEY |
| Jira | Project Management | JIRA_API_KEY |
| HubSpot | CRM | HUBSPOT_API_KEY |
| OpenAI | AI | OPENAI_API_KEY |

Each service generates multiple actors corresponding to its API endpoints.

Feature Flag

The api_services feature controls compilation of all generated API modules. Leaving it disabled reduces compile times for applications that do not need API-service actors:

# Server build with API actors
cargo build -p reflow_server --features api_services

# Fast test build without API actors
cargo test -p reflow_server --no-default-features
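
For library consumers of reflow_rt, the equivalent dependency line mirrors the media/ml examples later in this documentation (version number illustrative):

reflow_rt = { version = "0.1", features = ["api_services"] }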

When disabled, stub types ensure the rest of the codebase compiles:

#[cfg(not(feature = "api_services"))]
pub fn get_api_template_infos() -> &'static [ApiTemplateInfo] { &[] }

#[cfg(not(feature = "api_services"))]
pub fn get_api_actor_for_template(_: &str) -> Option<Arc<dyn Actor>> { None }

Media Actors

Reflow provides native media actors for handling image, audio, and video input in workflows. These actors accept media data (raw bytes or URLs), extract metadata, and pass the enriched data downstream.

Actors

ImageInputActor (tpl_image_input)

Handles image input with metadata extraction.

Template ID: tpl_image_input

Ports:

  • Input: In — image data (binary or URL)
  • Output: Out — image with extracted metadata, Error

Extracted metadata:

  • Dimensions (width, height)
  • Format (JPEG, PNG, WebP, etc.)
  • File size
  • EXIF data (when available)

AudioInputActor (tpl_audio_input)

Handles audio input with metadata extraction.

Template ID: tpl_audio_input

Ports:

  • Input: In — audio data (binary or URL)
  • Output: Out — audio with extracted metadata, Error

Extracted metadata:

  • Duration
  • Format (MP3, WAV, OGG, etc.)
  • Sample rate
  • Channels
  • File size

VideoInputActor (tpl_video_input)

Handles video input with metadata extraction.

Template ID: tpl_video_input

Ports:

  • Input: In — video data (binary or URL)
  • Output: Out — video with extracted metadata, Error

Extracted metadata:

  • Duration
  • Resolution (width, height)
  • Format/codec
  • Frame rate
  • File size

CameraCaptureActor (tpl_camera_capture)

Produces a live video/raw-rgba stream from either a deterministic test-pattern source or, when compiled with native camera support, a platform camera device.

Template ID: tpl_camera_capture

Ports:

  • Input: start, stop
  • Output: stream, metadata, error

Common config:

  • backend: mock by default; use native or nokhwa when the camera-native feature is enabled
  • deviceId: camera index or device identifier
  • width, height, fps: requested capture format
  • frameCount: number of frames to emit; 0 means continuous capture
  • bufferSize: stream backpressure buffer size
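
Putting those keys together, a node configuration for the default mock backend might look like the following JSON; the values are illustrative, and frameCount 0 captures until a stop packet arrives:

{
  "backend": "mock",
  "deviceId": 0,
  "width": 640,
  "height": 480,
  "fps": 30,
  "frameCount": 0,
  "bufferSize": 8
}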

Usage in Workflows

Media actors are registered as Zeal templates and appear in the Zeal IDE palette under the "reflow" category. They can be connected to other actors in a workflow graph:

[CameraCaptureActor] → [VideoStreamToFramesActor] → [ImageToTensorActor] → [RunInferenceActor]

Template Registration

Media actors are registered alongside other native actors during ZIP session startup. Each gets a template entry with:

NodeTemplate {
    id: "tpl_image_input",
    type_name: "tpl_image_input",
    title: "image input",
    category: "reflow",
    icon: "cpu",
    runtime: Some(RuntimeRequirements {
        executor: "reflow",
        // ...
    }),
}

Media / ML Stack

Reflow's Media / ML stack is a graph-driven layer for image, tensor, inference, and taskpack workflows. It is designed for MediaPipe-class pipelines without copying MediaPipe internals or baking model-specific behavior into the runtime.

The stack keeps the same Reflow principles as the rest of the component system:

  • Packets move through ordinary DAG edges as Message::Bytes, Message::StreamHandle, Message::Object, or Message::Encoded.
  • Synchronization uses actor-level await_inports(...) instead of special runtime message variants (see the sketch after this list).
  • Model behavior comes from manifests, node config, tensor specs, ROI rules, thresholds, and decode parameters.
  • Taskpacks are reusable GraphExport subgraphs, not privileged runtime code.
  • Native inference backends are optional adapters behind reflow_litert::InferenceBackend.
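
To make the synchronization point concrete, here is a minimal sketch of the await_inports style. Only the await_inports name comes from the list above; the actor signature, context type, and port names are hypothetical stand-ins:

// Hypothetical actor body: block this actor (not the runtime) until
// both ports have a packet, then combine them. Everything except
// await_inports is an illustrative stand-in.
async fn run(&mut self, ctx: &mut ActorContext) {
    let inputs = ctx.await_inports(&["tensor", "roi"]).await;
    let tensor = &inputs["tensor"];
    let roi = &inputs["roi"];
    // ... crop tensor to roi and send the result downstream ...
}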

Crates

| Crate | Role |
|---|---|
| reflow_media_types | Shared packet contracts for frames, tensors, detections, landmarks, ROI, timestamps, and metadata. |
| reflow_media_codec | Helpers for converting media and tensor packets to and from existing Reflow messages. |
| reflow_asset_registry | Model manifest validation and local asset loading on top of the existing asset database conventions. |
| reflow_cv_ops | Graph-friendly CV preprocess actors such as image-to-tensor, resize/letterbox, normalization, ROI crop, detection-to-ROI, and smoothing. |
| reflow_ml_ops | ML actors for loading model metadata/assets, running inference, decoding detections/landmarks, and probing packets. |
| reflow_litert | Backend boundary plus deterministic mock inference by default. Optional real LiteRT support is enabled with external-litert. |
| reflow_taskpacks | Reusable taskpack graph builders, including the V1 hand-landmark-style pipeline. |

LiteRT Integration

Reflow owns the backend boundary, not the native LiteRT implementation. By default, reflow_litert uses MockBackend, which keeps graph authoring, examples, and non-ML workflows free from native ML runtime requirements.

Real LiteRT execution is available through the optional external-litert feature. The adapter targets Offbit's published litert and litert-sys crates. With that feature enabled, models configured with backend: "litert" are loaded through the LiteRT adapter. Without it, Reflow returns a clear graph error instead of silently falling back to mock inference.

litertlm is a separate LiteRT-LM surface and is intentionally not part of the V1 vision inference adapter. It should land behind dedicated LLM/chat/generation actors when those graph contracts are scoped.

The graph contract does not change when switching from mock inference to LiteRT. Frames, tensors, model metadata, ROI packets, and decoded outputs continue to move through the same actors and ports.

Authoring Model

Graph authors should treat ML pipelines like any other Reflow graph:

frame
  -> preprocess
  -> load model
  -> run inference
  -> decode
  -> postprocess / smooth
  -> output

The inference actor does not know about hand landmarks, palm detection, or any other specific model family. Those details belong in model manifests, node config, or taskpack graph composition.

Feature Flags

reflow_rt exposes the catalog surface through the optional media and ml features:

reflow_rt = { version = "0.1", features = ["media", "ml"] }

The native LiteRT adapter is separate:

reflow_rt = { version = "0.1", features = ["media", "ml", "external-litert"] }

This split lets existing users opt into ML/CV graph templates without inheriting native LiteRT build and runtime requirements unless they explicitly need real LiteRT execution.

Hand Landmark Demo Assets

The native hand-landmark smoke demo uses the official MediaPipe hand model assets as two ordinary LiteRT model manifests rather than treating the MediaPipe task bundle as privileged runtime behavior:

| Role | File | Input | Outputs |
|---|---|---|---|
| Palm detector | palm_detection_full.tflite | f32[1, 192, 192, 3] | f32[1, 2016, 18], f32[1, 2016, 1] |
| Hand landmark tracker | hand_landmark_full.tflite | f32[1, 224, 224, 3] | f32[1, 63], f32[1, 1], f32[1, 1], f32[1, 63] |

See examples/ml_hand_landmark_demo for download checksums, manifests, and a real LiteRT backend smoke test.
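
As an illustration, a manifest for the palm detector could pair the documented backend key with the shapes from the table above; every field name except backend is hypothetical here:

{
  "backend": "litert",
  "file": "palm_detection_full.tflite",
  "inputs": [{ "dtype": "f32", "shape": [1, 192, 192, 3] }],
  "outputs": [
    { "dtype": "f32", "shape": [1, 2016, 18] },
    { "dtype": "f32", "shape": [1, 2016, 1] }
  ]
}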

Deno Runtime

Reflow's Deno runtime enables JavaScript and TypeScript actors with secure, sandboxed execution.

Overview

The Deno runtime provides:

  • Secure sandbox with configurable permissions
  • TypeScript support out of the box
  • NPM package ecosystem access
  • Modern JavaScript features (ES2022+)
  • Async/await support for non-blocking operations

Basic Usage

Creating a JavaScript Actor

use reflow_script::{ScriptActor, ScriptConfig, ScriptEnvironment, ScriptRuntime};
use reflow_network::{Network, NetworkConfig};
use reflow_network::connector::{Connector, ConnectionPoint, InitialPacket};
use reflow_network::message::Message;

// Create script actor with JavaScript/Deno engine
let script_content = r#"
function process(inputs, context) {
    const data = inputs.data;
    
    if (typeof data === 'string') {
        return {
            result: data.toUpperCase(),
            length: data.length,
            timestamp: new Date().toISOString()
        };
    }
    
    return { error: 'Expected string input' };
}
"#;

// Configure the script engine (the same ScriptConfig used throughout this chapter)
let config = ScriptConfig {
    environment: ScriptEnvironment::SYSTEM,
    runtime: ScriptRuntime::JavaScript,
    source: script_content.as_bytes().to_vec(),
    entry_point: "process".to_string(),
    packages: None,
};
let actor = ScriptActor::new(config);

// Register and use in network
let mut network = Network::new(NetworkConfig::default());
network.register_actor("js_processor", actor)?;
network.add_node("script1", "js_processor")?;

// Connect to other actors
network.add_connection(Connector {
    from: ConnectionPoint {
        actor: "source_actor".to_owned(),
        port: "output".to_owned(),
        ..Default::default()
    },
    to: ConnectionPoint {
        actor: "script1".to_owned(),
        port: "data".to_owned(),
        ..Default::default()
    },
});

JavaScript Actor Script

// script.js - Simple transformation actor
function process(inputs, context) {
    const data = inputs.data;
    
    if (typeof data === 'string') {
        return {
            result: data.toUpperCase(),
            length: data.length,
            timestamp: new Date().toISOString()
        };
    }
    
    return { error: 'Expected string input' };
}

// Export for Reflow
exports.process = process;

Actor Function Signature

Input Parameters

function process(inputs, context) {
    // inputs: Object containing input port data
    // context: Actor execution context
}

Context Object

const context = {
    // Actor configuration
    config: {
        // Custom configuration values
    },
    
    // Utility functions
    log: (level, message) => {},
    
    // State management
    getState: () => {},
    setState: (state) => {},
    
    // Metrics
    incrementCounter: (name) => {},
    recordTimer: (name, duration) => {},
};

Return Values

// Success - return output object
return {
    output1: "value1",
    output2: 42,
    status: "success"
};

// Error - return error object
return {
    error: "Something went wrong",
    code: 500
};

// Async operations
async function process(inputs, context) {
    const result = await fetchData(inputs.url);
    return { data: result };
}

Data Types

Supported Types

// Primitive types
return {
    string: "hello",
    number: 42,
    boolean: true,
    null: null,
};

// Complex types
return {
    array: [1, 2, 3],
    object: { key: "value" },
    nested: {
        array: [{ id: 1 }, { id: 2 }],
        metadata: { timestamp: Date.now() }
    }
};

// Binary data
return {
    buffer: new Uint8Array([1, 2, 3, 4])
};

State Management

Persistent State

function process(inputs, context) {
    // Get current state
    const state = context.getState() || { counter: 0 };
    
    // Update state
    state.counter += 1;
    state.lastInput = inputs.data;
    
    // Save state
    context.setState(state);
    
    return {
        count: state.counter,
        data: state.lastInput
    };
}

Async Operations

HTTP Requests

async function process(inputs, context) {
    try {
        const response = await fetch(inputs.url, {
            method: 'GET',
            headers: {
                'Content-Type': 'application/json'
            }
        });
        
        if (!response.ok) {
            return { error: `HTTP ${response.status}` };
        }
        
        const data = await response.json();
        return { result: data };
        
    } catch (error) {
        return { error: error.message };
    }
}

File Operations

async function process(inputs, context) {
    try {
        // Read file (requires --allow-read permission)
        const content = await Deno.readTextFile(inputs.filename);
        
        // Process content
        const lines = content.split('\n').length;
        
        return {
            content: content,
            lineCount: lines
        };
        
    } catch (error) {
        return { error: `File error: ${error.message}` };
    }
}

NPM Package Support

Using External Packages

// Import from NPM
import { format } from "https://deno.land/x/date_fns/index.js";
import _ from "https://cdn.skypack.dev/lodash";

function process(inputs, context) {
    const now = new Date();
    const formatted = format(now, 'yyyy-MM-dd HH:mm:ss');
    
    const processed = _.map(inputs.data, item => ({
        ...item,
        timestamp: formatted
    }));
    
    return { result: processed };
}

Package Configuration

let config = ScriptConfig {
    environment: ScriptEnvironment::SYSTEM,
    runtime: ScriptRuntime::JavaScript,
    source: script_source,
    entry_point: "process".to_string(),
    packages: Some(vec![
        "https://deno.land/x/date_fns@v2.29.3/index.js".to_string(),
        "https://cdn.skypack.dev/lodash@4.17.21".to_string(),
    ]),
};

Error Handling

Error Patterns

function process(inputs, context) {
    try {
        // Validate inputs
        if (!inputs.data) {
            return { error: "Missing required 'data' input" };
        }
        
        if (typeof inputs.data !== 'string') {
            return { 
                error: "Invalid input type",
                expected: "string",
                received: typeof inputs.data
            };
        }
        
        // Process data
        const result = inputs.data.toLowerCase();
        
        if (result.length === 0) {
            return { 
                error: "Empty result",
                warning: "Input data was empty after processing"
            };
        }
        
        return { result: result };
        
    } catch (error) {
        // Log error for debugging
        context.log('error', `Processing failed: ${error.message}`);
        
        return {
            error: error.message,
            stack: error.stack,
            timestamp: new Date().toISOString()
        };
    }
}

Security and Permissions

Permission Configuration

use reflow_script::PermissionConfig;

let config = ScriptConfig {
    // ... other fields
    permissions: Some(PermissionConfig {
        allow_net: vec!["https://api.example.com".to_string()],
        allow_read: vec!["/tmp/data".to_string()],
        allow_write: vec!["/tmp/output".to_string()],
        allow_run: false,
        allow_env: false,
    }),
};

Safe Practices

function process(inputs, context) {
    // Validate and sanitize inputs
    const sanitized = sanitizeInput(inputs.userInput);
    
    // Use try-catch for external operations
    try {
        return processData(sanitized);
    } catch (error) {
        // Don't expose internal details
        return { error: "Processing failed" };
    }
}

function sanitizeInput(input) {
    if (typeof input !== 'string') return '';
    
    // Remove potentially dangerous characters
    return input
        .replace(/[<>]/g, '')
        .trim()
        .substring(0, 1000); // Limit length
}

Performance Optimization

Efficient Processing

// Use streaming for large data
async function process(inputs, context) {
    const results = [];
    
    // Process in chunks to avoid memory issues
    const chunkSize = 100;
    const data = inputs.data || [];
    
    for (let i = 0; i < data.length; i += chunkSize) {
        const chunk = data.slice(i, i + chunkSize);
        const processed = await processChunk(chunk);
        results.push(...processed);
        
        // Allow other actors to run
        if (i % 1000 === 0) {
            await new Promise(resolve => setTimeout(resolve, 0));
        }
    }
    
    return { results: results };
}

async function processChunk(chunk) {
    return chunk.map(item => ({
        ...item,
        processed: true,
        timestamp: Date.now()
    }));
}

Caching

// Simple in-memory cache
const cache = new Map();

function process(inputs, context) {
    const key = inputs.cacheKey;
    
    // Check cache first
    if (cache.has(key)) {
        context.log('info', `Cache hit for key: ${key}`);
        return { result: cache.get(key), fromCache: true };
    }
    
    // Expensive computation
    const result = expensiveOperation(inputs.data);
    
    // Store in cache with TTL
    cache.set(key, result);
    setTimeout(() => cache.delete(key), 60000); // 1 minute TTL
    
    return { result: result, fromCache: false };
}

Testing JavaScript Actors

Unit Testing

// test_actor.js
import { assertEquals } from "https://deno.land/std/testing/asserts.ts";

// Import your actor function
import { process } from "./my_actor.js";

Deno.test("actor processes string input", () => {
    const inputs = { data: "hello world" };
    const context = { 
        log: () => {},
        getState: () => ({}),
        setState: () => {}
    };
    
    const result = process(inputs, context);
    
    assertEquals(result.result, "HELLO WORLD");
    assertEquals(result.length, 11);
});

Deno.test("actor handles missing input", () => {
    const inputs = {};
    const context = { log: () => {} };
    
    const result = process(inputs, context);
    
    assertEquals(result.error, "Expected string input");
});

Integration Testing

#[tokio::test]
async fn test_javascript_actor_integration() {
    let script = include_str!("test_script.js");
    let config = ScriptConfig {
        environment: ScriptEnvironment::SYSTEM,
        runtime: ScriptRuntime::JavaScript,
        source: script.as_bytes().to_vec(),
        entry_point: "process".to_string(),
        packages: None,
    };
    
    let actor = ScriptActor::new(config);
    
    // Test actor behavior
    let inputs = HashMap::from([
        ("data".to_string(), Message::String("test".to_string()))
    ]);
    
    let result = test_actor_behavior(actor, inputs).await;
    assert!(result.is_ok());
}

Examples

Data Transformation

// Transform JSON data
function process(inputs, context) {
    const data = inputs.json_data;
    
    if (!Array.isArray(data)) {
        return { error: "Expected array input" };
    }
    
    const transformed = data.map(item => ({
        id: item.id,
        name: item.name?.toUpperCase(),
        email: item.email?.toLowerCase(),
        createdAt: new Date(item.created_at).toISOString(),
        tags: item.tags?.map(tag => tag.toLowerCase()) || []
    }));
    
    return {
        data: transformed,
        count: transformed.length,
        processedAt: new Date().toISOString()
    };
}

API Integration

async function process(inputs, context) {
    const { endpoint, payload, authToken } = inputs;
    
    try {
        const response = await fetch(endpoint, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Bearer ${authToken}`
            },
            body: JSON.stringify(payload)
        });
        
        const result = await response.json();
        
        return {
            status: response.status,
            data: result,
            success: response.ok
        };
        
    } catch (error) {
        return {
            error: error.message,
            success: false
        };
    }
}

Best Practices

Code Organization

// Separate concerns into functions
function process(inputs, context) {
    try {
        const validated = validateInputs(inputs);
        const processed = processData(validated);
        const formatted = formatOutput(processed);
        
        return { result: formatted };
    } catch (error) {
        return handleError(error, context);
    }
}

function validateInputs(inputs) {
    if (!inputs.data) throw new Error("Missing data");
    return inputs;
}

function processData(inputs) {
    // Main processing logic
    return inputs.data.map(transform);
}

function formatOutput(data) {
    return {
        items: data,
        timestamp: new Date().toISOString()
    };
}

function handleError(error, context) {
    context.log('error', error.message);
    return { error: "Processing failed" };
}

Resource Management

// Clean up resources
function process(inputs, context) {
    const resources = [];
    
    try {
        // Acquire resources
        const db = openDatabase(inputs.connectionString);
        resources.push(db);
        
        const file = openFile(inputs.filename);
        resources.push(file);
        
        // Use resources
        const result = processWithResources(db, file);
        
        return { result: result };
        
    } finally {
        // Always clean up
        resources.forEach(resource => {
            try {
                resource.close();
            } catch (e) {
                // Log but don't throw
                console.error("Cleanup error:", e);
            }
        });
    }
}

WebAssembly Runtime

The WebAssembly runtime in Reflow enables execution of WASM-based actors using the Extism plugin system. This provides a secure, sandboxed environment for running plugins written in any language that compiles to WebAssembly.

Overview

Reflow's WASM runtime is built on Extism, a cross-language framework for building plugin systems. It allows you to:

  • Write actors in languages like Rust, Go, C/C++, Zig, and more
  • Run plugins in a secure, sandboxed environment
  • Share state between plugin invocations
  • Communicate with the host system through well-defined interfaces

Architecture

┌─────────────────────────────────────────────┐
│              ScriptActor                    │
│  ┌─────────────────────────────────────┐   │
│  │         ExtismEngine                 │   │
│  │  ┌─────────────────────────────┐    │   │
│  │  │    Extism Plugin Host       │    │   │
│  │  │  ┌───────────────────┐      │    │   │
│  │  │  │   WASM Plugin     │      │    │   │
│  │  │  │  ┌─────────────┐  │      │    │   │
│  │  │  │  │ Actor Logic │  │      │    │   │
│  │  │  │  └─────────────┘  │      │    │   │
│  │  │  └───────────────────┘      │    │   │
│  │  └─────────────────────────────┘    │   │
│  └─────────────────────────────────────┘   │
└─────────────────────────────────────────────┘

Plugin SDK

Reflow provides a Rust SDK (reflow_wasm_actor) for building WASM plugins:

use reflow_wasm_actor::*;
use std::collections::HashMap;

// Define plugin metadata
fn metadata() -> PluginMetadata {
    PluginMetadata {
        component: "MyActor".to_string(),
        description: "Example WASM actor".to_string(),
        inports: vec![
            port_def!("input", "Input port", "Integer", required),
        ],
        outports: vec![
            port_def!("output", "Output port", "Integer"),
        ],
        config_schema: None,
    }
}

// Implement actor behavior
fn process_actor(context: ActorContext) -> Result<ActorResult, Box<dyn std::error::Error>> {
    let mut outputs = HashMap::new();
    
    if let Some(Message::Integer(value)) = context.payload.get("input") {
        outputs.insert("output".to_string(), Message::Integer(value * 2));
    }
    
    Ok(ActorResult {
        outputs,
        state: None,
    })
}

// Register the plugin
actor_plugin!(
    metadata: metadata(),
    process: process_actor
);

Host Functions

The WASM runtime provides several host functions that plugins can call:

State Management

  • __get_state(key: string) -> value - Retrieve a value from actor state
  • __set_state(key: string, value: any) - Store a value in actor state

Output

  • __send_output(outputs: HashMap<string, Message>) - Send messages to output ports

Message Types

The runtime supports all Reflow message types:

pub enum Message {
    Flow,                          // Control flow signal
    Event(Value),                  // Event with data
    Boolean(bool),                 // Boolean value
    Integer(i64),                  // 64-bit integer
    Float(f64),                    // 64-bit float
    String(String),                // UTF-8 string
    Object(Value),                 // JSON object
    Array(Vec<Value>),            // Array of values
    Stream(Vec<u8>),              // Binary data
    Optional(Option<Box<Value>>),  // Optional value
    Any(Value),                    // Any JSON value
    Error(String),                 // Error message
}

Configuration

WASM actors are configured through the ScriptConfig:

let config = ScriptConfig {
    environment: ScriptEnvironment::SYSTEM,
    runtime: ScriptRuntime::Extism,
    source: wasm_bytes,  // Compiled WASM binary
    entry_point: "process".to_string(),
    packages: None,
};

Security

The WASM runtime provides several security features:

  1. Sandboxing: Plugins run in isolated WASM sandboxes
  2. Resource Limits: Memory and execution time can be limited
  3. Host Function Access: Plugins can only call explicitly provided host functions
  4. No Direct System Access: Plugins cannot access the file system or network directly

Performance Considerations

  • Startup Cost: WASM modules have some initialization overhead
  • Memory Overhead: Each plugin instance requires its own memory space
  • Cross-Boundary Calls: Data serialization between host and plugin has a cost
  • Optimization: Use release builds and wasm-opt for best performance
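
As an example of that last point, a release pipeline might run Binaryen's wasm-opt over the compiled module; paths and target are illustrative, so adjust to your toolchain:

cargo build --release --target wasm32-unknown-unknown
wasm-opt -O3 target/wasm32-unknown-unknown/release/my_actor.wasm -o my_actor.opt.wasm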

Example: Stateful Counter (Rust)

use reflow_wasm_actor::*;
use std::collections::HashMap;

fn metadata() -> PluginMetadata {
    PluginMetadata {
        component: "Counter".to_string(),
        description: "Stateful counter actor".to_string(),
        inports: vec![
            port_def!("increment", "Increment counter", "Flow"),
            port_def!("decrement", "Decrement counter", "Flow"),
            port_def!("reset", "Reset counter", "Flow"),
        ],
        outports: vec![
            port_def!("count", "Current count", "Integer"),
        ],
        config_schema: Some(serde_json::json!({
            "type": "object",
            "properties": {
                "initial_value": {
                    "type": "integer",
                    "default": 0
                }
            }
        })),
    }
}

fn process_actor(context: ActorContext) -> Result<ActorResult, Box<dyn std::error::Error>> {
    let mut outputs = HashMap::new();
    
    // Get current count from state
    let mut count = context.state
        .get("count")
        .and_then(|v| v.as_i64())
        .unwrap_or_else(|| {
            context.config.get_integer("initial_value").unwrap_or(0)
        });
    
    // Process inputs
    if context.payload.contains_key("increment") {
        count += 1;
    } else if context.payload.contains_key("decrement") {
        count -= 1;
    } else if context.payload.contains_key("reset") {
        count = context.config.get_integer("initial_value").unwrap_or(0);
    }
    
    // Output current count
    outputs.insert("count".to_string(), Message::Integer(count));
    
    // Update state
    let mut new_state = serde_json::Map::new();
    new_state.insert("count".to_string(), count.into());
    
    Ok(ActorResult {
        outputs,
        state: Some(serde_json::Value::Object(new_state)),
    })
}

actor_plugin!(
    metadata: metadata(),
    process: process_actor
);

Example: Stateful Counter (Go)

package main

import (
    reflow "github.com/darmie/reflow/reflow_wasm_go/sdk"
)

func processCounter(context reflow.ActorContext) (reflow.ActorResult, error) {
    outputs := make(map[string]interface{})
    
    // Get current count from state using host function
    currentCount := int64(0)
    if stateValue, err := reflow.GetState("count"); err == nil && stateValue != nil {
        if countFloat, ok := stateValue.(float64); ok {
            currentCount = int64(countFloat)
        }
    }
    
    // Handle operations
    if operation, exists := context.Payload["operation"]; exists {
        if operation.Type == "String" {
            op := operation.Data.(string)
            
            switch op {
            case "increment":
                currentCount++
            case "decrement":
                currentCount--
            case "reset":
                currentCount = 0
            case "double":
                currentCount *= 2
            }
            
            // Save new count to state
            reflow.SetState("count", currentCount)
            
            outputs["value"] = reflow.NewIntegerMessage(currentCount).ToSerializable()
            outputs["operation"] = reflow.NewStringMessage(op).ToSerializable()
            
            // Send async output via host function
            asyncOutputs := map[string]interface{}{
                "status": "Processing complete",
                "count": currentCount,
            }
            reflow.SendOutput(asyncOutputs)
        }
    } else {
        // No operation, just return current count
        outputs["value"] = reflow.NewIntegerMessage(currentCount).ToSerializable()
    }
    
    // Update state
    newState := map[string]interface{}{
        "count": currentCount,
    }
    
    return reflow.ActorResult{
        Outputs: outputs,
        State:   newState,
    }, nil
}

func getMetadata() reflow.PluginMetadata {
    return reflow.PluginMetadata{
        Component:   "GoCounter",
        Description: "Stateful counter actor implemented in Go",
        Inports: []reflow.PortDefinition{
            reflow.NewOptionalPort("operation", "Operation to perform", "String"),
        },
        Outports: []reflow.PortDefinition{
            reflow.NewOptionalPort("value", "Current counter value", "Integer"),
            reflow.NewOptionalPort("operation", "Operation performed", "String"),
        },
        ConfigSchema: nil,
    }
}

func init() {
    reflow.RegisterPlugin(getMetadata(), processCounter)
}

// Export functions required by Extism
//export get_metadata
func get_metadata() int32 {
    return reflow.GetMetadata()
}

//export process
func process() int32 {
    return reflow.Process()
}

func main() {}

Building Go WASM Plugins

To build Go plugins for Reflow, you'll need TinyGo installed:

tinygo build -o counter.wasm -target wasi -no-debug main.go

Important considerations for Go WASM plugins:

  • Use TinyGo for smaller binary sizes and better WASM compatibility
  • Avoid using fmt package as it can cause runtime panics in WASM
  • JSON numbers are decoded as float64, so cast to int64 when needed
  • Use //export directives (not //go:wasmexport) for better compatibility
  • The main function should be empty

Debugging

To debug WASM plugins:

  1. Use println! or eprintln! in your plugin code (output goes to host stderr)
  2. Use the Extism CLI for testing: extism call plugin.wasm function_name --input '...'
  3. Enable debug logging in the host: RUST_LOG=reflow_script=debug

See Also

  • Module Loading (./modules.md): How WASM modules are loaded and managed
  • Memory Management (./memory.md): Memory allocation and limits
  • Plugin Development Guide (../../guides/wasm-plugin-development.md): Step-by-step plugin creation

Real-world Reflow

A tutorial series that teaches Reflow through small runnable projects. Each post solves one problem in one domain with one SDK, in under 200 lines of code, with at most one library beyond the SDK itself.

What Reflow is

Reflow is a runtime for reactive flow graphs. You declare nodes and the edges between them; Reflow runs each node when one of its inputs changes.

  • Reactive. A node only does work when something asks it to.
  • Graph. Connections are explicit data, not buried inside function calls.

The actor model, multi-language SDKs, pack format, and wasm runtime all serve those two ideas.

The shape

Three concepts cover most of what you write.

Actor. A unit of work with named inports and outports. The runtime calls run when messages arrive on its inputs; the actor emits messages on its outputs.

flowchart LR
    in([in])-->Doubler-->out([out])
    classDef port fill:#e8eef7,stroke:#5a6f96,color:#23314f
    class in,out port

class Doubler extends Actor {
  static inports = ["in"];
  static outports = ["out"];
  run(ctx) {
    ctx.send({ out: Message.integer(2 * ctx.input.in.data) });
    ctx.done();
  }
}

Graph. A description of which actors exist and which ports connect to which. Plain JSON. The same graph runs from any SDK.

flowchart LR
    A[a: doubler] -- out → in --> B[b: collector]

const g = new Graph("demo");
g.addNode("a", "tpl_doubler");
g.addNode("b", "tpl_collector");
g.addConnection("a", "out", "b", "in");

Network. The runtime that ticks the graph. You hand it a Graph, register the actor implementations the graph references, and call start.

const net = new Network(g);
net.registerActor("tpl_doubler", new Doubler());
await net.start();

Comparison to a UI signal library

A SolidJS signal recomputes whenever its tracked dependencies change:

const [count, setCount] = createSignal(1);
const doubled = createMemo(() => count() * 2);
createEffect(() => console.log(doubled()));
setCount(5); // logs 10

flowchart LR
    count((count signal)) -.tracked.-> doubled[doubled memo]
    doubled -.tracked.-> effect[log effect]

The same pipeline in Reflow:

flowchart LR
    source[source: input] --> doubler[doubler] --> logger[logger]

g.addNode("source", "tpl_input");
g.addNode("doubler", "tpl_doubler");
g.addNode("logger", "tpl_log");
g.addConnection("source", "out", "doubler", "in");
g.addConnection("doubler", "out", "logger", "in");

The graph is the dependency graph. Solid infers it from code; Reflow asks you to write it down. That trade-off pays off at the scale where graphs are typically authored visually in Zeal and exported as JSON.

Async reactivity

Solid's reactivity is synchronous and in-process. Reflow's is asynchronous: actors return Futures, messages travel over channels (in-memory, cross-process, or across the network), and the runtime schedules execution.

flowchart LR
    A[ingest] --> B[validate]
    B --> C[enrich]
    B --> D[score]
    C --> E[merge]
    D --> E
    E --> F[persist]
    classDef parallel fill:#fef3c7,stroke:#a16207,color:#3a2c08
    class C,D parallel

The two highlighted nodes have no dependency between them, so the runtime runs them concurrently. The shape of the graph implies the parallelism — no Promise.all.

What this gives you:

  1. Concurrency. Independent actors run in parallel; no manual Promise.all.
  2. Back-pressure. Channels are bounded; a slow consumer throttles its producer.
  3. Streams. A port can carry a byte stream (audio, video, large blobs) alongside discrete-message ports.
  4. Replayability. An actor's input is a sequence of messages; the same inputs reproduce the same run.
  5. Portability. The graph is JSON. The same graph runs from Node, Python, Go, JVM, C++, or a browser tab — same Rust core compiled per target.

When Reflow fits

Use Reflow when the work is shaped like a pipeline:

  • Stream of inputs to stream of outputs.
  • Mixed I/O and CPU work that benefits from concurrent stages.
  • The pipeline body changes over time and you want it as data, not code.
  • You want the same logic to run in the browser and on a server.

Skip Reflow when the work is shaped like a request:

  • One input, one output, no fan-out.
  • A page of imperative code with no reusable stages.
  • A CRUD endpoint where the framework you already use is fine.

Series outline

  1. Reactive particle field (Browser, Node SDK). Animation-frame driven graph rendering 200 spring-physics particles to canvas2D. Introduces actors, the graph, and the runtime contract.
  2. Live edits over a stream (Browser, Node SDK). Wikipedia's public SSE feed driving a Reflow graph. Same actor primitives, network-paced source.
  3. Multi-agent orchestration (Python). Three LLM agents run in parallel against a local Ollama model; a synthesizer combines their findings; per-token streaming through the graph.
  4. A concurrent worker pool over gRPC (Go). The canonical goroutines + channels fan-out fetcher pool expressed as a graph: a long-running gRPC server where each call spins up a fresh per-request network — Dispatcher → N Fetchers → Sink — and streams pages back over server-streaming RPC.
  5. Parallel data enrichment behind Spring Boot (Java). Per-request graph behind a REST endpoint: Splitter fans the SKU out to three slow downstream services, the Merger awaits all three (awaitAllInports) and returns a merged JSON payload — Reflow's CompletableFuture.allOf().join().
  6. A long-running Kafka stream router (Java). Daemon graph driven by the Kafka poll loop: OrderSource publishes records via ctx.send, Router picks one of four outports based on status, sinks fan in parallel with loggers for operational visibility.
  7. Composing a workflow from the catalog (Python). Issue triage that reads, decides, and acts. Almost every node is a catalog template instantiated by id — api_github_list_issues, tpl_loop, tpl_switch, api_slack_send_message. Three small custom actors fill the gaps. Demonstrates the third lifecycle (triggered batch), conditional routing as configuration, and the api_services pack model.
  8. A polyphonic synthesizer with ctx.pool (C++). Three voices, a mixer, a WAV file. The mixer holds a voices pool — one inport, variable-N upstreams, no port-per-voice explosion. Also showcases StreamProducer for high-throughput producer messaging and ctx.send for mid-tick flush.
  9. Reflow inside an Airflow PythonOperator (Python). Daily triage with Airflow owning the calendar (schedule, backfill, retry, UI, credentials store) and Reflow owning the actor graph. The integration boundary is one python_callable whose body is a regular Reflow Network.
  10. A graph that spans two processes (Rust + CLI). Two peers federated through the bundled reflow-discovery server. Same shape works for two machines. Auto-reconnect with backoff, auth-token gating on the accept path, periodic discovery refresh — everything the in-process series didn't need.

After tutorial 01 you will know enough Reflow to read the others in any order.

Reactive particle field in the browser

A reactive particle field rendered to a <canvas>. 200 coloured points spread across the screen and lean toward the cursor with their own spring physics. Every particle has a fixed home, so the field stays distributed; the cursor only deforms a local patch. One HTML file, no install, runs in any modern browser.

The animation has four jobs: pacing to the screen's frame rate, reading the cursor, advancing per-particle physics, painting. A vanilla implementation packs all four into one requestAnimationFrame callback with shared mutable state. Reflow splits them into four actors with declared inports and outports. The runtime calls each actor's run(ctx) when a new packet lands on one of its inports. Replace the canvas2D renderer with WebGL by writing a new Draw actor and changing one line in the wiring. Insert a recording node between simulator and renderer without touching the other actors.

What we are building

flowchart LR
    tick[clock] -->|dt + time| sim[simulate]
    mouse([mouse]) -->|position| sim
    sim -->|particles| draw[draw on canvas]

Three actors and one DOM event source. clock fires once per animation frame; simulate advances each particle one step; draw paints. The mouse position comes from mousemove events injected into the graph by bindInputEvents.

Setup

One file in any directory:

<!doctype html>
<meta charset="utf-8">
<title>Particle field</title>
<style>
  body { margin: 0; background: #0b1020; color: #c9d2e6; font: 14px system-ui; }
  canvas { display: block; }
  small { position: fixed; bottom: 8px; left: 12px; opacity: .6; }
</style>
<canvas id="stage"></canvas>
<small>move your mouse</small>

<script type="module">
import { ready, Network, Actor, Message, bindInputEvents }
  from "https://esm.sh/@offbit-ai/reflow";

await ready();
// the rest goes here
</script>

esm.sh fetches the browser build of @offbit-ai/reflow as an ES module. ready() initialises the wasm runtime once. After that line, Network, Actor, and Message are available.

The actors

Clock

Fires once per animation frame, emits dt and time on each tick.

class Clock extends Actor {
  static inports = ["tick"];
  static outports = ["tick", "dt", "time"];

  constructor() {
    super();
    this.last = performance.now();
  }

  run(ctx) {
    const now = performance.now();
    const dt = (now - this.last) / 1000;
    this.last = now;
    ctx.send({
      dt:   Message.float(dt),
      time: Message.float(now / 1000),
    });
    requestAnimationFrame(() => {
      ctx.send({ tick: Message.flow() });   // self-loop: re-fire next frame
      ctx.done();
    });
  }
}

The clock has no upstream actor, so we wire its own tick outport back to its tick inport (a self-loop, set up below) and seed the loop with one initial packet. Each run schedules one requestAnimationFrame callback; the callback emits a fresh tick on the outport, the loop delivers it back, the runtime calls run again. One pass per browser frame, no drift.

Simulate

Holds the particle array. Each particle has a fixed home position plus its own spring constants and colour. The effective target each tick is home + (cursor − home) · lean, where lean falls off with distance from the cursor — close particles bend hard, far particles barely move.

const N = 200;
class Simulate extends Actor {
  static inports = ["dt", "mouse"];
  static outports = ["particles"];

  constructor(width, height) {
    super();
    this.target = { x: width / 2, y: height / 2 };
    this.particles = Array.from({ length: N }, () => {
      const hx = Math.random() * width;
      const hy = Math.random() * height;
      return {
        x: hx, y: hy, vx: 0, vy: 0,
        hx, hy,
        k: 6 + Math.random() * 4,        // stiffness 6–10 (1/sec²)
        c: 2.5 + Math.random() * 1.5,    // damping  2.5–4 (1/sec)
        color: `hsl(${Math.random() * 360}, 80%, 70%)`,
      };
    });
    this.influence = Math.min(width, height) * 0.4;
  }

  run(ctx) {
    const dt = Math.min(ctx.input.dt?.data ?? 0, 0.05);
    if (ctx.input.mouse) this.target = ctx.input.mouse.data;
    const r2 = this.influence * this.influence;
    for (const p of this.particles) {
      const dx = this.target.x - p.hx;
      const dy = this.target.y - p.hy;
      const lean = r2 / (r2 + dx * dx + dy * dy);
      const tx = p.hx + dx * lean;
      const ty = p.hy + dy * lean;
      const ax = (tx - p.x) * p.k - p.vx * p.c;
      const ay = (ty - p.y) * p.k - p.vy * p.c;
      p.vx += ax * dt;
      p.vy += ay * dt;
      p.x += p.vx * dt;
      p.y += p.vy * dt;
    }
    ctx.send({ particles: Message.array(this.particles) });
    ctx.done();
  }
}

ctx.input.dt is the Message Clock sent; its .data is the float. ctx.input.mouse is absent on ticks where the cursor didn't move, so we cache this.target. Clamping dt to 0.05 stops a tab-switch hitch from blowing up the integrator. The physics is underdamped spring + viscous drag in seconds — same behaviour at 60Hz and 240Hz.

Draw

Paints particles to the canvas.

class Draw extends Actor {
  static inports = ["particles"];
  static outports = [];
  static portDelivery = { particles: "latest" };

  constructor(canvas) {
    super();
    this.ctx2d = canvas.getContext("2d");
    this.canvas = canvas;
  }

  run(ctx) {
    const ps = ctx.input.particles?.data ?? [];
    const c = this.ctx2d;
    c.fillStyle = "rgba(11, 16, 32, 0.35)";        // motion-blur trail
    c.fillRect(0, 0, this.canvas.width, this.canvas.height);
    for (const p of ps) {
      c.fillStyle = p.color;
      c.fillRect(p.x | 0, p.y | 0, 2, 2);
    }
    ctx.done();
  }
}

The semi-transparent fill each frame produces the motion-blur trails. static portDelivery = { particles: "latest" } tells the runtime that the simulator can outpace the painter — keep only the freshest packet on particles, drop stale ones. Without it, a slow Draw builds an inbox of unused particle arrays.

Wiring

const canvas = document.getElementById("stage");
canvas.width = innerWidth;
canvas.height = innerHeight;

const net = new Network();

net.addNode("clock", "tpl_clock");
net.addNode("mouse", "tpl_mouse_input");       // built-in DOM source
net.addNode("sim",   "tpl_simulate");
net.addNode("draw",  "tpl_draw");

net.addConnection("clock", "tick",       "clock", "tick");        // self-loop
net.addConnection("clock", "dt",         "sim",   "dt");
net.addConnection("mouse", "position",   "sim",   "mouse");
net.addConnection("sim",   "particles",  "draw",  "particles");

net.registerActor("tpl_clock",    new Clock());
net.registerActor("tpl_simulate", new Simulate(canvas.width, canvas.height));
net.registerActor("tpl_draw",     new Draw(canvas));

net.addInitial("clock", "tick", Message.flow());
await net.start();
bindInputEvents(net, document.body);

addInitial drops one Flow packet on the clock's tick inport. The runtime calls run(ctx) once, the self-loop carries it from there. Without this line nothing fires.

bindInputEvents is called after start() — the wasm GraphNetwork is created lazily during start. It attaches DOM listeners (mousemove, keydown, etc.) and routes events into the matching built-in input actor. tpl_mouse_input ships with the catalog, so the wiring is just mouse.position → sim.mouse.

Run it

Any static server works:

npx serve .

Open the page. Move the mouse. The particles follow.

Notes on the design

  • Each actor has one job and a single set of inports and outports. Replace the renderer (canvas2D → WebGL) by writing a new Draw actor and changing one line in the wiring.
  • Add a recording layer or a force field by inserting a node between simulator and renderer. The other actors don't need to change.
  • Larger graphs (12+ actors) are typically authored in Zeal and loaded from JSON rather than wired in code.

What is next

The next tutorial stays in the browser but swaps the clock for inbound network data: Wikimedia's live stream of Wikipedia edits drives the graph, with the same actor primitives and a network-paced source.

Live edits over a stream

Tutorial 01 paced a graph at the browser's animation-frame rate. This one drives a graph from inbound network data — Wikimedia's public Server-Sent Events feed of Wikipedia edits. Same actor model, the trigger comes from outside instead of from a local clock.

What we are building

flowchart LR
    sse([Wikimedia SSE]) -->|event| source[source]
    source -->|event| filter[substantive?]
    filter -->|event| display[display]

Three actors:

  • source opens an EventSource to Wikimedia and emits each parsed JSON event.
  • filter keeps edits to en.wikipedia articles by humans whose byte-delta is at least ±200 (skip stubs and typo fixes).
  • display puts each surviving event at the top of a list on the page.

Setup

One file in any directory:

<!doctype html>
<meta charset="utf-8">
<title>Live Wikipedia edits</title>
<style>
  body { margin: 0; background: #0b1020; color: #c9d2e6;
         font: 14px/1.5 system-ui; padding: 24px 32px; }
  ol { list-style: none; padding: 0; max-width: 720px; }
  li { padding: 8px 12px; margin: 6px 0; background: #131a30; border-radius: 4px; }
</style>
<ol id="feed"></ol>

<script type="module">
import { ready, Network, Actor, Message }
  from "https://esm.sh/@offbit-ai/reflow";

await ready();
// the rest goes here
</script>

The Wikimedia stream serves CORS-permissive headers, so a static file server is enough.

The actors

Source

Owns an EventSource and bridges it into the graph via an internal queue. Events arrive whenever the network pushes them; the actor emits one per run(ctx) call.

class Source extends Actor {
  static inports = ["tick"];
  static outports = ["tick", "event"];

  constructor(url) {
    super();
    this.queue = [];
    this.resume = null;
    const es = new EventSource(url);
    es.addEventListener("message", (e) => {
      try {
        this.queue.push(JSON.parse(e.data));
        this.resume?.();
        this.resume = null;
      } catch { /* drop malformed lines */ }
    });
  }

  run(ctx) {
    const send = () => {
      ctx.send({
        event: Message.object(this.queue.shift()),
        tick:  Message.flow(),
      });
      ctx.done();
    };
    if (this.queue.length) send();
    else this.resume = send;
  }
}

The tick outport is wired back to the tick inport (set up below), so each successful run schedules the next one. We seed the loop with one initial Flow on tick at startup.

run(ctx) handles two states. If the queue has events, it drains one and emits a fresh tick. If the queue is empty, it parks the run by stashing the continuation in this.resume; the next inbound EventSource message calls it. Sending event and tick in a single ctx.send keeps the self-loop alive.

This pattern — actor-as-source with a queue and a tick self-loop — plugs any push-based input (sockets, EventSource, observers, native events) into a Reflow graph. The runtime drains the queue at the rate downstream consumers can keep up with. If display falls behind, the queue grows; the rest of the graph keeps running.

Filter

A pure transform. Receives an event, checks a predicate, forwards if it passes.

class Filter extends Actor {
  static inports = ["event"];
  static outports = ["event"];

  constructor(predicate) { super(); this.predicate = predicate; }

  run(ctx) {
    const event = ctx.input.event?.data;
    if (event && this.predicate(event)) {
      ctx.send({ event: Message.object(event) });
    }
    ctx.done();
  }
}

The predicate is injected at construction. The same Filter class works in any pipeline.

Display

Renders. Each event becomes one <li> at the top of the list, capped at 50 entries.

class Display extends Actor {
  static inports = ["event"];
  static outports = [];

  constructor(list, max = 50) {
    super();
    this.list = list;
    this.max = max;
  }

  run(ctx) {
    const e = ctx.input.event?.data;
    if (e) {
      const li = document.createElement("li");
      const delta = (e.length?.new ?? 0) - (e.length?.old ?? 0);
      li.textContent = `${delta >= 0 ? "+" : ""}${delta}  ${e.title}  — ${e.user}`;
      this.list.prepend(li);
      while (this.list.children.length > this.max) this.list.lastChild.remove();
    }
    ctx.done();
  }
}

Wiring

const STREAM = "https://stream.wikimedia.org/v2/stream/recentchange";

const substantive = (e) =>
  e.wiki === "enwiki" &&
  e.namespace === 0 &&
  !e.bot &&
  Math.abs((e.length?.new ?? 0) - (e.length?.old ?? 0)) >= 200;

const net = new Network();

net.addNode("source",  "tpl_wikipedia_source");
net.addNode("filter",  "tpl_substantive");
net.addNode("display", "tpl_display");

net.addConnection("source", "tick",  "source",  "tick");    // self-loop
net.addConnection("source", "event", "filter",  "event");
net.addConnection("filter", "event", "display", "event");

net.registerActor("tpl_wikipedia_source", new Source(STREAM));
net.registerActor("tpl_substantive",      new Filter(substantive));
net.registerActor("tpl_display",          new Display(document.getElementById("feed")));

net.addInitial("source", "tick", Message.flow());
await net.start();

addInitial wakes the source for its first run; the EventSource drives everything after that.

Run it

npx serve .

Open the page. Substantive edits scroll in within a few seconds.

The full runnable example is at sdk/node/examples/tutorial-02-live-edits.

Notes on the design

  • The graph topology is identical to a clock-driven graph; only the source actor changes. A user-driven source (click handler feeding a queue) follows the same shape.
  • Swapping the predicate changes what surfaces. Drop enwiki to see every language; drop namespace === 0 to include talk pages; add e.user.includes("bot") to watch bots. No other code changes.

What is next

The next tutorial moves to Python. Reflow runs three LLM agents in parallel against a local Ollama model and a synthesizer combines their findings — same actor primitives, server-side, with streaming output through the graph.

Orchestrating multiple LLM agents

Tutorials 01 and 02 ran in a browser tab. This one moves to Python and uses Reflow as a multi-agent orchestrator: three specialist research agents run in parallel against a local Ollama model, a synthesizer combines their findings, a logger streams role-tagged tokens to stdout as they arrive, and a sink hands the final answer back to the calling Python script.

flowchart LR
    topic([topic])
    topic --> factual[factual_researcher]
    topic --> stat[statistician]
    topic --> quote[quoter]
    factual -->|chunk| logger[logger]
    stat -->|chunk| logger
    quote -->|chunk| logger
    factual -->|finding| synth[synthesizer]
    stat -->|finding| synth
    quote -->|finding| synth
    synth -->|chunk| logger
    synth -->|answer| sink[sink]

Two properties of the graph matter. The three specialists run concurrently because no edge connects them — total latency is max(t_specialist) + t_synth. Every agent emits per-token packets via ctx.send while it streams, so the user sees progress live instead of waiting for the synthesizer to finish.

Prerequisites

ollama pull qwen2.5:3b
python -m venv .venv && source .venv/bin/activate
pip install offbit-reflow openai

The agents call the OpenAI-compatible chat-completions endpoint Ollama serves at http://localhost:11434/v1. To use a hosted model (OpenAI / Groq / OpenRouter), construct the client from OPENAI_BASE_URL and OPENAI_API_KEY instead of the hard-coded endpoint — the rest of the code is identical.
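
The listing below pins the local endpoint directly. A variant that honors those environment variables while keeping the local default (a sketch, not part of the tutorial code):

import os

from openai import OpenAI

# Fall back to the local Ollama endpoint when the env vars are unset.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "http://localhost:11434/v1"),
    api_key=os.environ.get("OPENAI_API_KEY", "ollama"),
)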

A specialist

Each specialist subclasses Actor, declares one inport (topic), two outports (chunk for live tokens, finding for the full text), and calls the LLM with stream=True.

from openai import OpenAI
from offbit_reflow import Actor, Message

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def stream_chat(system: str, user: str):
    resp = client.chat.completions.create(
        model="qwen2.5:3b",
        messages=[{"role": "system", "content": system},
                  {"role": "user",   "content": user}],
        temperature=0.4,
        stream=True,
    )
    for chunk in resp:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta.content or ""
        if delta:
            yield delta


class Specialist(Actor):
    inports    = ["topic"]
    outports   = ["chunk", "finding"]
    role       = ""
    role_label = ""

    def run(self, ctx):
        topic = ctx.inputs["topic"]["data"]
        full, first = [], True
        for delta in stream_chat(self.role, f"Topic: {topic}"):
            full.append(delta)
            ctx.send({
                "chunk": Message.object({
                    "role":  self.role_label,
                    "text":  delta,
                    "first": first,
                }),
            })
            first = False
        ctx.done({"finding": Message.string("".join(full))})


class FactualResearcher(Specialist):
    component  = "factual_researcher"
    role_label = "facts"
    role = "You are a factual researcher. Give 3–5 bullets."


class Statistician(Specialist):
    component  = "statistician"
    role_label = "stats"
    role = "You are a statistician. Give 3–5 bullets of relevant numbers."


class Quoter(Specialist):
    component  = "quoter"
    role_label = "quotes"
    role = "You are a quote librarian. Provide 2–3 short, attributable quotes."

ctx.send(messages) flushes a packet to the outport channel immediately — the consumer fires before the agent's run returns. ctx.done(outputs) resolves the tick. Use ctx.send for streaming tokens; use ctx.done for the final value of the tick.

The synthesizer

Set await_all_inports = True. The runtime accumulates packets across declared inports and calls run once every inport has at least one packet.

class Synthesizer(Actor):
    component         = "synthesizer"
    inports           = ["facts", "stats", "quotes"]
    outports          = ["chunk", "answer"]
    await_all_inports = True
    role_label        = "answer"

    def run(self, ctx):
        facts  = ctx.inputs["facts"]["data"]
        stats  = ctx.inputs["stats"]["data"]
        quotes = ctx.inputs["quotes"]["data"]
        prompt = (
            "Combine these three perspectives into a 2–3 paragraph answer.\n\n"
            f"## Facts\n{facts}\n\n"
            f"## Stats\n{stats}\n\n"
            f"## Quotes\n{quotes}\n"
        )
        full, first = [], True
        for delta in stream_chat(
            "You are a senior writer. Compose a tight, sourced answer.",
            prompt,
        ):
            full.append(delta)
            ctx.send({
                "chunk": Message.object({
                    "role": self.role_label, "text": delta, "first": first,
                }),
            })
            first = False
        ctx.done({"answer": Message.string("".join(full))})

await_all_inports counts ports, not packets. A port is "ready" once it has received any packet — from a connection or from add_initial. This actor would not fire if any of facts, stats, or quotes were missing data.
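
One practical consequence: an optional branch can be satisfied up front with add_initial, using the same packet shape the wiring section below uses for topic. A sketch (the seeded value is illustrative):

# Pre-seed the "quotes" port so the synthesizer can fire even when no
# Quoter is wired in; a port counts as ready once any packet arrives.
net.add_initial("synth", "quotes", {"type": "String", "data": "(no quotes available)"})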

The synthesizer also streams its tokens on chunk, using the same shape the specialists emit. The same logger handles them all.

The logger

A fan-in node that prints role-tagged tokens to stdout. One inport (chunk) bound to four sources.

import sys

class Logger(Actor):
    component = "logger"
    inports   = ["chunk"]
    outports  = []

    def __init__(self):
        super().__init__()
        self._current_role = None

    def run(self, ctx):
        c = ctx.inputs["chunk"]["data"]
        role, text = c["role"], c["text"]
        if role != self._current_role:
            if self._current_role is not None:
                sys.stdout.write("\n\n")
            sys.stdout.write(f"[{role}] ")
            self._current_role = role
        sys.stdout.write(text)
        sys.stdout.flush()
        ctx.done()

The runtime fires run once per packet, so prints interleave across the three parallel specialists. [answer] chunks follow once the synthesizer kicks in.

Bridging back to plain Python

Reflow runs the graph on its own scheduler; the calling Python script is sync code that wants the answer back. Sink puts the final answer on a queue.Queue; the script blocks on queue.get().

import queue

class Sink(Actor):
    component = "sink"
    inports   = ["answer"]
    outports  = []

    def __init__(self, q):
        super().__init__()
        self._q = q

    def run(self, ctx):
        self._q.put(ctx.inputs["answer"]["data"])
        ctx.done()

This is the standard pattern when you want to call a Reflow flow as if it were a function.

Wiring and running

from offbit_reflow import Network

def run(topic: str) -> str:
    out = queue.Queue()

    net = Network()
    net.register_actor("tpl_factual",      FactualResearcher())
    net.register_actor("tpl_statistician", Statistician())
    net.register_actor("tpl_quoter",       Quoter())
    net.register_actor("tpl_synthesizer",  Synthesizer())
    net.register_actor("tpl_logger",       Logger())
    net.register_actor("tpl_sink",         Sink(out))

    for name, tpl in [
        ("factual",      "tpl_factual"),
        ("statistician", "tpl_statistician"),
        ("quoter",       "tpl_quoter"),
        ("synth",        "tpl_synthesizer"),
        ("logger",       "tpl_logger"),
        ("sink",         "tpl_sink"),
    ]:
        net.add_node(name, tpl)

    net.add_connection("factual",      "finding", "synth", "facts")
    net.add_connection("statistician", "finding", "synth", "stats")
    net.add_connection("quoter",       "finding", "synth", "quotes")
    for src in ("factual", "statistician", "quoter", "synth"):
        net.add_connection(src, "chunk", "logger", "chunk")
    net.add_connection("synth", "answer", "sink", "answer")

    topic_msg = {"type": "String", "data": topic}
    net.add_initial("factual",      "topic", topic_msg)
    net.add_initial("statistician", "topic", topic_msg)
    net.add_initial("quoter",       "topic", topic_msg)

    net.start()
    try:
        return out.get(timeout=300)
    finally:
        net.shutdown()


if __name__ == "__main__":
    print(run("the history of zero"))

add_initial drops a packet directly on an actor's inport. The three calls kick the specialists; from there the graph runs on its own until the sink puts the synthesizer's answer on the queue.

Notes on the design

  • Adding an agent: write a class, register it, add an add_node and one or two add_connection calls. No other code changes.
  • Mixed graphs: actors that hit the LLM and actors that don't (HTTP fetches, file reads, tool calls) compose the same way. The synthesizer doesn't know or care which kind sent its inputs.
  • Cross-language: the same graph topology runs from any Reflow SDK. Move an agent to Go or the JVM by re-registering the template there.
  • Authoring: the graph is JSON. You can build it programmatically (above) or in Zeal and load it from disk.

What is next

The next post takes the same graph shape into a long-running service: a Go gRPC backend that spins up a per-request worker pool — Dispatcher → N Fetchers → Sink — and streams pages back over server-streaming RPC.

A concurrent worker pool over gRPC (Go)

A long-running gRPC server with one server-streaming RPC. Each call spins up a fresh Reflow network shaped like a worker pool — a dispatcher fans URLs across N fetcher actors, results fan back into a single sink, the sink writes to the gRPC stream. Same shape as the canonical goroutines + channels fetcher pool, expressed as a graph.

What this replaces

The vanilla Go version of this tutorial would be roughly:

func crawl(urls []string, n int) <-chan Page {
    work    := make(chan string, len(urls))
    results := make(chan Page, len(urls))
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range work { results <- fetch(url) }
        }()
    }
    for _, u := range urls { work <- u }
    close(work)
    go func() { wg.Wait(); close(results) }()
    return results
}

The pieces are familiar: bounded channels, a work generator, N worker goroutines, a WaitGroup, a closer goroutine. Lifecycle, backpressure, and dispatch are all manual; an unhandled panic or an unclosed channel turns into a goroutine leak.

The Reflow version is a graph:

flowchart LR
    grpc([Crawl RPC]) --> dispatch[dispatcher]
    dispatch -->|worker_0| f0[fetcher_0]
    dispatch -->|worker_1| f1[fetcher_1]
    dispatch -->|worker_N| fN[fetcher_N]
    f0 -->|page| sink[sink]
    f1 -->|page| sink
    fN -->|page| sink
    sink -.write.-> grpc

Backpressure is built into every connector. Cancellation is one net.Close(). Worker count is N, and adding a worker is one AddNode plus two AddConnections.

Prerequisites

Get the Go SDK and the matching C ABI shared library it loads via cgo:

go get github.com/offbit-ai/reflow/sdk/go@v0.2.5
cd "$(go env GOMODCACHE)/github.com/offbit-ai/reflow/sdk/go@v0.2.5"
./scripts/install_lib.sh v0.2.5

install_lib.sh pulls a prebuilt libreflow_rt_capi for your platform from the matching GitHub Release and drops it where cgo can find it. No Rust toolchain required.

The example ships pre-generated protobuf stubs. Regenerate them if you edit proto/search.proto:

go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.5.1
protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       proto/search.proto

The proto

service Crawler {
  rpc Crawl (CrawlRequest) returns (stream Page);
}

message CrawlRequest { repeated string urls = 1; uint32 workers = 2; }
message Page         { string url = 1; uint32 status = 2;
                       string title = 3; uint64 bytes = 4;
                       uint64 took_ms = 5; }

Server-streaming. Client sends a list of URLs; server streams back one Page per fetched result.

Dispatcher

Reads the URL list from its urls inport and round-robins each URL onto one of N outports (worker_0, worker_1, …). Different load policies — hash-by-host, send-to-idle — are one method away.

type Dispatcher struct {
    reflow.BaseActor
    workers int
}

func NewDispatcher(workers int) *Dispatcher {
    out := make([]string, workers)
    for i := range out {
        out[i] = fmt.Sprintf("worker_%d", i)
    }
    return &Dispatcher{
        BaseActor: reflow.BaseActor{
            ComponentName: "dispatcher",
            InportsList:   []string{"urls"},
            OutportsList:  out,
        },
        workers: workers,
    }
}

func (d *Dispatcher) Run(ctx *reflow.ActorContext) error {
    raw, ok := ctx.Input("urls").Data()
    if !ok {
        return fmt.Errorf("dispatch: missing urls")
    }
    var urls []string
    if err := json.Unmarshal(raw, &urls); err != nil {
        return err
    }
    for i, u := range urls {
        port := fmt.Sprintf("worker_%d", i%d.workers)
        if err := ctx.Emit(port, reflow.MessageString(u)); err != nil {
            return err
        }
    }
    return nil
}

ctx.Input("urls").Data() is the unwrapped payload — bare JSON without the {type, data} envelope, so json.Unmarshal lands straight in []string. Use AsJSON() instead when you actually want the tagged form (e.g. to round-trip through MessageFromJSON).

ctx.Emit(port, msg) flushes immediately to the named outport. Reflow's connector model is broadcast — every connector from a source fires for every packet on that source's outport — so to route (one URL → one worker), the dispatcher gives each worker its own outport and emits explicitly.

Fetcher

One template, many node instances. Each instance has its own goroutine inside the runtime and an independent inport channel, so fetches happen concurrently with no shared state.

type Fetcher struct {
    reflow.BaseActor
    client *http.Client
}

func NewFetcher() *Fetcher {
    return &Fetcher{
        BaseActor: reflow.BaseActor{
            ComponentName: "fetcher",
            InportsList:   []string{"url"},
            OutportsList:  []string{"page"},
        },
        client: &http.Client{Timeout: 10 * time.Second},
    }
}

var titleRe = regexp.MustCompile(`(?is)<title[^>]*>(.*?)</title>`)

func (f *Fetcher) Run(ctx *reflow.ActorContext) error {
    url, _ := ctx.Input("url").AsString()
    t0 := time.Now()
    page := map[string]any{
        "url": url, "status": uint32(0), "title": "",
        "bytes": uint64(0), "took_ms": uint64(0),
    }
    defer func() {
        page["took_ms"] = uint64(time.Since(t0).Milliseconds())
        msg, _ := reflow.MessageObject(page)
        _ = ctx.Emit("page", msg)
    }()
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        page["title"] = err.Error(); return nil
    }
    req.Header.Set("User-Agent", "reflow-tutorial-04/0.2 (https://github.com/offbit-ai/reflow)")
    resp, err := f.client.Do(req)
    if err != nil {
        page["title"] = err.Error()
        return nil
    }
    defer resp.Body.Close()
    page["status"] = uint32(resp.StatusCode)
    body, err := io.ReadAll(io.LimitReader(resp.Body, 256*1024))
    if err != nil {
        page["title"] = err.Error()
        return nil
    }
    page["bytes"] = uint64(len(body))
    if m := titleRe.FindSubmatch(body); m != nil {
        page["title"] = strings.TrimSpace(string(m[1]))
    }
    return nil
}

The defer ensures every input — successful or errored — produces exactly one page packet, so the sink can count completions.

Sink

Holds the gRPC stream handle. Counts completions and signals the handler when all expected pages have been sent.

type Sink struct {
    reflow.BaseActor
    stream pb.Crawler_CrawlServer
    want   int
    got    int
    done   chan error
}

func NewSink(stream pb.Crawler_CrawlServer, want int) *Sink {
    return &Sink{
        BaseActor: reflow.BaseActor{
            ComponentName: "sink",
            InportsList:   []string{"page"},
        },
        stream: stream,
        want:   want,
        done:   make(chan error, 1),
    }
}

func (s *Sink) Run(ctx *reflow.ActorContext) error {
    raw, ok := ctx.Input("page").Data()
    if !ok { return nil }
    var p struct {
        URL string `json:"url"`; Status uint32 `json:"status"`
        Title string `json:"title"`; Bytes uint64 `json:"bytes"`
        TookMs uint64 `json:"took_ms"`
    }
    json.Unmarshal(raw, &p)
    if err := s.stream.Send(&pb.Page{
        Url: p.URL, Status: p.Status, Title: p.Title,
        Bytes: p.Bytes, TookMs: p.TookMs,
    }); err != nil {
        select { case s.done <- err: default: }
        return nil
    }
    s.got++
    if s.got >= s.want {
        select { case s.done <- nil: default: }
    }
    return nil
}

The handler

One Reflow network per call. Every per-request resource — actors, goroutines, channels — lives inside the network. defer rnet.Close() tears it all down regardless of how the handler returns.

func (s *server) Crawl(req *pb.CrawlRequest, stream pb.Crawler_CrawlServer) error {
    workers := int(req.Workers)
    if workers == 0 { workers = 4 }

    rnet := reflow.NewNetwork()
    defer rnet.Close()

    dispatcher := NewDispatcher(workers)
    sink := NewSink(stream, len(req.Urls))

    rnet.RegisterGoActor("tpl_dispatcher", dispatcher)
    rnet.RegisterGoActor("tpl_fetcher",    NewFetcher())
    rnet.RegisterGoActor("tpl_sink",       sink)

    rnet.AddNode("dispatch", "tpl_dispatcher", nil)
    rnet.AddNode("collect",  "tpl_sink",       nil)
    for i := 0; i < workers; i++ {
        id := fmt.Sprintf("fetcher_%d", i)
        rnet.AddNode(id, "tpl_fetcher", nil)
        rnet.AddConnection("dispatch", fmt.Sprintf("worker_%d", i), id, "url")
        rnet.AddConnection(id, "page", "collect", "page")
    }

    rnet.AddInitial("dispatch", "urls", map[string]any{
        "type": "Array", "data": req.Urls,
    })

    if err := rnet.Start(); err != nil { return err }

    select {
    case err := <-sink.done:
        return err
    case <-stream.Context().Done():
        return stream.Context().Err()
    }
}

Two arms in the select. sink.done fires when every URL has been sent or the stream has rejected a write. stream.Context().Done() fires when the gRPC client cancels or the deadline expires. Either arm returns; defer rnet.Close() cleans up.

Bumping the worker count from 4 to 16 is req.Workers = 16 — the dispatcher routes accordingly, the runtime spawns 16 fetcher tasks. No WaitGroup resizing, no buffered-channel capacity tuning.

Run it

# server (terminal 1)
cd sdk/go/examples/tutorial-04-grpc-search/server
go run .

# client (terminal 2)
cd sdk/go/examples/tutorial-04-grpc-search/client
go run . \
  https://en.wikipedia.org/wiki/Flow-based_programming \
  https://en.wikipedia.org/wiki/Actor_model \
  https://en.wikipedia.org/wiki/Dataflow_programming

Output:

 105ms  200  Actor model - Wikipedia                              https://en.wikipedia.org/wiki/Actor_model
 114ms  200  Dataflow programming - Wikipedia                     https://en.wikipedia.org/wiki/Dataflow_programming
 201ms  200  Flow-based programming - Wikipedia                   https://en.wikipedia.org/wiki/Flow-based_programming

Pages arrive interleaved with fetch latency, not in submission order — proof the pool is doing its job.

Notes on the design

  • Worker pool. N nodes referencing the same tpl_fetcher template, each with its own goroutine. Pool size comes straight from req.Workers: one config field, no channel-capacity ceremony.
  • Routing policy. Round-robin lives in the dispatcher's Run. Hash by host, sticky session, send-to-idle — replace those few lines.
  • Backpressure. Every connector is a bounded flume channel. A slow sink throttles the fetchers automatically.
  • Cancellation. defer rnet.Close() is the entire teardown path. No errgroup, no goroutine bookkeeping. When stream.Context() is cancelled, the handler returns and the deferred Close propagates shutdown through the network.
  • Mixed actors. The catalog gives you HTTP, JSON parse, file I/O, triggers — drop them in alongside Go actors when the work is generic enough.

What is next

The next post takes the same per-request pattern to the JVM and wires a Reflow flow into a Micronaut service.

Parallel data enrichment behind a Spring Boot endpoint (Java)

A REST service that enriches a product SKU by fanning out to three slow downstream services in parallel, joining the results, and returning a merged JSON payload. Per-request Reflow network — same shape as tutorial 04, different convergence point: where the Go gRPC tutorial fans out into a streaming response, this one fans out and joins back into a single response.

What this replaces

The vanilla Spring version is the standard CompletableFuture chain:

@PostMapping("/enrich")
public Map<String, Object> enrich(@RequestBody EnrichRequest req) {
    var inv     = CompletableFuture.supplyAsync(() -> inventory(req.sku()));
    var price   = CompletableFuture.supplyAsync(() -> price(req.sku()));
    var reviews = CompletableFuture.supplyAsync(() -> reviews(req.sku()));
    CompletableFuture.allOf(inv, price, reviews).join();
    return Map.of(
        "inventory", inv.join(),
        "price",     price.join(),
        "reviews",   reviews.join());
}

It works. But every dependency is implicit in the allOf argument list, the executor is pulled from somewhere — usually the common ForkJoinPool, which is the wrong choice for blocking I/O — and adding a fourth service means editing four places.

The Reflow version is a graph:

flowchart LR
    req([POST /enrich]) --> split[Splitter]
    split -->|inv| inv[InventoryActor]
    split -->|price| pri[PriceActor]
    split -->|reviews| rev[ReviewsActor]
    inv --> merge[Merger]
    pri --> merge
    rev --> merge
    merge -.complete.-> req

Merger declares awaitAllInports = true — it fires once when each distinct inport has a packet. That's Reflow's allOf().join(). Adding a fourth service is one AddNode and two AddConnections. Backpressure is automatic on every edge.

Prerequisites

Java 17+ and Gradle. Add the JVM SDK as a Maven dependency — the published artifact bundles the native runtime for every supported platform, no manual build:

// build.gradle.kts
plugins {
    java
    id("org.springframework.boot") version "3.3.5"
    id("io.spring.dependency-management") version "1.1.6"
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-web")
    implementation("ai.offbit:reflow:0.2.7")
    testImplementation("org.springframework.boot:spring-boot-starter-test")
    testRuntimeOnly("org.junit.platform:junit-platform-launcher")
}

Splitter

Reads the SKU and broadcasts it on three named outports — one per downstream service. Reflow connectors are broadcast: every connector from a source fires for every packet on that source's outport. To route the same SKU to three different consumers we declare one outport per consumer and emit on each.

public class Splitter extends Actor {
    @Override public String component() { return "splitter"; }
    @Override public List<String> inports()  { return List.of("sku"); }
    @Override public List<String> outports() { return List.of("inv", "price", "reviews"); }

    @Override public void run(ActorCallContext ctx) {
        String sku = stripQuotes(ctx.inputDataJson("sku"));
        ctx.emit("inv",     Message.string(sku));
        ctx.emit("price",   Message.string(sku));
        ctx.emit("reviews", Message.string(sku));
        ctx.done();
    }
}

ctx.inputDataJson("sku") returns the bare JSON payload for the named inport — a primitive scalar in JSON form ("WIDGET-42"), ready to use without unwrapping the runtime's {type, data} envelope. stripQuotes peels the JSON quoting off the string scalar.

Service stubs

Three stand-ins for slow I/O. Each sleeps for a different duration so the test can confirm the wall-clock dominator is the slowest branch (220 ms), not the sum of all three (550 ms).

public class InventoryActor extends Actor {
    @Override public String component() { return "inventory"; }
    @Override public List<String> inports()  { return List.of("sku"); }
    @Override public List<String> outports() { return List.of("out"); }

    @Override public void run(ActorCallContext ctx) {
        String sku = ctx.inputDataJson("sku");
        sleep(150);
        long stock = (long) (stripped(sku).length() * 7);
        String json = String.format("{\"sku\":%s,\"stock\":%d}", sku, stock);
        ctx.emit("out", Message.fromJson(
            "{\"type\":\"Object\",\"data\":" + json + "}"));
        ctx.done();
    }
}

PriceActor and ReviewsActor follow the same shape with different sleeps and payload fields.

Merger

Joins the three branches. awaitAllInports = true flips the runtime's tick policy: instead of firing on any input, it waits until every declared inport has a packet, then fires once. The controller passes in a CompletableFuture<String>; completing it signals the request handler to return.

public class Merger extends Actor {
    private final CompletableFuture<String> done;

    public Merger(CompletableFuture<String> done) { this.done = done; }

    @Override public String component() { return "merger"; }
    @Override public List<String> inports()  {
        return List.of("inventory", "price", "reviews");
    }
    @Override public List<String> outports() { return List.of(); }
    @Override public boolean awaitAllInports() { return true; }

    @Override public void run(ActorCallContext ctx) {
        String inv     = ctx.inputDataJson("inventory");
        String price   = ctx.inputDataJson("price");
        String reviews = ctx.inputDataJson("reviews");
        String merged = String.format(
            "{\"inventory\":%s,\"price\":%s,\"reviews\":%s}",
            inv, price, reviews);
        done.complete(merged);
        ctx.done();
    }
}

ctx.inputDataJson(port) is the JVM SDK's per-port JSON accessor — no need to scan the full inputsJson() envelope to find one branch's data.

The controller

One handler. Per-request network in a try-with-resources block — when the response returns or the timeout fires, the network shuts down cleanly. No shared state between requests.

@RestController
public class EnrichController {

    public record EnrichRequest(String sku) {}

    @PostMapping(value = "/enrich",
                 consumes = MediaType.APPLICATION_JSON_VALUE,
                 produces = MediaType.APPLICATION_JSON_VALUE)
    public String enrich(@RequestBody EnrichRequest req) throws Exception {
        var done = new CompletableFuture<String>();

        try (var net = new Network()) {
            net.registerActor("tpl_split",   new Splitter());
            net.registerActor("tpl_inv",     new InventoryActor());
            net.registerActor("tpl_price",   new PriceActor());
            net.registerActor("tpl_reviews", new ReviewsActor());
            net.registerActor("tpl_merge",   new Merger(done));

            net.addNode("split",   "tpl_split");
            net.addNode("inv",     "tpl_inv");
            net.addNode("price",   "tpl_price");
            net.addNode("reviews", "tpl_reviews");
            net.addNode("merge",   "tpl_merge");

            net.addConnection("split", "inv",     "inv",     "sku");
            net.addConnection("split", "price",   "price",   "sku");
            net.addConnection("split", "reviews", "reviews", "sku");

            net.addConnection("inv",     "out", "merge", "inventory");
            net.addConnection("price",   "out", "merge", "price");
            net.addConnection("reviews", "out", "merge", "reviews");

            net.addInitial("split", "sku",
                "{\"type\":\"String\",\"data\":\"" + req.sku() + "\"}");
            net.start();

            return done.get(5, TimeUnit.SECONDS);
        }
    }
}

Network implements AutoCloseable, so the try-with-resources wraps the whole graph lifecycle. done.get(...) blocks the request thread until Merger completes the future or the timeout fires.

Run it

cd sdk/jvm/examples/tutorial-05-spring-enrich
gradle bootRun

# in another terminal
curl -s localhost:8080/enrich \
  -H 'content-type: application/json' \
  -d '{"sku":"WIDGET-42"}' | jq

Output:

{
  "inventory": {"sku": "WIDGET-42", "stock": 63},
  "price":     {"amount": 14.49, "currency": "USD", "sku": "WIDGET-42"},
  "reviews":   {"avg": 3.5, "count": 27, "sku": "WIDGET-42"}
}

The repo includes an EnrichTest that boots the full Spring context via @SpringBootTest and asserts the merged response shape — covers the full per-request lifecycle:

gradle test

Notes on the design

  • Per-request lifecycle. try (var net = new Network()) makes the graph a request-scoped object. No process-wide actor state to clean up; no leaks between requests.
  • awaitAllInports. The fan-in barrier. Unlike a CompletableFuture.allOf join, the merger doesn't have to know how many upstream sources there are — it just declares the ports it cares about, and the runtime tracks completion per inport.
  • Backpressure. Every connector is a bounded channel. If you swap one of the stubs for a real downstream and the merger ends up faster than the network, the runtime throttles upstream automatically.
  • Adding a service. One AddNode, two AddConnections, one inport on the merger. The handler shape doesn't change.
  • Routing topology lives in the wiring. The splitter is a template; you can change "fan to all three" to "hash by SKU prefix" by editing one method, no controller changes.

What is next

The next post takes the same SDK to a long-running shape: a Kafka stream router where the graph stays up indefinitely, consuming from one topic and routing events into N output topics by content.

A long-running Kafka stream router (Java)

A daemon that consumes events from one Kafka topic, routes each one to a different output topic based on content, and stays up indefinitely. Same SDK as tutorial 05, opposite shape: where 05 spins up a fresh per-request graph and tears it down, this one boots a single long-running graph at startup and lets the Kafka poll loop drive ticks through it.

flowchart LR
    kin([orders topic]) --> source[OrderSource]
    source --> router[Router]
    router -->|confirmed| s1[Sink: orders.confirmed]
    router -->|confirmed| l1[Logger]
    router -->|cancelled| s2[Sink: orders.cancelled]
    router -->|cancelled| l2[Logger]
    router -->|refunded| s3[Sink: orders.refunded]
    router -->|refunded| l3[Logger]
    router -->|other| s4[Sink: orders.dlq]
    router -->|other| l4[Logger]
    s1 --> ko1([orders.confirmed])
    s2 --> ko2([orders.cancelled])
    s3 --> ko3([orders.refunded])
    s4 --> ko4([orders.dlq])

Each router outport feeds two downstream actors — a KafkaSink and a Logger. Reflow connectors are broadcast: every connector from a source fires for every packet on that source's outport, so adding the logger doesn't take packets away from the sinks.

What this replaces

Plain Kafka Streams would express this as a topology defined inside a single KafkaStreams builder; routing is KStream#branch with predicates compiled into the topology object. That works, but the topology and the producers/consumers are coupled — the same "split into N topics by content" shape on RabbitMQ, NATS, or in front of a SQS-like service requires a different framework.

Reflow's wiring is independent of the transport. Swap KafkaSink for an HTTP sink, an internal channel sink, or one of the built-in catalog sinks; the routing topology doesn't change.

Prerequisites

Java 17+. The published JVM SDK auto-loads the native runtime — no separate cargo step:

// build.gradle.kts
plugins {
    java
    application
}

dependencies {
    implementation("ai.offbit:reflow:0.2.7")
    implementation("org.apache.kafka:kafka-clients:3.7.1")
    implementation("org.slf4j:slf4j-simple:2.0.13")
}

application {
    mainClass.set("ai.offbit.reflow.tutorial06.Tutorial06Application")
}

A local Kafka broker — KRaft, single-node — via docker compose up -d:

services:
  kafka:
    image: apache/kafka:3.7.1
    container_name: tut06-kafka
    ports: ["9092:9092"]
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      CLUSTER_ID: reflow-tut06-cluster

OrderSource

A long-running consumer that publishes one Reflow message per inbound record. Two things are different from any actor we've seen so far in this series:

  1. The actor declares a _trigger inport even though it has no upstream. Reflow's actor model is event-driven on input, so a pure source needs an initial Flow packet to schedule its first run. We add that initial in the network setup.
  2. run() never returns. The Kafka poll loop runs indefinitely, and the actor publishes packets via ctx.send(port, msg) — not ctx.emit.

public class OrderSource extends Actor {
    private final String bootstrap, topic, groupId;
    private volatile boolean stopped = false;

    public OrderSource(String bootstrap, String topic, String groupId) {
        this.bootstrap = bootstrap; this.topic = topic; this.groupId = groupId;
    }
    public void stop() { stopped = true; }

    @Override public String component() { return "order_source"; }
    @Override public List<String> inports()  { return List.of("_trigger"); }
    @Override public List<String> outports() { return List.of("order"); }

    @Override public void run(ActorCallContext ctx) {
        Properties p = new Properties();
        p.put(BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        p.put(GROUP_ID_CONFIG, groupId);
        p.put(KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        p.put(VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        p.put(AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            consumer.subscribe(List.of(topic));
            while (!stopped) {
                var records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    ctx.send("order", Message.string(r.value()));
                }
            }
        }
        ctx.done();
    }
}

Why ctx.send, not ctx.emit

ctx.emit accumulates packets in a HashMap that drains only when ctx.done() fires. For an actor whose run() returns once per tick that's the right model — it's the bulk-flush pattern. For a continuously-publishing source whose run() never returns, emits would sit in the buffer forever, never reaching downstream connectors.

ctx.send(port, msg) writes straight to the outport channel, bypassing the done() drain. It is the mechanism behind every long-running source actor across the SDKs (Python's ctx.send, and the JVM ctx.send added in 0.2.6).
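
For comparison, the same rule in the Python SDK, as a minimal long-running source (an illustrative class, not taken from the tutorials):

import time

from offbit_reflow import Actor, Message

class TickerSource(Actor):
    component = "ticker_source"
    inports   = ["_trigger"]   # kicked once with a Flow initial
    outports  = ["event"]

    def run(self, ctx):
        # Continuous publisher: push each packet as it is produced,
        # never buffering outputs for ctx.done().
        n = 0
        while True:
            ctx.send({"event": Message.string(f"tick {n}")})
            n += 1
            time.sleep(1.0)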

Router

Inspects the order's status field and emits on the matching outport. Routing policy is one method — swap status-based for hash-by-customer, geo-by-region, etc., by editing run().

public class Router extends Actor {
    @Override public String component() { return "router"; }
    @Override public List<String> inports()  { return List.of("order"); }
    @Override public List<String> outports() {
        return List.of("confirmed", "cancelled", "refunded", "other");
    }

    @Override public void run(ActorCallContext ctx) {
        String body = stripQuotes(ctx.inputDataJson("order"));
        String status = extractStatus(body);
        String port = switch (status) {
            case "confirmed" -> "confirmed";
            case "cancelled" -> "cancelled";
            case "refunded"  -> "refunded";
            default          -> "other";
        };
        ctx.emit(port, Message.string(body));
        ctx.done();
    }
}

ctx.emit is fine here because Router fires per tick — one input, one output, done. The bulk-flush model fits.

Logger

Operational tap that fans off the router's outports in parallel with the KafkaSinks. Reflow's broadcast-on-source-outport semantics mean we can add this without touching the routing logic — the connectors do the splitting.

public class Logger extends Actor {
    private final String label;
    public Logger(String label) { this.label = label; }

    @Override public String component() { return "logger_" + label; }
    @Override public List<String> inports()  { return List.of("in"); }
    @Override public List<String> outports() { return List.of(); }

    @Override public void run(ActorCallContext ctx) {
        String body = stripQuotes(ctx.inputDataJson("in"));
        System.out.printf("[%-9s] %s%n", label, body);
        ctx.done();
    }
}

Output at the terminal as the router fires:

[confirmed] {"id":"a","status":"confirmed"}
[cancelled] {"id":"b","status":"cancelled"}
[refunded ] {"id":"c","status":"refunded"}
[dlq      ] {"id":"d","status":"weird"}

KafkaSink

Three instances, one per output topic. Each holds a Kafka producer constructed lazily on first tick.

public class KafkaSink extends Actor {
    private final String bootstrap, topic;
    private volatile KafkaProducer<String, String> producer;

    public KafkaSink(String bootstrap, String topic) {
        this.bootstrap = bootstrap; this.topic = topic;
    }
    public void close() { var p = producer; if (p != null) p.close(); }

    @Override public String component() { return "kafka_sink_" + topic; }
    @Override public List<String> inports()  { return List.of("in"); }
    @Override public List<String> outports() { return List.of(); }

    @Override public void run(ActorCallContext ctx) {
        if (producer == null) { producer = makeProducer(); }
        String body = stripQuotes(ctx.inputDataJson("in"));
        producer.send(new ProducerRecord<>(topic, body));
        ctx.done();
    }
}

The wiring

One graph, started once at boot. The shutdown hook stops the source's poll loop and tears down the network — the JVM exits cleanly when the consumer/producer threads have drained.

public class Tutorial06Application {
    public static void main(String[] args) throws Exception {
        var bootstrap = System.getenv().getOrDefault("KAFKA_BOOTSTRAP", "localhost:9092");
        var inputTopic = System.getenv().getOrDefault("INPUT_TOPIC", "orders");
        var groupId    = System.getenv().getOrDefault("GROUP_ID", "reflow-tut06");

        var source     = new OrderSource(bootstrap, inputTopic, groupId);
        var confirmed  = new KafkaSink(bootstrap, "orders.confirmed");
        var cancelled  = new KafkaSink(bootstrap, "orders.cancelled");
        var refunded   = new KafkaSink(bootstrap, "orders.refunded");
        var dlq        = new KafkaSink(bootstrap, "orders.dlq");

        Network net = new Network();
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            source.stop(); net.shutdown();
            confirmed.close(); cancelled.close(); refunded.close(); dlq.close();
            net.close();
        }));

        net.registerActor("tpl_source",    source);
        net.registerActor("tpl_router",    new Router());
        net.registerActor("tpl_sink_conf", confirmed);
        net.registerActor("tpl_sink_canc", cancelled);
        net.registerActor("tpl_sink_ref",  refunded);
        net.registerActor("tpl_sink_dlq",  dlq);
        net.registerActor("tpl_log_conf",  new Logger("confirmed"));
        net.registerActor("tpl_log_canc",  new Logger("cancelled"));
        net.registerActor("tpl_log_ref",   new Logger("refunded"));
        net.registerActor("tpl_log_dlq",   new Logger("dlq"));

        net.addNode("source",    "tpl_source");
        net.addNode("router",    "tpl_router");
        net.addNode("confirmed", "tpl_sink_conf");
        net.addNode("cancelled", "tpl_sink_canc");
        net.addNode("refunded",  "tpl_sink_ref");
        net.addNode("dlq",       "tpl_sink_dlq");
        net.addNode("log_conf",  "tpl_log_conf");
        net.addNode("log_canc",  "tpl_log_canc");
        net.addNode("log_ref",   "tpl_log_ref");
        net.addNode("log_dlq",   "tpl_log_dlq");

        net.addConnection("source", "order",     "router",    "order");
        net.addConnection("router", "confirmed", "confirmed", "in");
        net.addConnection("router", "cancelled", "cancelled", "in");
        net.addConnection("router", "refunded",  "refunded",  "in");
        net.addConnection("router", "other",     "dlq",       "in");
        // Same outports fan to the loggers in parallel — broadcast
        // means adding these doesn't reduce traffic to the sinks.
        net.addConnection("router", "confirmed", "log_conf",  "in");
        net.addConnection("router", "cancelled", "log_canc",  "in");
        net.addConnection("router", "refunded",  "log_ref",   "in");
        net.addConnection("router", "other",     "log_dlq",   "in");

        // Source has no upstream — kick it with a Flow initial.
        net.addInitial("source", "_trigger", "{\"type\":\"Flow\"}");

        net.start();
        Thread.currentThread().join();   // park until SIGTERM
    }
}

Run it

docker compose up -d
for t in orders orders.confirmed orders.cancelled orders.refunded orders.dlq; do
  docker exec tut06-kafka /opt/kafka/bin/kafka-topics.sh \
    --bootstrap-server localhost:9092 --create --topic $t \
    --partitions 1 --replication-factor 1
done

gradle run &  # router process

docker exec -i tut06-kafka /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 --topic orders <<EOF
{"id":"a","status":"confirmed"}
{"id":"b","status":"cancelled"}
{"id":"c","status":"refunded"}
{"id":"d","status":"weird"}
EOF

Verify each output topic received its match:

=== orders.confirmed ===
{"id":"a","status":"confirmed"}
=== orders.cancelled ===
{"id":"b","status":"cancelled"}
=== orders.refunded ===
{"id":"c","status":"refunded"}
=== orders.dlq ===
{"id":"d","status":"weird"}

Notes on the design

  • Long-running graph. No per-request setup; Network.start() fires once at app boot, the source actor's poll loop drives ticks forever. Adding a new output topic is one AddNode, one AddConnection, one outport on the Router.
  • ctx.send for source actors. Continuous publishers can't use the per-tick emit/done cycle — ctx.send pushes straight to the outport channel.
  • Routing topology lives in Router.run. Hash-by-customer, geo-by-region, schema-version split — all one-method changes. None of them touch the wiring or the sinks.
  • Transport-independence. KafkaSink could be RabbitSink, HttpSink, an in-memory test sink — same graph shape. The Reflow contract is "messages on ports", not "Kafka records."
  • Shutdown. source.stop() exits the poll loop on next iteration (while (!stopped)); producer.close() flushes the outbound Kafka batch. The JVM exits when both drain.

What is next

The next post takes the SDK in a different direction — back to Python, but with a triggered-batch lifecycle and a workflow assembled almost entirely from catalog templates (api_github_*, tpl_loop, tpl_switch, api_slack_send_message). Reads, decides, takes action.

Composing a workflow from the catalog (Python)

A workflow that reads issues from GitHub, decides via JSON rules, and takes action — Slack alert, follow-up ping, or local archive. Almost every node in the graph is a catalog template instantiated by id; the only Python code is small data-shape adapters and a per-row JSONL appender.

The routing logic itself is rules in config, evaluated by the catalog's tpl_rules_engine — no custom decision actor.

This is the third lifecycle in the series:

Tutorial           Lifecycle         Driver
05 (Spring Boot)   Per-request       HTTP request handler
06 (Kafka)         Long-running      Continuous Kafka poll loop
07 (this post)     Triggered batch   One-shot or scheduled, runs to completion

What this replaces

Hand-rolled Python ETL ends up looking like this:

def triage():
    issues = github.list_issues(state="open", filter="assigned")
    for issue in issues:
        if "priority:high" in labels(issue) or "bug" in labels(issue):
            slack.post(channel="#ops-triage", text=fmt(issue))
        elif issue.assignee is None and age_days(issue) >= 3:
            slack.post(channel="#ops-triage", text=ping(issue))
        else:
            archive.append(issue)

You write the if/elif chain, the executor pool, the GitHub/Slack client wiring. With Reflow:

  • The if/elif is two tpl_rules_engine actors with rules in JSON config. Adding a new rule is one config dict and one connection.
  • The Slack/GitHub clients are catalog API actors: template_actor("api_github_list_issues"), template_actor("api_slack_send_message").
  • The custom Python is just data adaptation: peel an HTTP envelope, flatten labels for the rules engine, append a JSONL row.

flowchart LR
    src[api_github_list_issues] --> ext[ExtractIssues]
    ext -->|issues| each[tpl_loop]
    each -->|item| norm[IssueNormalize]
    norm --> rh[tpl_rules_engine<br/>high_prio]
    rh -->|matched| slack[api_slack_send_message]
    rh -->|unmatched| ro[tpl_rules_engine<br/>needs_owner]
    ro -->|matched| ping[ConsoleSink<br/>or Slack ping]
    ro -->|unmatched| arch[JsonlAppender<br/>tracked.jsonl]

Prerequisites

Python 3.9+ and the published Python SDK plus the api_services pack (matched ABI for your platform):

pip install 'offbit-reflow>=0.2.8'

PACK=https://github.com/offbit-ai/reflow/releases/download/pack-v0.2.4
TRIPLE=$(uname -m)-apple-darwin   # or x86_64-unknown-linux-gnu, etc.
curl -LO "$PACK/reflow.pack.api_services-0.2.0-$TRIPLE.rflpack"

The SDK auto-loads its native runtime; the pack adds the ~6,700 api_* templates (GitHub, Slack, OpenAI, Stripe, …).

import offbit_reflow as reflow

reflow.load_pack("./reflow.pack.api_services-0.2.0-…rflpack")
# After this call, template_actor("api_github_list_issues") works.

Rules as data

Each branch of the workflow is a tpl_rules_engine actor whose config holds the rule. The rule's actions.setProperty enriches matched records with a branch tag; the engine emits on matched or unmatched accordingly. Chain them and you get if/elif/else without writing a single match arm in Python.

High-priority rule

HIGH_PRIO_RULE = {
    "rules": {
        "type": "IF",
        "groups": [
            {
                "connector": "OR",
                "rules": [
                    {"field": "labels", "operator": "contains", "value": "priority:high"},
                    {"field": "labels", "operator": "contains", "value": "bug"},
                ],
            },
        ],
        "actions": {
            "setProperty": [{"key": "branch", "value": "high_prio"}],
        },
    },
}

Operators the engine recognizes: is, is_not, contains, not_contains, greater_than, less_than, greater_equal, less_equal, between, empty, not_empty. Groups are AND/OR over flat rules; the top-level type is IF (all groups match) or OR (any group matches).

Needs-owner rule

NEEDS_OWNER_RULE = {
    "rules": {
        "type": "IF",
        "groups": [
            {
                "connector": "AND",
                "rules": [
                    {"field": "has_assignee", "operator": "is",            "value": False},
                    {"field": "age_days",     "operator": "greater_equal", "value": 3},
                ],
            },
        ],
        "actions": {
            "setProperty": [{"key": "branch", "value": "needs_owner"}],
        },
    },
}

Adding a third rule (e.g. "needs_review" for stale PRs) is another dict and one add_connection to a third sink — no graph restructuring.

Custom actors (the part the catalog can't anticipate)

Three small Python actors. Together about 60 lines.

ExtractIssues — peel the HTTP envelope

api_github_list_issues emits response = {status, headers, body}. tpl_loop wants a bare array. One small actor bridges them.

class ExtractIssues(Actor):
    component = "extract_issues"
    inports = ["response"]
    outports = ["issues"]

    def run(self, ctx):
        envelope = ctx.inputs["response"]["data"]
        body = envelope.get("body") if isinstance(envelope, dict) else envelope
        ctx.done({"issues": Message.array(body)})

IssueNormalize — flatten for the rules engine

tpl_rules_engine's contains operator works on Value::Array(...) against bare values, so labels need to be a list of strings (not the GitHub object form [{"name": ..., "color": ..., ...}]). Same idea for derived fields (age_days, has_assignee).

class IssueNormalize(Actor):
    component = "issue_normalize"
    inports = ["item"]
    outports = ["issue"]

    def run(self, ctx):
        wrapper = ctx.inputs["item"]["data"]   # tpl_loop wraps as {value, index}
        issue = wrapper.get("value", wrapper)
        ctx.done({"issue": Message.object({
            "number":       issue.get("number"),
            "title":        issue.get("title"),
            "url":          issue.get("html_url"),
            "labels":       sorted({l.get("name", "") for l in (issue.get("labels") or [])}),
            "comments":     int(issue.get("comments") or 0),
            "has_assignee": issue.get("assignee") is not None,
            "age_days":     _age_days(issue.get("created_at")),
        })})

JsonlAppender — append-mode writer

tpl_file_save writes the whole file in one shot. Per-record audit logs need append semantics. Ten lines.
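
A plausible shape for it, following the actor conventions above (the version that ships with the example may differ in detail):

import json
import os

from offbit_reflow import Actor

class JsonlAppender(Actor):
    component = "jsonl_appender"
    inports   = ["data"]
    outports  = []

    def __init__(self, path):
        super().__init__()
        self._path = path
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)

    def run(self, ctx):
        # One JSON object per line, append mode.
        with open(self._path, "a") as f:
            f.write(json.dumps(ctx.inputs["data"]["data"]) + "\n")
        ctx.done()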

Wiring the graph

The graph has no custom routing logic — it's all rules engines plus connections.

import offbit_reflow as reflow
from offbit_reflow import Network

reflow.load_pack("./reflow.pack.api_services.rflpack")
net = Network()

# Source — real GitHub API
net.register_actor("tpl_source", reflow.template_actor("api_github_list_issues"))
net.add_node("source", "tpl_source", config={"state": "open", "filter": "assigned"})

# Pipeline
net.register_actor("tpl_extract",   ExtractIssues()._build())
net.register_actor("tpl_loop",      reflow.template_actor("tpl_loop"))
net.register_actor("tpl_normalize", IssueNormalize()._build())
net.add_node("extract",   "tpl_extract")
net.add_node("each",      "tpl_loop")
net.add_node("normalize", "tpl_normalize")

# Routing tree (rules-engine chain)
net.register_actor("tpl_rule_high",  reflow.template_actor("tpl_rules_engine"))
net.register_actor("tpl_rule_owner", reflow.template_actor("tpl_rules_engine"))
net.add_node("rule_high",  "tpl_rule_high",  config=HIGH_PRIO_RULE)
net.add_node("rule_owner", "tpl_rule_owner", config=NEEDS_OWNER_RULE)

# Sinks
net.register_actor("tpl_slack", reflow.template_actor("api_slack_send_message"))
net.add_node("sink_high", "tpl_slack", config={"channel": "#ops-triage"})

net.register_actor("tpl_sink_owner", ConsoleSink("needs-owner")._build())
net.add_node("sink_owner", "tpl_sink_owner")

net.register_actor("tpl_archive", JsonlAppender("./out/tracked.jsonl")._build())
net.add_node("archive", "tpl_archive")

# Connections
net.add_connection("source",     "response",  "extract",    "response")
net.add_connection("extract",    "issues",    "each",       "collection")
net.add_connection("each",       "item",      "normalize",  "item")
net.add_connection("normalize",  "issue",     "rule_high",  "data")
net.add_connection("rule_high",  "matched",   "sink_high",  "data")
net.add_connection("rule_high",  "unmatched", "rule_owner", "data")
net.add_connection("rule_owner", "matched",   "sink_owner", "routed")
net.add_connection("rule_owner", "unmatched", "archive",    "data")

net.add_initial("source", "filter", {"type": "String", "data": "assigned"})
net.start()

That's the entire workflow. The branch table is the two RULE dicts; adding a third rule is one new dict + one new tpl_rules_engine node + two add_connections (matched → sink, unmatched → next rule).
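
Sketched concretely, with hypothetical names (sink_review stands in for whatever sink the new branch needs, registered like the others; when building the graph this way, rule_owner's unmatched edge targets the new rule instead of the archive):

NEEDS_REVIEW_RULE = {
    "rules": {
        "type": "IF",
        "groups": [{
            "connector": "AND",
            "rules": [
                {"field": "labels",   "operator": "contains",      "value": "needs-review"},
                {"field": "age_days", "operator": "greater_equal", "value": 7},
            ],
        }],
        "actions": {"setProperty": [{"key": "branch", "value": "needs_review"}]},
    },
}

net.register_actor("tpl_rule_review", reflow.template_actor("tpl_rules_engine"))
net.add_node("rule_review", "tpl_rule_review", config=NEEDS_REVIEW_RULE)

net.add_connection("rule_owner",  "unmatched", "rule_review", "data")
net.add_connection("rule_review", "matched",   "sink_review", "data")
net.add_connection("rule_review", "unmatched", "archive",     "data")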

Run it

export GITHUB_API_KEY=ghp_…              # PAT with repo scope
export SLACK_API_KEY=xoxb-…              # bot token in your workspace
export SLACK_CHANNEL=#ops-triage

cd sdk/python/examples/tutorial-07-issue-triage
python3 pipeline.py

The pipeline calls GET /issues?state=open&filter=assigned, routes each returned issue, and posts to Slack / archives according to the rules. Output:

[would-slack] #412 Crashes on startup with Python 3.13   …
[would-slack] #350 Memory leak when streaming huge files …
[needs-owner] #388 Document new ctx.send mid-tick API    …

archive: ./out/tracked.jsonl (1 rows)

(The repo also ships a fixture mode that runs the same graph against fixtures/issues.json without credentials — handy for verifying the wiring before pointing at real APIs. See the example's README.md.)

Notes on the design

  • The catalog is the workflow runtime. Of the 9 nodes in the graph, 5 are catalog templates: api_github_list_issues (the source), tpl_loop, two tpl_rules_engine instances, and api_slack_send_message. The custom Python is just the small adapters and sinks shown above.
  • Routing is rules, not code. tpl_rules_engine evaluates a JSON rule per tick; chaining matched/unmatched gives if/elif/else for free. Compare to tpl_switch (field equality only): the rules engine handles AND/OR over multiple fields, numeric ranges, array membership, etc.
  • Triggered batch lifecycle. No HTTP request, no Kafka stream — the network runs once on startup, drains, exits. Wrap with tpl_interval_trigger or tpl_cron_trigger to schedule.
  • Pack model. The api_services pack ships the 6,700 api_* actors out-of-band from the SDK wheel. One reflow.load_pack(...) call at startup makes them all reachable via template_actor(id).

What is next

The next post is a small C++ audio synthesizer that puts ctx.pool to work — three voices, a mixer with one inport (not three), a WAV file. Same idea, different domain: per-upstream stable-id state in a shared pool the consumer reads atomically.

A polyphonic synthesizer with ctx.pool (C++)

A small offline synth: three voices, a mixer, a WAV file at the end. The mixer is one node — not three — even though it consumes per-voice state from a variable number of upstreams. ctx.pool is what makes that possible: a per-actor {id: value} map that persists across ticks, so the consumer can hold per-upstream state without one inport per upstream.

The graph also showcases two other patterns that drop out of the design:

  • Streams for high-throughput producer messaging — the driver pushes 344 tick events through one stream rather than 344 separate outport packets, sidestepping the cap-50 outport queue's backpressure.
  • ctx.send (mid-tick flush) for publishing several values from inside one run callback when the per-tick emit model would collapse them.

flowchart LR
    drv[driver] -->|meta * 3| mix[mixer]
    drv -->|tick stream| mix
    mix -->|block N| sink([WAV file])

Why pool

Pre-pool, a fan-in graph either had to:

  1. Declare one inport per upstream — voice_0_in, voice_1_in, voice_2_in, with awaitAllInports = true on the consumer. Doesn't scale to variable N — adding a fourth voice means editing the consumer.
  2. Accumulate state in ctx.state_set("voice_0_freq", ...), state_set("voice_1_freq", ...), etc. Manual key namespacing, no count operation, no clear, no atomic snapshot.

ctx.pool is the dedicated tool: namespaced under _pool:<name>, keyed by id, atomic snapshot via pool_get_json, count via pool_count. The consumer reads pool_get_json("voices") and gets one JSON object with every active voice's metadata — no matter how many upstreams produced it.

Prerequisites

  • C++17 toolchain (Apple Clang, GCC, MSVC).
  • CMake ≥ 3.16.
  • A built libreflow_rt_capi.{dylib,so,dll} from the runtime, or a release tarball from the Go SDK GitHub Releases — the same binary the Go SDK ships.

cargo build --release -p reflow_rt_capi
cmake -S sdk/cpp -B build \
    -DREFLOW_CPP_BUILD_EXAMPLES=ON \
    -DREFLOW_RT_CAPI_LIB=$PWD/target/release/libreflow_rt_capi.dylib
cmake --build build --target reflow_cpp_tutorial08
build/reflow_cpp_tutorial08

The output tutorial-08.wav is 44.1 kHz mono PCM, 1 second long — play it in any audio app.

The driver

One actor that fires once on a Flow initial. It publishes voice metadata via ctx.send (mid-tick flush — the per-tick emit HashMap would collapse the three updates into one), then opens a stream for the per-block tick events.

auto driver = reflow::Actor::from_callback(
    "driver", /*inports=*/{"_trigger"}, /*outports=*/{"meta", "tick"},
    [](reflow::Context& ctx) {
        // Voice metadata — three small messages, one per voice.
        for (int v = 0; v < kVoiceCount; ++v) {
            char buf[128];
            std::snprintf(buf, sizeof(buf),
                          "{\"voice_id\":%d,\"freq\":%.4f,\"gain\":0.25}",
                          v, kVoiceFreqs[v]);
            ctx.send("meta", reflow::Message::object_from_json(buf));
        }

        // Tick stream — each block index packed as 4 little-endian
        // bytes. The stream's own channel is unbounded
        // (buffer_size = 0), so 344 frames push without blocking.
        auto stream = reflow::StreamProducer::create(
            /*buffer_size=*/0, "driver", "tick");
        for (int b = 0; b < kNumBlocks; ++b) {
            // le32(b): pack the block index as 4 little-endian bytes.
            std::uint8_t bytes[4] = {
                static_cast<std::uint8_t>(b & 0xFF),
                static_cast<std::uint8_t>((b >> 8) & 0xFF),
                static_cast<std::uint8_t>((b >> 16) & 0xFF),
                static_cast<std::uint8_t>((b >> 24) & 0xFF),
            };
            stream.send_bytes(bytes, 4);
        }
        ctx.emit("tick", std::move(stream).into_message());
    });

emit vs send vs streams

Three ways to publish from inside a callback, each for a different shape of work:

  • ≤ 1 message per port per tick → ctx.emit(port, msg). Accumulates outputs in a HashMap drained on return; multiple emits to the same port collapse to the last write.
  • Several messages per port per tick (≤ outport cap) → ctx.send(port, msg). Writes straight to the outport channel, bypassing the per-tick drain; bounded by the outport's queue capacity (cap 50).
  • Many messages per port per tick → StreamProducer. Carries its own channel — bounded or unbounded — independent of the actor's outport queue. The right tool for "publish 100k frames in one callback."

The driver hits all three modes. emit for the StreamHandle (one message), send for the three voice metadata updates, and a stream for the 344 tick frames.

The mixer (where pool earns its keep)

Two inports — meta and tick. The mixer fires whenever either has input.

auto mixer = reflow::Actor::from_callback(
    "mixer", /*inports=*/{"meta", "tick"}, /*outports=*/{"block"},
    [render_block](reflow::Context& ctx) {
        // ── meta: stash voice config in the pool ─────────────────
        if (auto m = ctx.take_input("meta")) {
            auto inner_opt = m->data_json();
            if (inner_opt) {
                int voice_id = static_cast<int>(json_int(*inner_opt, "voice_id"));
                ctx.pool_upsert("voices", std::to_string(voice_id), *inner_opt);
            }
        }

        // ── tick: a StreamHandle that delivers every block index ─
        if (auto t = ctx.take_input("tick")) {
            auto reader = reflow::StreamReader::from_message(*t);
            if (!reader) return;
            while (true) {
                auto frame = reader->recv(5000);
                if (frame.kind != rfl_stream_frame_kind_Data) break;
                int block_idx = decode_le_int32(frame.data);
                render_block(ctx, block_idx);
            }
        }
    });

ctx.pool_upsert("voices", "0", "{voice_id:0, freq:..., gain:...}") stores one entry. Three voices = three entries. Adding a fourth voice means the driver sends one more meta message — the mixer already handles it.

render_block is a closure that reads the whole pool with ctx.pool_get_json("voices"), walks every entry, generates samples for each active voice, and accumulates into the output buffer:

std::string pool = ctx.pool_get_json("voices");
// {"0": {voice_id:0, freq:261.6256, gain:0.25}, "1": {...}, "2": {...}}

for (/* each entry in the pool snapshot, parsed with the example's JSON helpers */) {
    int voice_id = json_int(entry, "voice_id");
    double freq  = json_double(entry, "freq");
    double gain  = json_double(entry, "gain");
    // Generate one block of sine samples at this freq, sum into mixed[].
}

The mixer's port count never changes. Adding a voice is one extra message from the driver — which is exactly the pattern flexible catalog actors expose to the user (api_* HTTP services that scale to thousands of endpoints with one inport apiece).

Voice phase across ticks

Pool stores immutable JSON snapshots. For mutable per-tick state (here: per-voice oscillator phase), we use a closure capture — a std::shared_ptr<std::vector<double>> — that lives across invocations of the mixer's callback. Reflow guarantees an actor's callback isn't invoked concurrently with itself, so plain shared state without a lock is safe.

auto phases = std::make_shared<std::vector<double>>(64, 0.0);

auto render_block = [phases, &sink](reflow::Context& ctx, int block_idx) {
    // ...
    double phase = (*phases)[voice_id];
    for (int i = 0; i < kBlockSize; ++i) {
        mixed[i] += static_cast<float>(std::sin(phase) * gain);
        phase += dphase;
    }
    (*phases)[voice_id] = phase;
};

Pool is the right tool for "stable-id state that flows in from upstream." Captured-shared state is the right tool for "per-id state the consumer derives itself across ticks." A real synth would keep both.

Wiring the network

Two actors, two connections. The driver's meta outport feeds the mixer's meta inport (three small messages); the driver's tick outport feeds the mixer's tick inport (one StreamHandle that delivers all 344 frames). A single Flow initial on _trigger kicks the driver, which then publishes everything in one run.

reflow::Network net;
net.register_actor("tpl_driver", std::move(driver));
net.register_actor("tpl_mixer",  std::move(mixer));

net.add_node("driver", "tpl_driver");
net.add_node("mixer",  "tpl_mixer");

net.add_connection("driver", "meta", "mixer", "meta");
net.add_connection("driver", "tick", "mixer", "tick");

// One Flow initial on _trigger — the driver fires once and
// publishes everything (3 metas via ctx.send + 1 tick stream
// via ctx.emit).
net.add_initial("driver", "_trigger", R"({"type":"Flow"})");

net.start();

Once started, the network runs asynchronously on the runtime's Tokio worker pool. The main thread waits for the mixer to render every expected block:

{
    std::unique_lock<std::mutex> lk(sink.mu);
    sink.cv.wait_for(lk, std::chrono::seconds(20), [&] {
        return sink.blocks_seen.load() >= kNumBlocks;
    });
}
net.shutdown();
write_wav_pcm16("tutorial-08.wav", sink.samples, kSampleRate);

net.shutdown() is non-blocking — it signals the actors to stop; the destructor ~Network() (when net leaves scope) tears the runtime down. The condition variable + mutex pattern lets the mixer's callback flag completion (blocks_seen.fetch_add(1) + cv.notify_one()) and the main thread wake as soon as the last block lands.
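
For completeness, the sink those lines touch is plain standard C++, nothing Reflow-specific. A minimal shape consistent with the fields used above — the struct name and exact types here are assumptions; the shipped example may differ:

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <vector>

// Shared between render_block (mixer side) and the main thread.
// blocks_seen is atomic so the wait predicate can read it cheaply;
// samples is appended under mu by render_block.
struct WavSink {
    std::mutex mu;
    std::condition_variable cv;
    std::atomic<int> blocks_seen{0};
    std::vector<float> samples;
};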

Run it

build/reflow_cpp_tutorial08
# reflow runtime 0.2.3
# rendered 344 blocks (344 expected)
# wrote tutorial-08.wav (44032 samples, 1.00 s)

tutorial-08.wav is a one-second C-major chord (C4 + E4 + G4 sine voices, soft fade in/out). Open it in any audio player.

Notes on the design

  • Pool is per-actor, not network-shared. Each actor has its own pools. A pool exposed to user code lives only inside the consumer's state — pool_upsert from outside is impossible. Upstream producers send a message; the consumer takes the message and upserts.
  • Pool keys must be stable. The pool's value when N voices write is {id_0: ..., id_1: ..., ..., id_{N-1}: ...}. Use deterministic per-upstream identifiers (voice id, sensor mac, file path). Random ids accumulate forever; use pool_remove or pool_clear to garbage-collect (see the sketch after this list).
  • Pool requires MemoryState. Custom state backends yield InvalidState from every pool method. The default backend is MemoryState, so this is only a concern if you've registered a custom backend.
  • Streams when you'd otherwise burst the outport. The C ABI's per-actor outport channel has a capacity of 50. Trying to emit or send more than that in one callback dead-locks against the forwarder draining the channel. Streams have their own (configurable) channel — the right escape hatch for high-throughput producers.
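For reference, the housekeeping calls named above look like this — a sketch, with signatures assumed symmetric with the pool_upsert / pool_get_json calls used earlier:

// Inside the consumer's callback: drop one stale entry, reset the
// pool wholesale, or count what's live.
ctx.pool_remove("voices", std::to_string(voice_id));
ctx.pool_clear("voices");
auto live = ctx.pool_count("voices");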

What's next

The next post takes Reflow back to production-shaped Python: a daily triage pipeline where Airflow owns the calendar (schedule, backfill, retry, UI, credentials) and Reflow owns the actor graph. Same shape as tutorial 07, wrapped in the integration pattern Airflow shops actually deploy.

Reflow inside an Airflow PythonOperator (Python)

A daily issue-triage pipeline. Airflow handles the calendar: schedule, retries, backfill, the dependency graph, the credentials store, the duration UI. Reflow handles the actor graph: catalog templates for the I/O, custom actors for the bits the catalog can't anticipate. The integration point is one PythonOperator whose body is a regular Reflow Network — same lifecycle, same actors, same start() → drain → shutdown() shape as a stand-alone script.

  • Cron / @daily schedule → Airflow
  • Backfill (-s 2024-01-01 -e 2024-01-31) → Airflow
  • Retry on failure, exponential backoff → Airflow
  • Connections / Variables / Pools (credentials, fleet caps) → Airflow
  • Web UI tied to a calendar → Airflow
  • Per-ds summary persistence (Postgres / metrics) → Airflow
  • Actor graph — read GitHub, route via rules, post Slack → Reflow
  • Concurrency & backpressure within the graph → Reflow
  • Catalog of I/O templates (api_github_*, api_slack_*, tpl_rules_engine, tpl_loop) → Reflow

The pairing is intentional. Airflow is the wrong tool for fine-grained in-graph concurrency (one PythonOperator per actor would be a 30-process meatgrinder). Reflow is the wrong tool for "rerun this graph for last month" (no concept of a calendar). Use each for what it's good at.

What this replaces

Hand-rolled Airflow DAGs that Python their way through fan-out via ThreadPoolExecutor and a pile of PythonOperator chains:

@dag(schedule="@daily")
def triage():
    issues = list_issues()                                   # PythonOperator
    classified = classify_concurrent(issues, max_workers=8)  # PythonOperator
    high, low = partition(classified)                        # BranchPythonOperator
    [post_slack(h) for h in high]                            # mapped tasks
    write_jsonl(low)                                         # PythonOperator

The mapped-tasks fan-out spawns N task instances per day — each is a separate process, each runs through Airflow's scheduler-loop serialization, each gets its own row in the task-instance table. Fine for handful-of-items workflows. Painful at scale and overkill for "fan three API lookups out and join them."

The Reflow-inside-Airflow shape collapses that:

@dag(schedule="@daily")
def triage_pipeline():
    @task(retries=2, retry_exponential_backoff=True)
    def triage(ds: str | None = None, **ctx) -> dict:
        return run_triage(ds=ctx["ds"], ...)        # one Reflow Network here

    triage() >> record_metrics

Two task instances per day — one to run the network, one to record. The fan-out happens inside Reflow's actor scheduler, not Airflow's.

Prerequisites

pip install 'offbit-reflow>=0.2.9' 'apache-airflow>=2.10' 'apache-airflow-providers-postgres'

# Pack download — same one tutorial 07 uses.
PACK=https://github.com/offbit-ai/reflow/releases/download/pack-v0.2.5
TRIPLE=$(uname -m)-apple-darwin
curl -LO "$PACK/reflow.pack.api_services-0.2.0-$TRIPLE.rflpack"
mv reflow.pack.api_services-0.2.0-*.rflpack reflow.pack.api_services.rflpack

In Airflow:

# Variables (Admin → Variables, or via CLI):
airflow variables set REFLOW_GITHUB_API_KEY ghp_...
airflow variables set REFLOW_SLACK_API_KEY  xoxb-...
airflow variables set REFLOW_PACK_PATH      /opt/airflow/dags/reflow.pack.api_services.rflpack
airflow variables set REFLOW_OUTPUT_DIR     /var/lib/reflow-triage
airflow variables set REFLOW_SLACK_CHANNEL  '#ops-triage'

# Connections:
airflow connections add reflow_metrics --conn-uri 'postgres://user:pw@host:5432/metrics'

Variables vs env vars: in dev you can set GITHUB_API_KEY / SLACK_API_KEY directly. In production, store creds in Airflow Variables (encrypted at rest) and bind them to environment variables inside the operator — api_github_* / api_slack_* actors read the env, so this is one os.environ[...] = Variable.get(...) line per key inside the task body.

The Reflow Network (pipeline.py)

This file is identical in shape to tutorial 07 — read GitHub → loop → rules engine → split outputs to Slack vs JSONL archive — but it's wrapped in one function the operator calls.

def run_triage(
    ds: str,
    *,
    pack_path: str,
    output_dir: str,
    slack_channel: str,
    timeout_seconds: float = 300.0,
) -> dict[str, Any]:
    reflow.load_pack(pack_path)

    out_path = str(Path(output_dir) / f"{ds}.jsonl")
    Path(out_path).unlink(missing_ok=True)

    counter = {"written": 0, "alerted": 0}
    cv = threading.Condition()

    net = Network()
    # ... register actors, add nodes, add connections ...
    net.start()

    # Wait for the graph to drain. Airflow's task heartbeat handles
    # the longer-running side; the timeout here just guards a stuck
    # network.
    deadline = time.time() + timeout_seconds
    with cv:
        while time.time() < deadline:
            if counter["written"] + counter["alerted"] == 0:
                cv.wait(timeout=2.0)
                continue
            initial = (counter["written"], counter["alerted"])
            cv.wait(timeout=1.5)
            if (counter["written"], counter["alerted"]) == initial:
                break
    net.shutdown()

    return {
        "ds": ds,
        "alerted": counter["alerted"],
        "tracked": counter["written"],
        "output_path": out_path,
    }

The graph itself uses the same shape as tutorial 07:

flowchart LR
    src[api_github_list_issues] --> ext[ExtractIssues]
    ext -->|issues| each[tpl_loop]
    each -->|item| norm[IssueNormalize]
    norm --> rh[tpl_rules_engine<br/>high_prio]
    rh -->|matched| fmt[SlackFormatter]
    fmt -->|text| slack[api_slack_send_message]
    rh -->|matched| ac[AlertCounter]
    rh -->|unmatched| arch[JsonlAppender]

Three custom actors:

  • ExtractIssues peels the {status, headers, body} HTTP envelope.
  • IssueNormalize flattens labels for the rules engine.
  • JsonlAppender is an append-mode JSONL writer that also bumps a completion counter so the operator knows when to return (its counter handshake is sketched below).

Plus two tiny adapters needed for the api-services contracts:

  • SlackFormatter builds a Slack-formatted text from the matched issue (api_slack_send_message wants text, not the raw object).
  • AlertCounter taps the same matched outport as the formatter to bump the alert tally — pure broadcast wiring.
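
JsonlAppender's completion handshake is worth spelling out, since it's what ends the drain-wait loop in run_triage. Independent of the actor API, its per-item body reduces to the following sketch — counter, cv, and out_path are the objects created in run_triage above; make_appender is an illustrative name:

import json

def make_appender(out_path, counter, cv):
    """Build the per-item callback JsonlAppender runs (sketch)."""
    def on_item(record: dict) -> None:
        # Append one unmatched issue as a JSONL row...
        with open(out_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        # ...then bump the tally and wake run_triage's drain loop.
        with cv:
            counter["written"] += 1
            cv.notify_all()
    return on_item

AlertCounter's body is the same handshake against counter["alerted"].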

The DAG (dags/triage.py)

One file under $AIRFLOW_HOME/dags/. The PythonOperator's python_callable is run_triage from above; the only Airflow-specific work is binding Variables to env vars and chaining a follow-up task that records the day's summary.

@dag(
    dag_id="reflow_issue_triage",
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=True,
    max_active_runs=1,
    tags=["reflow", "triage"],
)
def triage_pipeline():
    @task(retries=2, retry_exponential_backoff=True)
    def triage(ds: str | None = None, **ctx) -> dict:
        os.environ["GITHUB_API_KEY"] = Variable.get("REFLOW_GITHUB_API_KEY")
        os.environ["SLACK_API_KEY"]  = Variable.get("REFLOW_SLACK_API_KEY")
        return run_triage(
            ds=ds or ctx["ds"],
            pack_path=Variable.get("REFLOW_PACK_PATH"),
            output_dir=Variable.get("REFLOW_OUTPUT_DIR"),
            slack_channel=Variable.get("REFLOW_SLACK_CHANNEL"),
            timeout_seconds=600.0,
        )

    record = PostgresOperator(
        task_id="record_run",
        postgres_conn_id="reflow_metrics",
        sql="""
            INSERT INTO triage_runs (ds, alerted, tracked, output_path)
            VALUES ('{{ ds }}',
                    {{ ti.xcom_pull(task_ids='triage')['alerted'] }},
                    {{ ti.xcom_pull(task_ids='triage')['tracked'] }},
                    '{{ ti.xcom_pull(task_ids='triage')['output_path'] }}')
            ON CONFLICT (ds) DO UPDATE SET
              alerted = EXCLUDED.alerted,
              tracked = EXCLUDED.tracked;
        """,
    )

    triage() >> record

What Airflow gives you for free

  • Schedule + catchup. schedule="@daily" + catchup=True means every missing date since start_date runs automatically when the DAG first deploys.
  • Backfill. airflow dags backfill -s 2024-01-01 -e 2024-01-31 fires 31 task instances; each one runs the same Reflow Network with its own ds. Idempotency is on the operator (the network truncates out/<ds>.jsonl on entry; the SQL uses ON CONFLICT DO UPDATE).
  • Retry. retries=2, retry_exponential_backoff=True — if the task fails (network exception, timeout, anything), Airflow retries with backoff. The Reflow Network gets re-instantiated on each retry, fresh state.
  • XCom. triage()'s return dict is automatically pushed to XCom. Downstream tasks pull it via Jinja or ti.xcom_pull. No glue code.
  • Web UI. Calendar view, task duration histograms, log per task instance, manual trigger, mark-as-success, clear-and-rerun.
  • Connections / Variables. Encrypted credential storage. The alternative — ~/.bashrc-style env vars — doesn't survive audits.

What stays Reflow's job

  • Per-tick scheduling inside the graph. When the rules engine fires matched, both the Slack formatter and the alert counter fire concurrently. Airflow doesn't see this happen; it sees one task instance running for ~5 seconds. Trying to model the fan-out as Airflow tasks would burn ~6 task instances per matched issue and choke the scheduler.
  • Backpressure. Bounded channels between actors throttle fast producers. tpl_loop fans out N issues, but downstream pressure prevents the network from saturating Slack.
  • The catalog. ~6,700 api_* actors plus ~30 tpl_* flow templates. Adding a new branch (e.g., GitHub comment instead of Slack) is one rule + one connection — not a new Airflow task.

Test without Airflow

The pipeline.py file ships a __main__ entry that runs the network without an Airflow scheduler. Set credentials via env vars and invoke directly — useful when iterating on the actor graph before deploying the DAG.

cd sdk/python/examples/tutorial-09-airflow-triage
export GITHUB_API_KEY=$(gh auth token)
export SLACK_API_KEY=xoxb-...
python3 pipeline.py
# → {"ds": "2026-04-28", "alerted": 2, "tracked": 9, "output_path": "..."}
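Under the hood, that __main__ entry is just a direct call into run_triage — a sketch; the env-var fallbacks shown here are illustrative:

if __name__ == "__main__":
    import datetime
    import json
    import os

    summary = run_triage(
        ds=datetime.date.today().isoformat(),
        pack_path=os.environ.get("REFLOW_PACK_PATH", "reflow.pack.api_services.rflpack"),
        output_dir=os.environ.get("REFLOW_OUTPUT_DIR", "./out"),
        slack_channel=os.environ.get("REFLOW_SLACK_CHANNEL", "#ops-triage"),
    )
    print(json.dumps(summary))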

Notes on the design

  • No Airflow dependency in the network code. pipeline.py imports only offbit_reflow. The DAG file imports pipeline.py and the Airflow surface, then wires them. Swap Airflow for Prefect / Dagster / a cron script and pipeline.py is unchanged.
  • One Network per task instance. Each retry, backfill run, or manual trigger gets a fresh Reflow Network. No leaked state between days.
  • Variables → env vars binding inside the operator. Two-line pattern that lets the api-services catalog read its standard env vars. The same pipeline runs identically in dev (env vars set in shell), staging (Airflow Variables), and production (KubernetesExecutor with sealed secrets).
  • Idempotent JSONL output. Every operator entry truncates out/<ds>.jsonl so reruns produce the same file regardless of retry count. Downstream Postgres uses ON CONFLICT for the same property.

What's next

The next post takes Reflow across the process boundary: two peers federated through the bundled reflow-discovery server, the same shape that works for two machines.

A graph that spans two processes

Tutorial 09 closed the in-process series. This is the follow-up: two Reflow networks running as separate processes — the same shape works for two machines, two pods, two regions — federated through the reflow-discovery server. Messages sent on one peer land in an actor running on the other.

What this is for

In-process Reflow handles per-tick concurrency inside one address space — perfect for a CLI tool, a daemon, an embedded worker. Crossing the process boundary means a different set of constraints:

  • Where is the other peer? → Discovery server (reflow-discovery)
  • Auth — is this a peer I should talk to? → Shared auth_token
  • What if the peer dies? → Heartbeat eviction + auto-reconnect with backoff
  • What if the network blips? → Same — reconnect transparent to the actor graph
  • Addressing remote actors → <actor_id>@<network_id> proxy in the local graph

The reflow-peer CLI bundles all of that. The actor authoring side is unchanged from earlier tutorials — the same Actor trait, the same lifecycle. Everything new lives in the bridge.

flowchart LR
    disc[reflow-discovery<br/>:9000]:::svc
    alpha[peer alpha<br/>:9100]:::peer
    beta[peer beta<br/>:9101]:::peer

    alpha -. POST /register .-> disc
    beta  -. POST /register .-> disc
    alpha -. GET /networks .-> disc
    beta  -. GET /networks .-> disc

    beta == WebSocket bridge ==> alpha
    classDef svc fill:#e8eef7,stroke:#5a6f96,color:#23314f
    classDef peer fill:#f3eafa,stroke:#774d9c,color:#37214a

Prerequisites

Build the binaries:

cargo build --release -p reflow_distributed
ls target/release/reflow-{discovery,peer}

Both binaries are self-contained — no shared state files, no runtime dependencies beyond a TCP port.

The discovery server

Run it on whichever host every peer can reach. For local dev that means 127.0.0.1; in production it sits on an internal network behind your usual TLS-terminating proxy.

target/release/reflow-discovery --bind 127.0.0.1:9000
# 2026-04-28T22:48:32Z INFO  reflow distributed server listening on http://127.0.0.1:9000

The server is HTTP-only. The contract is two endpoints:

POST /register   {network_id, instance_id, endpoint, capabilities}
GET  /networks   → [{network_id, instance_id, endpoint, capabilities, last_seen}, …]

Peers re-register on every refresh tick (default 15s). Entries older than --entry-ttl-secs (default 60s) get pruned.
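
While peers are running you can watch the registry directly; the response shape follows the contract above (values here are illustrative):

curl -s http://127.0.0.1:9000/networks
# [{"network_id":"alpha","instance_id":"alpha-1",
#   "endpoint":"127.0.0.1:9100","capabilities":[],"last_seen":"..."}]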

Peer alpha

alpha.toml:

network_id   = "alpha"
instance_id  = "alpha-1"
bind_address = "127.0.0.1"
bind_port    = 9100
discovery_endpoints = ["http://127.0.0.1:9000"]
auth_token   = "shared-secret"

alpha doesn't need to dial anyone — beta will dial it. Run it:

target/release/reflow-peer --config alpha.toml
# INFO starting peer: network_id=alpha, instance_id=alpha-1, bind=127.0.0.1:9100
# INFO Registered with discovery endpoint: http://127.0.0.1:9000
# INFO peer ready, awaiting messages (Ctrl-C to stop)

The CLI registers a built-in recorder actor — it logs every inbound message on recorder.in. That's enough for a smoke target; swap in your own actor by writing a small Rust binary that reuses reflow_distributed::peer_config::PeerConfig to load the same TOML.

Peer beta

beta.toml:

network_id   = "beta"
instance_id  = "beta-1"
bind_address = "127.0.0.1"
bind_port    = 9101
discovery_endpoints = ["http://127.0.0.1:9000"]
auth_token   = "shared-secret"

[[connect]]
endpoint = "127.0.0.1:9100"

[[connect]] tells beta to dial alpha on startup. Discovery would eventually surface alpha's endpoint anyway, but for the first message you don't want to wait one refresh cycle.

target/release/reflow-peer --config beta.toml \
    --send 'alpha:recorder:in:hello-from-beta'
# INFO connected to peer at 127.0.0.1:9100
# INFO sent test message to alpha/recorder.in: "hello-from-beta"
# INFO peer ready, awaiting messages (Ctrl-C to stop)

Watch alpha:

INFO Established connection with network: beta
INFO 🌐 BRIDGE: Received remote message from beta to alpha::recorder
INFO ✅ ROUTER: Successfully routed message to local actor recorder
INFO recorder.in: String("hello-from-beta")

That's the full federation: beta → bridge → alpha → local recorder actor's in port. The Actor::run callback fires the same way it would for an in-process message.

Crossing the bridge from your own code

The CLI is fine for ops scripts and integration tests, but a real service writes its own binary. The shape:

#![allow(unused)]
fn main() {
use reflow_distributed::peer_config::PeerConfig;
use reflow_network::distributed_network::DistributedNetwork;

let cfg = PeerConfig::from_path(&args.config)?.to_distributed_config();
let mut net = DistributedNetwork::new(cfg).await?;

// Your own actors here.
net.register_local_actor("transcoder", TranscoderActor::new(), None)?;
net.start().await?;

// Register a proxy for the actor we want to talk to.
net.register_remote_actor("recorder", "alpha").await?;

// Push messages through the local network — the proxy forwards
// across the bridge.
{
    let local = net.get_local_network();
    let net = local.read();
    net.send_to_actor("recorder@alpha", "in",
        Message::String(Arc::new("from-my-binary".into())))?;
}
}

reflow_network::distributed_network is the public API; the reflow-peer binary is a thin wrapper around it. Mixing your own actors with peer-CLI infra is just "use the same TOML config and add register_local_actor calls."

Auth tokens

The bridge enforces token equality on the accept side. With auth_token = "shared-secret" on both peers:

beta  → handshake { auth_token: "shared-secret" }
alpha → handshake response { success: true }

With a mismatch:

beta  → handshake { auth_token: "wrong" }
alpha → handshake response { success: false, error: "auth_token mismatch" }
beta  → connect_to_network returns Err("handshake refused: auth_token mismatch")

If alpha has no auth_token set, anyone can connect — sensible for loopback dev, not for anything that talks to a public IP.

Surviving a peer restart

Kill alpha while beta is running:

^C alpha

Beta's heartbeat catches it within 3 × heartbeat_interval_ms (3s by default). The connection map drops alpha. The reconnect dispatcher starts retrying with capped exponential backoff — 200ms → 400ms → 800ms → 1.6s → 3.2s — giving up after five attempts.

Restart alpha within that window:

target/release/reflow-peer --config alpha.toml

Beta logs:

WARN Reconnect attempt 1/5 for alpha failed: Connection refused
WARN Reconnect attempt 2/5 for alpha failed: Connection refused
INFO Reconnected to alpha (endpoint 127.0.0.1:9100) on attempt 3

No state replay, no message redelivery — anything sent during the gap was dropped. The bridge's job ends at delivery; durability is something you build on top of it (an outbox actor, a Kafka log, retry-with-idempotency-keys). That's deliberate: the bridge stays simple and the persistence policy stays in your hands.

Discovery refresh

Every 15s (configurable on the DiscoveryService) each peer re-registers and re-reads /networks. When a new peer joins:

#![allow(unused)]
fn main() {
let mut events = net.discovery().subscribe();
while let Ok(ev) = events.recv().await {
    match ev {
        DiscoveryEvent::Added(info)   => { /* a new peer is up */ }
        DiscoveryEvent::Removed(id)   => { /* a peer is gone */ }
        DiscoveryEvent::Updated(info) => { /* same id, new endpoint */ }
    }
}
}

Use this if you want to auto-register remote-actor proxies the moment a peer comes online — saves a round-trip vs. the configuration approach.

Production shape

  • Run discovery as its own service. Behind a TLS-terminating proxy if it crosses an untrusted boundary. The library (reflow_distributed::router) lets you mount the contract inside your existing axum app if you want auth in front of it.
  • Use a --bind 0.0.0.0:<port> peer + private network. The bridge speaks plain WebSocket today; put it behind your usual network controls (VPC, mesh, mTLS termination) instead of building auth into the protocol.
  • auth_token is a shared secret, not a JWT. Rotate by re-deploying both sides with the new token. For finer-grained authorization, add it at the actor level — the recorder actor can inspect message metadata before processing.
  • One peer per failure domain. Each DistributedNetwork is one bridge process. Run multiple per host if you want isolation by trust boundary or cgroups.
  • Heartbeat tuning. Default heartbeat_interval_ms = 1000 means timeout in 3s. Lower it for chatty local-network setups, raise it if you're crossing a slow link and don't want false positives.

What stays in-process

Everything inside one peer is the same Reflow you already know. Backpressure, ctx.send, ctx.pool, streams, await rules, the catalog — all unchanged. The bridge only sits between peers, never between actors inside a peer. Two consequences:

  • No latency tax for local fan-out. A 10-actor pipeline on one peer runs at memory speed; only the cross-peer hop pays the WebSocket cost.
  • Failure granularity matches the peer boundary. A crashing peer drops only the actors it was hosting; everyone else's graphs keep running and pick up reconnects automatically.

What's next

The series stops here for now. The distributed transport works end-to-end and ships with the binaries you need to run it; the gaps that remain (message persistence, ack semantics, backpressure across the bridge) are real but out of scope for the runtime itself — they belong in the actors that sit on either side.

Building a Visual Graph Editor

Complete tutorial for creating a visual graph editor using Reflow's WebAssembly APIs.

Overview

This tutorial walks through building a complete visual graph editor that allows users to:

  • Create and edit graphs visually
  • Add nodes by dragging from a component palette
  • Connect nodes with visual links
  • Configure node properties through forms
  • Execute workflows and see real-time results
  • Save and load graph files

Prerequisites

  • Basic HTML, CSS, and JavaScript knowledge
  • Understanding of Reflow's graph concepts
  • Node.js and npm installed

Project Setup

1. Initialize Project

mkdir reflow-visual-editor
cd reflow-visual-editor
npm init -y

2. Install Dependencies

# Core dependencies
npm install reflow-network-wasm

# Development dependencies
npm install --save-dev webpack webpack-cli webpack-dev-server
npm install --save-dev html-webpack-plugin css-loader style-loader
npm install --save-dev @babel/core @babel/preset-env babel-loader

3. Project Structure

reflow-visual-editor/
├── src/
│   ├── index.html
│   ├── index.js
│   ├── style.css
│   ├── components/
│   │   ├── Graph.js
│   │   ├── Node.js
│   │   ├── Connection.js
│   │   ├── Palette.js
│   │   └── PropertyPanel.js
│   ├── utils/
│   │   ├── drag-drop.js
│   │   ├── events.js
│   │   └── serialization.js
│   └── workers/
│       └── execution-worker.js
├── webpack.config.js
└── package.json
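
The tree references a webpack.config.js the rest of this tutorial assumes. A minimal config consistent with the dependencies installed above — a sketch to start from; the asyncWebAssembly experiment flag is typically required for packages that ship a .wasm module:

// webpack.config.js — minimal dev setup for the editor (sketch).
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
    entry: './src/index.js',
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js',
        clean: true,
    },
    module: {
        rules: [
            { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
            { test: /\.css$/, use: ['style-loader', 'css-loader'] },
        ],
    },
    plugins: [new HtmlWebpackPlugin({ template: './src/index.html' })],
    // Assumed necessary for reflow-network-wasm's .wasm module.
    experiments: { asyncWebAssembly: true },
    devServer: { static: './dist', port: 8080 },
};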

Core Implementation

1. Basic HTML Structure

<!-- src/index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Reflow Visual Editor</title>
</head>
<body>
    <div id="app">
        <header class="toolbar">
            <div class="toolbar-group">
                <button id="new-graph">New</button>
                <button id="open-graph">Open</button>
                <button id="save-graph">Save</button>
            </div>
            <div class="toolbar-group">
                <button id="run-graph">Run</button>
                <button id="stop-graph">Stop</button>
                <button id="validate-graph">Validate</button>
            </div>
            <div class="toolbar-group">
                <button id="auto-layout">Auto Layout</button>
                <button id="zoom-fit">Zoom to Fit</button>
            </div>
        </header>
        
        <div class="editor-container">
            <div class="sidebar">
                <div class="component-palette" id="palette">
                    <h3>Components</h3>
                    <div class="palette-category" data-category="data">
                        <h4>Data Operations</h4>
                        <div class="palette-items">
                            <!-- Component items will be populated by JavaScript -->
                        </div>
                    </div>
                    <div class="palette-category" data-category="flow">
                        <h4>Flow Control</h4>
                        <div class="palette-items">
                            <!-- Component items will be populated by JavaScript -->
                        </div>
                    </div>
                    <div class="palette-category" data-category="io">
                        <h4>Input/Output</h4>
                        <div class="palette-items">
                            <!-- Component items will be populated by JavaScript -->
                        </div>
                    </div>
                </div>
            </div>
            
            <div class="graph-canvas-container">
                <svg id="graph-canvas" class="graph-canvas">
                    <defs>
                        <marker id="arrowhead" markerWidth="10" markerHeight="7" 
                                refX="10" refY="3.5" orient="auto">
                            <polygon points="0 0, 10 3.5, 0 7" fill="#666" />
                        </marker>
                    </defs>
                    <g id="connections-layer"></g>
                    <g id="nodes-layer"></g>
                </svg>
                
                <div class="canvas-overlay">
                    <div class="zoom-controls">
                        <button id="zoom-in">+</button>
                        <button id="zoom-out">-</button>
                        <span id="zoom-level">100%</span>
                    </div>
                </div>
            </div>
            
            <div class="properties-panel" id="properties-panel">
                <h3>Properties</h3>
                <div id="property-form">
                    <p>Select a node to edit properties</p>
                </div>
            </div>
        </div>
        
        <div class="status-bar">
            <span id="status-text">Ready</span>
            <div class="status-indicators">
                <span id="node-count">0 nodes</span>
                <span id="connection-count">0 connections</span>
            </div>
        </div>
    </div>
</body>
</html>
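
index.js imports src/style.css. The editor only needs a three-pane flex shell to be usable; here is a minimal sketch keyed to the class names above (cosmetic values are arbitrary):

/* src/style.css — minimal layout shell (sketch). */
html, body, #app { height: 100%; margin: 0; font-family: sans-serif; }
#app { display: flex; flex-direction: column; }
.toolbar { display: flex; gap: 12px; padding: 8px; border-bottom: 1px solid #ddd; }
.editor-container { display: flex; flex: 1; min-height: 0; }
.sidebar { width: 220px; overflow-y: auto; border-right: 1px solid #ddd; }
.graph-canvas-container { position: relative; flex: 1; }
.graph-canvas { width: 100%; height: 100%; background: #fafafa; }
.canvas-overlay { position: absolute; top: 8px; right: 8px; }
.properties-panel { width: 260px; overflow-y: auto; border-left: 1px solid #ddd; }
.status-bar { display: flex; justify-content: space-between; padding: 4px 8px; border-top: 1px solid #ddd; }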

2. Main Application Class

// src/index.js
import { Graph } from 'reflow-network-wasm';
import GraphEditor from './components/Graph.js';
import ComponentPalette from './components/Palette.js';
import PropertyPanel from './components/PropertyPanel.js';
import './style.css';

class VisualEditor {
    constructor() {
        this.graph = new Graph("VisualWorkflow", true, {});
        this.graphEditor = new GraphEditor(this.graph, '#graph-canvas');
        this.palette = new ComponentPalette('#palette');
        this.propertyPanel = new PropertyPanel('#properties-panel');
        
        this.selectedNode = null;
        this.isExecuting = false;
        
        this.initializeEventListeners();
        this.initializeComponents();
    }
    
    initializeEventListeners() {
        // Toolbar events
        document.getElementById('new-graph').addEventListener('click', () => this.newGraph());
        document.getElementById('open-graph').addEventListener('click', () => this.openGraph());
        document.getElementById('save-graph').addEventListener('click', () => this.saveGraph());
        document.getElementById('run-graph').addEventListener('click', () => this.runGraph());
        document.getElementById('stop-graph').addEventListener('click', () => this.stopGraph());
        document.getElementById('validate-graph').addEventListener('click', () => this.validateGraph());
        document.getElementById('auto-layout').addEventListener('click', () => this.autoLayout());
        document.getElementById('zoom-fit').addEventListener('click', () => this.zoomToFit());
        
        // Zoom controls
        document.getElementById('zoom-in').addEventListener('click', () => this.graphEditor.zoomIn());
        document.getElementById('zoom-out').addEventListener('click', () => this.graphEditor.zoomOut());
        
        // Graph events
        this.graphEditor.on('nodeSelected', (node) => this.selectNode(node));
        this.graphEditor.on('nodeDeselected', () => this.deselectNode());
        this.graphEditor.on('nodeAdded', (node) => this.updateStatus());
        this.graphEditor.on('nodeRemoved', (node) => this.updateStatus());
        this.graphEditor.on('connectionAdded', (connection) => this.updateStatus());
        this.graphEditor.on('connectionRemoved', (connection) => this.updateStatus());
        
        // Palette events
        this.palette.on('componentDragStart', (component) => this.handleComponentDrag(component));
        
        // Property panel events
        this.propertyPanel.on('propertyChanged', (property, value) => this.updateNodeProperty(property, value));
    }
    
    initializeComponents() {
        this.palette.loadComponents([
            // Data Operations
            { 
                name: 'Map', 
                category: 'data', 
                component: 'MapActor',
                description: 'Transform data using a function',
                icon: '🔄',
                ports: {
                    input: [{ name: 'input', type: 'any' }],
                    output: [{ name: 'output', type: 'any' }]
                }
            },
            { 
                name: 'Filter', 
                category: 'data', 
                component: 'FilterActor',
                description: 'Filter data based on conditions',
                icon: '🔍',
                ports: {
                    input: [{ name: 'input', type: 'any' }],
                    output: [{ name: 'output', type: 'any' }]
                }
            },
            { 
                name: 'Aggregate', 
                category: 'data', 
                component: 'AggregateActor',
                description: 'Aggregate multiple inputs',
                icon: '📊',
                ports: {
                    input: [{ name: 'input', type: 'any' }],
                    output: [{ name: 'output', type: 'any' }]
                }
            },
            
            // Flow Control
            { 
                name: 'Conditional', 
                category: 'flow', 
                component: 'ConditionalActor',
                description: 'Branch based on condition',
                icon: '🔀',
                ports: {
                    input: [{ name: 'input', type: 'any' }],
                    output: [
                        { name: 'true', type: 'any' },
                        { name: 'false', type: 'any' }
                    ]
                }
            },
            { 
                name: 'Merge', 
                category: 'flow', 
                component: 'MergeActor',
                description: 'Merge multiple inputs',
                icon: '🔗',
                ports: {
                    input: [
                        { name: 'input1', type: 'any' },
                        { name: 'input2', type: 'any' }
                    ],
                    output: [{ name: 'output', type: 'any' }]
                }
            },
            
            // Input/Output
            { 
                name: 'HTTP Request', 
                category: 'io', 
                component: 'HttpRequestActor',
                description: 'Make HTTP requests',
                icon: '🌐',
                ports: {
                    input: [{ name: 'url', type: 'string' }],
                    output: [
                        { name: 'response', type: 'object' },
                        { name: 'error', type: 'object' }
                    ]
                }
            },
            { 
                name: 'Logger', 
                category: 'io', 
                component: 'LoggerActor',
                description: 'Log messages',
                icon: '📝',
                ports: {
                    input: [{ name: 'message', type: 'any' }],
                    output: []
                }
            }
        ]);
        
        this.updateStatus();
    }
    
    newGraph() {
        if (this.hasUnsavedChanges()) {
            if (!confirm('You have unsaved changes. Create a new graph anyway?')) {
                return;
            }
        }
        
        this.graph = new Graph("VisualWorkflow", true, {});
        this.graphEditor.setGraph(this.graph);
        this.deselectNode();
        this.updateStatus();
        this.setStatus('New graph created');
    }
    
    async openGraph() {
        const input = document.createElement('input');
        input.type = 'file';
        input.accept = '.json';
        input.onchange = async (e) => {
            const file = e.target.files[0];
            if (file) {
                try {
                    const text = await file.text();
                    const graphData = JSON.parse(text);
                    this.graph = Graph.fromJson(graphData);
                    this.graphEditor.setGraph(this.graph);
                    this.deselectNode();
                    this.updateStatus();
                    this.setStatus(`Opened: ${file.name}`);
                } catch (error) {
                    alert(`Error opening file: ${error.message}`);
                }
            }
        };
        input.click();
    }
    
    saveGraph() {
        try {
            const graphData = this.graph.toJson();
            const blob = new Blob([JSON.stringify(graphData, null, 2)], 
                                { type: 'application/json' });
            const url = URL.createObjectURL(blob);
            
            const a = document.createElement('a');
            a.href = url;
            a.download = `${this.graph.name || 'workflow'}.json`;
            a.click();
            
            URL.revokeObjectURL(url);
            this.setStatus('Graph saved');
        } catch (error) {
            alert(`Error saving graph: ${error.message}`);
        }
    }
    
    async runGraph() {
        try {
            this.setStatus('Validating graph...');
            const validation = this.graph.validate();
            
            if (!validation.isValid) {
                alert(`Graph validation failed:\n${validation.errors.join('\n')}`);
                return;
            }
            
            this.setStatus('Starting execution...');
            this.isExecuting = true;
            
            // Use Web Worker for graph execution
            if (!this.executionWorker) {
                this.executionWorker = new Worker('./workers/execution-worker.js');
                this.executionWorker.onmessage = (e) => this.handleExecutionMessage(e);
            }
            
            this.executionWorker.postMessage({
                type: 'execute',
                graph: this.graph.toJson()
            });
            
            this.updateToolbarState();
        } catch (error) {
            this.setStatus(`Execution error: ${error.message}`);
            this.isExecuting = false;
            this.updateToolbarState();
        }
    }
    
    stopGraph() {
        if (this.executionWorker) {
            this.executionWorker.postMessage({ type: 'stop' });
        }
        this.isExecuting = false;
        this.updateToolbarState();
        this.setStatus('Execution stopped');
    }
    
    validateGraph() {
        try {
            const validation = this.graph.validate();
            
            if (validation.isValid) {
                this.setStatus('Graph is valid');
                // Highlight valid state in UI
                this.graphEditor.highlightValidation(validation);
            } else {
                this.setStatus(`Validation failed: ${validation.errors.length} errors`);
                // Show validation errors in UI
                this.graphEditor.showValidationErrors(validation.errors);
                
                // Show detailed errors in console or modal
                console.log('Validation errors:', validation.errors);
            }
        } catch (error) {
            this.setStatus(`Validation error: ${error.message}`);
        }
    }
    
    autoLayout() {
        try {
            this.setStatus('Calculating layout...');
            const positions = this.graph.calculateLayout({
                algorithm: 'hierarchical',
                nodeSpacing: 120,
                layerSpacing: 80
            });
            
            this.graphEditor.animateToPositions(positions);
            this.setStatus('Layout applied');
        } catch (error) {
            this.setStatus(`Layout error: ${error.message}`);
        }
    }
    
    zoomToFit() {
        this.graphEditor.zoomToFit();
        this.setStatus('Zoomed to fit');
    }
    
    selectNode(node) {
        this.selectedNode = node;
        this.propertyPanel.showNodeProperties(node);
        this.graphEditor.highlightNode(node.id);
    }
    
    deselectNode() {
        this.selectedNode = null;
        this.propertyPanel.clear();
        this.graphEditor.clearHighlight();
    }
    
    updateNodeProperty(property, value) {
        if (this.selectedNode) {
            this.selectedNode.metadata[property] = value;
            this.graphEditor.updateNode(this.selectedNode);
            this.setStatus(`Updated ${property}`);
        }
    }
    
    handleComponentDrag(component) {
        this.graphEditor.enableDropZone(component);
    }
    
    handleExecutionMessage(e) {
        const { type, data } = e.data;
        
        switch (type) {
            case 'progress':
                this.setStatus(`Executing... ${data.progress}%`);
                this.graphEditor.updateExecutionProgress(data);
                break;
                
            case 'completed':
                this.setStatus('Execution completed successfully');
                this.isExecuting = false;
                this.updateToolbarState();
                this.graphEditor.showExecutionResults(data.results);
                break;
                
            case 'error':
                this.setStatus(`Execution failed: ${data.error}`);
                this.isExecuting = false;
                this.updateToolbarState();
                this.graphEditor.showExecutionError(data);
                break;
                
            case 'nodeExecuted':
                this.graphEditor.highlightExecutedNode(data.nodeId);
                break;
        }
    }
    
    updateStatus() {
        const nodeCount = this.graph.getNodes().length;
        const connectionCount = this.graph.getConnections().length;
        
        document.getElementById('node-count').textContent = `${nodeCount} nodes`;
        document.getElementById('connection-count').textContent = `${connectionCount} connections`;
    }
    
    updateToolbarState() {
        document.getElementById('run-graph').disabled = this.isExecuting;
        document.getElementById('stop-graph').disabled = !this.isExecuting;
    }
    
    setStatus(message) {
        document.getElementById('status-text').textContent = message;
        console.log(`Status: ${message}`);
    }
    
    hasUnsavedChanges() {
        // Implement change tracking logic
        return false;
    }
}

// Initialize application when DOM is ready
document.addEventListener('DOMContentLoaded', () => {
    new VisualEditor();
});
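
runGraph above spawns src/workers/execution-worker.js, which only has to speak the message protocol handleExecutionMessage consumes. A skeleton of that contract — the actual Reflow execution call is stubbed, and the graph.nodes shape is an assumption based on this tutorial's JSON:

// src/workers/execution-worker.js
// Skeleton matching the main thread's protocol: receives
// {type:'execute', graph} / {type:'stop'}; posts progress,
// nodeExecuted, completed, and error messages back.
let cancelled = false;

self.onmessage = (e) => {
    const { type, graph } = e.data;

    if (type === 'stop') {
        cancelled = true;
        return;
    }
    if (type !== 'execute') return;
    cancelled = false;

    try {
        const nodes = (graph && graph.nodes) || [];
        const results = {};

        for (let i = 0; i < nodes.length; i++) {
            if (cancelled) return;

            // TODO: run nodes[i] through the Reflow WASM engine here
            // and record its output in results[nodes[i].id].

            self.postMessage({ type: 'nodeExecuted', data: { nodeId: nodes[i].id } });
            self.postMessage({
                type: 'progress',
                data: { progress: Math.round(((i + 1) / nodes.length) * 100) }
            });
        }

        self.postMessage({ type: 'completed', data: { results } });
    } catch (error) {
        self.postMessage({ type: 'error', data: { error: error.message } });
    }
};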

3. Graph Editor Component

// src/components/Graph.js
import { EventEmitter } from '../utils/events.js';
import Node from './Node.js';
import Connection from './Connection.js';

class GraphEditor extends EventEmitter {
    constructor(graph, canvasSelector) {
        super();
        this.graph = graph;
        this.canvas = document.querySelector(canvasSelector);
        this.nodesLayer = this.canvas.querySelector('#nodes-layer');
        this.connectionsLayer = this.canvas.querySelector('#connections-layer');
        
        this.nodes = new Map();
        this.connections = new Map();
        this.scale = 1;
        this.panX = 0;
        this.panY = 0;
        
        this.dragState = null;
        this.connectionDragState = null;
        
        this.initializeEventListeners();
        this.updateView();
    }
    
    initializeEventListeners() {
        // Mouse events for panning and selection
        this.canvas.addEventListener('mousedown', (e) => this.handleMouseDown(e));
        this.canvas.addEventListener('mousemove', (e) => this.handleMouseMove(e));
        this.canvas.addEventListener('mouseup', (e) => this.handleMouseUp(e));
        this.canvas.addEventListener('wheel', (e) => this.handleWheel(e));
        
        // Drag and drop for components
        this.canvas.addEventListener('dragover', (e) => e.preventDefault());
        this.canvas.addEventListener('drop', (e) => this.handleDrop(e));
        
        // Keyboard events
        document.addEventListener('keydown', (e) => this.handleKeyDown(e));
    }
    
    setGraph(graph) {
        this.graph = graph;
        this.updateView();
    }
    
    updateView() {
        this.clearView();
        this.renderConnections();
        this.renderNodes();
    }
    
    clearView() {
        this.nodesLayer.innerHTML = '';
        this.connectionsLayer.innerHTML = '';
        this.nodes.clear();
        this.connections.clear();
    }
    
    renderNodes() {
        const graphNodes = this.graph.getNodes();
        
        graphNodes.forEach(nodeData => {
            const node = new Node(nodeData, this);
            this.nodes.set(nodeData.id, node);
            this.nodesLayer.appendChild(node.element);
        });
    }
    
    renderConnections() {
        const graphConnections = this.graph.getConnections();
        
        graphConnections.forEach(connectionData => {
            const connection = new Connection(connectionData, this);
            this.connections.set(connection.id, connection);
            this.connectionsLayer.appendChild(connection.element);
        });
    }
    
    addNode(componentType, position) {
        const nodeId = `node_${Date.now()}`;
        const nodeData = {
            id: nodeId,
            component: componentType.component,
            metadata: {
                x: position.x,
                y: position.y,
                label: componentType.name,
                ...componentType.defaultProperties
            }
        };
        
        this.graph.addNode(nodeId, componentType.component, nodeData.metadata);
        
        const node = new Node(nodeData, this);
        this.nodes.set(nodeId, node);
        this.nodesLayer.appendChild(node.element);
        
        this.emit('nodeAdded', nodeData);
        return node;
    }
    
    removeNode(nodeId) {
        const node = this.nodes.get(nodeId);
        if (node) {
            // Remove all connections to this node
            const connections = this.graph.getConnections()
                .filter(conn => conn.fromNode === nodeId || conn.toNode === nodeId);
            
            connections.forEach(conn => this.removeConnection(conn.id));
            
            // Remove node from graph
            this.graph.removeNode(nodeId);
            
            // Remove from UI
            node.element.remove();
            this.nodes.delete(nodeId);
            
            this.emit('nodeRemoved', { id: nodeId });
        }
    }
    
    addConnection(fromNode, fromPort, toNode, toPort) {
        try {
            const connectionId = this.graph.addConnection(fromNode, fromPort, toNode, toPort, {});
            
            const connectionData = {
                id: connectionId,
                fromNode,
                fromPort,
                toNode,
                toPort
            };
            
            const connection = new Connection(connectionData, this);
            this.connections.set(connectionId, connection);
            this.connectionsLayer.appendChild(connection.element);
            
            this.emit('connectionAdded', connectionData);
            return connection;
        } catch (error) {
            console.error('Failed to create connection:', error);
            throw error;
        }
    }
    
    removeConnection(connectionId) {
        const connection = this.connections.get(connectionId);
        if (connection) {
            this.graph.removeConnection(connectionId);
            connection.element.remove();
            this.connections.delete(connectionId);
            
            this.emit('connectionRemoved', { id: connectionId });
        }
    }
    
    getNodePosition(nodeId) {
        const node = this.nodes.get(nodeId);
        return node ? node.getPosition() : null;
    }
    
    updateNodePosition(nodeId, position) {
        const node = this.nodes.get(nodeId);
        if (node) {
            node.setPosition(position);
            this.updateConnectionsForNode(nodeId);
        }
    }
    
    updateConnectionsForNode(nodeId) {
        this.connections.forEach(connection => {
            if (connection.fromNode === nodeId || connection.toNode === nodeId) {
                connection.updatePath();
            }
        });
    }
    
    handleMouseDown(e) {
        if (e.target === this.canvas) {
            this.startPanning(e);
        }
    }
    
    handleMouseMove(e) {
        if (this.dragState?.type === 'pan') {
            this.updatePanning(e);
        } else if (this.connectionDragState) {
            this.updateConnectionDrag(e);
        }
    }
    
    handleMouseUp(e) {
        if (this.dragState?.type === 'pan') {
            this.endPanning();
        } else if (this.connectionDragState) {
            this.endConnectionDrag(e);
        }
    }
    
    handleWheel(e) {
        e.preventDefault();
        const delta = e.deltaY > 0 ? 0.9 : 1.1;
        this.zoom(delta, { x: e.clientX, y: e.clientY });
    }
    
    handleDrop(e) {
        e.preventDefault();
        const componentData = JSON.parse(e.dataTransfer.getData('component'));
        const rect = this.canvas.getBoundingClientRect();
        const position = this.screenToWorld({
            x: e.clientX - rect.left,
            y: e.clientY - rect.top
        });
        
        this.addNode(componentData, position);
    }
    
    handleKeyDown(e) {
        if (e.key === 'Delete' && this.selectedNode) {
            this.removeNode(this.selectedNode.id);
        }
    }
    
    startPanning(e) {
        this.dragState = {
            type: 'pan',
            startX: e.clientX,
            startY: e.clientY,
            initialPanX: this.panX,
            initialPanY: this.panY
        };
    }
    
    updatePanning(e) {
        if (this.dragState?.type === 'pan') {
            const dx = e.clientX - this.dragState.startX;
            const dy = e.clientY - this.dragState.startY;
            
            this.panX = this.dragState.initialPanX + dx;
            this.panY = this.dragState.initialPanY + dy;
            
            this.updateTransform();
        }
    }
    
    endPanning() {
        this.dragState = null;
    }
    
    startConnectionDrag(fromNode, fromPort, startPosition) {
        this.connectionDragState = {
            fromNode,
            fromPort,
            startPosition,
            currentPosition: startPosition
        };
        
        // Create temporary connection line
        this.createTempConnectionLine();
    }
    
    updateConnectionDrag(e) {
        if (this.connectionDragState) {
            const rect = this.canvas.getBoundingClientRect();
            this.connectionDragState.currentPosition = {
                x: e.clientX - rect.left,
                y: e.clientY - rect.top
            };
            
            this.updateTempConnectionLine();
        }
    }
    
    endConnectionDrag(e) {
        if (this.connectionDragState) {
            // Find target node and port
            const target = this.findConnectionTarget(e);
            
            if (target) {
                try {
                    this.addConnection(
                        this.connectionDragState.fromNode,
                        this.connectionDragState.fromPort,
                        target.nodeId,
                        target.portName
                    );
                } catch (error) {
                    console.error('Connection failed:', error);
                }
            }
            
            this.removeTempConnectionLine();
            this.connectionDragState = null;
        }
    }
    
    zoom(factor, center) {
        const newScale = Math.max(0.1, Math.min(3, this.scale * factor));
        
        if (center) {
            const worldCenter = this.screenToWorld(center);
            this.scale = newScale;
            const newScreenCenter = this.worldToScreen(worldCenter);
            
            this.panX += center.x - newScreenCenter.x;
            this.panY += center.y - newScreenCenter.y;
        } else {
            this.scale = newScale;
        }
        
        this.updateTransform();
        this.updateZoomDisplay();
    }
    
    zoomIn() {
        this.zoom(1.2);
    }
    
    zoomOut() {
        this.zoom(0.8);
    }
    
    zoomToFit() {
        if (this.nodes.size === 0) return;
        
        // Calculate bounding box of all nodes
        let minX = Infinity, minY = Infinity;
        let maxX = -Infinity, maxY = -Infinity;
        
        this.nodes.forEach(node => {
            const pos = node.getPosition();
            minX = Math.min(minX, pos.x);
            minY = Math.min(minY, pos.y);
            maxX = Math.max(maxX, pos.x + 120); // Node width
            maxY = Math.max(maxY, pos.y + 80);  // Node height
        });
        
        const padding = 50;
        const contentWidth = maxX - minX + 2 * padding;
        const contentHeight = maxY - minY + 2 * padding;
        
        const canvasRect = this.canvas.getBoundingClientRect();
        const scaleX = canvasRect.width / contentWidth;
        const scaleY = canvasRect.height / contentHeight;
        
        this.scale = Math.min(scaleX, scaleY, 1);
        this.panX = (canvasRect.width - contentWidth * this.scale) / 2 - (minX - padding) * this.scale;
        this.panY = (canvasRect.height - contentHeight * this.scale) / 2 - (minY - padding) * this.scale;
        
        this.updateTransform();
        this.updateZoomDisplay();
    }
    
    updateTransform() {
        const transform = `translate(${this.panX}px, ${this.panY}px) scale(${this.scale})`;
        this.nodesLayer.style.transform = transform;
        this.connectionsLayer.style.transform = transform;
    }
    
    updateZoomDisplay() {
        const zoomPercent = Math.round(this.scale * 100);
        document.getElementById('zoom-level').textContent = `${zoomPercent}%`;
    }
    
    screenToWorld(screenPos) {
        return {
            x: (screenPos.x - this.panX) / this.scale,
            y: (screenPos.y - this.panY) / this.scale
        };
    }
    
    worldToScreen(worldPos) {
        return {
            x: worldPos.x * this.scale + this.panX,
            y: worldPos.y * this.scale + this.panY
        };
    }
    
    // Additional methods (animations, validation display, etc.) are omitted for brevity.
    animateToPositions(positions) {
        // Smoothly interpolate each node toward its target position
    }
}

Building a ReactFlow Editor with Reflow Engine in a Web Worker

This tutorial demonstrates how to build a modern visual workflow editor using ReactFlow for the user interface and the Reflow engine, running in a Web Worker, for graph execution and state management.

Table of Contents

  1. Architecture Overview
  2. Project Setup
  3. Worker Integration
  4. ReactFlow Integration
  5. Custom Node Components
  6. Real-time Communication
  7. Advanced Features
  8. Complete Example
  9. Performance Optimization

Architecture Overview

Our architecture separates concerns between the UI layer (ReactFlow) and the execution engine (Reflow WebAssembly):

┌─────────────────────────────────────────────────────────────────┐
│                    React Application                            │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐ │
│  │   ReactFlow     │  │  Component      │  │   Execution     │ │
│  │     Editor      │  │    Palette      │  │    Controls     │ │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘ │
│            │                    │                    │         │
│            └────────────────────┼────────────────────┘         │
│                                 │                              │
└─────────────────────────────────┼──────────────────────────────┘
                                  │ PostMessage API
┌─────────────────────────────────┼──────────────────────────────┐
│                          Web Worker                             │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐ │
│  │ Reflow WebAssm  │  │  Graph State    │  │   Persistence   │ │
│  │     Engine      │  │   Management    │  │   (IndexedDB)   │ │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘ │
└─────────────────────────────────────────────────────────────────┘

Benefits:

  • Performance: Heavy graph operations don't block the UI thread
  • Scalability: Can handle large, complex workflows
  • Persistence: Graph state maintained separately from UI state
  • Modularity: Clear separation between presentation and logic
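
In miniature, the round-trip across the PostMessage boundary looks like this (a sketch; the INIT/READY message shapes anticipate the worker protocol defined later in this tutorial):

// Main thread: spawn the worker and perform the initialization handshake
const worker = new Worker(new URL('./workers/reflow-worker.ts', import.meta.url), {
  type: 'module',
});

worker.addEventListener('message', (event) => {
  if (event.data.type === 'READY') {
    // The WASM engine has initialized; graph operations can now be sent
    worker.postMessage({ type: 'INIT', payload: { name: 'My Graph' } });
  }
});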

Project Setup

Prerequisites

  • Node.js 18+ and npm/yarn
  • Rust toolchain with wasm-pack installed
  • Basic knowledge of React and TypeScript

1. Initialize React Project

# Create new React TypeScript project
npm create react-app@latest reflow-editor --template typescript
cd reflow-editor

# Install ReactFlow and dependencies
npm install reactflow
npm install @types/web

2. Install Reflow WebAssembly Package

# Build the Reflow WebAssembly package (from Reflow repo root)
cd crates/reflow_network
wasm-pack build --target web --out-dir pkg

# Copy the generated package to your React project
cp -r pkg/ /path/to/reflow-editor/src/reflow-wasm/

# Also copy the .wasm binary into public/ so the worker can fetch it at /reflow_network_bg.wasm
cp pkg/reflow_network_bg.wasm /path/to/reflow-editor/public/

3. Project Structure

reflow-editor/
├── src/
│   ├── components/
│   │   ├── Editor/
│   │   │   ├── ReactFlowEditor.tsx
│   │   │   ├── CustomNodes/
│   │   │   └── CustomEdges/
│   │   ├── Palette/
│   │   │   └── ComponentPalette.tsx
│   │   └── Controls/
│   │       └── ExecutionControls.tsx
│   ├── workers/
│   │   └── reflow-worker.ts
│   ├── hooks/
│   │   ├── useReflowWorker.ts
│   │   └── useGraphSync.ts
│   ├── types/
│   │   └── reflow.ts
│   ├── reflow-wasm/          # Copied from Reflow build
│   └── App.tsx

Worker Integration

1. Create the Reflow Worker

First, let's create the Web Worker that manages the Reflow engine:

// src/workers/reflow-worker.ts
import { Graph, GraphHistory, StorageManager, initSync } from '../reflow-wasm/reflow_network.js';

// Worker state
let graph: Graph | null = null;
let history: GraphHistory | null = null;
let storage: StorageManager | null = null;

// Message types for type safety
export interface WorkerMessage {
  type: 'INIT' | 'ADD_NODE' | 'ADD_EDGE' | 'UPDATE_NODE' | 'ADD_GROUP' | 'EXECUTE';
  payload?: any;
}

export interface WorkerResponse {
  type: 'READY' | 'GRAPH_LOADED' | 'GRAPH_EVENT' | 'NODE_ADDED' | 'EDGE_ADDED' | 'ERROR';
  payload?: any;
}

// Initialize WebAssembly
fetch('/reflow_network_bg.wasm').then(async (res) => {
  initSync(await res.arrayBuffer());
  self.postMessage({ type: 'READY' } as WorkerResponse);
});

// Auto-save functionality
let saveTimeout: ReturnType<typeof setTimeout> | undefined;
const autoSave = () => {
  if (saveTimeout) clearTimeout(saveTimeout);
  saveTimeout = setTimeout(() => saveGraphState(), 1000);
};

// Message handler
self.addEventListener('message', async (event: MessageEvent<WorkerMessage>) => {
  const { type, payload } = event.data;

  try {
    switch (type) {
      case 'INIT':
        await initializeGraph(payload.name);
        break;

      case 'ADD_NODE':
        if (!graph) throw new Error('Graph not initialized');
        addNode(payload);
        break;

      case 'ADD_EDGE':
        if (!graph) throw new Error('Graph not initialized');
        addEdge(payload);
        break;

      case 'UPDATE_NODE':
        if (!graph) throw new Error('Graph not initialized');
        updateNode(payload);
        break;

      case 'ADD_GROUP':
        if (!graph) throw new Error('Graph not initialized');
        addGroup(payload);
        break;

      case 'EXECUTE':
        if (!graph) throw new Error('Graph not initialized');
        executeGraph();
        break;

      default:
        console.warn('Unknown message type:', type);
    }
  } catch (error) {
    self.postMessage({
      type: 'ERROR',
      payload: { message: error instanceof Error ? error.message : String(error) }
    } as WorkerResponse);
  }
});

// Initialize graph with persistence
async function initializeGraph(name: string) {
  [graph, history] = Graph.withHistory();
  storage = GraphHistory.createStorageManager(name, 'history');
  
  await storage.initDatabase();

  // Load existing state
  try {
    const snapshot = await storage.loadFromIndexedDB('latest');
    if (snapshot) {
      history = GraphHistory.loadFromSnapshot(snapshot, graph);
    }
  } catch (error) {
    console.warn('No previous state found:', error);
  }

  // Subscribe to graph events
  graph.subscribe((graphEvent) => {
    self.postMessage({
      type: 'GRAPH_EVENT',
      payload: graphEvent
    } as WorkerResponse);
  });

  self.postMessage({
    type: 'GRAPH_LOADED',
    payload: { graph: graph.toJSON() }
  } as WorkerResponse);
}

// Graph operation functions
function addNode(nodeData: any) {
  if (!graph || !history) return;

  graph.addNode(nodeData.id, nodeData.process, nodeData.metadata);
  history.processEvents(graph);
  autoSave();

  self.postMessage({
    type: 'NODE_ADDED',
    payload: nodeData
  } as WorkerResponse);
}

function addEdge(edgeData: any) {
  if (!graph || !history) return;

  const { from, to } = edgeData;
  
  // Add ports if they don't exist
  graph.addOutport(from.port.id, from.actor, from.port.name, true, from.port.metadata);
  graph.addInport(to.port.id, to.actor, to.port.name, true, to.port.metadata);
  
  // Add connection
  graph.addConnection(from.actor, from.port.id, to.actor, to.port.id, edgeData.metadata);
  
  history.processEvents(graph);
  autoSave();

  self.postMessage({
    type: 'EDGE_ADDED',
    payload: edgeData
  } as WorkerResponse);
}

function updateNode(nodeData: any) {
  if (!graph || !history) return;

  graph.setNodeMetadata(nodeData.id, nodeData.metadata);
  history.processEvents(graph);
  autoSave();
}

function addGroup(groupData: any) {
  if (!graph || !history) return;

  graph.addGroup(groupData.id, groupData.nodes, groupData.metadata);
  history.processEvents(graph);
  autoSave();
}

function executeGraph() {
  if (!graph) return;
  
  // Implement graph execution logic here
  console.log('Executing graph:', graph.toJSON());
}

// Save graph state
async function saveGraphState() {
  if (!graph || !history || !storage) return;

  try {
    await storage.saveToIndexedDB('latest', graph, history);
  } catch (error) {
    console.warn('Failed to save to IndexedDB:', error);
    try {
      storage.saveToLocalStorage('latest', graph, history);
    } catch (storageError) {
      console.error('Failed to save state:', storageError);
    }
  }
}

2. Create Worker Hook

Create a React hook to manage the worker communication:

// src/hooks/useReflowWorker.ts
import { useEffect, useRef, useCallback, useState } from 'react';
import type { WorkerMessage, WorkerResponse } from '../workers/reflow-worker';

export interface ReflowWorkerHook {
  isReady: boolean;
  sendMessage: (message: WorkerMessage) => void;
  addEventListener: (listener: (event: WorkerResponse) => void) => void;
  removeEventListener: (listener: (event: WorkerResponse) => void) => void;
}

export function useReflowWorker(): ReflowWorkerHook {
  const workerRef = useRef<Worker | null>(null);
  const [isReady, setIsReady] = useState(false);
  const listenersRef = useRef<Set<(event: WorkerResponse) => void>>(new Set());

  useEffect(() => {
    // Create worker
    // Resolve the worker relative to this module so the bundler can process it
    workerRef.current = new Worker(
      new URL('../workers/reflow-worker.ts', import.meta.url),
      { type: 'module' }
    );

    // Handle worker messages
    const handleMessage = (event: MessageEvent<WorkerResponse>) => {
      const message = event.data;
      
      if (message.type === 'READY') {
        setIsReady(true);
      }

      // Notify all listeners
      listenersRef.current.forEach(listener => listener(message));
    };

    workerRef.current.addEventListener('message', handleMessage);

    return () => {
      workerRef.current?.terminate();
    };
  }, []);

  const sendMessage = useCallback((message: WorkerMessage) => {
    if (workerRef.current && isReady) {
      workerRef.current.postMessage(message);
    }
  }, [isReady]);

  const addEventListener = useCallback((listener: (event: WorkerResponse) => void) => {
    listenersRef.current.add(listener);
  }, []);

  const removeEventListener = useCallback((listener: (event: WorkerResponse) => void) => {
    listenersRef.current.delete(listener);
  }, []);

  return {
    isReady,
    sendMessage,
    addEventListener,
    removeEventListener
  };
}

ReactFlow Integration

1. Main Editor Component

// src/components/Editor/ReactFlowEditor.tsx
import React, { useCallback, useEffect, useState } from 'react';
import ReactFlow, {
  Node,
  Edge,
  addEdge,
  useNodesState,
  useEdgesState,
  Connection,
  ReactFlowProvider,
  Controls,
  Background,
  Panel,
} from 'reactflow';

import 'reactflow/dist/style.css';

import { useReflowWorker } from '../../hooks/useReflowWorker';
import { useGraphSync } from '../../hooks/useGraphSync';
import { ReflowNode } from './CustomNodes/ReflowNode';
import { ComponentPalette } from '../Palette/ComponentPalette';
import { ExecutionControls } from '../Controls/ExecutionControls';

// Custom node types
const nodeTypes = {
  reflow: ReflowNode,
};

export function ReactFlowEditor() {
  const [nodes, setNodes, onNodesChange] = useNodesState([]);
  const [edges, setEdges, onEdgesChange] = useEdgesState([]);
  const worker = useReflowWorker();

  // Sync ReactFlow state with Reflow worker
  const { syncToWorker, syncFromWorker } = useGraphSync(worker, setNodes, setEdges);

  useEffect(() => {
    if (worker.isReady) {
      // Initialize the graph in the worker
      worker.sendMessage({
        type: 'INIT',
        payload: { name: 'ReactFlow Graph' }
      });
    }
  }, [worker.isReady]);

  const onConnect = useCallback(
    (params: Edge | Connection) => {
      // Update ReactFlow state
      setEdges((eds) => addEdge(params, eds));
      
      // Sync to worker
      syncToWorker.addEdge({
        from: {
          actor: params.source,
          port: {
            id: `${params.source}-${params.sourceHandle}`,
            name: params.sourceHandle || 'output',
          }
        },
        to: {
          actor: params.target,
          port: {
            id: `${params.target}-${params.targetHandle}`,
            name: params.targetHandle || 'input',
          }
        }
      });
    },
    [setEdges, syncToWorker]
  );

  const onDrop = useCallback(
    (event: React.DragEvent) => {
      event.preventDefault();

      const reactFlowBounds = event.currentTarget.getBoundingClientRect();
      const type = event.dataTransfer.getData('application/reactflow');
      const position = {
        x: event.clientX - reactFlowBounds.left,
        y: event.clientY - reactFlowBounds.top,
      };

      const newNode: Node = {
        id: `${type}-${Date.now()}`,
        type: 'reflow',
        position,
        data: {
          label: type,
          process: type,
          inports: getDefaultInports(type),
          outports: getDefaultOutports(type),
        },
      };

      // Update ReactFlow state
      setNodes((nds) => nds.concat(newNode));
      
      // Sync to worker
      syncToWorker.addNode({
        id: newNode.id,
        process: type,
        metadata: {
          position,
          name: type,
          inports: newNode.data.inports,
          outports: newNode.data.outports,
        }
      });
    },
    [setNodes, syncToWorker]
  );

  const onDragOver = useCallback((event: React.DragEvent) => {
    event.preventDefault();
    event.dataTransfer.dropEffect = 'move';
  }, []);

  return (
    <div style={{ width: '100vw', height: '100vh', display: 'flex' }}>
      {/* Component Palette */}
      <ComponentPalette />
      
      {/* Main ReactFlow Editor */}
      <div style={{ flex: 1 }} onDrop={onDrop} onDragOver={onDragOver}>
        <ReactFlow
          nodes={nodes}
          edges={edges}
          onNodesChange={onNodesChange}
          onEdgesChange={onEdgesChange}
          onConnect={onConnect}
          nodeTypes={nodeTypes}
          fitView
        >
          <Controls />
          <Background />
          
          {/* Execution Controls Panel */}
          <Panel position="top-right">
            <ExecutionControls 
              onExecute={() => worker.sendMessage({ type: 'EXECUTE' })}
              isReady={worker.isReady}
            />
          </Panel>
        </ReactFlow>
      </div>
    </div>
  );
}

// Helper functions for default port configurations
function getDefaultInports(nodeType: string) {
  const configs: Record<string, Array<{ id: string; name: string; trait: string }>> = {
    'DataSource': [],
    'MapActor': [{ id: 'input', name: 'input', trait: 'data' }],
    'Logger': [{ id: 'input', name: 'input', trait: 'data' }],
    'FilterActor': [{ id: 'input', name: 'input', trait: 'data' }],
  };
  return configs[nodeType] || [{ id: 'input', name: 'input', trait: 'data' }];
}

function getDefaultOutports(nodeType: string) {
  const configs: Record<string, Array<{ id: string; name: string; trait: string }>> = {
    'DataSource': [{ id: 'output', name: 'output', trait: 'data' }],
    'MapActor': [{ id: 'output', name: 'output', trait: 'data' }],
    'Logger': [],
    'FilterActor': [{ id: 'output', name: 'output', trait: 'data' }],
  };
  return configs[nodeType] || [{ id: 'output', name: 'output', trait: 'data' }];
}

Custom Node Components

1. Reflow Node Component

// src/components/Editor/CustomNodes/ReflowNode.tsx
import React, { memo } from 'react';
import { Handle, Position } from 'reactflow';

interface ReflowNodeData {
  label: string;
  process: string;
  inports: Array<{ id: string; name: string; trait: string }>;
  outports: Array<{ id: string; name: string; trait: string }>;
}

interface ReflowNodeProps {
  data: ReflowNodeData;
  isConnectable: boolean;
}

export const ReflowNode = memo(({ data, isConnectable }: ReflowNodeProps) => {
  return (
    <div className="reflow-node">
      {/* Input Handles */}
      {data.inports.map((port, index) => (
        <Handle
          key={port.id}
          type="target"
          position={Position.Left}
          id={port.id}
          isConnectable={isConnectable}
          style={{
            top: `${20 + (index * 25)}px`,
            background: getPortColor(port.trait),
          }}
        />
      ))}

      {/* Node Content */}
      <div className="node-content">
        <div className="node-header">
          <strong>{data.label}</strong>
        </div>
        <div className="node-type">
          {data.process}
        </div>
        
        {/* Port Labels */}
        <div className="port-labels">
          <div className="input-labels">
            {data.inports.map((port) => (
              <div key={port.id} className="port-label">
                {port.name}
              </div>
            ))}
          </div>
          <div className="output-labels">
            {data.outports.map((port) => (
              <div key={port.id} className="port-label">
                {port.name}
              </div>
            ))}
          </div>
        </div>
      </div>

      {/* Output Handles */}
      {data.outports.map((port, index) => (
        <Handle
          key={port.id}
          type="source"
          position={Position.Right}
          id={port.id}
          isConnectable={isConnectable}
          style={{
            top: `${20 + (index * 25)}px`,
            background: getPortColor(port.trait),
          }}
        />
      ))}
    </div>
  );
});

// Utility function for port colors
function getPortColor(trait: string): string {
  const colors: Record<string, string> = {
    data: '#3b82f6',      // Blue for data
    control: '#ef4444',   // Red for control
    event: '#10b981',     // Green for events
    config: '#f59e0b',    // Yellow for configuration
  };
  return colors[trait] || '#6b7280'; // Gray as default
}

2. Node Styles

/* src/components/Editor/CustomNodes/ReflowNode.css */
.reflow-node {
  background: #ffffff;
  border: 2px solid #e5e7eb;
  border-radius: 8px;
  padding: 10px;
  min-width: 150px;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
  position: relative;
}

.reflow-node:hover {
  border-color: #3b82f6;
}

.node-content {
  display: flex;
  flex-direction: column;
  gap: 8px;
}

.node-header {
  font-size: 14px;
  font-weight: bold;
  color: #1f2937;
}

.node-type {
  font-size: 12px;
  color: #6b7280;
  font-style: italic;
}

.port-labels {
  display: flex;
  justify-content: space-between;
  font-size: 10px;
  color: #9ca3af;
}

.input-labels,
.output-labels {
  display: flex;
  flex-direction: column;
  gap: 4px;
}

.port-label {
  line-height: 1.2;
}

/* ReactFlow handle overrides */
.react-flow__handle {
  width: 8px;
  height: 8px;
  border: 2px solid #ffffff;
}

.react-flow__handle-left {
  left: -6px;
}

.react-flow__handle-right {
  right: -6px;
}

Real-time Communication

1. Graph Synchronization Hook

// src/hooks/useGraphSync.ts
import { useCallback, useEffect } from 'react';
import { Node, Edge } from 'reactflow';
import type { ReflowWorkerHook } from './useReflowWorker';
import type { WorkerResponse } from '../workers/reflow-worker';

export function useGraphSync(
  worker: ReflowWorkerHook,
  setNodes: React.Dispatch<React.SetStateAction<Node[]>>,
  setEdges: React.Dispatch<React.SetStateAction<Edge[]>>
) {
  
  // Listen to worker events and sync to ReactFlow
  useEffect(() => {
    const handleWorkerMessage = (message: WorkerResponse) => {
      switch (message.type) {
        case 'GRAPH_LOADED':
          syncGraphFromWorker(message.payload.graph);
          break;
          
        case 'NODE_ADDED':
          // Handle real-time node additions from other sources
          break;
          
        case 'EDGE_ADDED':
          // Handle real-time edge additions from other sources
          break;
          
        case 'GRAPH_EVENT':
          // Handle live graph execution events
          console.log('Graph event:', message.payload);
          break;
      }
    };

    worker.addEventListener(handleWorkerMessage);
    
    return () => {
      worker.removeEventListener(handleWorkerMessage);
    };
  }, [worker]);

  // Convert Reflow graph to ReactFlow format
  const syncGraphFromWorker = useCallback((reflowGraph: any) => {
    const reactFlowNodes: Node[] = [];
    const reactFlowEdges: Edge[] = [];

    // Convert Reflow processes to ReactFlow nodes
    if (reflowGraph.processes) {
      Array.from(reflowGraph.processes.values()).forEach((process: any) => {
        const metadata = Object.fromEntries(process.metadata);
        const position = Object.fromEntries(metadata.position || new Map());
        
        reactFlowNodes.push({
          id: process.id,
          type: 'reflow',
          position: position,
          data: {
            label: metadata.name || process.component,
            process: process.component,
            inports: metadata.inports || [],
            outports: metadata.outports || [],
          },
        });
      });
    }

    // Convert Reflow connections to ReactFlow edges
    if (reflowGraph.connections) {
      reflowGraph.connections.forEach((connection: any) => {
        reactFlowEdges.push({
          id: `${connection.from.node_id}-${connection.from.port_name}-to-${connection.to.node_id}-${connection.to.port_name}`,
          source: connection.from.node_id,
          target: connection.to.node_id,
          sourceHandle: connection.from.port_name,
          targetHandle: connection.to.port_name,
        });
      });
    }

    setNodes(reactFlowNodes);
    setEdges(reactFlowEdges);
  }, [setNodes, setEdges]);

  // Functions to sync ReactFlow changes to worker
  const syncToWorker = {
    addNode: useCallback((nodeData: any) => {
      worker.sendMessage({
        type: 'ADD_NODE',
        payload: nodeData
      });
    }, [worker]),

    addEdge: useCallback((edgeData: any) => {
      worker.sendMessage({
        type: 'ADD_EDGE',
        payload: edgeData
      });
    }, [worker]),

    updateNode: useCallback((nodeData: any) => {
      worker.sendMessage({
        type: 'UPDATE_NODE',
        payload: nodeData
      });
    }, [worker]),
  };

  return {
    syncToWorker,
    syncFromWorker: syncGraphFromWorker,
  };
}

Advanced Features

1. Component Palette

// src/components/Palette/ComponentPalette.tsx
import React from 'react';

const COMPONENT_CATEGORIES = {
  'Data Sources': [
    { type: 'DataSource', label: 'Data Source', description: 'Generate or load data' },
    { type: 'FileReader', label: 'File Reader', description: 'Read files from disk' },
    { type: 'APISource', label: 'API Source', description: 'Fetch data from REST APIs' },
  ],
  'Processors': [
    { type: 'MapActor', label: 'Map', description: 'Transform data elements' },
    { type: 'FilterActor', label: 'Filter', description: 'Filter data elements' },
    { type: 'ReduceActor', label: 'Reduce', description: 'Aggregate data' },
    { type: 'SortActor', label: 'Sort', description: 'Sort data elements' },
  ],
  'Outputs': [
    { type: 'Logger', label: 'Logger', description: 'Log data to console' },
    { type: 'FileWriter', label: 'File Writer', description: 'Write data to file' },
    { type: 'ChartDisplay', label: 'Chart Display', description: 'Visualize data' },
  ],
};

export function ComponentPalette() {
  const onDragStart = (event: React.DragEvent, nodeType: string) => {
    event.dataTransfer.setData('application/reactflow', nodeType);
    event.dataTransfer.effectAllowed = 'move';
  };

  return (
    <div className="component-palette">
      <div className="palette-header">
        <h3>Components</h3>
      </div>
      
      <div className="palette-content">
        {Object.entries(COMPONENT_CATEGORIES).map(([category, components]) => (
          <div key={category} className="component-category">
            <h4>{category}</h4>
            <div className="component-list">
              {components.map((component) => (
                <div
                  key={component.type}
                  className="component-item"
                  draggable
                  onDragStart={(event) => onDragStart(event, component.type)}
                >
                  <div className="component-label">{component.label}</div>
                  <div className="component-description">{component.description}</div>
                </div>
              ))}
            </div>
          </div>
        ))}
      </div>
    </div>
  );
}

2. Execution Controls

// src/components/Controls/ExecutionControls.tsx
import React, { useState } from 'react';

interface ExecutionControlsProps {
  onExecute: () => void;
  isReady: boolean;
}

export function ExecutionControls({ onExecute, isReady }: ExecutionControlsProps) {
  const [isExecuting, setIsExecuting] = useState(false);

  const handleExecute = async () => {
    setIsExecuting(true);
    try {
      onExecute();
      // You can add execution status monitoring here
      setTimeout(() => setIsExecuting(false), 2000); // Simulate execution time
    } catch (error) {
      console.error('Execution failed:', error);
      setIsExecuting(false);
    }
  };

  return (
    <div className="execution-controls">
      <button
        onClick={handleExecute}
        disabled={!isReady || isExecuting}
        className={`execute-button ${isExecuting ? 'executing' : ''}`}
      >
        {isExecuting ? 'Executing...' : 'Execute Workflow'}
      </button>
      
      <div className="status-indicator">
        <div className={`status-dot ${isReady ? 'ready' : 'not-ready'}`} />
        <span>{isReady ? 'Ready' : 'Initializing...'}</span>
      </div>
    </div>
  );
}

Complete Example

1. Main App Component

// src/App.tsx
import React from 'react';
import { ReactFlowProvider } from 'reactflow';
import { ReactFlowEditor } from './components/Editor/ReactFlowEditor';

import './App.css';

function App() {
  return (
    <div className="App">
      <ReactFlowProvider>
        <ReactFlowEditor />
      </ReactFlowProvider>
    </div>
  );
}

export default App;

2. Complete Styling

/* src/App.css */
.App {
  height: 100vh;
  width: 100vw;
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', sans-serif;
}

/* Component Palette Styles */
.component-palette {
  width: 300px;
  height: 100vh;
  background: #f8f9fa;
  border-right: 1px solid #e9ecef;
  display: flex;
  flex-direction: column;
  overflow-y: auto;
}

.palette-header {
  padding: 16px;
  background: #ffffff;
  border-bottom: 1px solid #e9ecef;
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
}

.palette-header h3 {
  margin: 0;
  font-size: 18px;
  font-weight: 600;
  color: #495057;
}

.palette-content {
  flex: 1;
  padding: 16px;
}

.component-category {
  margin-bottom: 24px;
}

.component-category h4 {
  margin: 0 0 12px 0;
  font-size: 14px;
  font-weight: 600;
  color: #6c757d;
  text-transform: uppercase;
  letter-spacing: 0.5px;
}

.component-list {
  display: flex;
  flex-direction: column;
  gap: 8px;
}

.component-item {
  padding: 12px;
  background: #ffffff;
  border: 1px solid #e9ecef;
  border-radius: 8px;
  cursor: grab;
  transition: all 0.2s ease;
  user-select: none;
}

.component-item:hover {
  border-color: #3b82f6;
  box-shadow: 0 2px 8px rgba(59, 130, 246, 0.15);
  transform: translateY(-1px);
}

.component-item:active {
  cursor: grabbing;
  transform: translateY(0);
}

.component-label {
  font-weight: 600;
  color: #212529;
  margin-bottom: 4px;
}

.component-description {
  font-size: 12px;
  color: #6c757d;
  line-height: 1.4;
}

/* Execution Controls Styles */
.execution-controls {
  display: flex;
  flex-direction: column;
  gap: 12px;
  padding: 16px;
  background: #ffffff;
  border: 1px solid #e9ecef;
  border-radius: 8px;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
  min-width: 200px;
}

.execute-button {
  padding: 10px 16px;
  background: #10b981;
  color: white;
  border: none;
  border-radius: 6px;
  font-weight: 600;
  cursor: pointer;
  transition: all 0.2s ease;
}

.execute-button:hover:not(:disabled) {
  background: #059669;
  transform: translateY(-1px);
}

.execute-button:disabled {
  background: #9ca3af;
  cursor: not-allowed;
  transform: none;
}

.execute-button.executing {
  background: #f59e0b;
  animation: pulse 2s infinite;
}

@keyframes pulse {
  0%, 100% { opacity: 1; }
  50% { opacity: 0.7; }
}

.status-indicator {
  display: flex;
  align-items: center;
  gap: 8px;
  font-size: 14px;
  color: #6c757d;
}

.status-dot {
  width: 8px;
  height: 8px;
  border-radius: 50%;
  transition: background-color 0.3s ease;
}

.status-dot.ready {
  background: #10b981;
}

.status-dot.not-ready {
  background: #ef4444;
  animation: blink 1s infinite;
}

@keyframes blink {
  0%, 100% { opacity: 1; }
  50% { opacity: 0.3; }
}

/* ReactFlow Customizations */
.react-flow__node.react-flow__node-reflow {
  background: transparent;
  border: none;
}

.react-flow__edge-path {
  stroke: #3b82f6;
  stroke-width: 2;
}

.react-flow__edge:hover .react-flow__edge-path {
  stroke: #1d4ed8;
  stroke-width: 3;
}

.react-flow__controls {
  background: #ffffff;
  border: 1px solid #e9ecef;
  border-radius: 8px;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}

.react-flow__controls button {
  background: #ffffff;
  border: none;
  border-bottom: 1px solid #e9ecef;
}

.react-flow__controls button:hover {
  background: #f8f9fa;
}

3. TypeScript Type Definitions

// src/types/reflow.ts
export interface ReflowPort {
  id: string;
  name: string;
  trait: 'data' | 'control' | 'event' | 'config';
  position?: { x: number; y: number };
  metadata?: Record<string, any>;
}

export interface ReflowNodeMetadata {
  position: { x: number; y: number };
  name: string;
  inports: ReflowPort[];
  outports: ReflowPort[];
  [key: string]: any;
}

export interface ReflowConnectionPoint {
  actor: string;
  port: {
    id: string;
    name: string;
    metadata?: Record<string, any>;
  };
}

export interface ReflowConnection {
  from: ReflowConnectionPoint;
  to: ReflowConnectionPoint;
  metadata?: Record<string, any>;
}

export interface ReflowGraphEvent {
  type: 'node_added' | 'edge_added' | 'node_updated' | 'execution_started' | 'execution_completed';
  data: any;
  timestamp: number;
}

4. Package.json Configuration

{
  "name": "reflow-editor",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@types/react": "^18.2.0",
    "@types/react-dom": "^18.2.0",
    "@types/web": "^0.0.99",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-scripts": "5.0.1",
    "reactflow": "^11.10.0",
    "typescript": "^4.9.0"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest"
    ]
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}

Performance Optimization

1. Memory Management

// Optimize large graphs with virtualization
import { useMemo } from 'react';
import { Node } from 'reactflow';

function useOptimizedNodes(nodes: Node[], viewport: { x: number; y: number; zoom: number }) {
  const visibleNodes = useMemo(() => {
    // Only render nodes in viewport to improve performance
    const padding = 200; // Extra padding around viewport
    const viewportBounds = {
      left: -viewport.x / viewport.zoom - padding,
      top: -viewport.y / viewport.zoom - padding,
      right: (window.innerWidth - viewport.x) / viewport.zoom + padding,
      bottom: (window.innerHeight - viewport.y) / viewport.zoom + padding,
    };

    return nodes.filter(node => {
      return (
        node.position.x >= viewportBounds.left &&
        node.position.x <= viewportBounds.right &&
        node.position.y >= viewportBounds.top &&
        node.position.y <= viewportBounds.bottom
      );
    });
  }, [nodes, viewport]);

  return visibleNodes;
}
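
The viewport argument can come straight from ReactFlow. A usage sketch (assuming the component renders inside a ReactFlowProvider, where useViewport is available):

// Usage sketch: useViewport() supplies { x, y, zoom } for the current canvas
import { useViewport } from 'reactflow';

function OptimizedNodeList({ nodes }: { nodes: Node[] }) {
  const viewport = useViewport();
  const visibleNodes = useOptimizedNodes(nodes, viewport);
  return <div>{visibleNodes.length} nodes in view</div>;
}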

2. Worker Optimization

// Batch worker messages to reduce overhead
class WorkerMessageBatcher {
  private batchedMessages: WorkerMessage[] = [];
  private batchTimeout: ReturnType<typeof setTimeout> | null = null;
  private worker: Worker;

  constructor(worker: Worker) {
    this.worker = worker;
  }

  sendMessage(message: WorkerMessage) {
    this.batchedMessages.push(message);
    
    if (this.batchTimeout) {
      clearTimeout(this.batchTimeout);
    }

    this.batchTimeout = setTimeout(() => {
      this.flushBatch();
    }, 16); // Batch messages for ~60fps
  }

  private flushBatch() {
    if (this.batchedMessages.length > 0) {
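      // Note: the worker must also handle a 'BATCH' message type (not part of the
      // WorkerMessage union defined earlier) and unpack the payload array.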
      this.worker.postMessage({
        type: 'BATCH',
        payload: this.batchedMessages
      });
      this.batchedMessages = [];
    }
    this.batchTimeout = null;
  }
}

3. State Management Optimization

// Use React.memo and useMemo for expensive operations
import { memo, useMemo } from 'react';

export const OptimizedReflowNode = memo(({ data, isConnectable }: ReflowNodeProps) => {
  const portColors = useMemo(() => {
    return {
      inports: data.inports.map(port => getPortColor(port.trait)),
      outports: data.outports.map(port => getPortColor(port.trait))
    };
  }, [data.inports, data.outports]);

  return (
    <div className="reflow-node">
      {/* Optimized rendering with memoized colors */}
    </div>
  );
}, (prevProps, nextProps) => {
  // Custom comparison for better performance
  return (
    prevProps.data.label === nextProps.data.label &&
    prevProps.data.process === nextProps.data.process &&
    prevProps.isConnectable === nextProps.isConnectable &&
    JSON.stringify(prevProps.data.inports) === JSON.stringify(nextProps.data.inports) &&
    JSON.stringify(prevProps.data.outports) === JSON.stringify(nextProps.data.outports)
  );
});

4. WebAssembly Loading Optimization

// Pre-load and cache WebAssembly modules
class WasmCache {
  private static instance: WasmCache;
  private wasmModule: WebAssembly.Module | null = null;
  private loading: Promise<WebAssembly.Module> | null = null;

  static getInstance() {
    if (!WasmCache.instance) {
      WasmCache.instance = new WasmCache();
    }
    return WasmCache.instance;
  }

  async getModule(): Promise<WebAssembly.Module> {
    if (this.wasmModule) {
      return this.wasmModule;
    }

    if (this.loading) {
      return this.loading;
    }

    this.loading = this.loadModule();
    this.wasmModule = await this.loading;
    return this.wasmModule;
  }

  private async loadModule(): Promise<WebAssembly.Module> {
    const response = await fetch('/reflow_network_bg.wasm');
    const bytes = await response.arrayBuffer();
    return WebAssembly.compile(bytes);
  }
}
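
With the cache in place, a worker can reuse the compiled module instead of re-fetching the .wasm bytes (this assumes the generated initSync accepts a precompiled WebAssembly.Module, as recent wasm-bindgen builds do):

// Inside the worker, replacing the fetch-based initialization
const module = await WasmCache.getInstance().getModule();
initSync(module);
self.postMessage({ type: 'READY' } as WorkerResponse);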

Best Practices & Tips

1. Error Handling

  • Always wrap worker communication in try-catch blocks
  • Implement proper error boundaries in React components (see the sketch after this list)
  • Provide meaningful error messages to users
  • Log errors for debugging but don't expose sensitive information
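
As a concrete instance of the error-boundary point, a minimal boundary component (error boundaries must currently be class components in React):

// src/components/EditorErrorBoundary.tsx (hypothetical file)
import React from 'react';

export class EditorErrorBoundary extends React.Component<
  { children: React.ReactNode },
  { error: Error | null }
> {
  state = { error: null as Error | null };

  static getDerivedStateFromError(error: Error) {
    return { error };
  }

  componentDidCatch(error: Error, info: React.ErrorInfo) {
    // Log for debugging, but keep the user-facing message generic
    console.error('Editor crashed:', error, info.componentStack);
  }

  render() {
    if (this.state.error) {
      return <div>Something went wrong in the editor.</div>;
    }
    return this.props.children;
  }
}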

2. State Synchronization

  • Keep ReactFlow state as the source of truth for UI
  • Use the worker for business logic and persistence
  • Implement debouncing for frequent updates (see the debounce sketch below)
  • Handle race conditions in async operations
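
A minimal debounce helper for the update path (pairing it with the syncToWorker functions from earlier is illustrative):

// Generic debounce: delays worker sync until edits settle
function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// e.g. coalesce node-metadata updates while the user drags a node
const debouncedUpdateNode = debounce(syncToWorker.updateNode, 250);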

3. Performance

  • Use React.memo for components that render frequently
  • Implement virtualization for large graphs (>1000 nodes)
  • Batch worker messages to reduce overhead
  • Optimize WebAssembly loading and initialization

4. User Experience

  • Show loading states during initialization
  • Provide feedback for long-running operations
  • Implement undo/redo functionality
  • Add keyboard shortcuts for common operations (see the sketch below)
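
A sketch of keyboard shortcuts wired to hypothetical UNDO/REDO worker messages; these types are not part of the worker protocol above, and supporting them would mean extending WorkerMessage and mapping them onto GraphHistory in the worker:

// Hook up Ctrl/Cmd+Z and Ctrl/Cmd+Shift+Z to (hypothetical) history messages
import { useEffect } from 'react';

function useEditorShortcuts(sendMessage: (msg: { type: string }) => void) {
  useEffect(() => {
    const onKeyDown = (e: KeyboardEvent) => {
      if ((e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 'z') {
        sendMessage({ type: e.shiftKey ? 'REDO' : 'UNDO' });
      }
    };
    window.addEventListener('keydown', onKeyDown);
    return () => window.removeEventListener('keydown', onKeyDown);
  }, [sendMessage]);
}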

Conclusion

This tutorial demonstrated how to build a modern, high-performance visual workflow editor by combining ReactFlow's excellent UI capabilities with Reflow's powerful WebAssembly engine running in a Web Worker.

Key Benefits Achieved

  • Performance: UI remains responsive during heavy graph operations
  • Scalability: Can handle complex workflows with hundreds of nodes
  • Persistence: Automatic saving and loading of graph state
  • Type Safety: Full TypeScript integration for better development experience
  • Modularity: Clean separation between UI and business logic

Next Steps

  • Custom Components: Extend the component palette with domain-specific actors
  • Real-time Collaboration: Add WebSocket support for multi-user editing
  • Advanced Debugging: Implement step-through execution and breakpoints
  • Plugin System: Create an extensible architecture for custom functionality
  • Cloud Integration: Add support for cloud storage and sharing

The architecture presented here provides a solid foundation for building production-ready workflow editors that can scale to enterprise requirements while maintaining excellent user experience.

For more advanced topics and examples, explore the main Reflow documentation and the audio-flow example which demonstrates many of these concepts in action.

Performance Optimization Guide

Advanced techniques for optimizing Reflow workflows and applications.

Overview

This guide covers comprehensive performance optimization strategies for Reflow applications, from basic configuration tweaks to advanced architectural patterns.

Performance Analysis

1. Profiling Your Application

#![allow(unused)]
fn main() {
use reflow_network::profiling::{ProfileConfig, Profiler, PerformanceMetrics, OutputFormat};
use std::time::Instant;

// Enable comprehensive profiling
let profile_config = ProfileConfig {
    enable_memory_tracking: true,
    enable_cpu_profiling: true,
    enable_network_monitoring: true,
    sample_rate: 1000, // Sample every 1000 operations
    output_format: OutputFormat::Json,
};

let profiler = Profiler::new(profile_config);
profiler.start();

// Run your workflow
let start = Instant::now();
network.execute().await?;
let duration = start.elapsed();

// Collect profiling data
let metrics = profiler.stop_and_collect();
println!("Execution time: {:?}", duration);
println!("Memory peak: {:.2} MB", metrics.peak_memory_mb);
println!("CPU utilization: {:.1}%", metrics.avg_cpu_percent);

// Save detailed report
metrics.save_report("performance_report.json")?;
}

2. Benchmarking Workflows

#![allow(unused)]
fn main() {
use criterion::{black_box, criterion_group, criterion_main, Criterion, BenchmarkId};

fn benchmark_workflow_variants(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();
    
    let mut group = c.benchmark_group("workflow_comparison");
    
    // Test different configurations
    for batch_size in [10, 50, 100, 500].iter() {
        group.bench_with_input(
            BenchmarkId::new("batched_workflow", batch_size),
            batch_size,
            |b, &batch_size| {
                b.iter(|| {
                    rt.block_on(async {
                        let network = create_batched_workflow(batch_size).await;
                        black_box(network.execute().await)
                    })
                })
            },
        );
    }
    
    group.finish();
}

criterion_group!(benches, benchmark_workflow_variants);
criterion_main!(benches);
}

Memory Optimization

1. Memory Pool Configuration

#![allow(unused)]
fn main() {
use reflow_network::{MemoryPool, PoolConfig};

// Configure memory pools for different object types
let pool_config = PoolConfig {
    message_pool_size: 10000,
    node_pool_size: 1000, 
    connection_pool_size: 5000,
    enable_auto_scaling: true,
    max_pool_size: 50000,
    cleanup_threshold: 0.8,
};

let memory_pool = MemoryPool::new(pool_config);

// Use pooled objects
let network = Network::with_memory_pool(memory_pool);
}

2. Message Optimization

#![allow(unused)]
fn main() {
use reflow_network::{Message, MessageBuilder, CompactMessage};

// Use compact message format for large data
fn create_efficient_message(data: &[u8]) -> Message {
    if data.len() > 1024 {
        // Use compressed format for large payloads
        MessageBuilder::new()
            .compress_payload(true)
            .use_binary_format(true)
            .build_from_bytes(data)
    } else {
        // Use standard format for small payloads
        Message::Binary(data.to_vec())
    }
}

// Implement message recycling
struct MessageCache {
    cache: Vec<Message>,
    max_size: usize,
}

impl MessageCache {
    fn get_or_create(&mut self) -> Message {
        self.cache.pop().unwrap_or_else(|| Message::Null)
    }
    
    fn return_message(&mut self, mut msg: Message) {
        if self.cache.len() < self.max_size {
            // Reset message and return to cache
            msg.clear();
            self.cache.push(msg);
        }
    }
}
}

3. Zero-Copy Optimizations

#![allow(unused)]
fn main() {
use std::sync::Arc;
use bytes::Bytes;

// Use reference counting for large shared data
#[derive(Clone)]
struct SharedData {
    inner: Arc<Bytes>,
}

impl SharedData {
    fn new(data: Vec<u8>) -> Self {
        Self {
            inner: Arc::new(Bytes::from(data))
        }
    }
    
    fn as_slice(&self) -> &[u8] {
        &self.inner
    }
}

// Actor implementation with zero-copy semantics
impl Actor for OptimizedProcessor {
    fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
        let mut outputs = HashMap::new();
        
        if let Some(Message::Binary(data)) = inputs.get("input") {
            // Process without copying the data
            let shared_data = SharedData::new(data.clone());
            
            // Pass reference to multiple outputs
            outputs.insert("output1".to_string(), 
                          Message::Custom(Box::new(shared_data.clone())));
            outputs.insert("output2".to_string(), 
                          Message::Custom(Box::new(shared_data)));
        }
        
        Ok(outputs)
    }
}
}

CPU Optimization

1. Parallel Processing

#![allow(unused)]
fn main() {
use rayon::prelude::*;
use tokio::task;

// Parallel data processing
impl Actor for ParallelProcessor {
    fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
        if let Some(Message::Array(items)) = inputs.get("input") {
            // Process items in parallel
            let results: Vec<Message> = items
                .par_iter()
                .map(|item| self.process_item(item))
                .collect();
            
            let mut outputs = HashMap::new();
            outputs.insert("output".to_string(), Message::Array(results));
            Ok(outputs)
        } else {
            Err(ActorError::InvalidInput)
        }
    }
}

// Async parallel processing
async fn process_batch_async(items: Vec<Message>) -> Result<Vec<Message>, ActorError> {
    let tasks: Vec<_> = items.into_iter()
        .map(|item| task::spawn(async move { process_item_async(item).await }))
        .collect();
    
    let mut results = Vec::new();
    for task in tasks {
        results.push(task.await??);
    }
    
    Ok(results)
}
}

2. CPU Affinity and Thread Management

#![allow(unused)]
fn main() {
use rayon::ThreadPoolBuilder;
use reflow_network::{ThreadConfig, CpuAffinity, ThreadPriority};

// Configure thread affinity for specific actors
let thread_config = ThreadConfig {
    worker_threads: num_cpus::get(),
    enable_work_stealing: true,
    cpu_affinity: CpuAffinity::Balanced,
    thread_priority: ThreadPriority::High,
};

// Pin specific actors to dedicated threads
let high_priority_executor = ThreadPoolBuilder::new()
    .num_threads(2)
    .thread_name(|i| format!("high-priority-{}", i))
    .build()?;

network.set_actor_executor("critical_processor", high_priority_executor);
}

3. SIMD Optimizations

#![allow(unused)]
fn main() {
use std::simd::{f32x8, SimdFloat}; // nightly-only: requires #![feature(portable_simd)]

// SIMD-optimized data processing
fn process_array_simd(data: &mut [f32]) {
    let mut chunks = data.chunks_exact_mut(8);
    
    for chunk in chunks.by_ref() {
        let vec = f32x8::from_slice(chunk);
        let processed = vec * f32x8::splat(2.0) + f32x8::splat(1.0);
        processed.copy_to_slice(chunk);
    }
    
    // Handle the tail that doesn't fill a full SIMD lane
    for item in chunks.into_remainder() {
        *item = *item * 2.0 + 1.0;
    }
}

impl Actor for SIMDProcessor {
    fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
        if let Some(Message::Array(items)) = inputs.get("input") {
            let mut float_data: Vec<f32> = items.iter()
                .filter_map(|item| {
                    if let Message::Float(f) = item {
                        Some(*f as f32)
                    } else {
                        None
                    }
                })
                .collect();
            
            process_array_simd(&mut float_data);
            
            let results: Vec<Message> = float_data.into_iter()
                .map(|f| Message::Float(f as f64))
                .collect();
            
            let mut outputs = HashMap::new();
            outputs.insert("output".to_string(), Message::Array(results));
            Ok(outputs)
        } else {
            Err(ActorError::InvalidInput)
        }
    }
}
}

Network Optimization

1. Connection Pooling

#![allow(unused)]
fn main() {
use reflow_network::{ConnectionPool, PooledConnection};
use std::time::Duration;

// HTTP client with connection pooling
struct OptimizedHttpClient {
    pool: ConnectionPool,
    config: HttpConfig,
}

impl OptimizedHttpClient {
    fn new() -> Self {
        let pool = ConnectionPool::builder()
            .max_connections(100)
            .idle_timeout(Duration::from_secs(30))
            .connection_timeout(Duration::from_secs(5))
            .keepalive(true)
            .build();
            
        Self {
            pool,
            config: HttpConfig::default(),
        }
    }
    
    async fn request(&self, url: &str) -> Result<Response, HttpError> {
        let connection = self.pool.get_connection(url).await?;
        let response = connection.request(url).await?;
        
        // Connection is automatically returned to pool
        Ok(response)
    }
}
}

2. Batch Network Operations

#![allow(unused)]
fn main() {
use reflow_components::integration::BatchHttpActor;
use std::time::Duration;

// Batch multiple HTTP requests
let batch_http = BatchHttpActor::new()
    .batch_size(10)
    .batch_timeout(Duration::from_millis(100))
    .max_concurrent_batches(5)
    .retry_config(RetryConfig {
        max_attempts: 3,
        backoff: BackoffStrategy::Exponential,
        ..Default::default()
    });

// Configure request batching
impl Actor for BatchHttpActor {
    fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
        if let Some(Message::Array(urls)) = inputs.get("urls") {
            // Batch requests automatically
            let batched_requests = self.create_batches(urls);
            let futures: Vec<_> = batched_requests.into_iter()
                .map(|batch| self.execute_batch(batch))
                .collect();
            
            // Execute batches concurrently; process() is synchronous, so block on the combined future
            let results = futures::executor::block_on(futures::future::join_all(futures));
            
            let mut outputs = HashMap::new();
            outputs.insert("responses".to_string(), Message::Array(results));
            Ok(outputs)
        } else {
            Err(ActorError::InvalidInput)
        }
    }
}
}

3. WebSocket Optimization

#![allow(unused)]
fn main() {
use std::collections::VecDeque;
use futures_util::SinkExt;
use tokio::net::TcpStream;
use tokio_tungstenite::{tungstenite, WebSocketStream, MaybeTlsStream};

// Optimized WebSocket handling
struct OptimizedWebSocket {
    stream: WebSocketStream<MaybeTlsStream<TcpStream>>,
    send_buffer: VecDeque<Message>,
    batch_size: usize,
}

impl OptimizedWebSocket {
    async fn send_batched(&mut self) -> Result<(), WebSocketError> {
        if self.send_buffer.len() >= self.batch_size {
            let batch: Vec<_> = self.send_buffer.drain(..).collect();
            let combined_message = self.combine_messages(batch);
            self.stream.send(combined_message).await?;
        }
        Ok(())
    }
    
    fn combine_messages(&self, messages: Vec<Message>) -> tungstenite::Message {
        // Combine multiple messages into a single frame
        let combined_data = messages.into_iter()
            .map(|msg| msg.to_bytes())
            .collect::<Vec<_>>()
            .concat();
        
        tungstenite::Message::Binary(combined_data)
    }
}
}

I/O Optimization

1. Async I/O Best Practices

#![allow(unused)]
fn main() {
use std::sync::Arc;
use tokio::fs::File;
use tokio::io::{AsyncReadExt, AsyncWriteExt, BufReader, BufWriter};
use tokio::sync::Semaphore;

// Efficient file processing
async fn process_large_file(path: &str) -> Result<(), std::io::Error> {
    let file = File::open(path).await?;
    let mut reader = BufReader::with_capacity(64 * 1024, file); // 64KB buffer
    
    let output_file = File::create("output.txt").await?;
    let mut writer = BufWriter::with_capacity(64 * 1024, output_file);
    
    let mut buffer = vec![0; 8192]; // 8KB read buffer
    
    loop {
        let bytes_read = reader.read(&mut buffer).await?;
        if bytes_read == 0 {
            break;
        }
        
        // Process data in chunks
        let processed = process_chunk(&buffer[..bytes_read]).await;
        writer.write_all(&processed).await?;
    }
    
    writer.flush().await?;
    Ok(())
}

// Parallel file processing
async fn process_files_parallel(file_paths: Vec<String>) -> Result<(), std::io::Error> {
    let semaphore = Arc::new(Semaphore::new(10)); // Limit concurrent file operations
    
    let tasks: Vec<_> = file_paths.into_iter()
        .map(|path| {
            let sem = semaphore.clone();
            tokio::spawn(async move {
                let _permit = sem.acquire().await.unwrap();
                process_large_file(&path).await
            })
        })
        .collect();
    
    futures::future::try_join_all(tasks).await?;
    Ok(())
}
}

2. Database Optimization

#![allow(unused)]
fn main() {
use sqlx::{Pool, Postgres, Row};
use std::collections::HashMap;
use std::time::Duration;

// Optimized database operations
struct OptimizedDbActor {
    pool: Pool<Postgres>,
    prepared_statements: HashMap<String, String>,
}

impl OptimizedDbActor {
    async fn new(database_url: &str) -> Result<Self, sqlx::Error> {
        let pool = sqlx::postgres::PgPoolOptions::new()
            .max_connections(20)
            .min_connections(5)
            .acquire_timeout(Duration::from_secs(3))
            .idle_timeout(Duration::from_secs(600))
            .max_lifetime(Duration::from_secs(1800))
            .connect(database_url)
            .await?;
        
        Ok(Self {
            pool,
            prepared_statements: HashMap::new(),
        })
    }
    
    async fn batch_insert(&self, records: Vec<Record>) -> Result<(), sqlx::Error> {
        let mut tx = self.pool.begin().await?;
        
        for chunk in records.chunks(1000) { // Process in batches of 1000
            let query = self.build_batch_insert_query(chunk);
            sqlx::query(&query).execute(&mut *tx).await?;
        }
        
        tx.commit().await?;
        Ok(())
    }
    
    async fn execute_prepared(&self, statement_name: &str, params: &[&dyn sqlx::Encode<'_, Postgres>]) -> Result<Vec<Row>, sqlx::Error> {
        if let Some(sql) = self.prepared_statements.get(statement_name) {
            let mut query = sqlx::query(sql);
            for param in params {
                query = query.bind(param);
            }
            query.fetch_all(&self.pool).await
        } else {
            Err(sqlx::Error::RowNotFound)
        }
    }
}
}

Workflow-Specific Optimizations

1. Pipeline Optimization

#![allow(unused)]
fn main() {
// Optimized pipeline with backpressure
use std::collections::HashMap;
use tokio::sync::mpsc;

struct OptimizedPipeline {
    stages: Vec<Box<dyn Actor>>,
    buffer_sizes: Vec<usize>,
    channels: Vec<mpsc::Sender<Message>>,
    // Receivers are retained so the senders stay open; a full implementation
    // would hand each receiver to the next stage's consumer task.
    receivers: Vec<mpsc::Receiver<Message>>,
}

impl OptimizedPipeline {
    fn new() -> Self {
        Self {
            stages: Vec::new(),
            buffer_sizes: Vec::new(),
            channels: Vec::new(),
            receivers: Vec::new(),
        }
    }
    
    fn add_stage(&mut self, actor: Box<dyn Actor>, buffer_size: usize) {
        self.stages.push(actor);
        self.buffer_sizes.push(buffer_size);
        
        let (tx, rx) = mpsc::channel(buffer_size);
        self.channels.push(tx);
        self.receivers.push(rx);
    }
    
    async fn execute_with_backpressure(&mut self, input: Message) -> Result<Message, ActorError> {
        let mut current_message = input;
        
        for (i, stage) in self.stages.iter_mut().enumerate() {
            // Apply backpressure using channel capacity
            if let Some(tx) = self.channels.get(i) {
                tx.send(current_message.clone()).await
                    .map_err(|_| ActorError::ChannelClosed)?;
            }
            
            let inputs = HashMap::from([("input".to_string(), current_message)]);
            let outputs = stage.process(inputs)?;
            
            current_message = outputs.get("output")
                .ok_or(ActorError::MissingOutput)?
                .clone();
        }
        
        Ok(current_message)
    }
}
}

2. Dynamic Load Balancing

#![allow(unused)]
fn main() {
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::Duration;

struct LoadBalancer {
    workers: Vec<Box<dyn Actor>>,
    load_counters: Vec<AtomicUsize>,
    strategy: LoadBalanceStrategy,
}

impl LoadBalancer {
    fn select_worker(&self) -> usize {
        match self.strategy {
            LoadBalanceStrategy::RoundRobin => {
                static COUNTER: AtomicUsize = AtomicUsize::new(0);
                COUNTER.fetch_add(1, Ordering::Relaxed) % self.workers.len()
            }
            LoadBalanceStrategy::LeastLoaded => {
                self.load_counters
                    .iter()
                    .enumerate()
                    .min_by_key(|(_, counter)| counter.load(Ordering::Relaxed))
                    .map(|(index, _)| index)
                    .unwrap_or(0)
            }
            LoadBalanceStrategy::WeightedRoundRobin => {
                // Implement weighted selection based on worker capacity
                self.select_weighted_worker()
            }
        }
    }
    
    fn update_load_metrics(&self, worker_index: usize, processing_time: Duration) {
        // Update load metrics for adaptive load balancing
        let load_score = self.calculate_load_score(processing_time);
        self.load_counters[worker_index].store(load_score, Ordering::Relaxed);
    }
}
}

Monitoring and Optimization

1. Real-time Metrics

#![allow(unused)]
fn main() {
use prometheus::{Counter, Histogram, Gauge, register_counter, register_histogram, register_gauge};
use std::time::Duration;

struct PerformanceMonitor {
    message_counter: Counter,
    processing_time: Histogram,
    memory_usage: Gauge,
    active_connections: Gauge,
}

impl PerformanceMonitor {
    fn new() -> Self {
        Self {
            message_counter: register_counter!("reflow_messages_total", "Total messages processed").unwrap(),
            processing_time: register_histogram!("reflow_processing_duration_seconds", "Processing time in seconds").unwrap(),
            memory_usage: register_gauge!("reflow_memory_usage_bytes", "Memory usage in bytes").unwrap(),
            active_connections: register_gauge!("reflow_active_connections", "Number of active connections").unwrap(),
        }
    }
    
    fn record_message_processed(&self, processing_time: Duration) {
        self.message_counter.inc();
        self.processing_time.observe(processing_time.as_secs_f64());
    }
    
    fn update_memory_usage(&self, bytes: u64) {
        self.memory_usage.set(bytes as f64);
    }
    
    async fn collect_system_metrics(&self) {
        if let Some(usage) = memory_stats::memory_stats() {
            self.update_memory_usage(usage.physical_mem as u64);
        }
        
        // Collect other system metrics
        let cpu_usage = get_cpu_usage().await;
        // ... record other metrics
    }
}
}

2. Adaptive Optimization

#![allow(unused)]
fn main() {
use std::collections::VecDeque;

struct AdaptiveOptimizer {
    performance_history: VecDeque<PerformanceSnapshot>,
    optimization_strategies: Vec<Box<dyn OptimizationStrategy>>,
    current_config: OptimizationConfig,
}

impl AdaptiveOptimizer {
    async fn optimize_based_on_metrics(&mut self, current_metrics: &PerformanceMetrics) {
        let snapshot = PerformanceSnapshot {
            timestamp: std::time::Instant::now(),
            metrics: current_metrics.clone(),
            config: self.current_config.clone(),
        };
        
        self.performance_history.push_back(snapshot);
        if self.performance_history.len() > 100 {
            self.performance_history.pop_front();
        }
        
        // Analyze trends and apply optimizations
        if let Some(optimization) = self.analyze_and_suggest_optimization() {
            self.apply_optimization(optimization).await;
        }
    }
    
    fn analyze_and_suggest_optimization(&self) -> Option<OptimizationAction> {
        // Heuristic, trend-based optimization suggestions
        let trend_analyzer = TrendAnalyzer::new(&self.performance_history);
        
        if trend_analyzer.detect_memory_pressure() {
            Some(OptimizationAction::ReduceMemoryUsage)
        } else if trend_analyzer.detect_cpu_bottleneck() {
            Some(OptimizationAction::IncreaseParallelism)
        } else if trend_analyzer.detect_io_bottleneck() {
            Some(OptimizationAction::OptimizeIo)
        } else {
            None
        }
    }
}
}

Platform-Specific Optimizations

1. Linux-Specific Optimizations

#![allow(unused)]
fn main() {
#[cfg(target_os = "linux")]
mod linux_optimizations {
    use libc::{sched_setaffinity, cpu_set_t, CPU_SET, CPU_ZERO};
    
    /// Pin a thread to the given CPU cores. Pass 0 as `thread_id` to pin
    /// the calling thread; pinning threads of other processes may require
    /// CAP_SYS_NICE.
    pub fn set_cpu_affinity(thread_id: libc::pid_t, cpu_cores: &[usize]) -> Result<(), std::io::Error> {
        unsafe {
            let mut cpuset: cpu_set_t = std::mem::zeroed();
            CPU_ZERO(&mut cpuset);
            
            for &core in cpu_cores {
                CPU_SET(core, &mut cpuset);
            }
            
            let result = sched_setaffinity(
                thread_id, 
                std::mem::size_of::<cpu_set_t>(), 
                &cpuset
            );
            
            if result == 0 {
                Ok(())
            } else {
                Err(std::io::Error::last_os_error())
            }
        }
    }
    
    pub fn configure_memory_policy() {
        // Configure NUMA memory policy for optimal performance
        use libc::{mbind, MPOL_BIND};
        // Implementation details...
    }
}
}

2. macOS-Specific Optimizations

#![allow(unused)]
fn main() {
#[cfg(target_os = "macos")]
mod macos_optimizations {
    use libc::{pthread_self, pthread_setschedparam, sched_param, SCHED_FIFO};
    
    /// Raise the calling thread's scheduling priority via the portable POSIX
    /// API. (Apple's Mach thread_policy_set interface offers finer control,
    /// but it is not exposed by the libc crate and takes a Mach thread port
    /// rather than a pthread_t.)
    pub fn set_thread_priority(priority: i32) -> Result<(), std::io::Error> {
        unsafe {
            let thread = pthread_self();
            
            // sched_param may contain platform-private padding fields,
            // so zero-initialize it instead of using a struct literal.
            let mut param: sched_param = std::mem::zeroed();
            param.sched_priority = priority;
            
            let result = pthread_setschedparam(thread, SCHED_FIFO, &param);
            
            if result == 0 {
                Ok(())
            } else {
                // pthread_* functions return the error code directly
                // rather than setting errno.
                Err(std::io::Error::from_raw_os_error(result))
            }
        }
    }
}
}

Best Practices Summary

1. General Optimization Principles

  • Measure First: Always profile before optimizing
  • Optimize Bottlenecks: Focus on the slowest components
  • Cache Wisely: Cache expensive computations, not cheap ones
  • Batch Operations: Group similar operations together
  • Use Appropriate Data Structures: Choose the right tool for the job

2. Memory Management

  • Pool Resources: Use object pools for frequently allocated items (see the sketch after this list)
  • Minimize Allocations: Reuse buffers and data structures
  • Compress Large Data: Use compression for large payloads
  • Monitor Memory Usage: Track allocation patterns
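
As referenced above, a minimal object-pool sketch (the type names are illustrative; Reflow does not ship this type):

#![allow(unused)]
fn main() {
use std::sync::Mutex;

/// A tiny buffer pool: acquire a reusable buffer, return it when done.
struct BufferPool {
    free: Mutex<Vec<Vec<u8>>>,
    buf_capacity: usize,
}

impl BufferPool {
    fn new(buf_capacity: usize) -> Self {
        Self { free: Mutex::new(Vec::new()), buf_capacity }
    }

    fn acquire(&self) -> Vec<u8> {
        self.free
            .lock()
            .unwrap()
            .pop()
            .unwrap_or_else(|| Vec::with_capacity(self.buf_capacity))
    }

    fn release(&self, mut buf: Vec<u8>) {
        buf.clear(); // drop the contents but keep the allocation
        self.free.lock().unwrap().push(buf);
    }
}
}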

3. Concurrency and Parallelism

  • Match Threading to Workload: CPU-bound vs I/O-bound considerations
  • Avoid Lock Contention: Use lock-free data structures when possible
  • Balance Load: Distribute work evenly across threads
  • Handle Backpressure: Prevent memory exhaustion in pipelines (see the sketch after this list)
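
For the backpressure point, a bounded channel is the simplest mechanism: the producer suspends once the channel is full instead of queueing without limit. A minimal Tokio sketch (capacity and names are illustrative):

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // At most 64 in-flight messages; send().await parks the producer
    // whenever the consumer falls behind.
    let (tx, mut rx) = mpsc::channel::<u64>(64);

    tokio::spawn(async move {
        for i in 0..1_000u64 {
            tx.send(i).await.expect("consumer dropped");
        }
    });

    while let Some(msg) = rx.recv().await {
        // Simulated consumer work.
        let _ = msg;
    }
}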

4. Network and I/O

  • Connection Pooling: Reuse network connections
  • Batch Network Operations: Reduce round-trip overhead (see the sketch after this list)
  • Async I/O: Use non-blocking I/O operations
  • Buffer Sizing: Optimize buffer sizes for your workload
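
For the batching and buffering points, wrapping a connection in a buffered writer turns many small writes into a single flush. A minimal sketch (the address is illustrative):

#![allow(unused)]
fn main() {
use tokio::io::{AsyncWriteExt, BufWriter};
use tokio::net::TcpStream;

async fn send_batched(messages: &[Vec<u8>]) -> std::io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:9000").await?;
    let mut writer = BufWriter::new(stream);

    for msg in messages {
        // Buffered in memory; nothing hits the socket yet.
        writer.write_all(msg).await?;
    }

    // One flush for the whole batch, minimizing write syscalls.
    writer.flush().await?;
    Ok(())
}
}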

Troubleshooting Performance Issues

Common Performance Problems

  1. Memory Leaks: Use memory profilers to identify leaks
  2. CPU Hotspots: Profile CPU usage to find bottlenecks
  3. Lock Contention: Monitor lock wait times
  4. I/O Blocking: Identify blocking I/O operations
  5. Network Latency: Measure network round-trip times

Performance Testing

#![allow(unused)]
fn main() {
#[cfg(test)]
mod performance_tests {
    use super::*;
    use criterion::{criterion_group, criterion_main, Criterion};
    
    fn benchmark_workflow_throughput(c: &mut Criterion) {
        // Build the runtime once; creating it inside the measured closure
        // would dominate the benchmark.
        let rt = tokio::runtime::Runtime::new().unwrap();
        
        c.bench_function("workflow_1000_messages", |b| {
            b.iter(|| {
                rt.block_on(async {
                    let network = create_test_network().await;
                    let messages = create_test_messages(1000);
                    
                    // Criterion times this closure itself, so no manual
                    // Instant-based measurement is needed.
                    for message in messages {
                        network.process_message(message).await.unwrap();
                    }
                })
            })
        });
    }
    
    criterion_group!(benches, benchmark_workflow_throughput);
    criterion_main!(benches);
}
}
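
Criterion benchmarks are normally compiled as separate targets under benches/ rather than inside a #[cfg(test)] module; run them with cargo bench, and HTML reports are written to target/criterion by default.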

Next Steps

The two tutorials that follow put these techniques to work: a distributed workflow example and a multi-graph workspace walkthrough.

Distributed Workflow Example

Learn how to build and deploy distributed workflows using Reflow's distributed networking capabilities.

Overview

This tutorial demonstrates how to create a complete distributed workflow that spans multiple network instances. We'll build a real-world example: a distributed data processing and machine learning pipeline.

What You'll Build

A distributed system with three network instances:

  1. Data Instance: Collects and processes raw data
  2. ML Instance: Trains and evaluates machine learning models
  3. API Instance: Serves predictions and provides monitoring
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Data Instance  │───▶│  ML Instance    │───▶│  API Instance   │
│                 │    │                 │    │                 │
│ • Data Collector│    │ • Feature Eng.  │    │ • Prediction API│
│ • Data Processor│    │ • Model Trainer │    │ • Monitoring    │
│ • Data Validator│    │ • Model Eval.   │    │ • Dashboard     │
└─────────────────┘    └─────────────────┘    └─────────────────┘

Prerequisites

  • Rust development environment
  • Basic understanding of Reflow actors and networks
  • Familiarity with distributed systems concepts

Step 1: Project Setup

Create the project structure:

mkdir distributed_ml_pipeline
cd distributed_ml_pipeline

# Create instance directories
mkdir -p instances/{data,ml,api}
mkdir -p shared/actors
mkdir -p shared/types

# Initialize Cargo workspace
cargo init --name distributed_ml_pipeline

Cargo.toml

[workspace]
members = [
    "instances/data",
    "instances/ml", 
    "instances/api",
    "shared/actors",
    "shared/types"
]

[workspace.dependencies]
reflow_network = { path = "../../crates/reflow_network" }
actor_macro = { path = "../../crates/actor_macro" }
tokio = { version = "1.0", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
tracing = "0.1"
tracing-subscriber = "0.3"
uuid = { version = "1.0", features = ["v4"] }
chrono = { version = "0.4", features = ["serde"] }
rand = "0.8" # used by the simulated data collector and model trainer

Step 2: Shared Types and Actors

Shared Types

Create shared/types/src/lib.rs:

#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
use chrono::{DateTime, Utc};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DataRecord {
    pub id: String,
    pub timestamp: DateTime<Utc>,
    pub features: Vec<f64>,
    pub metadata: std::collections::HashMap<String, serde_json::Value>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProcessedData {
    pub record_id: String,
    pub processed_features: Vec<f64>,
    pub quality_score: f64,
    pub processing_timestamp: DateTime<Utc>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrainingData {
    pub features: Vec<Vec<f64>>,
    pub labels: Vec<f64>,
    pub metadata: TrainingMetadata,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrainingMetadata {
    pub total_samples: usize,
    pub feature_count: usize,
    pub training_timestamp: DateTime<Utc>,
    pub data_source: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrainedModel {
    pub model_id: String,
    pub model_data: Vec<u8>, // Serialized model
    pub performance_metrics: ModelMetrics,
    pub training_timestamp: DateTime<Utc>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ModelMetrics {
    pub accuracy: f64,
    pub precision: f64,
    pub recall: f64,
    pub f1_score: f64,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PredictionRequest {
    pub request_id: String,
    pub features: Vec<f64>,
    pub model_version: Option<String>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PredictionResponse {
    pub request_id: String,
    pub prediction: f64,
    pub confidence: f64,
    pub model_version: String,
    pub processing_time_ms: u64,
}
}
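
Since every cross-instance message carries these types as JSON, it is worth sanity-checking that they round-trip cleanly through serde. A quick illustrative check:

use shared_types::DataRecord;
use std::collections::HashMap;

fn main() -> anyhow::Result<()> {
    let record = DataRecord {
        id: "r-1".to_string(),
        timestamp: chrono::Utc::now(),
        features: vec![0.1, 0.2, 0.3],
        metadata: HashMap::new(),
    };

    // Serialize to JSON and back, as the distributed transport does.
    let json = serde_json::to_string(&record)?;
    let back: DataRecord = serde_json::from_str(&json)?;
    assert_eq!(record.id, back.id);
    Ok(())
}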

Shared Actors

Create shared/actors/src/lib.rs:

#![allow(unused)]
fn main() {
use reflow_network::{
    actor::{Actor, ActorConfig, ActorContext, ActorLoad, MemoryState, Port},
    message::{Message, EncodableValue},
};
use shared_types::*;
use std::{collections::HashMap, sync::Arc};
use actor_macro::actor;
use anyhow::Error;

/// Logging actor that can be shared across all instances
#[actor(
    DistributedLoggerActor,
    inports::<100>(Input),
    outports::<50>(Output),
    state(MemoryState)
)]
pub async fn distributed_logger_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let config = context.get_config();
    
    let instance_name = config.get_string("instance_name").unwrap_or("unknown".to_string());
    let log_level = config.get_string("log_level").unwrap_or("info".to_string());
    
    for (port, message) in payload.iter() {
        let timestamp = chrono::Utc::now().format("%Y-%m-%d %H:%M:%S%.3f");
        
        match message {
            Message::String(s) => {
                println!("[{}] [{}] [{}]: {}", timestamp, instance_name, log_level.to_uppercase(), s);
            },
            Message::Object(obj) => {
                if let Ok(json_str) = serde_json::to_string_pretty(obj) {
                    println!("[{}] [{}] [{}]:\n{}", timestamp, instance_name, log_level.to_uppercase(), json_str);
                }
            },
            _ => {
                println!("[{}] [{}] [{}]: {:?}", timestamp, instance_name, log_level.to_uppercase(), message);
            }
        }
    }
    
    Ok(HashMap::new())
}

/// Metrics collector for monitoring distributed system performance
#[actor(
    MetricsCollectorActor,
    inports::<100>(Input),
    outports::<50>(Output, Alert),
    state(MemoryState)
)]
pub async fn metrics_collector_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    let mut output = HashMap::new();
    
    for (port, message) in payload.iter() {
        if let Message::Object(metric_data) = message {
            // Store metrics in state
            {
                let mut state_lock = state.lock();
                if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                    let metrics_key = format!("metrics_{}", chrono::Utc::now().timestamp());
                    state_data.insert(metrics_key, metric_data.as_value().clone());
                    
                    // Keep only last 100 metrics entries
                    let keys: Vec<String> = state_data.data().keys()
                        .filter(|k| k.starts_with("metrics_"))
                        .cloned()
                        .collect();
                    
                    if keys.len() > 100 {
                        let mut sorted_keys = keys;
                        sorted_keys.sort();
                        let excess = sorted_keys.len() - 100;
                        for key in sorted_keys.into_iter().take(excess) {
                            state_data.data_mut().remove(&key);
                        }
                    }
                }
            }
            
            // Check for alert conditions
            if let Some(error_rate) = metric_data.as_value().get("error_rate").and_then(|v| v.as_f64()) {
                if error_rate > 0.1 { // 10% error rate threshold
                    let alert = Message::object(EncodableValue::from(serde_json::json!({
                        "type": "high_error_rate",
                        "error_rate": error_rate,
                        "timestamp": chrono::Utc::now().to_rfc3339(),
                        "severity": "warning"
                    })));
                    output.insert("Alert".to_string(), alert);
                }
            }
            
            // Forward metrics for further processing
            output.insert("Output".to_string(), message.clone());
        }
    }
    
    Ok(output)
}
}
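
The 10% error-rate threshold above is hard-coded for brevity; in a real deployment you would read it from the actor's configuration, the same way the logger reads instance_name.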

Step 3: Data Instance

Create the data processing instance in instances/data/src/main.rs:

use reflow_network::{
    actor::{Actor, ActorConfig, ActorContext, ActorLoad, MemoryState, Port},
    distributed_network::{DistributedConfig, DistributedNetwork},
    message::{Message, EncodableValue},
    network::NetworkConfig,
};
use shared_actors::{DistributedLoggerActor, MetricsCollectorActor};
use shared_types::*;
use std::{collections::HashMap, sync::Arc, time::Duration};
use actor_macro::actor;
use anyhow::Error;
use tokio::time::sleep;

/// Data collector that simulates collecting raw data
#[actor(
    DataCollectorActor,
    inports::<100>(Trigger),
    outports::<50>(Output, Metrics),
    state(MemoryState)
)]
async fn data_collector_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    let mut output = HashMap::new();
    
    if payload.contains_key("Trigger") {
        // Generate sample data
        let record = DataRecord {
            id: uuid::Uuid::new_v4().to_string(),
            timestamp: chrono::Utc::now(),
            features: (0..10).map(|_| rand::random::<f64>()).collect(),
            metadata: HashMap::from([
                ("source".to_string(), serde_json::json!("sensor_array")),
                ("quality".to_string(), serde_json::json!("high")),
            ]),
        };
        
        // Update collection count
        let count = {
            let mut state_lock = state.lock();
            if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                let count = state_data.get("collection_count")
                    .and_then(|v| v.as_i64())
                    .unwrap_or(0) + 1;
                state_data.insert("collection_count".to_string(), serde_json::json!(count));
                count
            } else {
                1
            }
        };
        
        // Send data for processing
        let data_message = Message::object(EncodableValue::from(serde_json::to_value(record)?));
        output.insert("Output".to_string(), data_message);
        
        // Send metrics
        let metrics = Message::object(EncodableValue::from(serde_json::json!({
            "actor": "data_collector",
            "records_collected": count,
            "timestamp": chrono::Utc::now().to_rfc3339(),
            "instance": "data"
        })));
        output.insert("Metrics".to_string(), metrics);
    }
    
    Ok(output)
}

/// Data processor that cleans and validates data
#[actor(
    DataProcessorActor,
    inports::<100>(Input),
    outports::<50>(Output, Metrics, Log),
    state(MemoryState)
)]
async fn data_processor_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let mut output = HashMap::new();
    
    for (port, message) in payload.iter() {
        if port == "Input" {
            if let Message::Object(obj) = message {
                if let Ok(record) = serde_json::from_value::<DataRecord>(obj.as_value().clone()) {
                    // Simulate data processing
                    let processed = ProcessedData {
                        record_id: record.id.clone(),
                        processed_features: record.features.iter()
                            .map(|&f| f * 2.0 + 1.0) // Simple transformation
                            .collect(),
                        quality_score: record.features.iter().sum::<f64>() / record.features.len() as f64,
                        processing_timestamp: chrono::Utc::now(),
                    };
                    
                    // Send processed data
                    let processed_message = Message::object(EncodableValue::from(serde_json::to_value(&processed)?));
                    output.insert("Output".to_string(), processed_message);
                    
                    // Send log message
                    let log_message = Message::String(
                        format!("Processed data record {} with quality score {:.2}", 
                            record.id, processed.quality_score).into()
                    );
                    output.insert("Log".to_string(), log_message);
                    
                    // Send metrics
                    let metrics = Message::object(EncodableValue::from(serde_json::json!({
                        "actor": "data_processor",
                        "processing_time_ms": 10, // Simulated
                        "quality_score": processed.quality_score,
                        "timestamp": chrono::Utc::now().to_rfc3339(),
                        "instance": "data"
                    })));
                    output.insert("Metrics".to_string(), metrics);
                }
            }
        }
    }
    
    Ok(output)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    tracing_subscriber::fmt::init();
    
    println!("🚀 Starting Data Instance");
    
    // Configure distributed network
    let config = DistributedConfig {
        network_id: "data_instance".to_string(),
        instance_id: "data_001".to_string(),
        bind_address: "127.0.0.1".to_string(),
        bind_port: 9001,
        discovery_endpoints: vec![],
        auth_token: Some("data_token".to_string()),
        max_connections: 10,
        heartbeat_interval_ms: 30000,
        local_network_config: NetworkConfig::default(),
    };
    
    // Create distributed network
    let mut network = DistributedNetwork::new(config).await?;
    
    // Register local actors
    network.register_local_actor("data_collector", DataCollectorActor::new(), None)?;
    network.register_local_actor("data_processor", DataProcessorActor::new(), None)?;
    network.register_local_actor("logger", DistributedLoggerActor::new(), Some(HashMap::from([
        ("instance_name".to_string(), serde_json::json!("data")),
    ])))?;
    network.register_local_actor("metrics", MetricsCollectorActor::new(), None)?;
    
    // Start the network
    network.start().await?;
    
    // Get local network for workflow setup
    {
        let local_net = network.get_local_network();
        let mut net = local_net.write();
        
        // Create workflow connections
        net.add_connection(reflow_network::connector::Connector {
            from: reflow_network::connector::ConnectionPoint {
                actor: "data_collector".to_string(),
                port: "Output".to_string(),
                ..Default::default()
            },
            to: reflow_network::connector::ConnectionPoint {
                actor: "data_processor".to_string(),
                port: "Input".to_string(),
                ..Default::default()
            },
        })?;
        
        net.add_connection(reflow_network::connector::Connector {
            from: reflow_network::connector::ConnectionPoint {
                actor: "data_processor".to_string(),
                port: "Log".to_string(),
                ..Default::default()
            },
            to: reflow_network::connector::ConnectionPoint {
                actor: "logger".to_string(),
                port: "Input".to_string(),
                ..Default::default()
            },
        })?;
        
        net.add_connection(reflow_network::connector::Connector {
            from: reflow_network::connector::ConnectionPoint {
                actor: "data_processor".to_string(),
                port: "Metrics".to_string(),
                ..Default::default()
            },
            to: reflow_network::connector::ConnectionPoint {
                actor: "metrics".to_string(),
                port: "Input".to_string(),
                ..Default::default()
            },
        })?;
    }
    
    println!("✅ Data Instance ready on 127.0.0.1:9001");
    
    // Start data collection loop
    tokio::spawn(async move {
        loop {
            sleep(Duration::from_secs(5)).await;
            
            // Trigger data collection
            let trigger_message = Message::Boolean(true);
            if let Some(local_net) = network.get_local_network().try_read() {
                let _ = local_net.send_to_actor("data_collector", "Trigger", trigger_message);
            }
        }
    });
    
    // Keep running
    loop {
        sleep(Duration::from_secs(1)).await;
    }
}
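
Start this instance from its directory (cargo run inside instances/data); the ML and API instances built in the next steps connect to it on 127.0.0.1:9001.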

Step 4: ML Instance

Create the ML training instance in instances/ml/src/main.rs:

use reflow_network::{
    actor::{Actor, ActorConfig, ActorContext, ActorLoad, MemoryState, Port},
    distributed_network::{DistributedConfig, DistributedNetwork},
    message::{Message, EncodableValue},
    network::NetworkConfig,
};
use shared_actors::{DistributedLoggerActor, MetricsCollectorActor};
use shared_types::*;
use std::{collections::HashMap, sync::Arc, time::Duration};
use actor_macro::actor;
use anyhow::Error;
use tokio::time::sleep;

/// Feature engineer that prepares data for ML training
#[actor(
    FeatureEngineerActor,
    inports::<100>(Input),
    outports::<50>(Output, Log, Metrics),
    state(MemoryState)
)]
async fn feature_engineer_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    let mut output = HashMap::new();
    
    for (port, message) in payload.iter() {
        if port == "Input" {
            if let Message::Object(obj) = message {
                if let Ok(processed) = serde_json::from_value::<ProcessedData>(obj.as_value().clone()) {
                    // Accumulate features for batch training
                    {
                        let mut state_lock = state.lock();
                        if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                            let mut features: Vec<Vec<f64>> = state_data.get("accumulated_features")
                                .and_then(|v| serde_json::from_value(v.clone()).ok())
                                .unwrap_or_default();
                            
                            let mut labels: Vec<f64> = state_data.get("accumulated_labels")
                                .and_then(|v| serde_json::from_value(v.clone()).ok())
                                .unwrap_or_default();
                            
                            features.push(processed.processed_features.clone());
                            labels.push(processed.quality_score); // Use quality score as label
                            
                            state_data.insert("accumulated_features".to_string(), serde_json::to_value(&features)?);
                            state_data.insert("accumulated_labels".to_string(), serde_json::to_value(&labels)?);
                            
                            // Send training data when we have enough samples
                            if features.len() >= 10 {
                                let training_data = TrainingData {
                                    features: features.clone(),
                                    labels: labels.clone(),
                                    metadata: TrainingMetadata {
                                        total_samples: features.len(),
                                        feature_count: features[0].len(),
                                        training_timestamp: chrono::Utc::now(),
                                        data_source: "data_instance".to_string(),
                                    },
                                };
                                
                                let training_message = Message::object(EncodableValue::from(serde_json::to_value(training_data)?));
                                output.insert("Output".to_string(), training_message);
                                
                                // Reset accumulation
                                state_data.insert("accumulated_features".to_string(), serde_json::json!([]));
                                state_data.insert("accumulated_labels".to_string(), serde_json::json!([]));
                                
                                let log_message = Message::String(
                                    format!("Generated training batch with {} samples", features.len()).into()
                                );
                                output.insert("Log".to_string(), log_message);
                            }
                        }
                    }
                    
                    // Send metrics
                    let metrics = Message::object(EncodableValue::from(serde_json::json!({
                        "actor": "feature_engineer",
                        "features_processed": 1,
                        "timestamp": chrono::Utc::now().to_rfc3339(),
                        "instance": "ml"
                    })));
                    output.insert("Metrics".to_string(), metrics);
                }
            }
        }
    }
    
    Ok(output)
}

/// Model trainer that trains ML models
#[actor(
    ModelTrainerActor,
    inports::<100>(Input),
    outports::<50>(Output, Log, Metrics),
    state(MemoryState)
)]
async fn model_trainer_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let mut output = HashMap::new();
    
    for (port, message) in payload.iter() {
        if port == "Input" {
            if let Message::Object(obj) = message {
                if let Ok(training_data) = serde_json::from_value::<TrainingData>(obj.as_value().clone()) {
                    // Simulate model training
                    sleep(Duration::from_millis(100)).await; // Simulate training time
                    
                    let model = TrainedModel {
                        model_id: uuid::Uuid::new_v4().to_string(),
                        model_data: vec![1, 2, 3, 4, 5], // Dummy model data
                        performance_metrics: ModelMetrics {
                            accuracy: 0.85 + rand::random::<f64>() * 0.1,
                            precision: 0.82 + rand::random::<f64>() * 0.15,
                            recall: 0.78 + rand::random::<f64>() * 0.2,
                            f1_score: 0.80 + rand::random::<f64>() * 0.15,
                        },
                        training_timestamp: chrono::Utc::now(),
                    };
                    
                    // Send trained model
                    let model_message = Message::object(EncodableValue::from(serde_json::to_value(model.clone())?));
                    output.insert("Output".to_string(), model_message);
                    
                    // Send log message
                    let log_message = Message::String(
                        format!("Trained model {} with accuracy {:.3}", 
                            model.model_id, model.performance_metrics.accuracy).into()
                    );
                    output.insert("Log".to_string(), log_message);
                    
                    // Send metrics
                    let metrics = Message::object(EncodableValue::from(serde_json::json!({
                        "actor": "model_trainer",
                        "model_id": model.model_id,
                        "accuracy": model.performance_metrics.accuracy,
                        "training_samples": training_data.metadata.total_samples,
                        "timestamp": chrono::Utc::now().to_rfc3339(),
                        "instance": "ml"
                    })));
                    output.insert("Metrics".to_string(), metrics);
                }
            }
        }
    }
    
    Ok(output)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    tracing_subscriber::fmt::init();
    
    println!("🚀 Starting ML Instance");
    
    // Configure distributed network
    let config = DistributedConfig {
        network_id: "ml_instance".to_string(),
        instance_id: "ml_001".to_string(),
        bind_address: "127.0.0.1".to_string(),
        bind_port: 9002,
        discovery_endpoints: vec![],
        auth_token: Some("ml_token".to_string()),
        max_connections: 10,
        heartbeat_interval_ms: 30000,
        local_network_config: NetworkConfig::default(),
    };
    
    // Create distributed network
    let mut network = DistributedNetwork::new(config).await?;
    
    // Register local actors
    network.register_local_actor("feature_engineer", FeatureEngineerActor::new(), None)?;
    network.register_local_actor("model_trainer", ModelTrainerActor::new(), None)?;
    network.register_local_actor("logger", DistributedLoggerActor::new(), Some(HashMap::from([
        ("instance_name".to_string(), serde_json::json!("ml")),
    ])))?;
    network.register_local_actor("metrics", MetricsCollectorActor::new(), None)?;
    
    // Start the network
    network.start().await?;
    
    // Get local network for workflow setup
    {
        let local_net = network.get_local_network();
        let mut net = local_net.write();
        
        // Create workflow connections
        net.add_connection(reflow_network::connector::Connector {
            from: reflow_network::connector::ConnectionPoint {
                actor: "feature_engineer".to_string(),
                port: "Output".to_string(),
                ..Default::default()
            },
            to: reflow_network::connector::ConnectionPoint {
                actor: "model_trainer".to_string(),
                port: "Input".to_string(),
                ..Default::default()
            },
        })?;
        
        net.add_connection(reflow_network::connector::Connector {
            from: reflow_network::connector::ConnectionPoint {
                actor: "feature_engineer".to_string(),
                port: "Log".to_string(),
                ..Default::default()
            },
            to: reflow_network::connector::ConnectionPoint {
                actor: "logger".to_string(),
                port: "Input".to_string(),
                ..Default::default()
            },
        })?;
        
        net.add_connection(reflow_network::connector::Connector {
            from: reflow_network::connector::ConnectionPoint {
                actor: "model_trainer".to_string(),
                port: "Log".to_string(),
                ..Default::default()
            },
            to: reflow_network::connector::ConnectionPoint {
                actor: "logger".to_string(),
                port: "Input".to_string(),
                ..Default::default()
            },
        })?;
    }
    
    println!("✅ ML Instance ready on 127.0.0.1:9002");
    
    // Connect to data instance
    println!("🔌 Connecting to data instance...");
    network.connect_to_network("127.0.0.1:9001").await?;
    
    // Register remote actors from data instance
    network.register_remote_actor("data_processor", "data_instance").await?;
    
    // Connect data processor to feature engineer. The remote registration
    // above creates a local proxy actor for "data_processor", so it can be
    // wired to "feature_engineer" like any local actor.
    {
        let local_net = network.get_local_network();
        let _net = local_net.read();
        // Proxy-actor connections would be added here.
    }
    
    println!("✅ Connected to data instance");
    
    // Keep running
    loop {
        sleep(Duration::from_secs(1)).await;
    }
}

Step 5: API Instance

Create the API serving instance in instances/api/src/main.rs:

#![allow(unused)]
fn main() {
use reflow_network::{
    actor::{Actor, ActorConfig, ActorContext, ActorLoad, MemoryState, Port},
    distributed_network::{DistributedConfig, DistributedNetwork},
    message::{Message, EncodableValue},
    network::NetworkConfig,
};
use shared_actors::{DistributedLoggerActor, MetricsCollectorActor};
use shared_types::*;
use std::{collections::HashMap, sync::Arc, time::Duration};
use actor_macro::actor;
use anyhow::Error;
use tokio::time::sleep;

/// Prediction service that serves ML predictions
#[actor(
    PredictionServiceActor,
    inports::<100>(ModelUpdate, PredictionRequest),
    outports::<50>(PredictionResponse, Log, Metrics),
    state(MemoryState)
)]
async fn prediction_service_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    let mut output = HashMap::new();
    
    for (port, message) in payload.iter() {
        match port.as_str() {
            "ModelUpdate" => {
                if let Message::Object(obj) = message {
                    if let Ok(model) = serde_json::from_value::<TrainedModel>(obj.as_value().clone()) {
                        // Store the latest model
                        {
                            let mut state_lock = state.lock();
                            if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                                state_data.insert("current_model".to_string(), serde_json::to_value(model.clone())?);
                                state_data.insert("model_version".to_string(), serde_json::json!(model.model_id));
                            }
                        }
                        
                        let log_message = Message::String(
                            format!("Updated prediction model to {} (accuracy: {:.3})",
                                model.model_id, model.performance_metrics.accuracy).into()
                        );
                        output.insert("Log".to_string(), log_message);
                    }
                }
            }
            // Handling for the "PredictionRequest" port (looking up the
            // stored model and emitting a PredictionResponse), plus the
            // instance's main() wiring, follows the same pattern as the
            // Data and ML instances above.
            _ => {}
        }
    }
    
    Ok(output)
}
}

Multi-Graph Workspace Tutorial

Learn how to build and manage complex multi-graph workflows using Reflow's workspace discovery and composition system.

Overview

This tutorial demonstrates how to create a multi-graph workspace that automatically discovers and composes multiple graph files into a unified workflow. We'll build a complete example with data processing, machine learning, and monitoring components.

What You'll Build

A workspace containing multiple interconnected graphs:

workspace/
├── data/
│   ├── ingestion/
│   │   └── collector.graph.json      # Data collection pipeline
│   └── processing/
│       └── transformer.graph.json    # Data transformation pipeline
├── ml/
│   └── training/
│       └── trainer.graph.json        # ML training pipeline
├── monitoring/
│   └── system_monitor.graph.json     # System monitoring
└── simple/
    ├── generator.graph.json          # Simple data generator
    └── processor.graph.json          # Simple data processor

Prerequisites

  • Basic understanding of Reflow actors and graphs
  • Familiarity with JSON graph definitions
  • Understanding of dependency management concepts

Step 1: Project Setup

Create the workspace structure:

mkdir multi_graph_workspace
cd multi_graph_workspace

# Create the directory structure
mkdir -p data/ingestion data/processing ml/training monitoring simple src

# Initialize Cargo project
cargo init --name multi_graph_workspace

Cargo.toml

[package]
name = "multi_graph_workspace"
version = "0.1.0"
edition = "2021"

[dependencies]
reflow_network = { path = "../../crates/reflow_network" }
actor_macro = { path = "../../crates/actor_macro" }
tokio = { version = "1.0", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
tracing = "0.1"
tracing-subscriber = "0.3"
uuid = { version = "1.0", features = ["v4"] }
chrono = { version = "0.4", features = ["serde"] }
scopeguard = "1.2" # used by SimpleTimerActor's background-task load guard

Step 2: Create Custom Actors

First, let's create the custom actors we'll use across our graphs in src/actors.rs:

#![allow(unused)]
fn main() {
//! Custom actors for the multi-graph workspace example

use std::{collections::HashMap, sync::Arc};
use actor_macro::actor;
use anyhow::Error;
use reflow_network::{
    actor::{Actor, ActorConfig, ActorBehavior, ActorContext, ActorLoad, MemoryState, Port},
    message::Message,
    message::EncodableValue
};

/// Simple timer actor that emits periodic events
#[actor(
    SimpleTimerActor,
    inports::<100>(Start, MaxTicks),
    outports::<50>(Output),
    state(MemoryState)
)]
pub async fn simple_timer_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();
    let outport_channels = context.get_outports();

    let interval_ms = context.get_config().get_number("interval").unwrap_or(1000.0); // milliseconds

    // Check if we should start the timer
    if let Some(start_msg) = payload.get("Start") {
        let should_start = match start_msg {
            Message::Boolean(b) => *b,
            Message::Integer(i) => *i != 0,
            Message::String(s) => !s.is_empty(),
            _ => true,
        };

        if should_start {
            // Store timer state
            {
                let mut state_lock = state.lock();
                if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                    state_data.insert("running", serde_json::json!(true));
                    state_data.insert("interval", serde_json::json!(interval_secs));
                    state_data.insert("tick_count", serde_json::json!(0));
                }
            }

            // Get max ticks (default to 10 for demos)
            let max_ticks = payload
                .get("MaxTicks")
                .and_then(|m| match m {
                    Message::Integer(i) => Some(*i as u64),
                    Message::Float(f) => Some(*f as u64),
                    _ => None,
                })
                .unwrap_or(10);

            // Spawn timer task with proper load management
            let state_clone = state.clone();
            let outports = outport_channels.clone();
            let load = context.get_load();
            
            // Increase load count for the background task
            load.lock().inc();
            
            tokio::spawn(async move {
                let mut tick_count = 0;
                
                // Ensure we decrease load count when the task finishes
                let _load_guard = scopeguard::guard(load.clone(), |load| {
                    load.lock().dec();
                });
                
                loop {
                    // Check if timer should still be running
                    let should_continue = {
                        let state_lock = state_clone.lock();
                        if let Some(state_data) = state_lock.as_any().downcast_ref::<MemoryState>() {
                            let running = state_data
                                .get("running")
                                .and_then(|v| v.as_bool())
                                .unwrap_or(false);
                            let current_ticks = state_data
                                .get("tick_count")
                                .and_then(|v| v.as_i64())
                                .unwrap_or(0) as u64;
                            running && current_ticks < max_ticks
                        } else {
                            false
                        }
                    };

                    if !should_continue {
                        break;
                    }

                    // Wait for interval
                    tokio::time::sleep(tokio::time::Duration::from_millis(interval_ms as u64)).await;

                    // Increment tick count
                    tick_count += 1;
                    
                    // Update state
                    {
                        let mut state_lock = state_clone.lock();
                        if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                            state_data.insert("tick_count", serde_json::json!(tick_count));
                        }
                    }

                    // Send tick message
                    let tick_message = Message::object(
                        EncodableValue::from(serde_json::json!({
                            "tick": tick_count,
                            "timestamp": chrono::Utc::now().to_rfc3339(),
                            "source": "SimpleTimerActor",
                            "max_ticks": max_ticks
                        }))
                    );

                    if outports.0.send_async(HashMap::from([
                        ("Output".to_owned(), tick_message)
                    ])).await.is_err() {
                        break;
                    }
                }
            });

            println!("✅ SimpleTimerActor started with interval: {}s", interval_secs);
        }
    }

    Ok(HashMap::new())
}

/// Simple logger actor that logs incoming messages
#[actor(
    SimpleLoggerActor,
    inports::<100>(Input, Prefix),
    outports::<50>(Output),
    state(MemoryState)
)]
pub async fn simple_logger_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();

    if let Some(input_msg) = payload.get("Input") {
        // Get prefix from payload or state
        let prefix = if let Some(Message::String(p)) = payload.get("Prefix") {
            p.clone()
        } else {
            let state_lock = state.lock();
            if let Some(state_data) = state_lock.as_any().downcast_ref::<MemoryState>() {
                state_data
                    .get("prefix")
                    .and_then(|v| v.as_str())
                    .unwrap_or("LOG")
                    .to_string().into()
            } else {
                "LOG".to_string().into()
            }
        };

        // Log the message with timestamp
        let timestamp = chrono::Utc::now().format("%H:%M:%S%.3f");
        println!("[{}] {}: {:?}", timestamp, prefix, input_msg);

        // Update log count in state
        {
            let mut state_lock = state.lock();
            if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                let count = state_data
                    .get("log_count")
                    .and_then(|v| v.as_i64())
                    .unwrap_or(0) + 1;
                state_data.insert("log_count", serde_json::json!(count));
            }
        }

        // Pass through the input
        Ok(HashMap::from([("Output".to_owned(), input_msg.clone())]))
    } else {
        Ok(HashMap::new())
    }
}

/// Data generator actor that creates sample data
#[actor(
    DataGeneratorActor,
    inports::<100>(Trigger, Type),
    outports::<50>(Output),
    state(MemoryState)
)]
pub async fn data_generator_actor(
    context: ActorContext,
) -> Result<HashMap<String, Message>, Error> {
    let payload = context.get_payload();
    let state = context.get_state();

    if payload.contains_key("Trigger") {
        // Get data type from payload or state
        let data_type = if let Some(Message::String(t)) = payload.get("Type") {
            t.clone()
        } else {
            let state_lock = state.lock();
            if let Some(state_data) = state_lock.as_any().downcast_ref::<MemoryState>() {
                state_data
                    .get("data_type")
                    .and_then(|v| v.as_str())
                    .unwrap_or("number")
                    .to_string().into()
            } else {
                "number".to_string().into()
            }
        };

        // Update generation count
        let generation_count = {
            let mut state_lock = state.lock();
            if let Some(state_data) = state_lock.as_mut_any().downcast_mut::<MemoryState>() {
                let count = state_data
                    .get("generation_count")
                    .and_then(|v| v.as_i64())
                    .unwrap_or(0) + 1;
                state_data.insert("generation_count", serde_json::json!(count));
                count
            } else {
                1
            }
        };

        // Generate data based on type
        let generated_data = match data_type.as_str() {
            "number" => Message::Integer(generation_count),
            "string" => Message::String(format!("generated_data_{}", generation_count).into()),
            "object" => Message::object(
                EncodableValue::from(serde_json::json!({
                    "id": generation_count,
                    "timestamp": chrono::Utc::now().to_rfc3339(),
                    "type": "generated",
                    "value": format!("sample_value_{}", generation_count)
                }))
            ),
            _ => Message::String(format!("unknown_type_data_{}", generation_count).into()),
        };

        Ok(HashMap::from([("Output".to_owned(), generated_data)]))
    } else {
        Ok(HashMap::new())
    }
}
}
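
Note how the timer task pairs load.lock().inc() with a scopeguard that calls dec() when the task ends: the network's load tracking stays accurate even if the spawned loop exits early.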

Step 3: Define Graph Files

Simple Data Generator (simple/generator.graph.json)

{
  "caseSensitive": false,
  "properties": {
    "name": "generator",
    "description": "Simple data generator",
    "version": "1.0.0",
    "namespace": "simple"
  },
  "processes": {
    "timer": {
      "component": "SimpleTimerActor",
      "metadata": {
        "description": "Generates periodic triggers"
      }
    },
    "data_generator": {
      "component": "DataGeneratorActor", 
      "metadata": {
        "description": "Generates sample data"
      }
    }
  },
  "connections": [
    {
      "from": { "nodeId": "timer", "portId": "Output" },
      "to": { "nodeId": "data_generator", "portId": "Trigger" },
      "metadata": {}
    }
  ],
  "inports": {
    "start": {
      "nodeId": "timer",
      "portId": "Start"
    }
  },
  "outports": {
    "data": {
      "nodeId": "data_generator",
      "portId": "Output"
    }
  },
  "groups": [],
  "providedInterfaces": {
    "data_output": {
      "interfaceId": "data_output",
      "processName": "data_generator",
      "portName": "Output",
      "dataType": "GeneratedData",
      "description": "Generated sample data",
      "required": false
    }
  },
  "requiredInterfaces": {},
  "graphDependencies": [],
  "externalConnections": []
}

Simple Data Processor (simple/processor.graph.json)

{
  "caseSensitive": false,
  "properties": {
    "name": "processor",
    "description": "Simple data processor",
    "version": "1.0.0",
    "namespace": "simple",
    "dependencies": ["generator"]
  },
  "processes": {
    "logger": {
      "component": "SimpleLoggerActor",
      "metadata": {
        "description": "Logs processed data"
      }
    }
  },
  "connections": [],
  "inports": {
    "data": {
      "nodeId": "logger",
      "portId": "Input"
    }
  },
  "outports": {
    "processed": {
      "nodeId": "logger",
      "portId": "Output"
    }
  },
  "groups": [],
  "providedInterfaces": {
    "processed_output": {
      "interfaceId": "processed_output",
      "processName": "logger",
      "portName": "Output",
      "dataType": "ProcessedData",
      "description": "Processed data output",
      "required": false
    }
  },
  "requiredInterfaces": {
    "data_input": {
      "interfaceId": "data_input",
      "processName": "logger",
      "portName": "Input",
      "dataType": "GeneratedData",
      "description": "Input data to process",
      "required": true
    }
  },
  "graphDependencies": [
    {
      "graphName": "generator",
      "namespace": "simple",
      "versionConstraint": ">=1.0.0",
      "required": true,
      "description": "Requires data generator for input"
    }
  ],
  "externalConnections": [
    {
      "connectionId": "generator_to_processor",
      "targetGraph": "generator",
      "targetNamespace": "simple",
      "fromProcess": "data_generator",
      "fromPort": "Output",
      "toProcess": "logger",
      "toPort": "Input",
      "description": "Connect generator output to processor input"
    }
  ]
}
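
Note how the processor declares both a graphDependencies entry and an externalConnections entry: the dependency tells the composer that generator must be loaded first, while the connection maps the generator's data_generator/Output port onto the processor's logger/Input port.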

Data Collection Pipeline (data/ingestion/collector.graph.json)

{
  "caseSensitive": false,
  "properties": {
    "name": "collector",
    "description": "Data collection pipeline",
    "version": "1.0.0",
    "namespace": "data/ingestion"
  },
  "processes": {
    "api_collector": {
      "component": "DataGeneratorActor",
      "metadata": {
        "description": "Collects data from API endpoints",
        "config": {
          "data_type": "object",
          "collection_rate": "high"
        }
      }
    },
    "validator": {
      "component": "SimpleLoggerActor",
      "metadata": {
        "description": "Validates collected data"
      }
    }
  },
  "connections": [
    {
      "from": { "nodeId": "api_collector", "portId": "Output" },
      "to": { "nodeId": "validator", "portId": "Input" },
      "metadata": {}
    }
  ],
  "inports": {
    "trigger": {
      "nodeId": "api_collector",
      "portId": "Trigger"
    }
  },
  "outports": {
    "validated_data": {
      "nodeId": "validator",
      "portId": "Output"
    }
  },
  "groups": [],
  "providedInterfaces": {
    "raw_data_output": {
      "interfaceId": "raw_data_output",
      "processName": "validator",
      "portName": "Output",
      "dataType": "ValidatedData",
      "description": "Validated raw data output",
      "required": false
    }
  },
  "requiredInterfaces": {},
  "graphDependencies": [],
  "externalConnections": []
}

Data Transformation Pipeline (data/processing/transformer.graph.json)

{
  "caseSensitive": false,
  "properties": {
    "name": "transformer",
    "description": "Data transformation pipeline",
    "version": "1.0.0",
    "namespace": "data/processing",
    "dependencies": ["collector"]
  },
  "processes": {
    "cleaner": {
      "component": "SimpleLoggerActor",
      "metadata": {
        "description": "Cleans and normalizes data"
      }
    },
    "enricher": {
      "component": "DataGeneratorActor",
      "metadata": {
        "description": "Enriches data with additional context"
      }
    }
  },
  "connections": [
    {
      "from": { "nodeId": "cleaner", "portId": "Output" },
      "to": { "nodeId": "enricher", "portId": "Trigger" },
      "metadata": {}
    }
  ],
  "inports": {
    "raw_data": {
      "nodeId": "cleaner",
      "portId": "Input"
    }
  },
  "outports": {
    "clean_data": {
      "nodeId": "enricher",
      "portId": "Output"
    }
  },
  "groups": [],
  "providedInterfaces": {
    "clean_data_output": {
      "interfaceId": "clean_data_output",
      "processName": "enricher",
      "portName": "Output",
      "dataType": "CleanData",
      "description": "Cleaned and enriched data",
      "required": false
    }
  },
  "requiredInterfaces": {
    "raw_data_input": {
      "interfaceId": "raw_data_input",
      "processName": "cleaner",
      "portName": "Input",
      "dataType": "ValidatedData",
      "description": "Raw data input from collector",
      "required": true
    }
  },
  "graphDependencies": [
    {
      "graphName": "collector",
      "namespace": "data/ingestion",
      "versionConstraint": ">=1.0.0",
      "required": true,
      "description": "Requires data collector for input"
    }
  ],
  "externalConnections": [
    {
      "connectionId": "collector_to_transformer",
      "targetGraph": "collector",
      "targetNamespace": "data/ingestion",
      "fromProcess": "validator",
      "fromPort": "Output",
      "toProcess": "cleaner",
      "toPort": "Input",
      "description": "Connect collector output to transformer input"
    }
  ]
}

ML Training Pipeline (ml/training/trainer.graph.json)

{
  "caseSensitive": false,
  "properties": {
    "name": "trainer",
    "description": "ML training pipeline",
    "version": "1.0.0",
    "namespace": "ml/training",
    "dependencies": ["transformer"]
  },
  "processes": {
    "feature_engineer": {
      "component": "SimpleLoggerActor",
      "metadata": {
        "description": "Engineers features for ML training"
      }
    },
    "model_trainer": {
      "component": "DataGeneratorActor",
      "metadata": {
        "description": "Trains ML models",
        "config": {
          "data_type": "object"
        }
      }
    }
  },
  "connections": [
    {
      "from": { "nodeId": "feature_engineer", "portId": "Output" },
      "to": { "nodeId": "model_trainer", "portId": "Trigger" },
      "metadata": {}
    }
  ],
  "inports": {
    "training_data": {
      "nodeId": "feature_engineer",
      "portId": "Input"
    }
  },
  "outports": {
    "trained_model": {
      "nodeId": "model_trainer",
      "portId": "Output"
    }
  },
  "groups": [],
  "providedInterfaces": {
    "model_output": {
      "interfaceId": "model_output",
      "processName": "model_trainer",
      "portName": "Output",
      "dataType": "TrainedModel",
      "description": "Trained ML model",
      "required": false
    }
  },
  "requiredInterfaces": {
    "clean_data_input": {
      "interfaceId": "clean_data_input",
      "processName": "feature_engineer",
      "portName": "Input",
      "dataType": "CleanData",
      "description": "Clean data for training",
      "required": true
    }
  },
  "graphDependencies": [
    {
      "graphName": "transformer",
      "namespace": "data/processing",
      "versionConstraint": ">=1.0.0",
      "required": true,
      "description": "Requires clean data from transformer"
    }
  ],
  "externalConnections": [
    {
      "connectionId": "transformer_to_trainer",
      "targetGraph": "transformer",
      "targetNamespace": "data/processing",
      "fromProcess": "enricher",
      "fromPort": "Output",
      "toProcess": "feature_engineer",
      "toPort": "Input",
      "description": "Connect transformer output to trainer input"
    }
  ]
}

System Monitor (monitoring/system_monitor.graph.json)

{
  "caseSensitive": false,
  "properties": {
    "name": "system_monitor",
    "description": "System monitoring and metrics collection",
    "version": "1.0.0",
    "namespace": "monitoring",
    "dependencies": ["trainer", "transformer", "collector"]
  },
  "processes": {
    "metrics_collector": {
      "component": "SimpleLoggerActor",
      "metadata": {
        "description": "Collects system metrics"
      }
    },
    "alert_manager": {
      "component": "DataGeneratorActor",
      "metadata": {
        "description": "Manages alerts and notifications"
      }
    }
  },
  "connections": [
    {
      "from": { "nodeId": "metrics_collector", "portId": "Output" },
      "to": { "nodeId": "alert_manager", "portId": "Trigger" },
      "metadata": {}
    }
  ],
  "inports": {
    "metrics": {
      "nodeId": "metrics_collector",
      "portId": "Input"
    }
  },
  "outports": {
    "alerts": {
      "nodeId": "alert_manager",
      "portId": "Output"
    }
  },
  "groups": [],
  "providedInterfaces": {
    "alert_output": {
      "interfaceId": "alert_output",
      "processName": "alert_manager",
      "portName": "Output",
      "dataType": "Alert",
      "description": "System alerts and notifications",
      "required": false
    }
  },
  "requiredInterfaces": {
    "metrics_input": {
      "interfaceId": "metrics_input",
      "processName": "metrics_collector",
      "portName": "Input",
      "dataType": "SystemMetrics",
      "description": "System metrics for monitoring",
      "required": true
    }
  },
  "graphDependencies": [
    {
      "graphName": "trainer",
      "namespace": "ml/training",
      "versionConstraint": ">=1.0.0",
      "required": false,
      "description": "Monitors ML training pipeline"
    },
    {
      "graphName": "transformer",
      "namespace": "data/processing",
      "versionConstraint": ">=1.0.0",
      "required": false,
      "description": "Monitors data processing pipeline"
    },
    {
      "graphName": "collector",
      "namespace": "data/ingestion",
      "versionConstraint": ">=1.0.0",
      "required": false,
      "description": "Monitors data collection pipeline"
    }
  ],
  "externalConnections": []
}

Step 4: Workspace Discovery Example

Create the main application in src/main.rs:

use reflow_network::{
    multi_graph::{
        workspace::{WorkspaceDiscovery, WorkspaceConfig},
        GraphComposer, GraphComposition, GraphSource,
    },
};
use std::{collections::HashMap, path::PathBuf};

mod actors;
pub use actors::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    tracing_subscriber::fmt::init();

    println!("🚀 Multi-Graph Workspace Example");
    println!("===============================");

    // Configure workspace discovery
    let workspace_config = WorkspaceConfig {
        root_path: PathBuf::from("."),
        graph_patterns: vec![
            "**/*.graph.json".to_string(),
            "**/*.graph.yaml".to_string(),
        ],
        excluded_paths: vec![
            "**/target/**".to_string(),
            "**/.git/**".to_string(),
        ],
        max_depth: Some(5),
        namespace_strategy: reflow_network::multi_graph::NamespaceStrategy::FolderStructure,
    };

    // Discover workspace
    let discovery = WorkspaceDiscovery::new(workspace_config);
    let workspace = discovery.discover_workspace().await?;

    println!("📊 Workspace Discovery Results:");
    println!("  Discovered {} graphs across {} namespaces",
        workspace.graphs.len(),
        workspace.namespaces.len()
    );

    // Print discovered graphs by namespace
    for (namespace, info) in &workspace.namespaces {
        println!("\n📁 Namespace: {}", namespace);
        println!("  Path: {}", info.path.display());
        println!("  Graphs:");
        for graph_name in &info.graphs {
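            // Every graph name listed under a namespace was discovered above, so this lookup cannot fail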
            let graph_meta = workspace.graphs.iter()
                .find(|g| g.graph.properties.get("name").and_then(|v| v.as_str()).unwrap_or("") == graph_name)
                .unwrap();
            println!("    📈 {} ({})", graph_name, graph_meta.file_info.path.file_name().unwrap().to_string_lossy());
            
            // Show dependencies
            if let Some(deps) = graph_meta.graph.properties.get("dependencies").and_then(|v| v.as_array()) {
                if !deps.is_empty() {
                    print!("      Dependencies: ");
                    for (i, dep) in deps.iter().enumerate() {
                        if i > 0 { print!(", "); }
                        print!("{}", dep.as_str().unwrap_or("unknown"));
                    }
                    println!();
                }
            }
        }
    }

    // Analyze dependencies
    println!("\n🔍 Dependency Analysis:");
    if !workspace.analysis.dependencies.is_empty() {
        for dep in &workspace.analysis.dependencies {
            println!("  📦 {} depends on {} ({})",
                dep.dependent_graph,
                dep.dependency_graph,
                if dep.required { "required" } else { "optional" }
            );
        }
    } else {
        println!("  No dependencies declared");
    }

    // Show provided and required interfaces
    println!("\n🔌 Interface Analysis:");
    
    if !workspace.analysis.provided_interfaces.is_empty() {
        println!("  Provided Interfaces:");
        for interface in &workspace.analysis.provided_interfaces {
            println!("    📤 {}: {} provides {}",
                interface.namespace,
                interface.graph_name,
                interface.interface_name
            );
        }
    }

    if !workspace.analysis.required_interfaces.is_empty() {
        println!("  Required Interfaces:");
        for interface in &workspace.analysis.required_interfaces {
            println!("    📥 {}: {} requires {}",
                interface.namespace,
                interface.graph_name,
                interface.interface_name
            );
        }
    }

    // Create graph composition
    println!("\n🔧 Creating Graph Composition...");
    
    let sources: Vec<GraphSource> = workspace.graphs.iter()
        .map(|g| GraphSource::GraphExport(g.graph.clone()))
        .collect();

    let composition = GraphComposition {
        sources,
        connections: vec![], // Inter-graph connections would go here
        shared_resources: vec![],
        properties: HashMap::from([
            ("name".to_string(), serde_json::json!("multi_graph_workspace")),
            ("description".to_string(), serde_json::json!("Composed multi-graph workspace")),
        ]),
        case_sensitive: Some(false),
        metadata: None,
    };

    // Compose the graphs
    let mut composer = GraphComposer::new();
    let composed_graph = composer.compose_graphs(composition).await?;

    println!("✅ Successfully composed workspace into unified graph!");
    println!("  Total processes: {}", composed_graph.export().processes.len());
    println!("  Total connections: {}", composed_graph.export().connections.len());

    // Show composed processes by namespace
    println!("\n📋 Composed Graph Structure:");
    let mut namespaced_processes: HashMap<String, Vec<String>> = HashMap::new();
    
    for (process_name, _) in &composed_graph.export().processes {
        if let Some(namespace_sep) = process_name.find('/') {
            let namespace = &process_name[..namespace_sep];
            let process = &process_name[namespace_sep + 1..];
            namespaced_processes
                .entry(namespace.to_string())
                .or_insert_with(Vec::new)
                .push(process.to_string());
        } else {
            namespaced_processes
                .entry("root".to_string())
                .or_insert_with(Vec::new)
                .push(process_name.clone());
        }
    }
    
    for (namespace, processes) in &namespaced_processes {
        println!("  📁 {}: {} processes", namespace, processes.len());
        for process in processes {
            println!("    📈 {}", process);
        }
    }

    println!("\n🎯 Workspace composition complete!");
    
    Ok(())
}

Step 5: Simple Workspace Example

Create a smaller, self-contained example in src/bin/simple_workspace_example.rs (so that cargo run --bin simple_workspace_example can find it):

use reflow_network::{
    multi_graph::workspace::{WorkspaceDiscovery, WorkspaceConfig},
    network::{Network, NetworkConfig},
};
use std::path::PathBuf;

mod actors;
use actors::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    tracing_subscriber::fmt::init();

    println!("🚀 Simple Multi-Graph Workspace Example");

    // Simple workspace discovery
    let workspace_config = WorkspaceConfig {
        root_path: PathBuf::from("simple"),
        graph_patterns: vec!["*.graph.json".to_string()],
        excluded_paths: vec![],
        max_depth: Some(2),
        namespace_strategy: reflow_network::multi_graph::NamespaceStrategy::FolderStructure,
    };

    let discovery = WorkspaceDiscovery::new(workspace_config);
    let workspace = discovery.discover_workspace().await?;

    println!("Found {} graphs:", workspace.graphs.len());
    for graph_meta in &workspace.graphs {
        println!("  - {} ({})", 
            graph_meta.graph.properties.get("name").and_then(|v| v.as_str()).unwrap_or("unnamed"),
            graph_meta.discovered_namespace
        );
    }

    // Create a simple network to test one of the graphs
    let mut network = Network::new(NetworkConfig::default());

    // Register our actors
    network.register_actor("timer", SimpleTimerActor::new())?;
    network.register_actor("generator", DataGeneratorActor::new())?;
    network.register_actor("logger", SimpleLoggerActor::new())?;

    // Create simple workflow nodes
    network.add_node("timer_node", "timer", None)?;
    network.add_node("generator_node", "generator", None)?;
    network.add_node("logger_node", "logger", None)?;

    // Connect them
    network.add_connection(reflow_network::connector::Connector {
        from: reflow_network::connector::ConnectionPoint {
            actor: "timer_node".to_string(),
            port: "Output".to_string(),
            ..Default::default()
        },
        to: reflow_network::connector::ConnectionPoint {
            actor: "generator_node".to_string(),
            port: "Trigger".to_string(),
            ..Default::default()
        },
    })?;

    network.add_connection(reflow_network::connector::Connector {
        from: reflow_network::connector::ConnectionPoint {
            actor: "generator_node".to_string(),
            port: "Output".to_string(),
            ..Default::default()
        },
        to: reflow_network::connector::ConnectionPoint {
            actor: "logger_node".to_string(),
            port: "Input".to_string(),
            ..Default::default()
        },
    })?;

    // Start the network
    network.start().await?;

    println!("✅ Network started. Starting timer...");

    // Start the timer
    network.send_to_actor("timer_node", "Start", reflow_network::message::Message::Boolean(true))?;

    // Let it run for a bit
    tokio::time::sleep(tokio::time::Duration::from_secs(15)).await;

    network.shutdown();
    println!("🎯 Simple example complete!");

    Ok(())
}
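
Both programs are compiled as separate binaries, so the cargo run --bin commands below need matching targets in Cargo.toml. A minimal sketch (the target names are assumptions chosen to match those commands; adjust the paths to your layout):

[[bin]]
name = "multi_graph_workspace"
path = "src/main.rs"

[[bin]]
name = "simple_workspace_example"
path = "src/bin/simple_workspace_example.rs"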

Usage

Running the Full Workspace Example

# In your workspace directory
cargo run --bin multi_graph_workspace

Expected output:

🚀 Multi-Graph Workspace Example
📊 Workspace Discovery Results:
  Discovered 6 graphs across 5 namespaces

📁 Namespace: simple
  Path: ./simple
  Graphs:
    📈 generator (generator.graph.json)
    📈 processor (processor.graph.json)
      Dependencies: generator

📁 Namespace: data/ingestion
  Path: ./data/ingestion
  Graphs:
    📈 collector (collector.graph.json)

📁 Namespace: data/processing
  Path: ./data/processing
  Graphs:
    📈 transformer (transformer.graph.json)
      Dependencies: collector

📁 Namespace: ml/training
  Path: ./ml/training
  Graphs:
    📈 trainer (trainer.graph.json)
      Dependencies: transformer

📁 Namespace: monitoring
  Path: ./monitoring
  Graphs:
    📈 system_monitor (system_monitor.graph.json)
      Dependencies: trainer, transformer, collector

🔍 Dependency Analysis:
  📦 processor depends on generator (required)
  📦 transformer depends on collector (required)
  📦 trainer depends on transformer (required)
  📦 system_monitor depends on trainer (optional)
  📦 system_monitor depends on transformer (optional)
  📦 system_monitor depends on collector (optional)

🔌 Interface Analysis:
  Provided Interfaces:
    📤 simple: generator provides data_output
    📤 simple: processor provides processed_output
    📤 data/ingestion: collector provides raw_data_output
    📤 data/processing: transformer provides clean_data_output
    📤 ml/training: trainer provides model_output
    📤 monitoring: system_monitor provides alert_output
  Required Interfaces:
    📥 simple: processor requires data_input
    📥 data/processing: transformer requires raw_data_input
    📥 ml/training: trainer requires clean_data_input
    📥 monitoring: system_monitor requires metrics_input

🔧 Creating Graph Composition...
✅ Successfully composed workspace into unified graph!
  Total processes: 11
  Total connections: 8

📋 Composed Graph Structure:
  📁 simple: 3 processes
    📈 timer
    📈 data_generator
    📈 logger
  📁 data: 4 processes
    📈 api_collector
    📈 validator
    📈 cleaner
    📈 enricher
  📁 ml: 2 processes
    📈 feature_engineer
    📈 model_trainer
  📁 monitoring: 2 processes
    📈 metrics_collector
    📈 alert_manager

🎯 Workspace composition complete!

Running the Simple Example

cargo run --bin simple_workspace_example

Expected output:

🚀 Simple Multi-Graph Workspace Example
Found 2 graphs:
  - generator (simple)
  - processor (simple)
✅ Network started. Starting timer...
[12:34:56.123] LOG: {"tick":1,"timestamp":"2023-12-01T12:34:56.123Z","source":"SimpleTimerActor","max_ticks":10}
[12:34:57.124] LOG: {"tick":2,"timestamp":"2023-12-01T12:34:57.124Z","source":"SimpleTimerActor","max_ticks":10}
...
🎯 Simple example complete!

Key Concepts Demonstrated

1. Automatic Discovery

  • Workspace automatically finds all .graph.json files
  • Uses folder structure as natural namespaces
  • Handles dependency analysis

2. Namespace Organization

  • simple/ folder → simple namespace
  • data/ingestion/ folder → data/ingestion namespace
  • ml/training/ folder → ml/training namespace

3. Dependency Management

  • Graphs declare dependencies on other graphs
  • System validates and orders graphs by dependencies
  • Supports optional and required dependencies

4. Interface Definitions

  • Graphs declare provided and required interfaces
  • System analyzes interface compatibility
  • Enables automatic connection suggestions

5. Graph Composition

  • Multiple graphs composed into unified workflow
  • Namespace prefixes prevent name conflicts
  • Maintains original graph structure and relationships

Advanced Features

Custom Namespace Strategies

#![allow(unused)]
fn main() -> Result<(), Box<dyn std::error::Error>> {
use reflow_network::multi_graph::NamespaceStrategy;

// Custom semantic-based namespacing
let custom_strategy = NamespaceStrategy::custom("semantic_based", None)?;

let workspace_config = WorkspaceConfig {
    namespace_strategy: custom_strategy,
    ..Default::default()
};
Ok(())
}

Selective Graph Loading

#![allow(unused)]
fn main() {
// Only load graphs matching specific patterns
let workspace_config = WorkspaceConfig {
    graph_patterns: vec![
        "data/**/*.graph.json".to_string(),  // Only data pipelines
        "ml/**/*.graph.json".to_string(),    // Only ML pipelines
    ],
    ..Default::default()
};
}

Interface-Based Connections

#![allow(unused)]
fn main() -> Result<(), Box<dyn std::error::Error>> {
use reflow_network::multi_graph::GraphConnectionBuilder;

// Connect graphs using interface definitions
let mut connection_builder = GraphConnectionBuilder::new(workspace);

connection_builder
    .connect_interface(
        "generator",     // Source graph
        "data_output",   // Source interface
        "processor",     // Target graph
        "data_input"     // Target interface
    )?;

let connections = connection_builder.build();
Ok(())
}

Best Practices

1. Graph Organization

  • Use descriptive folder structures
  • Group related graphs in same namespace
  • Keep dependencies minimal and explicit

2. Interface Design

  • Define clear input/output interfaces
  • Use descriptive interface names
  • Document expected data types

3. Dependency Management

  • Declare all dependencies explicitly
  • Use version constraints for stability
  • Minimize circular dependencies

4. Testing

  • Test individual graphs before composition
  • Validate interfaces between graphs
  • Test composed workflows end-to-end

Next Steps

  1. Try the distributed networks tutorial to learn about cross-network communication
  2. Explore the graph composition API for advanced composition scenarios
  3. Build your own multi-graph workspace with domain-specific actors and workflows

The multi-graph workspace system enables you to build complex, modular workflows that scale naturally with your project's complexity while maintaining clean separation of concerns.

Browser Actor Development Tutorial

Learn how to develop actors specifically for browser environments using Reflow's WebAssembly bindings.

Overview

This tutorial covers creating actors that leverage browser APIs, handle asynchronous operations, and provide interactive user experiences. We'll build several example actors that demonstrate common patterns in browser-based workflow development.

Prerequisites

Before starting, you should be comfortable with modern JavaScript (classes, async/await) and have a working browser/WebAssembly setup from the getting started guide, so that the Graph, GraphNetwork, and init() bindings used below are available.

Tutorial Structure

We'll build increasingly complex actors:

  1. Data Transformation Actor - Basic processing patterns
  2. Web API Client Actor - HTTP requests and async operations
  3. File Processing Actor - Browser file handling
  4. Real-time Data Actor - WebSocket connections and streaming
  5. Interactive UI Actor - DOM manipulation and user interaction

1. Data Transformation Actor

Let's start with a versatile data transformation actor that handles various data formats.

Creating the Actor

class DataTransformActor {
    constructor() {
        this.inports = ["data", "config"];
        this.outports = ["result", "error", "stats"];
        
        // Default transformation configuration
        this.config = {
            operation: "normalize",  // normalize, aggregate, filter, map
            outputFormat: "json",    // json, csv, xml
            precision: 2,           // for numeric operations
            batchSize: 100          // for batch processing
        };
    }

    run(context) {
        // Update configuration if provided
        if (context.input.config) {
            this.updateConfig(context.input.config);
        }

        // Process incoming data
        if (context.input.data !== undefined) {
            try {
                const result = this.transform(context.input.data, context);
                this.updateStats(context, result);
                context.send({ result });
            } catch (error) {
                this.handleError(error, context);
            }
        }
    }

    updateConfig(newConfig) {
        this.config = { ...this.config, ...newConfig };
        console.log("Configuration updated:", this.config);
    }

    transform(data, context) {
        switch (this.config.operation) {
            case "normalize":
                return this.normalizeData(data);
            case "aggregate": 
                return this.aggregateData(data, context);
            case "filter":
                return this.filterData(data);
            case "map":
                return this.mapData(data);
            default:
                throw new Error(`Unknown operation: ${this.config.operation}`);
        }
    }

    normalizeData(data) {
        if (!Array.isArray(data)) {
            data = [data];
        }

        return data.map(item => {
            if (typeof item === 'number') {
                return Number(item.toFixed(this.config.precision));
            }
            
            if (typeof item === 'string') {
                return item.trim().toLowerCase();
            }
            
            if (typeof item === 'object' && item !== null) {
                const normalized = {};
                for (const [key, value] of Object.entries(item)) {
                    // Normalize keys to camelCase
                    const normalizedKey = key.replace(/_([a-z])/g, (g) => g[1].toUpperCase());
                    normalized[normalizedKey] = typeof value === 'string' ? 
                        value.trim() : value;
                }
                return normalized;
            }
            
            return item;
        });
    }

    aggregateData(data, context) {
        if (!Array.isArray(data)) {
            data = [data];
        }

        // Get existing aggregation state
        const aggregated = context.state.get('aggregated') || [];
        const combined = aggregated.concat(data);
        
        // Keep only recent data based on batch size
        const recent = combined.slice(-this.config.batchSize);
        context.state.set('aggregated', recent);

        // Calculate statistics
        const numbers = recent.filter(x => typeof x === 'number');
        const result = {
            count: recent.length,
            numericCount: numbers.length,
            sum: numbers.reduce((a, b) => a + b, 0),
            average: numbers.length > 0 ? 
                numbers.reduce((a, b) => a + b, 0) / numbers.length : 0,
            min: numbers.length > 0 ? Math.min(...numbers) : null,
            max: numbers.length > 0 ? Math.max(...numbers) : null,
            latest: recent[recent.length - 1],
            timestamp: Date.now()
        };

        return result;
    }

    filterData(data) {
        if (!Array.isArray(data)) {
            return data;
        }

        // Apply various filters based on configuration
        return data.filter(item => {
            // Filter out null/undefined
            if (item == null) return false;
            
            // Filter by type if specified
            if (this.config.filterType) {
                if (typeof item !== this.config.filterType) return false;
            }
            
            // Filter by value range for numbers
            if (typeof item === 'number') {
                if (this.config.minValue !== undefined && item < this.config.minValue) return false;
                if (this.config.maxValue !== undefined && item > this.config.maxValue) return false;
            }
            
            // Filter by pattern for strings
            if (typeof item === 'string' && this.config.pattern) {
                const regex = new RegExp(this.config.pattern, 'i');
                if (!regex.test(item)) return false;
            }
            
            return true;
        });
    }

    mapData(data) {
        if (!Array.isArray(data)) {
            data = [data];
        }

        return data.map((item, index) => {
            const mapped = {
                original: item,
                index: index,
                timestamp: Date.now(),
                processed: true
            };

            // Apply transformations based on type
            if (typeof item === 'number') {
                mapped.doubled = item * 2;
                mapped.squared = item * item;
                mapped.formatted = item.toFixed(this.config.precision);
            }
            
            if (typeof item === 'string') {
                mapped.length = item.length;
                mapped.uppercase = item.toUpperCase();
                mapped.words = item.split(/\s+/).length;
            }
            
            if (typeof item === 'object' && item !== null) {
                mapped.keys = Object.keys(item);
                mapped.keyCount = Object.keys(item).length;
            }

            return mapped;
        });
    }

    updateStats(context, result) {
        const stats = context.state.get('stats') || {
            processedCount: 0,
            lastProcessed: null,
            totalDataSize: 0,
            operationCounts: {}
        };

        stats.processedCount++;
        stats.lastProcessed = Date.now();
        stats.totalDataSize += JSON.stringify(result).length;
        
        const operation = this.config.operation;
        stats.operationCounts[operation] = (stats.operationCounts[operation] || 0) + 1;

        context.state.set('stats', stats);
        
        // Send stats periodically
        if (stats.processedCount % 10 === 0) {
            context.send({ stats });
        }
    }

    handleError(error, context) {
        const errorInfo = {
            message: error.message,
            operation: this.config.operation,
            timestamp: Date.now(),
            config: this.config
        };

        console.error("DataTransformActor error:", errorInfo);
        context.send({ error: errorInfo });
    }
}

Testing the Transform Actor

// Test the data transformation actor
async function testDataTransformActor() {
    const graph = new Graph("DataTransformTest", true);
    
    // Add the transform actor
    graph.addNode("transformer", "DataTransformActor", {
        x: 200, y: 100,
        description: "Data transformation processor"
    });

    // Add test data
    const testData = [
        1.23456, 2.67890, 3.14159,
        "  Hello World  ", "  JAVASCRIPT  ",
        { first_name: "John", last_name: "Doe", age: 30 },
        { product_id: 123, product_name: "Widget", price: 9.99 }
    ];

    graph.addInitial(testData, "transformer", "data");
    
    // Test different operations
    const operations = [
        { operation: "normalize", precision: 2 },
        { operation: "aggregate", batchSize: 50 },
        { operation: "filter", filterType: "number", minValue: 2 },
        { operation: "map" }
    ];

    const network = new GraphNetwork(graph);
    network.registerActor("DataTransformActor", new DataTransformActor());

    // Monitor results
    network.next((event) => {
        if (event._type === "FlowTrace" && event.to.port === "result") {
            console.log("Transform result:", event.from.data);
        }
    });

    await network.start();

    // Test different operations
    for (const config of operations) {
        console.log(`\nTesting operation: ${config.operation}`);
        const result = await network.executeActor("transformer", {
            data: testData,
            config: config
        });
        console.log("Result:", result);
    }
}

2. Web API Client Actor

Now let's build an actor that makes HTTP requests and handles various web APIs.

Creating the API Client Actor

class WebAPIClientActor {
    constructor() {
        this.inports = ["request", "config"];
        this.outports = ["response", "error", "progress"];
        
        this.config = {
            baseURL: "",
            timeout: 10000,
            retries: 3,
            retryDelay: 1000,
            headers: {
                "Content-Type": "application/json"
            }
        };
    }

    async run(context) {
        // Update configuration
        if (context.input.config) {
            this.config = { ...this.config, ...context.input.config };
        }

        // Process request
        if (context.input.request) {
            await this.makeRequest(context.input.request, context);
        }
    }

    async makeRequest(request, context) {
        const url = this.buildURL(request.url);
        const options = this.buildRequestOptions(request);
        
        try {
            const response = await this.fetchWithRetry(url, options, context);
            const data = await this.parseResponse(response, request.responseType);
            
            context.send({
                response: {
                    data: data,
                    status: response.status,
                    statusText: response.statusText,
                    headers: Object.fromEntries(response.headers.entries()),
                    url: response.url,
                    timestamp: Date.now()
                }
            });

        } catch (error) {
            context.send({
                error: {
                    message: error.message,
                    type: error.name,
                    url: url,
                    request: request,
                    timestamp: Date.now()
                }
            });
        }
    }

    buildURL(url) {
        if (url.startsWith('http')) {
            return url;
        }
        return `${this.config.baseURL}${url}`;
    }

    buildRequestOptions(request) {
        const options = {
            method: request.method || 'GET',
            headers: { ...this.config.headers, ...request.headers }
        };

        if (request.body) {
            if (typeof request.body === 'object') {
                options.body = JSON.stringify(request.body);
            } else {
                options.body = request.body;
            }
        }

        return options;
    }

    async fetchWithRetry(url, options, context) {
        let lastError;
        
        for (let attempt = 1; attempt <= this.config.retries; attempt++) {
            try {
                // Set up timeout
                const controller = new AbortController();
                const timeoutId = setTimeout(() => controller.abort(), this.config.timeout);
                
                options.signal = controller.signal;

                // Send progress update
                context.send({
                    progress: {
                        attempt: attempt,
                        maxAttempts: this.config.retries,
                        url: url,
                        status: "requesting"
                    }
                });

                const response = await fetch(url, options);
                clearTimeout(timeoutId);

                if (!response.ok) {
                    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
                }

                return response;

            } catch (error) {
                lastError = error;
                
                if (attempt < this.config.retries) {
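                    // Exponential backoff: retryDelay, 2×retryDelay, 4×retryDelay, ...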
                    const delay = this.config.retryDelay * Math.pow(2, attempt - 1);
                    
                    context.send({
                        progress: {
                            attempt: attempt,
                            maxAttempts: this.config.retries,
                            error: error.message,
                            retryIn: delay,
                            status: "retrying"
                        }
                    });

                    await new Promise(resolve => setTimeout(resolve, delay));
                }
            }
        }
        
        throw lastError;
    }

    async parseResponse(response, responseType = 'json') {
        switch (responseType.toLowerCase()) {
            case 'json':
                return await response.json();
            case 'text':
                return await response.text();
            case 'blob':
                return await response.blob();
            case 'arraybuffer':
                return await response.arrayBuffer();
            default:
                return await response.json();
        }
    }
}

API Actor Examples

// Example: Weather API actor
class WeatherAPIActor extends WebAPIClientActor {
    constructor() {
        super();
        this.inports = ["location", "config"];
        this.outports = ["weather", "error"];
        
        this.config = {
            ...this.config,
            baseURL: "https://api.openweathermap.org/data/2.5",
            apiKey: "" // Set your API key
        };
    }

    async run(context) {
        if (context.input.location) {
            const request = {
                url: `/weather?q=${encodeURIComponent(context.input.location)}&appid=${this.config.apiKey}&units=metric`,
                method: "GET",
                responseType: "json"
            };

            await this.makeRequest(request, context);
        }
    }
}

// Example: REST API actor
class RESTAPIActor extends WebAPIClientActor {
    constructor() {
        super();
        this.inports = ["operation", "config"];
        this.outports = ["result", "error"];
    }

    async run(context) {
        if (context.input.config) {
            this.config = { ...this.config, ...context.input.config };
        }

        if (context.input.operation) {
            const op = context.input.operation;
            const request = {
                url: op.endpoint,
                method: op.method || 'GET',
                headers: op.headers,
                body: op.data,
                responseType: op.responseType || 'json'
            };

            await this.makeRequest(request, context);
        }
    }
}
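
A quick usage sketch for the REST actor (this assumes it is registered as "apiClient" in a running GraphNetwork and reuses the executeActor helper from the transform test above; the base URL is a placeholder):

// Drive a single request through the actor's "operation" inport
await network.executeActor("apiClient", {
    config: { baseURL: "https://api.example.com" },  // placeholder endpoint
    operation: {
        endpoint: "/users",
        method: "POST",
        data: { name: "Ada", role: "admin" },
        responseType: "json"
    }
});

The same pattern works for WeatherAPIActor: send a city name on its "location" inport and read the result from the response/error outports.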

3. File Processing Actor

Let's create an actor that handles file operations in the browser.

class FileProcessorActor {
    constructor() {
        this.inports = ["file", "operation", "config"];
        this.outports = ["content", "metadata", "progress", "error"];
        
        this.config = {
            chunkSize: 64 * 1024, // 64KB chunks
            supportedTypes: ['text', 'json', 'csv', 'xml'],
            maxFileSize: 10 * 1024 * 1024 // 10MB
        };
    }

    run(context) {
        if (context.input.config) {
            this.config = { ...this.config, ...context.input.config };
        }

        if (context.input.file && context.input.operation) {
            this.processFile(context.input.file, context.input.operation, context);
        }
    }

    processFile(file, operation, context) {
        // Validate file
        if (!file || !(file instanceof File)) {
            context.send({ error: "Valid File object required" });
            return;
        }

        if (file.size > this.config.maxFileSize) {
            context.send({ 
                error: `File too large: ${file.size} bytes (max: ${this.config.maxFileSize})` 
            });
            return;
        }

        // Send file metadata
        context.send({
            metadata: {
                name: file.name,
                size: file.size,
                type: file.type,
                lastModified: new Date(file.lastModified),
                operation: operation
            }
        });

        // Process based on operation
        switch (operation.type) {
            case 'read':
                this.readFile(file, operation, context);
                break;
            case 'parse':
                this.parseFile(file, operation, context);
                break;
            case 'analyze':
                this.analyzeFile(file, operation, context);
                break;
            default:
                context.send({ error: `Unknown operation: ${operation.type}` });
        }
    }

    readFile(file, operation, context) {
        const reader = new FileReader();
        
        reader.onprogress = (event) => {
            if (event.lengthComputable) {
                context.send({
                    progress: {
                        loaded: event.loaded,
                        total: event.total,
                        percentage: (event.loaded / event.total) * 100,
                        operation: 'reading'
                    }
                });
            }
        };

        reader.onload = (event) => {
            const result = event.target.result;
            context.send({
                content: {
                    data: result,
                    encoding: operation.encoding || 'utf-8',
                    format: operation.format || 'text',
                    size: result.length || result.byteLength
                }
            });
        };

        reader.onerror = () => {
            context.send({
                error: `Failed to read file: ${reader.error.message}`
            });
        };

        // Choose reading method
        const format = operation.format || 'text';
        switch (format) {
            case 'text':
                reader.readAsText(file, operation.encoding || 'utf-8');
                break;
            case 'dataurl':
                reader.readAsDataURL(file);
                break;
            case 'binary':
                reader.readAsArrayBuffer(file);
                break;
            default:
                reader.readAsText(file);
        }
    }

    parseFile(file, operation, context) {
        const reader = new FileReader();
        
        reader.onload = (event) => {
            try {
                const text = event.target.result;
                let parsed;

                switch (operation.format) {
                    case 'json':
                        parsed = JSON.parse(text);
                        break;
                    case 'csv':
                        parsed = this.parseCSV(text);
                        break;
                    case 'xml':
                        parsed = this.parseXML(text);
                        break;
                    default:
                        parsed = text;
                }

                context.send({
                    content: {
                        data: parsed,
                        format: operation.format,
                        recordCount: Array.isArray(parsed) ? parsed.length : 1
                    }
                });

            } catch (error) {
                context.send({
                    error: `Failed to parse ${operation.format}: ${error.message}`
                });
            }
        };

        reader.readAsText(file);
    }

    parseCSV(text) {
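        // Naive split-based parsing: quoted fields containing commas or newlines are not handled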
        const lines = text.split('\n').filter(line => line.trim());
        if (lines.length === 0) return [];

        const headers = lines[0].split(',').map(h => h.trim());
        const data = [];

        for (let i = 1; i < lines.length; i++) {
            const values = lines[i].split(',').map(v => v.trim());
            const row = {};
            
            headers.forEach((header, index) => {
                row[header] = values[index] || '';
            });
            
            data.push(row);
        }

        return data;
    }

    parseXML(text) {
        const parser = new DOMParser();
        const doc = parser.parseFromString(text, 'text/xml');
        
        if (doc.querySelector('parsererror')) {
            throw new Error('Invalid XML format');
        }

        return this.xmlToObject(doc.documentElement);
    }

    xmlToObject(element) {
        const obj = {};
        
        // Add attributes
        if (element.attributes.length > 0) {
            obj['@attributes'] = {};
            for (const attr of element.attributes) {
                obj['@attributes'][attr.name] = attr.value;
            }
        }

        // Add child elements
        for (const child of element.children) {
            const name = child.tagName;
            const value = child.children.length > 0 ? 
                this.xmlToObject(child) : child.textContent;
            
            if (obj[name]) {
                if (!Array.isArray(obj[name])) {
                    obj[name] = [obj[name]];
                }
                obj[name].push(value);
            } else {
                obj[name] = value;
            }
        }

        return obj;
    }

    analyzeFile(file, operation, context) {
        const reader = new FileReader();
        
        reader.onload = (event) => {
            const text = event.target.result;
            const analysis = {
                size: text.length,
                lines: text.split('\n').length,
                words: text.split(/\s+/).filter(w => w).length,
                characters: text.length,
                charactersNoSpaces: text.replace(/\s/g, '').length,
                encoding: 'utf-8',
                detectedFormat: this.detectFormat(text, file.name),
                timestamp: Date.now()
            };

            context.send({ content: analysis });
        };

        reader.readAsText(file);
    }

    detectFormat(text, filename) {
        const extension = filename.split('.').pop().toLowerCase();
        
        // Try to detect format
        try {
            JSON.parse(text);
            return 'json';
        } catch {}

        if (text.includes('<?xml') || text.includes('<html')) {
            return 'xml';
        }

        if (text.includes(',') && text.split('\n').length > 1) {
            return 'csv';
        }

        return extension || 'text';
    }
}
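
To exercise the actor with real files, wire a file input to its ports. A minimal sketch (this assumes the actor is registered as "fileProcessor" in a running GraphNetwork, as in the demo application later in this tutorial):

// Forward a user-selected file together with an operation descriptor
document.getElementById('fileInput').addEventListener('change', async (event) => {
    const file = event.target.files[0];
    if (!file) return;

    await network.executeActor("fileProcessor", {
        file: file,
        operation: { type: "parse", format: "csv" }
    });
});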

4. Real-time Data Actor

Let's build an actor that handles real-time data streams via WebSockets.

class WebSocketActor {
    constructor() {
        this.inports = ["connect", "disconnect", "send", "config"];
        this.outports = ["message", "status", "error"];
        
        this.config = {
            reconnectAttempts: 5,
            reconnectDelay: 1000,
            pingInterval: 30000,
            maxMessageSize: 1024 * 1024 // 1MB
        };
        
        this.socket = null;
        this.reconnectCount = 0;
        this.pingTimer = null;
    }

    run(context) {
        if (context.input.config) {
            this.config = { ...this.config, ...context.input.config };
        }

        if (context.input.connect) {
            this.connect(context.input.connect, context);
        }

        if (context.input.disconnect) {
            this.disconnect(context);
        }

        if (context.input.send) {
            this.sendMessage(context.input.send, context);
        }
    }

    connect(connectionInfo, context) {
        if (this.socket && this.socket.readyState === WebSocket.OPEN) {
            context.send({ status: "Already connected" });
            return;
        }

        try {
            this.socket = new WebSocket(connectionInfo.url, connectionInfo.protocols);
            
            this.socket.onopen = () => {
                this.reconnectCount = 0;
                context.send({ 
                    status: {
                        type: "connected",
                        url: connectionInfo.url,
                        timestamp: Date.now()
                    }
                });
                
                // Start ping timer
                this.startPing(context);
            };

            this.socket.onmessage = (event) => {
                try {
                    const data = this.parseMessage(event.data);
                    context.send({
                        message: {
                            data: data,
                            timestamp: Date.now(),
                            size: event.data.length
                        }
                    });
                } catch (error) {
                    context.send({
                        error: {
                            message: "Failed to parse incoming message",
                            data: event.data,
                            error: error.message
                        }
                    });
                }
            };

            this.socket.onclose = (event) => {
                this.stopPing();
                
                const closeInfo = {
                    type: "disconnected",
                    code: event.code,
                    reason: event.reason,
                    wasClean: event.wasClean,
                    timestamp: Date.now()
                };

                context.send({ status: closeInfo });

                // Attempt reconnection if not a clean close
                if (!event.wasClean && this.reconnectCount < this.config.reconnectAttempts) {
                    this.attemptReconnect(connectionInfo, context);
                }
            };

            this.socket.onerror = (error) => {
                context.send({
                    error: {
                        message: "WebSocket error",
                        timestamp: Date.now()
                    }
                });
            };

        } catch (error) {
            context.send({
                error: {
                    message: "Failed to create WebSocket connection",
                    error: error.message,
                    url: connectionInfo.url
                }
            });
        }
    }

    attemptReconnect(connectionInfo, context) {
        this.reconnectCount++;
        const delay = this.config.reconnectDelay * Math.pow(2, this.reconnectCount - 1);

        context.send({
            status: {
                type: "reconnecting",
                attempt: this.reconnectCount,
                maxAttempts: this.config.reconnectAttempts,
                delay: delay
            }
        });

        setTimeout(() => {
            this.connect(connectionInfo, context);
        }, delay);
    }

    disconnect(context) {
        if (this.socket) {
            this.stopPing();
            this.socket.close(1000, "Client disconnect");
            this.socket = null;
        }

        context.send({
            status: {
                type: "disconnected",
                reason: "Client initiated",
                timestamp: Date.now()
            }
        });
    }

    sendMessage(messageData, context) {
        if (!this.socket || this.socket.readyState !== WebSocket.OPEN) {
            context.send({
                error: {
                    message: "WebSocket not connected",
                    messageData: messageData
                }
            });
            return;
        }

        try {
            const message = this.formatMessage(messageData);
            
            if (message.length > this.config.maxMessageSize) {
                context.send({
                    error: {
                        message: "Message too large",
                        size: message.length,
                        maxSize: this.config.maxMessageSize
                    }
                });
                return;
            }

            this.socket.send(message);
            
            context.send({
                status: {
                    type: "message_sent",
                    size: message.length,
                    timestamp: Date.now()
                }
            });

        } catch (error) {
            context.send({
                error: {
                    message: "Failed to send message",
                    error: error.message,
                    messageData: messageData
                }
            });
        }
    }

    parseMessage(data) {
        // Try to parse as JSON first
        try {
            return JSON.parse(data);
        } catch {
            return data; // Return as string if not JSON
        }
    }

    formatMessage(data) {
        if (typeof data === 'string') {
            return data;
        }
        return JSON.stringify(data);
    }

    startPing(context) {
        this.stopPing();
        
        this.pingTimer = setInterval(() => {
            if (this.socket && this.socket.readyState === WebSocket.OPEN) {
                this.socket.send(JSON.stringify({ type: 'ping', timestamp: Date.now() }));
            }
        }, this.config.pingInterval);
    }

    stopPing() {
        if (this.pingTimer) {
            clearInterval(this.pingTimer);
            this.pingTimer = null;
        }
    }
}
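
A usage sketch (this assumes the actor is registered as "websocket"; the URL is a placeholder for your own endpoint):

// Open the connection...
await network.executeActor("websocket", {
    connect: { url: "wss://example.com/stream" }
});

// ...then send once a status message of type "connected" has arrived;
// onopen fires asynchronously, so sending immediately may race the handshake
await network.executeActor("websocket", {
    send: { type: "subscribe", channel: "metrics" }
});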

5. Interactive UI Actor

Finally, let's create an actor that can interact with the DOM and handle user interactions.

class UIInteractionActor {
    constructor() {
        this.inports = ["createElement", "updateElement", "removeElement", "addEventListener"];
        this.outports = ["element", "event", "error"];
        
        this.config = {
            containerSelector: "#app",
            eventTypes: ["click", "input", "change", "submit"]
        };
        
        this.elements = new Map(); // Track created elements
        this.listeners = new Map(); // Track event listeners
    }

    run(context) {
        if (context.input.createElement) {
            this.createElement(context.input.createElement, context);
        }

        if (context.input.updateElement) {
            this.updateElement(context.input.updateElement, context);
        }

        if (context.input.removeElement) {
            this.removeElement(context.input.removeElement, context);
        }

        if (context.input.addEventListener) {
            this.addEventListener(context.input.addEventListener, context);
        }
    }

    createElement(elementData, context) {
        try {
            const element = document.createElement(elementData.tag || 'div');
            
            // Set attributes
            if (elementData.attributes) {
                for (const [key, value] of Object.entries(elementData.attributes)) {
                    element.setAttribute(key, value);
                }
            }

            // Set properties
            if (elementData.properties) {
                for (const [key, value] of Object.entries(elementData.properties)) {
                    element[key] = value;
                }
            }

            // Set styles
            if (elementData.styles) {
                for (const [key, value] of Object.entries(elementData.styles)) {
                    element.style[key] = value;
                }
            }

            // Set content
            if (elementData.textContent) {
                element.textContent = elementData.textContent;
            }
            
            if (elementData.innerHTML) {
                element.innerHTML = elementData.innerHTML;
            }

            // Add to container (fall back to <body> if the selector matches nothing)
            const container = document.querySelector(this.config.containerSelector) || document.body;
            container.appendChild(element);

            // Generate unique ID if not provided
            const elementId = elementData.id || `element_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
            if (!element.id) {
                element.id = elementId;
            }

            // Store element reference
            this.elements.set(elementId, element);

            context.send({
                element: {
                    id: elementId,
                    tag: element.tagName.toLowerCase(),
                    created: true,
                    timestamp: Date.now()
                }
            });

        } catch (error) {
            context.send({
                error: {
                    message: "Failed to create element",
                    error: error.message,
                    elementData: elementData
                }
            });
        }
    }

    updateElement(updateData, context) {
        try {
            const element = this.elements.get(updateData.id) || 
                           document.getElementById(updateData.id);
            
            if (!element) {
                context.send({
                    error: {
                        message: "Element not found",
                        id: updateData.id
                    }
                });
                return;
            }

            // Update attributes
            if (updateData.attributes) {
                for (const [key, value] of Object.entries(updateData.attributes)) {
                    if (value === null) {
                        element.removeAttribute(key);
                    } else {
                        element.setAttribute(key, value);
                    }
                }
            }

            // Update properties
            if (updateData.properties) {
                for (const [key, value] of Object.entries(updateData.properties)) {
                    element[key] = value;
                }
            }

            // Update styles
            if (updateData.styles) {
                for (const [key, value] of Object.entries(updateData.styles)) {
                    element.style[key] = value;
                }
            }

            // Update content
            if (updateData.textContent !== undefined) {
                element.textContent = updateData.textContent;
            }
            
            if (updateData.innerHTML !== undefined) {
                element.innerHTML = updateData.innerHTML;
            }

            context.send({
                element: {
                    id: updateData.id,
                    updated: true,
                    timestamp: Date.now()
                }
            });

        } catch (error) {
            context.send({
                error: {
                    message: "Failed to update element",
                    error: error.message,
                    updateData: updateData
                }
            });
        }
    }

    removeElement(removeData, context) {
        try {
            const element = this.elements.get(removeData.id) || 
                           document.getElementById(removeData.id);
            
            if (!element) {
                context.send({
                    error: {
                        message: "Element not found",
                        id: removeData.id
                    }
                });
                return;
            }

            // Remove event listeners
            const listeners = this.listeners.get(removeData.id);
            if (listeners) {
                for (const listener of listeners) {
                    element.removeEventListener(listener.type, listener.handler);
                }
                this.listeners.delete(removeData.id);
            }

            // Remove from DOM
            element.remove();

            // Remove from tracking
            this.elements.delete(removeData.id);

            context.send({
                element: {
                    id: removeData.id,
                    removed: true,
                    timestamp: Date.now()
                }
            });

        } catch (error) {
            context.send({
                error: {
                    message: "Failed to remove element",
                    error: error.message,
                    removeData: removeData
                }
            });
        }
    }

    addEventListener(listenerData, context) {
        try {
            const element = this.elements.get(listenerData.elementId) || 
                           document.getElementById(listenerData.elementId);
            
            if (!element) {
                context.send({
                    error: {
                        message: "Element not found",
                        id: listenerData.elementId
                    }
                });
                return;
            }

            const handler = (event) => {
                const eventData = {
                    type: event.type,
                    elementId: listenerData.elementId,
                    timestamp: Date.now(),
                    target: {
                        id: event.target.id,
                        tagName: event.target.tagName,
                        value: event.target.value,
                        checked: event.target.checked
                    }
                };

                // Add specific event data
                if (event.type === 'click') {
                    eventData.coordinates = {
                        clientX: event.clientX,
                        clientY: event.clientY,
                        pageX: event.pageX,
                        pageY: event.pageY
                    };
                }

                if (event.type === 'input' || event.type === 'change') {
                    eventData.value = event.target.value;
                }

                context.send({ event: eventData });
            };

            element.addEventListener(listenerData.eventType, handler);

            // Track the listener
            if (!this.listeners.has(listenerData.elementId)) {
                this.listeners.set(listenerData.elementId, []);
            }
            this.listeners.get(listenerData.elementId).push({
                type: listenerData.eventType,
                handler: handler
            });

            context.send({
                element: {
                    id: listenerData.elementId,
                    eventType: listenerData.eventType,
                    listenerAdded: true,
                    timestamp: Date.now()
                }
            });

        } catch (error) {
            context.send({
                error: {
                    message: "Failed to add event listener",
                    error: error.message,
                    listenerData: listenerData
                }
            });
        }
    }
}
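
A usage sketch (this assumes the actor is registered as "uiManager" and that an element matching containerSelector exists, or the body fallback applies):

// Create a button, then subscribe to its click events;
// clicks subsequently arrive on the actor's "event" outport
await network.executeActor("uiManager", {
    createElement: {
        id: "demoButton",
        tag: "button",
        textContent: "Run pipeline",
        styles: { padding: "8px 16px" }
    }
});

await network.executeActor("uiManager", {
    addEventListener: { elementId: "demoButton", eventType: "click" }
});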

Complete Working Example

Let's build a complete application that uses all the actors we've created:

// Complete Browser Actor Demo Application
class BrowserDemo {
    constructor() {
        this.graph = null;
        this.network = null;
        this.isRunning = false;
    }

    async initialize() {
        // Initialize Browser
        await init();
        init_panic_hook();

        // Create demo graph
        this.createDemoGraph();
        
        // Register all actors
        this.registerActors();
        
        // Setup UI
        this.setupUI();
        
        console.log("✅ Browser Actor Demo initialized");
    }

    createDemoGraph() {
        this.graph = new Graph("BrowserActorDemo", true, {
            description: "Comprehensive Browser actor demonstration",
            version: "1.0.0"
        });

        // Add actors
        this.graph.addNode("dataTransform", "DataTransformActor", {
            x: 100, y: 100,
            description: "Data transformation processor"
        });

        this.graph.addNode("fileProcessor", "FileProcessorActor", {
            x: 300, y: 100,
            description: "File processing handler"
        });

        this.graph.addNode("apiClient", "WebAPIClientActor", {
            x: 500, y: 100,
            description: "Web API client"
        });

        this.graph.addNode("websocket", "WebSocketActor", {
            x: 100, y: 300,
            description: "WebSocket connection"
        });

        this.graph.addNode("uiManager", "UIInteractionActor", {
            x: 300, y: 300,
            description: "UI interaction manager"
        });

        // Create connections
        this.graph.addConnection("dataTransform", "result", "fileProcessor", "operation");
        this.graph.addConnection("fileProcessor", "content", "apiClient", "request");
        this.graph.addConnection("apiClient", "response", "websocket", "send");
        this.graph.addConnection("websocket", "message", "uiManager", "createElement");

        // Add some initial data
        this.graph.addInitial([1, 2, 3, 4, 5], "dataTransform", "data");
    }

    registerActors() {
        this.network = new GraphNetwork(this.graph);
        
        this.network.registerActor("DataTransformActor", new DataTransformActor());
        this.network.registerActor("FileProcessorActor", new FileProcessorActor());
        this.network.registerActor("WebAPIClientActor", new WebAPIClientActor());
        this.network.registerActor("WebSocketActor", new WebSocketActor());
        this.network.registerActor("UIInteractionActor", new UIInteractionActor());

        // Monitor network events
        this.network.next((event) => {
            this.handleNetworkEvent(event);
        });
    }

    setupUI() {
        document.body.innerHTML = `
            <div id="demo-app">
                <h1>Browser Actor Development Demo</h1>
                
                <div class="controls">
                    <button id="startNetwork">Start Network</button>
                    <button id="stopNetwork">Stop Network</button>
                    <button id="testDataTransform">Test Data Transform</button>
                    <button id="testFileUpload">Test File Upload</button>
                    <button id="testAPI">Test API Call</button>
                    <button id="testWebSocket">Test WebSocket</button>
                </div>
                
                <div class="output">
                    <h3>Network Events:</h3>
                    <div id="events" style="height: 200px; overflow-y: auto; border: 1px solid #ccc; padding: 10px;"></div>
                </div>
                
                <div class="file-upload">
                    <h3>File Upload Test:</h3>
                    <input type="file" id="fileInput" />
                    <select id="fileOperation">
                        <option value="read">Read</option>
                        <option value="parse">Parse</option>
                        <option value="analyze">Analyze</option>
                    </select>
                </div>
                
                <div id="dynamic-content">
                    <h3>Dynamic UI Elements:</h3>
                    <!-- UI actor will add elements here -->
                </div>
            </div>
        `;

        // Add event listeners
        document.getElementById('startNetwork').onclick = () => this.startNetwork();
        document.getElementById('stopNetwork').onclick = () => this.stopNetwork();
        document.getElementById('testDataTransform').onclick = () => this.testDataTransform();
        document.getElementById('testFileUpload').onclick = () => this.testFileUpload();
        document.getElementById('testAPI').onclick = () => this.testAPI();
        document.getElementById('testWebSocket').onclick = () => this.testWebSocket();
    }

    async startNetwork() {
        if (!this.isRunning) {
            await this.network.start();
            this.isRunning = true;
            this.logEvent("Network started");
        }
    }

    stopNetwork() {
        if (this.isRunning) {
            this.network.shutdown();
            this.isRunning = false;
            this.logEvent("Network stopped");
        }
    }

    async testDataTransform() {
        const testData = [
            Math.random() * 100,
            Math.random() * 100,
            "  Test String  ",
            { test_field: "value", another_field: 123 }
        ];

        const result = await this.network.executeActor("dataTransform", {
            data: testData,
            config: { operation: "normalize", precision: 2 }
        });

        this.logEvent("Data transform result", result);
    }

    testFileUpload() {
        const fileInput = document.getElementById('fileInput');
        const operation = document.getElementById('fileOperation').value;
        
        if (fileInput.files.length > 0) {
            const file = fileInput.files[0];
            this.network.executeActor("fileProcessor", {
                file: file,
                operation: { type: operation, format: 'auto' }
            });
        } else {
            alert("Please select a file first");
        }
    }

    async testAPI() {
        // Test with a public API
        const result = await this.network.executeActor("apiClient", {
            request: {
                url: "https://jsonplaceholder.typicode.com/posts/1",
                method: "GET",
                responseType: "json"
            }
        });

        this.logEvent("API call result", result);
    }

    testWebSocket() {
        // Test WebSocket connection against a public echo server
        this.network.executeActor("websocket", {
            connect: {
                url: "wss://echo.websocket.org/",
                protocols: []
            }
        });
    }

    handleNetworkEvent(event) {
        switch (event._type) {
            case "FlowTrace":
                this.logEvent(`Data flow: ${event.from.actorId}:${event.from.port} → ${event.to.actorId}:${event.to.port}`, event.from.data);
                break;
                
            case "ActorStarted":
                this.logEvent(`Actor started: ${event.actorId}`);
                break;
                
            case "ActorStopped":
                this.logEvent(`Actor stopped: ${event.actorId}`);
                break;
                
            case "ProcessError":
                this.logEvent(`Error in ${event.actorId}: ${event.error}`, null, "error");
                break;
                
            default:
                this.logEvent(`Network event: ${event._type}`, event);
        }
    }

    logEvent(message, data = null, type = "info") {
        const eventsDiv = document.getElementById('events');
        const timestamp = new Date().toLocaleTimeString();
        
        const eventElement = document.createElement('div');
        eventElement.className = `event event-${type}`;
        eventElement.style.marginBottom = '5px';
        eventElement.style.padding = '5px';
        eventElement.style.backgroundColor = type === 'error' ? '#ffe6e6' : '#e6f3ff';
        
        let content = `[${timestamp}] ${message}`;
        if (data) {
            content += `\nData: ${JSON.stringify(data, null, 2)}`;
        }
        
        eventElement.textContent = content;
        eventsDiv.appendChild(eventElement);
        eventsDiv.scrollTop = eventsDiv.scrollHeight;
    }
}

// Initialize the demo when the page loads
document.addEventListener('DOMContentLoaded', async () => {
    const demo = new BrowserDemo();
    await demo.initialize();
});

Best Practices and Performance Tips

1. State Management

// ✅ Good: Batch state operations
class EfficientActor {
    run(context) {
        const state = context.state.getAll();
        
        // Modify locally
        state.counter = (state.counter || 0) + 1;
        state.lastUpdate = Date.now();
        state.processedItems = (state.processedItems || []);
        state.processedItems.push(context.input.data);
        
        // Write once
        context.state.setAll(state);
    }
}

// ❌ Avoid: Multiple state operations
class InefficientActor {
    run(context) {
        const counter = context.state.get('counter') || 0;
        context.state.set('counter', counter + 1);
        context.state.set('lastUpdate', Date.now());
        
        const items = context.state.get('processedItems') || [];
        items.push(context.input.data);
        context.state.set('processedItems', items);
    }
}

2. Error Handling

class RobustActor {
    run(context) {
        try {
            this.processInput(context);
        } catch (error) {
            this.handleError(error, context);
        }
    }
    
    handleError(error, context) {
        // Log error details
        console.error(`${this.constructor.name} error:`, error);
        
        // Send structured error information
        context.send({
            error: {
                message: error.message,
                stack: error.stack,
                input: context.input,
                timestamp: Date.now(),
                actorType: this.constructor.name
            }
        });
        
        // Update error statistics
        const stats = context.state.get('errorStats') || { count: 0, lastError: null };
        stats.count++;
        stats.lastError = Date.now();
        context.state.set('errorStats', stats);
    }
}

3. Memory Management

class MemoryAwareActor {
    constructor() {
        this.inports = ["input"];
        this.outports = ["output"];
        this.config = { maxCacheSize: 1000 };
    }
    
    run(context) {
        // Clean up old cache entries
        this.cleanupCache(context);
        
        // Process input
        const result = this.processData(context.input.input);
        
        // Cache result if within limits
        this.cacheResult(result, context);
        
        context.send({ output: result });
    }
    
    cleanupCache(context) {
        const cache = context.state.get('cache') || {};
        const entries = Object.entries(cache);
        
        if (entries.length > this.config.maxCacheSize) {
            // Sort by timestamp and keep only recent entries
            const sorted = entries.sort((a, b) => b[1].timestamp - a[1].timestamp);
            const cleaned = Object.fromEntries(sorted.slice(0, this.config.maxCacheSize));
            context.state.set('cache', cleaned);
        }
    }
}

Conclusion

This tutorial covered the essential patterns for developing browser actors with Reflow's WASM bindings:

  1. Data Transformation - Processing and manipulating data with stateful operations
  2. Web API Integration - Making HTTP requests with retry logic and error handling
  3. File Processing - Handling browser file operations with progress tracking
  4. Real-time Communication - WebSocket connections with automatic reconnection
  5. UI Interaction - DOM manipulation and event handling

Key Takeaways

  • State Management: Use batch operations for better performance
  • Error Handling: Implement comprehensive error reporting and recovery
  • Async Operations: Handle promises and timeouts properly
  • Memory Management: Clean up resources and limit cache sizes
  • Browser APIs: Leverage native browser capabilities effectively

Next Steps

The examples in this tutorial provide a solid foundation for building sophisticated browser-based workflow applications using Reflow's Browser bindings.

Game Programming with Reflow

Reflow's game architecture follows the Entity-Component-System pattern. The AssetDB is the world. Components are queryable data. DAG actors are systems.

AssetDB (World)          Reflow DAG (Systems)         External Tools
┌──────────────┐        ┌──────────────────┐        ┌──────────────┐
│ player:      │◀──────▶│ PhysicsSystem    │        │ Zeal Editor  │
│   transform  │        │ CameraSystem     │        │ Debug Tools  │
│   rigidbody  │        │ LightCollector   │        │ Scripts      │
│   collider   │        │ MaterialSystem   │        │ Unit Tests   │
│   mesh       │        │ RenderSystem     │        │              │
│   material   │        └──────────────────┘        └──────────────┘
│              │                                           │
│ sun:light    │◀──────────────────────────────────────────┘
│ main:camera  │         any tool reads/writes the same DB
└──────────────┘

Quick Start

1. Set up the world

Create entities by putting components into the AssetDB. An entity is just a name prefix. A component is a type suffix.

#![allow(unused)]
fn main() {
use reflow_assets::get_or_create_db;
use serde_json::json;

let db = get_or_create_db("./game.db")?;

// Player entity
db.set_component_json("player", "transform", json!({
    "position": [0.0, 1.0, 0.0],
    "rotation": [0.0, 0.0, 0.0, 1.0],
    "scale": [1.0, 1.0, 1.0],
}), json!({}))?;

db.set_component_json("player", "rigidbody", json!({
    "bodyType": "dynamic",
    "mass": 80.0,
    "linearDamping": 0.1,
    "gravityScale": 1.0,
}), json!({}))?;

db.set_component_json("player", "collider", json!({
    "shape": "capsule",
    "radius": 0.3,
    "height": 1.8,
    "friction": 0.5,
    "restitution": 0.1,
}), json!({}))?;

// Ground
db.set_component_json("ground", "transform", json!({
    "position": [0.0, 0.0, 0.0],
}), json!({}))?;

db.set_component_json("ground", "rigidbody", json!({
    "bodyType": "static",
}), json!({}))?;

db.set_component_json("ground", "collider", json!({
    "shape": "box",
    "halfExtents": [50.0, 0.1, 50.0],
}), json!({}))?;

// Camera
db.set_component_json("main", "camera", json!({
    "mode": "thirdPerson",
    "target": "player",
    "fov": 60.0,
    "distance": 5.0,
    "height": 2.0,
    "orbitPitch": 0.3,
    "active": true,
}), json!({}))?;

// Sun light
db.set_component_json("sun", "light", json!({
    "type": "directional",
    "direction": [0.0, -1.0, 0.5],
    "color": [1.0, 1.0, 0.9],
    "intensity": 2.0,
    "castShadow": true,
}), json!({}))?;

// Torch (point light)
db.set_component_json("torch", "transform", json!({
    "position": [3.0, 2.0, 1.0],
}), json!({}))?;

db.set_component_json("torch", "light", json!({
    "type": "point",
    "color": [1.0, 0.6, 0.2],
    "range": 10.0,
    "intensity": 3.0,
}), json!({}))?;

// Material
db.set_component_json("player", "material", json!({
    "albedo": [0.8, 0.2, 0.1],
    "metallic": 0.0,
    "roughness": 0.5,
}), json!({}))?;
}

2. Wire the game loop DAG

The DAG connects systems. Each system reads components, processes, writes results back.

#![allow(unused)]
fn main() {
use reflow_network::{network::{Network, NetworkConfig}, message::Message};

let mut net = Network::new(NetworkConfig::default());

// Register system actors
for tpl in [
    "tpl_interval_trigger",
    "tpl_scene_physics",
    "tpl_scene_camera",
    "tpl_scene_light_collector",
    "tpl_scene_material",
] {
    net.register_actor_arc(tpl, reflow_components::get_actor_for_template(tpl).unwrap())?;
}

// Game tick at 60fps
net.add_node("tick", "tpl_interval_trigger", config(json!({
    "interval": 16, "startImmediately": true,
})))?;

// Systems — all read/write the same AssetDB
net.add_node("physics", "tpl_scene_physics", config(json!({
    "$db": "./game.db",
    "gravity": [0.0, -9.81, 0.0],
    "dt": 0.016,
})))?;

net.add_node("camera", "tpl_scene_camera", config(json!({
    "$db": "./game.db",
    "aspect": 1.777,
    "cameraTag": "main",
})))?;

net.add_node("lights", "tpl_scene_light_collector", config(json!({
    "$db": "./game.db",
})))?;

net.add_node("materials", "tpl_scene_material", config(json!({
    "$db": "./game.db",
})))?;

// Wire: tick drives all systems
net.add_connection(wire("tick", "trigger", "physics", "tick"));
net.add_connection(wire("tick", "trigger", "camera", "tick"));
net.add_connection(wire("tick", "trigger", "lights", "tick"));
net.add_connection(wire("tick", "trigger", "materials", "tick"));

// Start
net.add_initial(iip("tick", "_trigger", Message::Flow));
net.start()?;
}

3. Query the world from anywhere

The AssetDB is the single source of truth. Any tool can read and write it — not just the DAG.

#![allow(unused)]
fn main() {
// Inspect an entity
let snapshot = db.entity_snapshot("player")?;
println!("{}", serde_json::to_string_pretty(&snapshot)?);
// {
//   "transform": { "position": [0.0, 0.83, 0.0], ... },
//   "rigidbody": { "bodyType": "dynamic", "mass": 80.0, ... },
//   "collider": { "shape": "capsule", ... },
//   "velocity": { "linear": [0.0, -0.12, 0.0], ... }
// }

// Find all dynamic bodies
let dynamic_entities = db.query_dsl(&json!({
    "type": "rigidbody",
    "metadata.bodyType": "dynamic",
}))?;

// Find all entities with both mesh and material
let renderable = db.entities_with(&["mesh", "material", "transform"])?;

// Teleport the player
db.set_component_json("player", "transform", json!({
    "position": [10.0, 5.0, 0.0],
    "rotation": [0.0, 0.0, 0.0, 1.0],
    "scale": [1.0, 1.0, 1.0],
}), json!({}))?;
}

Component Reference

transform

Position, rotation, and scale of an entity in the world.

{
    "position": [0.0, 0.0, 0.0],
    "rotation": [0.0, 0.0, 0.0, 1.0],
    "scale": [1.0, 1.0, 1.0]
}

rigidbody

Physics body properties. The physics system picks up any entity with both rigidbody and transform.

{
    "bodyType": "dynamic",
    "mass": 1.0,
    "linearDamping": 0.1,
    "angularDamping": 0.1,
    "gravityScale": 1.0,
    "ccd": false
}

Body types: "dynamic" (simulated), "static" (immovable), "kinematic" (user-driven).
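
For example, a moving platform that game logic drives directly (rather than the solver) would be declared kinematic:

#![allow(unused)]
fn main() {
use serde_json::json;

// Kinematic: unaffected by gravity, moved only when game logic rewrites
// its transform, but it still pushes dynamic bodies that touch it.
db.set_component_json("platform", "rigidbody", json!({
    "bodyType": "kinematic",
}), json!({}))?;
db.set_component_json("platform", "collider", json!({
    "shape": "box", "halfExtents": [2.0, 0.2, 2.0],
}), json!({}))?;
}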

collider

Collision shape attached to a rigidbody.

{
    "shape": "capsule",
    "radius": 0.3,
    "height": 1.8,
    "friction": 0.5,
    "restitution": 0.3,
    "isSensor": false
}

Shapes: "box" (halfExtents), "sphere" (radius), "capsule" (radius + height), "cylinder" (radius + height).

camera

View configuration. Multiple cameras per scene are supported. Use "active": true or pass a tag to select which one renders.

{
    "mode": "thirdPerson",
    "target": "player",
    "fov": 60.0,
    "near": 0.1,
    "far": 1000.0,
    "distance": 5.0,
    "height": 2.0,
    "orbitYaw": 0.0,
    "orbitPitch": 0.3,
    "active": true
}

Modes: "fixed" (position + target), "firstPerson" (attached to entity), "thirdPerson" (follow with offset), "orbit" (rotate around center).

light

Light source. Position comes from the entity's transform component for point/spot lights.

{
    "type": "directional",
    "direction": [0.0, -1.0, 0.5],
    "color": [1.0, 1.0, 0.9],
    "intensity": 2.0,
    "castShadow": true,
    "range": 20.0,
    "innerAngle": 30.0,
    "outerAngle": 45.0
}

Types: "directional" (sun), "point" (bulb), "spot" (cone), "ambient" (fill).

material

PBR material properties. Texture fields reference AssetDB entity IDs.

{
    "albedo": [0.8, 0.2, 0.1],
    "metallic": 0.0,
    "roughness": 0.5,
    "emissive": [0.0, 0.0, 0.0],
    "emissiveStrength": 0.0,
    "ao": 1.0,
    "alphaMode": "opaque",
    "doubleSided": false,
    "albedoTexture": "wood:texture",
    "normalTexture": "wood_normal:texture"
}

System Actors

Actor                  Template ID                  Reads                            Writes               Outputs
ScenePhysicsSystem     tpl_scene_physics            rigidbody, collider, transform   transform, velocity  collision pairs
SceneCameraSystem      tpl_scene_camera             camera, transform (of target)    camera_matrices      active camera data
SceneLightCollector    tpl_scene_light_collector    light, transform                 -                    packed light buffer
SceneMaterialSystem    tpl_scene_material           material                         -                    packed material buffer

Spawning Entities

Use spawn_from to instantiate prefabs:

#![allow(unused)]
fn main() {
// Define a template once
db.set_component_json("crate_template", "transform", json!({
    "position": [0.0, 0.0, 0.0],
}), json!({}))?;
db.set_component_json("crate_template", "rigidbody", json!({
    "bodyType": "dynamic", "mass": 10.0,
}), json!({}))?;
db.set_component_json("crate_template", "collider", json!({
    "shape": "box", "halfExtents": [0.5, 0.5, 0.5],
}), json!({}))?;
db.set_component_json("crate_template", "material", json!({
    "albedo": [0.6, 0.4, 0.2], "roughness": 0.8,
}), json!({}))?;

// Spawn 10 crates at different positions
for i in 0..10 {
    let name = format!("crate_{}", i);
    db.spawn_from("crate_template", &name)?;
    db.set_component_json(&name, "transform", json!({
        "position": [i as f64 * 2.0, 5.0, 0.0],
    }), json!({}))?;
}
}

The physics system picks them up automatically on the next tick.
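
You can sanity-check the spawn with the same query API shown earlier; this sketch assumes entities_with returns entity names:

#![allow(unused)]
fn main() {
let bodies = db.entities_with(&["rigidbody", "collider", "transform"])?;
let crates: Vec<_> = bodies.iter()
    .filter(|name| name.starts_with("crate_"))
    .collect();
println!("{} crates are live in the world", crates.len());
}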

Importing Assets

Load a Mixamo character into the world:

#![allow(unused)]
fn main() {
// Load file
let glb_data = std::fs::read("character.glb")?;

// Import extracts mesh, skeleton, animation, skin
// Wire in DAG: FileLoad → GltfImport → AssetStore
// Or programmatically:
db.put("character:mesh", &mesh_bytes, json!({"stride": 24}))?;
db.put_json("character:skeleton", skeleton_json, json!({}))?;
db.put_json("character:animation", clip_json, json!({}))?;

// Create entity using imported assets
db.set_component_json("npc", "transform", json!({
    "position": [5.0, 0.0, 3.0],
}), json!({}))?;
db.set_component_json("npc", "rigidbody", json!({
    "bodyType": "dynamic", "mass": 60.0,
}), json!({}))?;
db.set_component_json("npc", "collider", json!({
    "shape": "capsule", "radius": 0.3, "height": 1.6,
}), json!({}))?;
}

Multiple Cameras

Switch cameras by tag:

#![allow(unused)]
fn main() {
// Define cameras
db.set_component_json("gameplay_cam", "camera", json!({
    "mode": "thirdPerson", "target": "player", "fov": 60, "active": true,
}), json!({}))?;
db.tag("gameplay_cam:camera", &["gameplay"])?;

db.set_component_json("cutscene_cam", "camera", json!({
    "mode": "fixed", "position": [10, 5, 0], "target": [0, 0, 0], "active": false,
}), json!({}))?;
db.tag("cutscene_cam:camera", &["cutscene"])?;

// In the DAG, the camera system accepts a tag input:
// wire("camera_selector", "tag", "camera", "camera_tag")
// Send "cutscene" to switch to the cutscene camera
}
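
You can also cut between cameras directly, without going through the DAG, by rewriting the active flags. A minimal sketch, assuming set_component_json replaces the full component payload:

#![allow(unused)]
fn main() {
use serde_json::json;

// Hand control to the cutscene camera
db.set_component_json("gameplay_cam", "camera", json!({
    "mode": "thirdPerson", "target": "player", "fov": 60, "active": false,
}), json!({}))?;
db.set_component_json("cutscene_cam", "camera", json!({
    "mode": "fixed", "position": [10, 5, 0], "target": [0, 0, 0], "active": true,
}), json!({}))?;
}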

Collision Handling

The physics system outputs collision pairs. Wire a handler in the DAG:

tick → ScenePhysicsSystem
           │
           └→ collisions → YourCollisionHandler (user actor)
                               │
                               └→ reads collision pairs:
                                  [{ "a": "player", "b": "coin_3" }]
                                  → removes coin, adds score
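
The handler itself is ordinary actor code. A sketch of the coin-pickup logic, assuming collision pairs arrive as the JSON shown above (the function is illustrative glue, not a built-in API):

#![allow(unused)]
fn main() {
use serde_json::Value;

// Given this tick's collision pairs, find every coin the player touched.
fn coins_hit_by_player(pairs: &[Value]) -> Vec<String> {
    pairs.iter()
        .filter_map(|pair| {
            let a = pair["a"].as_str()?;
            let b = pair["b"].as_str()?;
            let other = match (a, b) {
                ("player", other) | (other, "player") => other,
                _ => return None,
            };
            other.starts_with("coin_").then(|| other.to_string())
        })
        .collect()
}

// The handler would then remove each coin's components from the AssetDB
// and bump a score counter, e.g. a component on a "hud" entity.
}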

Architecture Summary

Concern               Where it lives                 Why
Entity data           AssetDB components             Queryable by any tool, not coupled to DAG
Game logic            DAG actors (systems)           Visual wiring, reorderable, hot-swappable
Execution order       DAG connections                Explicit, debuggable dataflow
Persistence           AssetDB storage backend        File, IndexedDB, or S3 — same API
Editor integration    Direct AssetDB reads/writes    No DAG needed for inspection/editing

ActorConfig Migration Guide

This guide helps you migrate existing actors from the old HashMap-based configuration approach to the new ActorConfig system, providing a smooth transition path with minimal breaking changes.

Migration Overview

The ActorConfig system replaces the previous set_config(HashMap<String, serde_json::Value>) method with a more robust create_process(ActorConfig) approach that provides:

  • Type Safety: Strongly typed configuration with validation
  • Better Error Handling: Clear configuration error messages
  • Dynamic Updates: Runtime configuration changes
  • Multiple Sources: Support for JSON, YAML, environment variables
  • Schema Validation: Built-in validation and defaults

Quick Migration Steps

1. Update Actor Trait Implementation

Before (Old Pattern):

#![allow(unused)]
fn main() {
use reflow_network::actor::{Actor, ActorContext};
use std::collections::HashMap;

pub struct DataProcessor {
    batch_size: usize,
    timeout: Duration,
    enable_retry: bool,
}

impl Actor for DataProcessor {
    fn set_config(&mut self, config: HashMap<String, serde_json::Value>) {
        self.batch_size = config.get("batch_size")
            .and_then(|v| v.as_f64())
            .unwrap_or(10.0) as usize;
        
        self.timeout = Duration::from_millis(
            config.get("timeout_ms")
                .and_then(|v| v.as_f64()) 
                .unwrap_or(5000.0) as u64
        );
        
        self.enable_retry = config.get("enable_retry")
            .and_then(|v| v.as_bool())
            .unwrap_or(true);
    }
    
    async fn run(&self, context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
        // Actor logic using self.batch_size, self.timeout, etc.
        // ...
    }
}
}

After (New Pattern):

#![allow(unused)]
fn main() {
use reflow_network::actor::{Actor, ActorConfig, ActorContext};
use std::future::Future;
use std::pin::Pin;
use std::time::Duration;

pub struct DataProcessor;

impl DataProcessor {
    pub fn new() -> Self {
        Self
    }
}

impl Actor for DataProcessor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        // Extract configuration values
        let batch_size = config.get_number("batch_size").unwrap_or(10.0) as usize;
        let timeout = Duration::from_millis(config.get_number("timeout_ms").unwrap_or(5000.0) as u64);
        let enable_retry = config.get_boolean("enable_retry").unwrap_or(true);
        
        Box::pin(async move {
            // Actor logic using configuration values
            // ...
        })
    }
    
    // Remove the old set_config method
    // fn set_config(&mut self, config: HashMap<String, serde_json::Value>) { ... }
    
    // Remove the old run method  
    // async fn run(&self, context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> { ... }
}
}

2. Update Actor Registration

Before:

#![allow(unused)]
fn main() {
let mut network = Network::new();
let mut processor = DataProcessor::new();

// Configure actor with HashMap
let config = HashMap::from([
    ("batch_size".to_string(), serde_json::Value::Number(50.into())),
    ("timeout_ms".to_string(), serde_json::Value::Number(10000.into())),
    ("enable_retry".to_string(), serde_json::Value::Bool(false)),
]);

processor.set_config(config);
network.register_actor("processor", processor)?;
}

After:

#![allow(unused)]
fn main() {
let mut network = Network::new();
let processor = DataProcessor::new();

// Configuration is provided when adding to network
let config = ActorConfig::from_json(r#"
{
    "batch_size": 50,
    "timeout_ms": 10000,
    "enable_retry": false
}
"#)?;

network.register_actor("processor", processor)?;
network.add_node_with_config("processor", "processor", Some(config))?;
}

Migration Patterns

Pattern 1: Simple State-Based Actor

Before:

#![allow(unused)]
fn main() {
struct TimerActor {
    interval_ms: u64,
    max_ticks: Option<u64>,
    current_ticks: u64,
}

impl Actor for TimerActor {
    fn set_config(&mut self, config: HashMap<String, serde_json::Value>) {
        self.interval_ms = config.get("interval_ms")
            .and_then(|v| v.as_f64())
            .unwrap_or(1000.0) as u64;
        
        self.max_ticks = config.get("max_ticks")
            .and_then(|v| v.as_f64())
            .map(|v| v as u64);
    }
    
    async fn run(&self, context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
        let mut output = HashMap::new();
        
        if self.current_ticks < self.max_ticks.unwrap_or(u64::MAX) {
            // Emit tick
            output.insert("tick".to_string(), Message::Integer(self.current_ticks as i64));
        }
        
        Ok(output)
    }
}
}

After:

#![allow(unused)]
fn main() {
struct TimerActor;

impl Actor for TimerActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        let interval_ms = config.get_number("interval_ms").unwrap_or(1000.0) as u64;
        let max_ticks = config.get_number("max_ticks").map(|v| v as u64);
        
        Box::pin(async move {
            let mut current_ticks = 0u64;
            let interval = Duration::from_millis(interval_ms);
            
            loop {
                if let Some(max) = max_ticks {
                    if current_ticks >= max {
                        break;
                    }
                }
                
                // Emit tick
                current_ticks += 1;
                
                tokio::time::sleep(interval).await;
            }
        })
    }
}
}

Pattern 2: Complex Configuration with Validation

Before:

#![allow(unused)]
fn main() {
struct DatabaseActor {
    connection_string: String,
    pool_size: u32,
    query_timeout: Duration,
}

impl Actor for DatabaseActor {
    fn set_config(&mut self, config: HashMap<String, serde_json::Value>) {
        self.connection_string = config.get("connection_string")
            .and_then(|v| v.as_str())
            .unwrap_or("postgresql://localhost/db")
            .to_string();
        
        let pool_size = config.get("pool_size")
            .and_then(|v| v.as_f64())
            .unwrap_or(10.0) as u32;
        
        // Manual validation
        self.pool_size = if pool_size > 0 && pool_size <= 100 {
            pool_size
        } else {
            eprintln!("Invalid pool_size {}, using default", pool_size);
            10
        };
        
        self.query_timeout = Duration::from_millis(
            config.get("query_timeout_ms")
                .and_then(|v| v.as_f64())
                .unwrap_or(30000.0) as u64
        );
    }
}
}

After (with typed configuration):

#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct DatabaseConfig {
    #[serde(default = "default_connection_string")]
    connection_string: String,
    
    #[serde(default = "default_pool_size")]
    pool_size: u32,
    
    #[serde(default = "default_query_timeout")]
    query_timeout_ms: u64,
}

fn default_connection_string() -> String {
    "postgresql://localhost/db".to_string()
}

fn default_pool_size() -> u32 { 10 }
fn default_query_timeout() -> u64 { 30000 }

impl ActorConfigSchema for DatabaseConfig {
    fn validate(&self) -> Result<(), String> {
        if self.connection_string.is_empty() {
            return Err("connection_string cannot be empty".to_string());
        }
        
        if self.pool_size == 0 || self.pool_size > 100 {
            return Err("pool_size must be between 1 and 100".to_string());
        }
        
        if self.query_timeout_ms == 0 {
            return Err("query_timeout_ms must be positive".to_string());
        }
        
        Ok(())
    }
}

struct DatabaseActor;

impl Actor for DatabaseActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        // Parse and validate configuration
        let db_config: DatabaseConfig = config.parse_typed().expect("Invalid configuration");
        
        let connection_string = db_config.connection_string;
        let pool_size = db_config.pool_size;
        let query_timeout = Duration::from_millis(db_config.query_timeout_ms);
        
        Box::pin(async move {
            // Database actor implementation
            // Configuration is guaranteed to be valid
        })
    }
}
}
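
With the schema in place, a bad value surfaces as a clear message instead of a silent fallback. A usage sketch (whether parse_typed invokes validate automatically is an assumption here, so the sketch calls it explicitly):

#![allow(unused)]
fn main() {
// 500 is outside the 1..=100 range enforced by validate()
let config = ActorConfig::from_json(r#"{ "pool_size": 500 }"#)?;
let db_config: DatabaseConfig = config.parse_typed()?;

if let Err(msg) = db_config.validate() {
    eprintln!("rejecting actor config: {msg}");
    // prints: rejecting actor config: pool_size must be between 1 and 100
}
}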

Pattern 3: Actors with Complex State Management

Before:

#![allow(unused)]
fn main() {
struct StatefulProcessor {
    state: Arc<Mutex<ProcessorState>>,
    config: ProcessorConfig,
}

#[derive(Clone)]
struct ProcessorConfig {
    batch_size: usize,
    processing_mode: ProcessingMode,
}

impl Actor for StatefulProcessor {
    fn set_config(&mut self, config: HashMap<String, serde_json::Value>) {
        self.config.batch_size = config.get("batch_size")
            .and_then(|v| v.as_f64())
            .unwrap_or(10.0) as usize;
        
        let mode_str = config.get("processing_mode")
            .and_then(|v| v.as_str())
            .unwrap_or("sequential");
        
        self.config.processing_mode = match mode_str {
            "parallel" => ProcessingMode::Parallel,
            "batch" => ProcessingMode::Batch,
            _ => ProcessingMode::Sequential,
        };
    }
    
    async fn run(&self, context: ActorContext) -> Result<HashMap<String, Message>, anyhow::Error> {
        // Use self.config and self.state
        // ...
    }
}
}

After:

#![allow(unused)]
fn main() {
struct StatefulProcessor;

impl Actor for StatefulProcessor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        let batch_size = config.get_number("batch_size").unwrap_or(10.0) as usize;
        let processing_mode = match config.get_string("processing_mode").as_deref() {
            Some("parallel") => ProcessingMode::Parallel,
            Some("batch") => ProcessingMode::Batch,
            _ => ProcessingMode::Sequential,
        };
        
        Box::pin(async move {
            // Create state inside the process
            let state = Arc::new(Mutex::new(ProcessorState::new()));
            
            // Actor implementation with local state
            // ...
        })
    }
}
}

Graph Migration

Updating Graph Definitions

Before:

{
  "processes": {
    "processor": {
      "component": "DataProcessor",
      "metadata": {
        "batch_size": 50,
        "timeout_ms": 10000,
        "enable_retry": false
      }
    }
  }
}

After:

{
  "processes": {
    "processor": {
      "component": "DataProcessor", 
      "metadata": {
        "config": {
          "batch_size": 50,
          "timeout_ms": 10000,
          "enable_retry": false
        }
      }
    }
  }
}

The configuration is now nested under a "config" key in the metadata, which the system automatically extracts and converts to an ActorConfig.
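
A sketch of the extraction side, assuming the metadata shape above (the helper name is illustrative; the real extraction happens inside the network when nodes are instantiated):

#![allow(unused)]
fn main() {
use reflow_network::actor::ActorConfig;

// Pull the nested "config" object out of a process's metadata and
// turn it into an ActorConfig.
fn extract_actor_config(metadata: &serde_json::Value) -> Option<ActorConfig> {
    let config = metadata.get("config")?;
    ActorConfig::from_json(&config.to_string()).ok()
}
}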

Migration Utilities

Automatic Configuration Migration

#![allow(unused)]
fn main() {
use reflow_network::actor::ActorConfig;

// Helper to migrate old graph metadata format
pub fn migrate_graph_metadata(old_metadata: &serde_json::Value) -> serde_json::Value {
    if let Some(obj) = old_metadata.as_object() {
        // Check if it already has a "config" key
        if obj.contains_key("config") {
            return old_metadata.clone(); // Already migrated
        }
        
        // Wrap existing metadata in "config" key
        let mut new_metadata = serde_json::Map::new();
        new_metadata.insert("config".to_string(), old_metadata.clone());
        
        serde_json::Value::Object(new_metadata)
    } else {
        old_metadata.clone()
    }
}

// Helper to migrate legacy HashMap config to ActorConfig
impl ActorConfig {
    pub fn from_legacy_hashmap(legacy: HashMap<String, serde_json::Value>) -> Self {
        let mut config = ActorConfig::default();
        
        for (key, value) in legacy {
            config.set(&key, value);
        }
        
        config
    }
}
}

Migration Script

// migration_script.rs - Tool to migrate existing graph files
use std::path::Path;
use tokio::fs;

pub async fn migrate_graph_file(path: &Path) -> Result<(), Box<dyn std::error::Error>> {
    let content = fs::read_to_string(path).await?;
    let mut graph: serde_json::Value = serde_json::from_str(&content)?;
    
    // Migrate processes metadata
    if let Some(processes) = graph.get_mut("processes").and_then(|p| p.as_object_mut()) {
        for (_, process) in processes.iter_mut() {
            if let Some(metadata) = process.get_mut("metadata") {
                *metadata = migrate_graph_metadata(metadata);
            }
        }
    }
    
    // Write back the migrated graph
    let migrated_content = serde_json::to_string_pretty(&graph)?;
    fs::write(path, migrated_content).await?;
    
    println!("Migrated: {}", path.display());
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let graph_files = glob::glob("**/*.graph.json")?;
    
    for entry in graph_files {
        if let Ok(path) = entry {
            migrate_graph_file(&path).await?;
        }
    }
    
    println!("Migration completed!");
    Ok(())
}

Backward Compatibility

Temporary Compatibility Layer

If you need to maintain compatibility with old and new systems during migration:

#![allow(unused)]
fn main() {
use reflow_network::actor::ActorConfig;

pub struct CompatibilityActor {
    // Store configuration in both formats during transition
    legacy_config: Option<HashMap<String, serde_json::Value>>,
    actor_config: Option<ActorConfig>,
}

impl CompatibilityActor {
    pub fn new() -> Self {
        Self {
            legacy_config: None,
            actor_config: None,
        }
    }
    
    // Helper to get config value from either format
    fn get_config_value<T>(&self, key: &str) -> Option<T> 
    where
        T: serde::de::DeserializeOwned + Clone,
    {
        // Try new format first
        if let Some(config) = &self.actor_config {
            if let Ok(value) = config.get::<T>(key) {
                return Some(value);
            }
        }
        
        // Fall back to legacy format
        if let Some(legacy) = &self.legacy_config {
            if let Some(value) = legacy.get(key) {
                if let Ok(parsed) = serde_json::from_value::<T>(value.clone()) {
                    return Some(parsed);
                }
            }
        }
        
        None
    }
}

impl Actor for CompatibilityActor {
    // Support old method during transition
    fn set_config(&mut self, config: HashMap<String, serde_json::Value>) {
        self.legacy_config = Some(config);
    }
    
    // Implement new method
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        // Prefer the ActorConfig passed in; fall back to values captured
        // by the legacy set_config path
        let batch_size = config.get_number("batch_size")
            .or_else(|| self.get_config_value::<f64>("batch_size"))
            .unwrap_or(10.0) as usize;
        let timeout_ms = config.get_number("timeout_ms")
            .or_else(|| self.get_config_value::<f64>("timeout_ms"))
            .unwrap_or(5000.0) as u64;
        
        Box::pin(async move {
            // Actor implementation
        })
    }
}
}

Testing Migration

Unit Tests for Migrated Actors

#![allow(unused)]
fn main() {
#[cfg(test)]
mod migration_tests {
    use super::*;
    use reflow_network::actor::testing::TestActorConfig;
    
    #[tokio::test]
    async fn test_migrated_actor_with_legacy_values() {
        // Test that migrated actor works with old-style values
        let config = TestActorConfig::builder()
            .with_number("batch_size", 50.0)
            .with_number("timeout_ms", 10000.0)
            .with_boolean("enable_retry", false)
            .build();
        
        let actor = DataProcessor::new();
        
        // Should not panic with valid configuration
        let process = actor.create_process(config.into());
        
        // Test that process can be spawned
        let handle = tokio::spawn(process);
        
        // Clean shutdown for test
        tokio::time::sleep(Duration::from_millis(100)).await;
        handle.abort();
    }
    
    #[test]
    fn test_configuration_migration_helper() {
        let legacy_config = HashMap::from([
            ("batch_size".to_string(), serde_json::Value::Number(25.into())),
            ("timeout_ms".to_string(), serde_json::Value::Number(15000.into())),
        ]);
        
        let actor_config = ActorConfig::from_legacy_hashmap(legacy_config);
        
        assert_eq!(actor_config.get_number("batch_size"), Some(25.0));
        assert_eq!(actor_config.get_number("timeout_ms"), Some(15000.0));
    }
}
}

Common Migration Issues

Issue 1: Missing Configuration Values

Problem: Actor expects configuration values that aren't provided.

Solution: Use default values and graceful degradation:

#![allow(unused)]
fn main() {
// Before: Could panic
let batch_size = config.get("batch_size").unwrap().as_f64().unwrap() as usize;

// After: Graceful with defaults
let batch_size = config.get_number("batch_size").unwrap_or(10.0) as usize;
}

Issue 2: Type Conversion Errors

Problem: Configuration values have different types than expected.

Solution: Use explicit type checking and conversion:

#![allow(unused)]
fn main() {
// Robust type handling
let batch_size = match config.get_number("batch_size") {
    Some(size) if size > 0.0 => size as usize,
    Some(invalid) => {
        eprintln!("Invalid batch_size: {}, using default", invalid);
        10
    },
    None => {
        println!("No batch_size specified, using default");
        10
    }
};
}

Issue 3: State Management Migration

Problem: Actors with complex internal state need restructuring.

Solution: Move state into the process:

#![allow(unused)]
fn main() {
// Before: State as struct fields
struct StatefulActor {
    state: ProcessorState,
    config: Config,
}

// After: State managed in process
impl Actor for StatefulActor {
    fn create_process(&self, config: ActorConfig) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>> {
        Box::pin(async move {
            let mut state = ProcessorState::new();
            
            loop {
                // Process using local state
                // ...
            }
        })
    }
}
}

Migration Checklist

Pre-Migration

  • Identify all actors using set_config
  • Document current configuration formats
  • Create backup of existing graph files
  • Plan migration order (dependencies first)

During Migration

  • Update actor trait implementations
  • Migrate configuration extraction logic
  • Add typed configuration schemas (recommended)
  • Update graph file metadata format
  • Update actor registration code

Post-Migration

  • Test all actors with new configuration system
  • Verify graph loading and execution
  • Remove old set_config implementations
  • Update documentation and examples
  • Performance testing with new system

Validation

  • All actors receive expected configuration
  • Configuration validation works correctly
  • Default values are applied appropriately
  • Error handling for invalid configurations
  • Dynamic configuration updates (if used)

Performance Considerations

Before and After Performance

The new ActorConfig system provides better performance in several areas:

  1. Configuration Parsing: One-time parsing vs repeated HashMap lookups
  2. Type Safety: Compile-time validation reduces runtime errors
  3. Memory Usage: More efficient internal representation
  4. Validation: Built-in validation vs manual checking

Benchmarking Migration

// benchmark_migration.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_old_config(c: &mut Criterion) {
    let config = HashMap::from([
        ("batch_size".to_string(), serde_json::Value::Number(50.into())),
        ("timeout_ms".to_string(), serde_json::Value::Number(10000.into())),
    ]);
    
    c.bench_function("old_config_extraction", |b| b.iter(|| {
        let batch_size = black_box(config.get("batch_size")
            .and_then(|v| v.as_f64())
            .unwrap_or(10.0) as usize);
        let timeout = black_box(config.get("timeout_ms")
            .and_then(|v| v.as_f64())
            .unwrap_or(5000.0) as u64);
    }));
}

fn benchmark_new_config(c: &mut Criterion) {
    let config = ActorConfig::from_json(r#"
    {
        "batch_size": 50,
        "timeout_ms": 10000
    }
    "#).unwrap();
    
    c.bench_function("new_config_extraction", |b| b.iter(|| {
        let batch_size = black_box(config.get_number("batch_size").unwrap_or(10.0) as usize);
        let timeout = black_box(config.get_number("timeout_ms").unwrap_or(5000.0) as u64);
    }));
}

criterion_group!(benches, benchmark_old_config, benchmark_new_config);
criterion_main!(benches);

Next Steps

After completing the migration:

  1. Remove Legacy Code: Clean up old set_config implementations
  2. Add Validation: Implement typed configuration schemas for better validation
  3. Dynamic Configuration: Consider adding runtime configuration updates (see the sketch after this list)
  4. Documentation: Update all examples and documentation
  5. Monitoring: Add configuration monitoring and alerting
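
For item 3, one workable pattern is to route runtime changes through a tokio watch channel that the actor's process polls each iteration. This is illustrative wiring, not a built-in ActorConfig API:

#![allow(unused)]
fn main() {
use std::time::Duration;
use tokio::sync::watch;

// The process owns the receiving end and reads the latest batch size
// on every iteration; any holder of `tx` can update it at runtime.
let (tx, mut rx) = watch::channel(50usize);

tokio::spawn(async move {
    loop {
        let batch_size = *rx.borrow_and_update();
        // ... process one batch of `batch_size` items ...
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
});

// Elsewhere, apply a configuration change without restarting the actor:
tx.send(100)?;
}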

Getting Help

If you encounter issues during migration:

  1. Check Examples: Look at migrated examples in the documentation
  2. Configuration Validation: Use typed schemas to catch issues early
  3. Testing: Write comprehensive tests for migrated actors
  4. Community: Ask for help in the Reflow community forums
  5. GitHub Issues: Report bugs or ask for clarification

The migration to ActorConfig provides significant benefits in terms of type safety, validation, and maintainability. While it requires some initial effort, the improved developer experience and runtime reliability make it worthwhile.

Native Deployment

This guide covers deploying Reflow workflows as native applications on various platforms.

Overview

Native deployment provides:

  • Maximum performance - Direct OS integration
  • Resource efficiency - No containerization overhead
  • Platform integration - Native system services
  • Debugging capabilities - Full toolchain access

Deployment Options

Standalone Binary

Compile workflows into self-contained executables:

# Build optimized release binary
cargo build --release

# Binary includes all dependencies
./target/release/my-workflow

# Cross-compilation for different targets
cargo build --release --target x86_64-pc-windows-gnu
cargo build --release --target aarch64-apple-darwin

System Service

Deploy as a system service for automatic startup:

Linux (systemd)

Create /etc/systemd/system/reflow-workflow.service:

[Unit]
Description=Reflow Workflow Service
After=network.target
Wants=network.target

[Service]
Type=exec
User=reflow
Group=reflow
ExecStart=/opt/reflow/bin/my-workflow
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal

# Environment variables
Environment=RUST_LOG=info
Environment=REFLOW_CONFIG=/etc/reflow/config.toml

# Security settings
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/reflow /var/log/reflow

[Install]
WantedBy=multi-user.target

Enable and start the service:

sudo systemctl enable reflow-workflow
sudo systemctl start reflow-workflow
sudo systemctl status reflow-workflow

macOS (launchd)

Create /Library/LaunchDaemons/com.yourcompany.reflow.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.yourcompany.reflow</string>
    
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/my-workflow</string>
    </array>
    
    <key>RunAtLoad</key>
    <true/>
    
    <key>KeepAlive</key>
    <true/>
    
    <key>StandardOutPath</key>
    <string>/usr/local/var/log/reflow.log</string>
    
    <key>StandardErrorPath</key>
    <string>/usr/local/var/log/reflow.error.log</string>
    
    <key>EnvironmentVariables</key>
    <dict>
        <key>RUST_LOG</key>
        <string>info</string>
    </dict>
</dict>
</plist>

Load the service:

sudo launchctl load /Library/LaunchDaemons/com.yourcompany.reflow.plist

Windows Service

Using the windows-service crate:

Add the crate to Cargo.toml:

[dependencies]
windows-service = "0.6"

Then implement the service in src/main.rs:

use windows_service::{
    define_windows_service,
    service::ServiceControl,
    service_control_handler::{self, ServiceControlHandlerResult},
    service_dispatcher, Result,
};

define_windows_service!(ffi_service_main, my_service_main);

fn my_service_main(arguments: Vec<std::ffi::OsString>) {
    if let Err(_e) = run_service(arguments) {
        // Handle error
    }
}

fn run_service(_arguments: Vec<std::ffi::OsString>) -> Result<()> {
    let event_handler = move |control_event| -> ServiceControlHandlerResult {
        match control_event {
            ServiceControl::Stop => {
                // Stop the workflow
                ServiceControlHandlerResult::NoError
            }
            ServiceControl::Interrogate => ServiceControlHandlerResult::NoError,
            _ => ServiceControlHandlerResult::NotImplemented,
        }
    };

    let status_handle = service_control_handler::register("reflow", event_handler)?;

    // Start workflow
    start_workflow();

    Ok(())
}

fn main() -> Result<()> {
    // Hand control to the service dispatcher; it calls ffi_service_main
    // when the Windows service control manager starts the service.
    service_dispatcher::start("reflow", ffi_service_main)
}

Configuration Management

Configuration Files

Create hierarchical configuration:

# /etc/reflow/config.toml (system-wide)
[runtime]
thread_pool_size = 8
max_memory_mb = 1024

[logging]
level = "info"
output = "/var/log/reflow/app.log"

[network]
bind_address = "0.0.0.0:8080"

# ~/.config/reflow/config.toml (user-specific)
[runtime]
thread_pool_size = 4  # Override system setting

[development]
hot_reload = true
debug_mode = true

Environment Variables

Support environment variable overrides:

#![allow(unused)]
fn main() {
use config::{Config, Environment, File};

fn load_config() -> Result<AppConfig, config::ConfigError> {
    // Select the environment-specific file via an env var
    // (the REFLOW_ENV name is an assumption; use whatever your app defines)
    let env = std::env::var("REFLOW_ENV").unwrap_or_else(|_| "development".into());

    let settings = Config::builder()
        // Start with default values
        .add_source(File::with_name("config/default"))
        // Add environment-specific config
        .add_source(File::with_name(&format!("config/{}", env)).required(false))
        // Add local config
        .add_source(File::with_name("config/local").required(false))
        // Add environment variables with REFLOW_ prefix
        .add_source(Environment::with_prefix("REFLOW"))
        .build()?;

    settings.try_deserialize()
}

// Environment variables:
// REFLOW_RUNTIME__THREAD_POOL_SIZE=16
// REFLOW_LOGGING__LEVEL=debug
// REFLOW_NETWORK__BIND_ADDRESS=127.0.0.1:9090
}

Resource Management

Memory Configuration

#![allow(unused)]
fn main() {
use tikv_jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn configure_memory() {
    // Set memory limits
    std::env::set_var("MALLOC_CONF", "lg_dirty_mult:8,lg_muzzy_mult:8");
    
    // Configure actor memory limits
    let config = ActorSystemConfig {
        max_actors: 10000,
        max_memory_per_actor: 100 * 1024 * 1024, // 100MB
        gc_threshold: 0.8,
    };
}
}

File Descriptors

# Increase file descriptor limits
echo "reflow soft nofile 65536" >> /etc/security/limits.conf
echo "reflow hard nofile 65536" >> /etc/security/limits.conf

# For systemd services, set the limit under [Service] in the unit file:
#   LimitNOFILE=65536

CPU Affinity

Pin actors to specific CPU cores:

#![allow(unused)]
fn main() {
use core_affinity;

fn configure_cpu_affinity() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    
    // Pin high-priority actors to specific cores
    for (i, actor) in high_priority_actors.iter().enumerate() {
        let core_id = core_ids[i % core_ids.len()];
        
        tokio::spawn(async move {
            core_affinity::set_for_current(core_id);
            actor.run().await;
        });
    }
}
}

Monitoring and Observability

Logging Configuration

#![allow(unused)]
fn main() {
use tracing::{info, warn, error};
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

fn setup_logging() {
    // Daily-rotated file appender for persistent logs
    let file_appender = tracing_appender::rolling::daily("/var/log/reflow", "app.log");

    tracing_subscriber::registry()
        .with(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "reflow=info".into()),
        )
        // Human-readable output to stdout
        .with(tracing_subscriber::fmt::layer())
        // File output without ANSI escape codes
        .with(
            tracing_subscriber::fmt::layer()
                .with_ansi(false)
                .with_writer(file_appender),
        )
        .init();
}
}

Metrics Integration

#![allow(unused)]
fn main() {
use lazy_static::lazy_static;
use prometheus::{Encoder, TextEncoder, register_counter, register_histogram};

lazy_static! {
    static ref MESSAGES_PROCESSED: prometheus::Counter = register_counter!(
        "reflow_messages_processed_total",
        "Total number of messages processed"
    ).unwrap();
    
    static ref MESSAGE_PROCESSING_TIME: prometheus::Histogram = register_histogram!(
        "reflow_message_processing_seconds",
        "Time spent processing messages"
    ).unwrap();
}

// Expose metrics endpoint
async fn metrics_handler() -> impl warp::Reply {
    let encoder = TextEncoder::new();
    let metric_families = prometheus::gather();
    let mut buffer = Vec::new();
    encoder.encode(&metric_families, &mut buffer).unwrap();
    
    warp::reply::with_header(buffer, "content-type", "text/plain")
}
}

Health Checks

#![allow(unused)]
fn main() {
use serde::Serialize;
use warp::Filter;

#[derive(Serialize)]
struct HealthStatus {
    status: String,
    actors: usize,
    uptime: u64,
    memory_usage: u64,
}

async fn health_check() -> Result<impl warp::Reply, warp::Rejection> {
    let status = HealthStatus {
        status: "healthy".to_string(),
        actors: get_active_actor_count(),
        uptime: get_uptime_seconds(),
        memory_usage: get_memory_usage(),
    };
    
    Ok(warp::reply::json(&status))
}

let health = warp::path("health")
    .and(warp::get())
    .and_then(health_check);
}

Security Considerations

User Permissions

Run with minimal privileges:

# Create dedicated user
sudo useradd -r -s /bin/false reflow
sudo mkdir -p /var/lib/reflow /var/log/reflow
sudo chown reflow:reflow /var/lib/reflow /var/log/reflow

File System Sandboxing

#![allow(unused)]
fn main() {
use std::os::unix::fs::PermissionsExt;

fn setup_sandbox() -> Result<(), Box<dyn std::error::Error>> {
    // Create chroot environment
    let sandbox_dir = "/var/lib/reflow/sandbox";
    std::fs::create_dir_all(sandbox_dir)?;
    
    // Set restrictive permissions
    let mut perms = std::fs::metadata(sandbox_dir)?.permissions();
    perms.set_mode(0o700);
    std::fs::set_permissions(sandbox_dir, perms)?;
    
    // Change root directory (requires root privileges)
    // unsafe { libc::chroot(sandbox_dir.as_ptr()) };
    
    Ok(())
}
}

Network Security

#![allow(unused)]
fn main() {
use tokio::net::TcpListener;
use rustls::{Certificate, PrivateKey, ServerConfig};

async fn start_secure_server() -> Result<(), Box<dyn std::error::Error>> {
    // Load TLS certificates
    let certs = load_certs("cert.pem")?;
    let key = load_private_key("key.pem")?;
    
    let config = ServerConfig::builder()
        .with_safe_defaults()
        .with_no_client_auth()
        .with_single_cert(certs, key)?;
    
    let acceptor = tokio_rustls::TlsAcceptor::from(Arc::new(config));
    let listener = TcpListener::bind("0.0.0.0:8443").await?;
    
    while let Ok((stream, _)) = listener.accept().await {
        let acceptor = acceptor.clone();
        
        tokio::spawn(async move {
            if let Ok(tls_stream) = acceptor.accept(stream).await {
                handle_connection(tls_stream).await;
            }
        });
    }
    
    Ok(())
}
}

Performance Optimization

Profile-Guided Optimization

# Build with instrumentation
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" \
    cargo build --release

# Run with representative workload
./target/release/my-workflow --benchmark

# Rebuild with optimization data
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data" \
    cargo build --release

Pair PGO with link-time optimization in the release profile:

# Cargo.toml
[profile.release]
lto = true
codegen-units = 1
panic = "abort"

Memory Pool Configuration

#![allow(unused)]
fn main() {
use lazy_static::lazy_static;
use object_pool::Pool;

lazy_static! {
    static ref MESSAGE_POOL: Pool<HashMap<String, Message>> = Pool::new(1000, || {
        HashMap::with_capacity(16)
    });
}

fn get_message_buffer() -> object_pool::Reusable<HashMap<String, Message>> {
    MESSAGE_POOL.try_pull().unwrap_or_else(|| {
        MESSAGE_POOL.attach(HashMap::with_capacity(16))
    })
}
}

Deployment Scripts

Automated Deployment

#!/bin/bash
# deploy.sh

set -e

APP_NAME="reflow-workflow"
SERVICE_USER="reflow"
INSTALL_DIR="/opt/reflow"
CONFIG_DIR="/etc/reflow"
LOG_DIR="/var/log/reflow"

echo "Deploying $APP_NAME..."

# Stop existing service
sudo systemctl stop $APP_NAME || true

# Create directories
sudo mkdir -p $INSTALL_DIR/bin $CONFIG_DIR $LOG_DIR
sudo chown $SERVICE_USER:$SERVICE_USER $LOG_DIR

# Copy binary
sudo cp target/release/$APP_NAME $INSTALL_DIR/bin/
sudo chmod +x $INSTALL_DIR/bin/$APP_NAME

# Copy configuration
sudo cp config/production.toml $CONFIG_DIR/config.toml
sudo chown root:$SERVICE_USER $CONFIG_DIR/config.toml
sudo chmod 640 $CONFIG_DIR/config.toml

# Install service file
sudo cp scripts/$APP_NAME.service /etc/systemd/system/
sudo systemctl daemon-reload

# Start service
sudo systemctl enable $APP_NAME
sudo systemctl start $APP_NAME

echo "Deployment complete. Checking status..."
sudo systemctl status $APP_NAME

Rollback Script

#!/bin/bash
# rollback.sh

APP_NAME="reflow-workflow"
BACKUP_DIR="/opt/reflow/backups"

echo "Rolling back $APP_NAME..."

# Stop current service
sudo systemctl stop $APP_NAME

# Restore previous version
LATEST_BACKUP=$(ls -t $BACKUP_DIR/*.tar.gz | head -n1)
sudo tar -xzf $LATEST_BACKUP -C /opt/reflow/

# Restart service
sudo systemctl start $APP_NAME
sudo systemctl status $APP_NAME

echo "Rollback complete."

Troubleshooting

Common Issues

High Memory Usage:

# Check memory allocation
echo "Memory usage by process:"
ps aux --sort=-%mem | grep reflow

# Monitor real-time usage
top -p $(pgrep reflow)

# Check for memory leaks
valgrind --tool=memcheck --leak-check=full ./my-workflow

Performance Issues:

# Profile CPU usage
perf record -g ./my-workflow
perf report

# Check system resources
iostat -x 1
vmstat 1

File Descriptor Limits:

# Check current limits
ulimit -n

# Check process usage
lsof -p $(pgrep reflow) | wc -l

# Monitor file descriptor usage
watch -n 1 'ls /proc/$(pgrep reflow)/fd | wc -l'

Log Analysis

# Real-time log monitoring
tail -f /var/log/reflow/app.log

# Search for errors
grep -i error /var/log/reflow/app.log

# Analyze performance patterns
awk '/processing_time/ {sum += $3; count++} END {print "Average:", sum/count}' app.log

Best Practices

Deployment Checklist

  • Resource limits configured
  • Security permissions set
  • Monitoring enabled
  • Health checks implemented
  • Backup strategy defined
  • Rollback procedure tested
  • Documentation updated

Production Readiness

  1. Load Testing - Validate performance under expected load
  2. Failure Testing - Test recovery from various failure scenarios
  3. Security Audit - Review permissions and access controls
  4. Monitoring Setup - Ensure comprehensive observability
  5. Backup Verification - Test backup and restore procedures

Next Steps

Browser Deployment Guide

Learn how to deploy Reflow workflows in web browsers using WebAssembly (WASM) bindings.

Overview

Reflow provides complete WebAssembly bindings that allow you to run actor-based workflows directly in web browsers. This enables:

  • Interactive workflow editors with real-time visualization
  • Client-side data processing without server dependencies
  • Hybrid applications combining browser UI with Rust performance
  • Educational tools for learning workflow concepts

Quick Start

1. Build WASM Bindings

First, build the WebAssembly bindings using wasm-pack:

# Install wasm-pack if you haven't already
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh

# Navigate to the reflow_network crate
cd crates/reflow_network

# Build the WASM bindings for web
wasm-pack build --target web --out-dir pkg

This generates the pkg/ directory with:

  • reflow_network.js - JavaScript bindings
  • reflow_network.d.ts - TypeScript definitions
  • reflow_network_bg.wasm - WebAssembly binary

2. Create a Basic HTML Page

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Reflow Browser Example</title>
</head>
<body>
    <h1>Reflow in the Browser</h1>
    <button id="runWorkflow">Run Workflow</button>
    <div id="output"></div>

    <script type="module">
        import init, { 
            Network, 
            Graph, 
            MemoryState,
            init_panic_hook 
        } from './pkg/reflow_network.js';

        // Initialize WASM
        await init();
        init_panic_hook();

        // Your workflow code here
        console.log('Reflow WASM loaded successfully!');
    </script>
</body>
</html>

3. Serve with a Local Server

WASM requires files to be served via HTTP (not file://):

# Using Python 3
python -m http.server 8000

# Using Node.js
npx http-server -p 8000

# Using any other static file server

Navigate to http://localhost:8000 to view your application.

Core WASM API

Initialization

import init, { 
    Network, 
    Graph, 
    GraphNetwork,
    GraphHistory,
    MemoryState,
    BrowserActorContext,
    JsBrowserActor,
    ActorRunContext,
    init_panic_hook 
} from './pkg/reflow_network.js';

// Initialize WASM module
await init();

// Set up better error reporting
init_panic_hook();

Creating Actors for Browser

Actors in the browser use a simplified JavaScript interface:

class MyActor {
    constructor() {
        this.inports = ["input"];
        this.outports = ["output"];
        this.state = null; // Managed by WASM bridge
        this.config = { /* actor configuration */ };
    }

    /**
     * Main actor execution method
     * @param {ActorRunContext} context - Execution context
     */
    run(context) {
        // Access input data
        const inputData = context.input.input;
        
        // Read/write state
        const currentCount = context.state.get('count') || 0;
        context.state.set('count', currentCount + 1);
        
        // Process data
        const result = {
            processed: inputData,
            count: currentCount + 1,
            timestamp: Date.now()
        };
        
        // Send output
        context.send({ output: result });
    }
}

Graph Creation

Create graphs with visual positioning for browser-based editors:

// Create a new graph
const graph = new Graph("MyWorkflow", true, {
    description: "A browser-based workflow",
    version: "1.0.0"
});

// Add nodes with positioning
graph.addNode("generator", "GeneratorActor", {
    x: 100, y: 100,
    description: "Generates data"
});

graph.addNode("processor", "ProcessorActor", {
    x: 300, y: 100,
    description: "Processes data"
});

// Add connections
graph.addConnection("generator", "output", "processor", "input", {
    label: "Data flow",
    color: "#4CAF50"
});

// Add initial data
graph.addInitial({ start: true }, "generator", "trigger");

// Add graph-level ports
graph.addInport("start", "generator", "trigger", { type: "flow" });
graph.addOutport("results", "processor", "output", { type: "object" });

Network Composition

Create and run networks in the browser:

// Create network from graph
const network = new GraphNetwork(graph);

// Register actor implementations
network.registerActor("GeneratorActor", new GeneratorActor());
network.registerActor("ProcessorActor", new ProcessorActor());

// Set up event monitoring
let eventCount = 0;
network.next((event) => {
    eventCount++;
    console.log(`Event #${eventCount}:`, {
        type: event._type,
        actor: event.actorId,
        port: event.port,
        hasData: !!event.data
    });
});

// Start the network
await network.start();

// The network will now process data according to your graph

State Management

Share state between JavaScript and Rust:

// Create a memory state
const state = new MemoryState();

// Set values
state.set("counter", 42);
state.set("message", "Hello from JavaScript");
state.set("config", { enabled: true, level: "debug" });

// Get values
const counter = state.get("counter");
const message = state.get("message");

// Check existence
if (state.has("config")) {
    console.log("Config exists");
}

// Get all state as an object
const allState = state.getAll();

// Clear state
state.clear();

// Get state size
const size = state.size();

Advanced Features

Graph History with Undo/Redo

// Create graph with history support
const [graph, history] = Graph.withHistoryAndLimit(50);

// Make changes to the graph
graph.addNode("newNode", "MyActor", { x: 200, y: 200 });

// Process events to update history
history.processEvents(graph);

// Check if undo/redo is available
const state = history.getState();
console.log("Can undo:", state.can_undo);
console.log("Can redo:", state.can_redo);

// Perform undo/redo operations
if (state.can_undo) {
    history.undo(graph);
}

if (history.getState().can_redo) {
    history.redo(graph);
}

Real-time Event Monitoring

// Set up comprehensive event monitoring
network.next((event) => {
    switch (event._type) {
        case "FlowTrace":
            console.log(`Flow: ${event.from.actorId}:${event.from.port} → ${event.to.actorId}:${event.to.port}`);
            break;
        case "ActorStarted":
            console.log(`Actor started: ${event.actorId}`);
            break;
        case "ActorStopped":
            console.log(`Actor stopped: ${event.actorId}`);
            break;
        case "NetworkStarted":
            console.log("Network started");
            break;
        case "NetworkStopped":
            console.log("Network stopped");
            break;
        default:
            console.log("Other event:", event);
    }
});

Direct Actor Execution

Execute actors directly for testing:

// Execute an actor and get results
const result = await network.executeActor("myActor", {
    command: "process",
    data: { value: 100 }
});

console.log("Execution result:", result);

Deployment Considerations

1. File Serving

  • CORS: Ensure proper CORS headers if serving from different domains
  • MIME Types: Configure server to serve .wasm files with correct MIME type
  • Compression: Enable gzip/brotli compression for WASM files
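
If you want to keep the tooling in Rust, all three concerns can be covered by a small static file server. The sketch below is one way to do it, assuming the warp crate with its "compression" feature enabled; any static file server with equivalent settings works just as well (warp's file serving already maps .wasm to application/wasm):

use warp::Filter;

#[tokio::main]
async fn main() {
    // Serve the wasm-pack output directory.
    let pkg = warp::fs::dir("pkg")
        // Compress .wasm/.js responses on the fly.
        .with(warp::compression::gzip())
        // Permissive CORS, suitable for local development only.
        .with(warp::cors().allow_any_origin());

    warp::serve(pkg).run(([127, 0, 0, 1], 8000)).await;
}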

2. Bundle Size Optimization

# Build optimized release version
wasm-pack build --target web --release --out-dir pkg

# Further optimization with wee_alloc (add to Cargo.toml)
[dependencies]
wee_alloc = "0.4"

// In your lib.rs
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

3. Loading Strategies

// Lazy loading for large applications
async function loadReflowWhenNeeded() {
    const { default: init, Network } = await import('./pkg/reflow_network.js');
    await init();
    return { Network };
}

// Progressive loading with loading indicators
function showLoadingIndicator() {
    document.getElementById('loading').style.display = 'block';
}

function hideLoadingIndicator() {
    document.getElementById('loading').style.display = 'none';
}

showLoadingIndicator();
await init();
hideLoadingIndicator();

4. Error Handling

try {
    await init();
    init_panic_hook();
    
    // Your workflow code
    const network = new Network();
    await network.start();
    
} catch (error) {
    console.error("WASM initialization or execution failed:", error);
    
    // Show user-friendly error message
    document.getElementById('error').textContent = 
        "Failed to load workflow engine. Please refresh the page.";
}

Browser Compatibility

Supported Browsers

  • Chrome/Edge: 57+ (full WebAssembly support)
  • Firefox: 52+ (full WebAssembly support)
  • Safari: 11+ (full WebAssembly support)
  • Mobile: iOS 11+, Android Chrome 57+

Feature Detection

function checkWebAssemblySupport() {
    return typeof WebAssembly === 'object' 
        && typeof WebAssembly.instantiate === 'function';
}

if (!checkWebAssemblySupport()) {
    alert('Your browser does not support WebAssembly. Please update to a modern browser.');
}

Performance Tips

1. Memory Management

// Clean up resources when done
network.shutdown();

// Clear large state objects
state.clear();

// Avoid memory leaks in event listeners
const unsubscribe = network.next(handleEvent);
// Later: unsubscribe();

2. Batch Operations

// Batch multiple graph modifications
graph.addNode("node1", "Actor1", { x: 100, y: 100 });
graph.addNode("node2", "Actor2", { x: 200, y: 100 });
graph.addConnection("node1", "output", "node2", "input");

// Process all changes at once
history.processEvents(graph);

3. Efficient Data Passing

// Prefer structured data over strings
const efficientData = { 
    type: "sensor_reading",
    value: 42.5,
    timestamp: Date.now()
};

// Avoid large JSON strings
const inefficientData = JSON.stringify(largeObject);

Troubleshooting

Common Issues

  1. WASM Module Not Found

    • Ensure pkg/ directory exists and contains generated files
    • Check file paths in import statements
    • Verify files are served from the same origin
  2. CORS Errors

    • Use a local web server instead of opening files directly
    • Configure proper CORS headers if needed
  3. Import/Export Errors

    • Use a modern browser with ES6 module support
    • Check that all imports are correctly spelled
    • Ensure type="module" in script tags
  4. Network Startup Failures

    • Verify all actors are properly registered
    • Check browser console for detailed error messages
    • Ensure graph structure is valid before starting network

Debug Mode

// Enable debug logging
console.log("Network actors:", network.getActorNames());
console.log("Active actors:", network.getActiveActors());
console.log("Actor count:", network.getActorCount());

// Export graph for inspection
const graphData = graph.toJSON();
console.log("Graph structure:", JSON.stringify(graphData, null, 2));

Next Steps

The browser deployment of Reflow opens up exciting possibilities for client-side workflow automation, interactive data processing, and educational applications. Start with the examples above and explore the comprehensive API documentation for advanced usage.

Examples and Tutorials

This section provides practical examples and tutorials for building workflows with Reflow.

Quick Reference

Tutorials

Use Cases

Code Samples

Getting Started Examples

Hello World Workflow

The simplest possible workflow:

use std::collections::HashMap;

use reflow_network::Network;
use reflow_components::{utility::LoggerActor, data_operations::MapActor};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut network = Network::new();
    
    // Create a simple transformer
    let transformer = MapActor::new(|payload| {
        let mut result = HashMap::new();
        result.insert("message".to_string(), 
                     Message::string("Hello, World!"));
        Ok(result)
    });
    
    // Create a logger
    let logger = LoggerActor::new()
        .level(LogLevel::Info)
        .format(LogFormat::Pretty);
    
    // Add to network
    network.add_actor("transformer", Box::new(transformer)).await?;
    network.add_actor("logger", Box::new(logger)).await?;
    
    // Connect them
    network.connect("transformer", "output", "logger", "input").await?;
    
    // Start the network
    network.start().await?;
    
    Ok(())
}

Basic Data Processing

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use std::time::Duration;

use reflow_components::*;

async fn create_basic_pipeline() -> Result<Network, Box<dyn std::error::Error>> {
    let mut network = Network::new();
    
    // 1. Data source (HTTP endpoint)
    let source = integration::HttpRequestActor::new()
        .timeout(Duration::from_secs(30));
    
    // 2. Data validation
    let validator = data_operations::ValidatorActor::new()
        .add_rule("required", |v| !matches!(v, Message::Null))
        .add_rule("positive", |v| {
            if let Message::Integer(n) = v { *n > 0 } else { true }
        });
    
    // 3. Data transformation
    let transformer = data_operations::MapActor::new(|payload| {
        let mut result = HashMap::new();
        
        // Transform each field
        for (key, value) in payload {
            let transformed = match value {
                Message::String(s) => Message::String(s.to_uppercase()),
                Message::Integer(n) => Message::Integer(n * 2),
                other => other.clone(),
            };
            result.insert(format!("transformed_{}", key), transformed);
        }
        
        Ok(result)
    });
    
    // 4. Output logging
    let logger = utility::LoggerActor::new();
    
    // Build network
    network.add_actor("source", Box::new(source)).await?;
    network.add_actor("validator", Box::new(validator)).await?;
    network.add_actor("transformer", Box::new(transformer)).await?;
    network.add_actor("logger", Box::new(logger)).await?;
    
    // Connect pipeline
    network.connect("source", "output", "validator", "input").await?;
    network.connect("validator", "valid", "transformer", "input").await?;
    network.connect("transformer", "output", "logger", "input").await?;
    
    Ok(network)
}
}

JavaScript Integration

Deno Script Actor

// scripts/data_processor.js
function process(inputs, context) {
    const data = inputs.data;
    
    if (!Array.isArray(data)) {
        return { error: "Expected array input" };
    }
    
    // Process data
    const processed = data
        .filter(item => item.value > 0)
        .map(item => ({
            ...item,
            processed: true,
            timestamp: new Date().toISOString(),
            hash: calculateHash(item)
        }))
        .sort((a, b) => b.value - a.value);
    
    return {
        processed_data: processed,
        count: processed.length,
        max_value: processed[0]?.value || 0
    };
}

function calculateHash(item) {
    // Simple hash function
    return btoa(JSON.stringify(item)).slice(0, 8);
}

exports.process = process;

#![allow(unused)]
fn main() {
// Rust integration
use reflow_script::{ScriptActor, ScriptConfig, ScriptRuntime, ScriptEnvironment};

let script_config = ScriptConfig {
    environment: ScriptEnvironment::SYSTEM,
    runtime: ScriptRuntime::JavaScript,
    source: std::fs::read("scripts/data_processor.js")?,
    entry_point: "process".to_string(),
    packages: None,
};

let script_actor = ScriptActor::new(script_config);
}

Real-World Patterns

Error Handling with Retry

#![allow(unused)]
fn main() {
use std::collections::HashMap;
use std::time::Duration;

use reflow_components::{flow_control::ConditionalActor, utility::RetryActor};

async fn create_robust_pipeline() -> Result<Network, Box<dyn std::error::Error>> {
    let mut network = Network::new();
    
    // Main processor (might fail)
    let processor = data_operations::MapActor::new(|payload| {
        // Simulate occasional failures
        if payload.contains_key("trigger_error") {
            return Err(anyhow::anyhow!("Simulated processing error"));
        }
        
        // Normal processing
        Ok(payload.clone())
    });
    
    // Error detector
    let error_detector = ConditionalActor::new(|payload| {
        payload.contains_key("error")
    });
    
    // Retry actor
    let retry_actor = RetryActor::new()
        .max_attempts(3)
        .backoff_strategy(BackoffStrategy::Exponential)
        .base_delay(Duration::from_millis(100));
    
    // Success logger
    let success_logger = utility::LoggerActor::new()
        .level(LogLevel::Info);
    
    // Error logger
    let error_logger = utility::LoggerActor::new()
        .level(LogLevel::Error);
    
    // Build network
    network.add_actor("processor", Box::new(processor)).await?;
    network.add_actor("error_detector", Box::new(error_detector)).await?;
    network.add_actor("retry_actor", Box::new(retry_actor)).await?;
    network.add_actor("success_logger", Box::new(success_logger)).await?;
    network.add_actor("error_logger", Box::new(error_logger)).await?;
    
    // Connect main flow
    network.connect("processor", "output", "error_detector", "input").await?;
    network.connect("error_detector", "false", "success_logger", "input").await?;
    network.connect("error_detector", "true", "retry_actor", "input").await?;
    
    // Retry loop
    network.connect("retry_actor", "retry", "processor", "input").await?;
    network.connect("retry_actor", "failed", "error_logger", "input").await?;
    
    Ok(network)
}
}

High-Throughput Processing

#![allow(unused)]
fn main() {
use std::{collections::HashMap, thread, time::Duration};

use reflow_components::{flow_control::LoadBalancerActor, synchronization::BatchActor};

async fn create_high_throughput_pipeline() -> Result<Network, Box<dyn std::error::Error>> {
    let mut network = Network::new();
    
    // Input batching
    let batcher = BatchActor::new()
        .batch_size(100)
        .timeout(Duration::from_millis(50));
    
    // Load balancer
    let load_balancer = LoadBalancerActor::new()
        .strategy(LoadBalanceStrategy::RoundRobin)
        .worker_count(4);
    
    // Result aggregator
    let aggregator = data_operations::AggregateActor::new()
        .window_size(4) // Collect from all workers
        .timeout(Duration::from_secs(1))
        .aggregation_fn(|results| {
            // Combine results from all workers
            combine_worker_results(results)
        });
    
    // Register the fixed actors before wiring any connections
    network.add_actor("batcher", Box::new(batcher)).await?;
    network.add_actor("load_balancer", Box::new(load_balancer)).await?;
    network.add_actor("aggregator", Box::new(aggregator)).await?;
    
    // Worker actors (parallel processing)
    for i in 0..4 {
        let worker = data_operations::MapActor::new(|payload| {
            // CPU-intensive processing
            process_batch(payload)
        });
        
        network.add_actor(&format!("worker_{}", i), Box::new(worker)).await?;
        network.connect("load_balancer", &format!("output_{}", i),
                       &format!("worker_{}", i), "input").await?;
        network.connect(&format!("worker_{}", i), "output",
                       "aggregator", "input").await?;
    }
    
    network.connect("batcher", "output", "load_balancer", "input").await?;
    
    Ok(network)
}

fn process_batch(payload: &HashMap<String, Message>) -> Result<HashMap<String, Message>, anyhow::Error> {
    // Simulate CPU-intensive work
    thread::sleep(Duration::from_millis(10));
    Ok(payload.clone())
}

fn combine_worker_results(results: &[HashMap<String, Message>]) -> HashMap<String, Message> {
    let mut combined = HashMap::new();
    
    let total_processed = results.len() as i64;
    combined.insert("total_processed".to_string(), Message::Integer(total_processed));
    combined.insert("timestamp".to_string(), 
                   Message::String(chrono::Utc::now().to_rfc3339()));
    
    combined
}
}

Testing Workflows

Unit Testing

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
    use super::*;
    use tokio::time::{timeout, Duration};
    
    #[tokio::test]
    async fn test_data_pipeline() {
        let network = create_basic_pipeline().await.unwrap();
        
        // Send test data
        let test_data = HashMap::from([
            ("value".to_string(), Message::Integer(42)),
            ("name".to_string(), Message::String("test".to_string())),
        ]);
        
        // Get input port and send data
        let input_port = network.get_actor_input("source").unwrap();
        input_port.send_async(test_data).await.unwrap();
        
        // Wait for processing
        timeout(Duration::from_secs(5), async {
            // Check that data was processed
            // This would require network introspection capabilities
        }).await.unwrap();
    }
    
    #[tokio::test]
    async fn test_error_handling() {
        let network = create_robust_pipeline().await.unwrap();
        
        // Send data that triggers error
        let error_data = HashMap::from([
            ("trigger_error".to_string(), Message::Boolean(true)),
        ]);
        
        // Verify error handling works correctly
        // Implementation depends on network monitoring capabilities
    }
}
}

Integration Testing

#![allow(unused)]
fn main() {
use std::sync::Arc;
use tokio::sync::Mutex;

#[tokio::test]
async fn test_full_workflow_integration() {
    // Shared state for test validation
    let results = Arc::new(Mutex::new(Vec::new()));
    let results_clone = results.clone();
    
    // Create custom sink actor for testing
    let test_sink = TestSinkActor::new(move |payload| {
        let results = results_clone.clone();
        Box::pin(async move {
            let mut results_guard = results.lock().await;
            results_guard.push(payload.clone());
            Ok(HashMap::new())
        })
    });
    
    let mut network = Network::new();
    
    // Build test network
    let source = create_test_source();
    let processor = create_test_processor();
    
    network.add_actor("source", Box::new(source)).await.unwrap();
    network.add_actor("processor", Box::new(processor)).await.unwrap();
    network.add_actor("sink", Box::new(test_sink)).await.unwrap();
    
    network.connect("source", "output", "processor", "input").await.unwrap();
    network.connect("processor", "output", "sink", "input").await.unwrap();
    
    // Start network
    let handle = tokio::spawn(async move {
        network.start().await
    });
    
    // Send test data
    // ... implementation details
    
    // Wait and verify results
    tokio::time::sleep(Duration::from_secs(2)).await;
    
    let final_results = results.lock().await;
    assert!(!final_results.is_empty());
    assert_eq!(final_results.len(), 3); // Expected number of processed messages
    
    handle.abort();
}
}

Performance Examples

Benchmarking

#![allow(unused)]
fn main() {
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_message_processing(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();
    
    c.bench_function("process_1000_messages", |b| {
        b.iter(|| {
            rt.block_on(async {
                let network = create_high_throughput_pipeline().await.unwrap();
                
                // Send 1000 messages
                for i in 0..1000 {
                    let message = HashMap::from([
                        ("id".to_string(), Message::Integer(i)),
                        ("data".to_string(), Message::String(format!("data_{}", i))),
                    ]);
                    
                    // Send message
                    black_box(send_message(&network, message).await);
                }
                
                // Wait for completion
                wait_for_completion(&network).await;
            })
        })
    });
}

criterion_group!(benches, benchmark_message_processing);
criterion_main!(benches);
}

Memory Profiling

#![allow(unused)]
fn main() {
use std::time::Duration;

use memory_stats::memory_stats;

async fn profile_memory_usage() {
    let initial_memory = memory_stats().unwrap().physical_mem;
    println!("Initial memory: {} bytes", initial_memory);
    
    // Create large workflow
    let network = create_memory_intensive_workflow().await.unwrap();
    
    let after_creation = memory_stats().unwrap().physical_mem;
    println!("After creation: {} bytes", after_creation);
    println!("Creation overhead: {} bytes", after_creation - initial_memory);
    
    // Process data
    for batch in 0..10 {
        process_large_batch(&network, batch).await;
        
        let current_memory = memory_stats().unwrap().physical_mem;
        println!("After batch {}: {} bytes", batch, current_memory);
    }
    
    // Cleanup
    drop(network);
    tokio::time::sleep(Duration::from_secs(1)).await; // Give background tasks time to release memory
    
    let final_memory = memory_stats().unwrap().physical_mem;
    println!("Final memory: {} bytes", final_memory);
}
}

Configuration Examples

Environment-Based Configuration

# config/development.toml
[runtime]
thread_pool_size = 2
log_level = "debug"
hot_reload = true

[performance]
batch_size = 10
timeout_ms = 1000

[scripts]
enable_deno = true
enable_python = false

# config/production.toml
[runtime]
thread_pool_size = 16
log_level = "info"
hot_reload = false

[performance]
batch_size = 1000
timeout_ms = 5000

[scripts]
enable_deno = true
enable_python = true

#![allow(unused)]
fn main() {
// Configuration loading
use config::{Config, Environment, File};
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct AppConfig {
    runtime: RuntimeConfig,
    performance: PerformanceConfig,
    scripts: ScriptConfig,
}

fn load_configuration() -> Result<AppConfig, config::ConfigError> {
    let env = std::env::var("REFLOW_ENV").unwrap_or_else(|_| "development".into());
    
    let settings = Config::builder()
        .add_source(File::with_name("config/default"))
        .add_source(File::with_name(&format!("config/{}", env)).required(false))
        .add_source(File::with_name("config/local").required(false))
        .add_source(Environment::with_prefix("REFLOW").separator("__"))
        .build()?;
    
    settings.try_deserialize()
}
}

Deployment Examples

Docker Composition

# docker-compose.yml
version: '3.8'

services:
  reflow-app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - REFLOW_ENV=production
      - RUST_LOG=info
    volumes:
      - ./config:/app/config:ro
      - ./data:/app/data
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: reflow
      POSTGRES_USER: reflow
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    
  redis:
    image: redis:6-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

Kubernetes Deployment

# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reflow-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reflow-app
  template:
    metadata:
      labels:
        app: reflow-app
    spec:
      containers:
      - name: reflow-app
        image: reflow:latest
        ports:
        - containerPort: 8080
        env:
        - name: REFLOW_ENV
          value: "production"
        - name: RUST_LOG
          value: "info"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

Next Steps

Explore specific tutorials and use cases:

For more advanced topics:

API Reference

Complete API reference for Reflow components and systems.

Core APIs

Graph API

Actor API

Network API

Messaging API

Runtime APIs

JavaScript/Deno Runtime

Python Runtime

WebAssembly Runtime

Component APIs

Standard Library

Data Operations

Flow Control

Configuration Reference

Runtime Configuration

[runtime]
thread_pool_size = 8      # Number of worker threads
log_level = "info"        # Logging level: trace, debug, info, warn, error
hot_reload = false        # Enable hot reloading in development

[memory]
max_heap_size = "1GB"     # Maximum heap size
gc_frequency = 100        # Garbage collection frequency

Network Configuration

[network]
max_connections = 1000    # Maximum concurrent connections
timeout_ms = 5000        # Connection timeout in milliseconds
buffer_size = 8192       # Message buffer size

[websocket]
enable = true            # Enable WebSocket support
port = 8080             # WebSocket port
max_frame_size = 65536  # Maximum frame size

Script Configuration

[scripts.deno]
enable = true
allow_net = false        # Network access permission
allow_read = true        # File read permission
allow_write = false      # File write permission

[scripts.python]
enable = true
virtual_env = "venv"     # Virtual environment path
requirements = "requirements.txt"

[scripts.wasm]
enable = true
max_memory = "64MB"      # Maximum WASM memory
stack_size = "1MB"       # Stack size

Error Codes

Runtime Errors

  • E001 - Actor initialization failed
  • E002 - Message routing error
  • E003 - Network connection failed
  • E004 - Script execution error
  • E005 - Memory allocation failed

Graph Errors

  • G001 - Invalid graph structure
  • G002 - Cycle detected in graph
  • G003 - Port type mismatch
  • G004 - Orphaned node detected
  • G005 - Invalid connection

Component Errors

  • C001 - Component not found
  • C002 - Invalid component configuration
  • C003 - Component lifecycle error
  • C004 - Port compatibility error
  • C005 - Component execution timeout
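
When building log tooling or dashboards around these codes, it helps to map them back to human-readable descriptions. The helper below is purely illustrative (it is not part of the Reflow API) and simply restates the tables above:

fn describe_error_code(code: &str) -> &'static str {
    match code {
        "E001" => "Actor initialization failed",
        "E002" => "Message routing error",
        "E003" => "Network connection failed",
        "E004" => "Script execution error",
        "E005" => "Memory allocation failed",
        "G001" => "Invalid graph structure",
        "G002" => "Cycle detected in graph",
        "G003" => "Port type mismatch",
        "G004" => "Orphaned node detected",
        "G005" => "Invalid connection",
        "C001" => "Component not found",
        "C002" => "Invalid component configuration",
        "C003" => "Component lifecycle error",
        "C004" => "Port compatibility error",
        "C005" => "Component execution timeout",
        _ => "Unknown error code",
    }
}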

Type Definitions

Core Types

#![allow(unused)]
fn main() {
// Graph types
pub struct Graph {
    pub name: String,
    pub directed: bool,
    pub metadata: HashMap<String, Value>,
}

pub struct GraphNode {
    pub id: String,
    pub component: String,
    pub metadata: HashMap<String, Value>,
}

pub struct GraphConnection {
    pub from_node: String,
    pub from_port: String,
    pub to_node: String,
    pub to_port: String,
    pub metadata: Option<HashMap<String, Value>>,
}

// Message types
pub enum Message {
    Null,
    Boolean(bool),
    Integer(i64),
    Float(f64),
    String(String),
    Array(Vec<Message>),
    Object(HashMap<String, Message>),
    Binary(Vec<u8>),
}

// Actor types
pub trait Actor: Send + Sync {
    fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError>;
    fn get_input_ports(&self) -> Vec<PortDefinition>;
    fn get_output_ports(&self) -> Vec<PortDefinition>;
}
}

Configuration Types

#![allow(unused)]
fn main() {
pub struct RuntimeConfig {
    pub thread_pool_size: usize,
    pub log_level: String,
    pub hot_reload: bool,
}

pub struct NetworkConfig {
    pub max_connections: usize,
    pub timeout_ms: u64,
    pub buffer_size: usize,
}

pub struct ScriptConfig {
    pub runtime: ScriptRuntime,
    pub source: String,
    pub entry_point: String,
    pub permissions: ScriptPermissions,
}
}

WebAssembly Exports

Graph Management

// Create and manage graphs
const graph = new Graph("MyGraph", true, {});
graph.addNode("node1", "Component", {});
graph.addConnection("node1", "out", "node2", "in", {});

// Graph analysis
const validation = graph.validate();
const cycles = graph.detectCycles();
const layout = graph.calculateLayout();

Network Operations

// Network management
const network = new Network();
network.addActor("processor", processorActor);
network.connect("source", "output", "processor", "input");
await network.start();

Message Handling

// Message creation and handling
const message = Message.fromJson({"key": "value"});
const result = await actor.process({"input": message});

Environment Variables

Runtime Environment

  • REFLOW_LOG_LEVEL - Override logging level
  • REFLOW_THREAD_POOL_SIZE - Override thread pool size
  • REFLOW_CONFIG_PATH - Configuration file path

Development Environment

  • REFLOW_DEV_MODE - Enable development features
  • REFLOW_HOT_RELOAD - Enable hot reloading
  • REFLOW_DEBUG_ACTORS - Enable actor debugging

Production Environment

  • REFLOW_PRODUCTION - Enable production optimizations
  • REFLOW_METRICS_ENDPOINT - Metrics collection endpoint
  • REFLOW_HEALTH_CHECK_PORT - Health check port
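
As an illustration of how these overrides can be consumed, a startup routine might check the environment first and fall back to file-based defaults. This is a sketch only; the fallback values mirror the reference configuration above, not necessarily Reflow's built-in defaults:

use std::env;

// Environment overrides take precedence over config-file values.
fn effective_log_level() -> String {
    env::var("REFLOW_LOG_LEVEL").unwrap_or_else(|_| "info".to_string())
}

fn effective_thread_pool_size() -> usize {
    env::var("REFLOW_THREAD_POOL_SIZE")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(8) // matches thread_pool_size = 8 in the reference above
}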

Performance Considerations

Memory Management

  • Use memory pooling for frequently allocated objects
  • Configure appropriate garbage collection settings
  • Monitor memory usage with built-in profiling tools

Concurrency

  • Balance thread pool size with available CPU cores
  • Use async operations for I/O bound tasks
  • Implement backpressure for high-throughput scenarios (see the sketch below)
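
A bounded channel is the simplest way to get backpressure: once the queue is full, producers await instead of growing memory without bound. Below is a minimal tokio sketch; the String payload is a stand-in for whatever message type your actors exchange:

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Capacity 1024: send() suspends once the queue is full,
    // propagating backpressure to upstream producers.
    let (tx, mut rx) = mpsc::channel::<String>(1024);

    tokio::spawn(async move {
        for i in 0..10_000 {
            // Awaits whenever the consumer falls behind.
            tx.send(format!("item-{i}")).await.expect("receiver dropped");
        }
    });

    while let Some(item) = rx.recv().await {
        // Slow downstream processing happens here.
        drop(item);
    }
}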

Optimization

  • Enable compiler optimizations for production builds
  • Use profile-guided optimization when available
  • Monitor performance metrics and bottlenecks

Next Steps

Troubleshooting Guide

Common issues and solutions when working with Reflow.

Installation Issues

Rust Compilation Errors

Problem: Build fails with compiler errors

error[E0432]: unresolved import `reflow_network::Graph`

Solution:

  1. Ensure you have the latest Rust version (1.85+)
  2. Update dependencies: cargo update
  3. Clean build cache: cargo clean && cargo build

Problem: Missing system dependencies

error: linking with `cc` failed: exit status: 1

Solution:

  • Linux: Install build essentials: sudo apt-get install build-essential
  • macOS: Install Xcode command line tools: xcode-select --install
  • Windows: Install Visual Studio Build Tools

WebAssembly Build Issues

Problem: wasm-pack fails to build

Error: failed to execute `wasm-pack build`: No such file or directory

Solution:

  1. Install wasm-pack: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
  2. Add wasm target: rustup target add wasm32-unknown-unknown
  3. Verify installation: wasm-pack --version

Runtime Issues

Actor Initialization Failures

Problem: Actors fail to start

Error: Actor 'data_processor' failed to initialize: E001

Solutions:

  1. Check actor configuration:

    #![allow(unused)]
    fn main() {
    // Verify all required ports are defined
    fn get_input_ports(&self) -> Vec<PortDefinition> {
        vec![
            PortDefinition::new("input", PortType::Any),
        ]
    }
    }
  2. Validate actor state:

    #![allow(unused)]
    fn main() {
    // Ensure actor is in valid initial state
    impl Actor for MyActor {
        fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
            if !self.initialized {
                return Err(ActorError::NotInitialized);
            }
            // ... processing logic
        }
    }
    }
  3. Check dependencies:

    #![allow(unused)]
    fn main() {
    // Verify all dependencies are available
    impl MyActor {
        pub fn new() -> Result<Self, ActorError> {
            let dependency = SomeDependency::connect()
                .map_err(|_| ActorError::DependencyUnavailable)?;
            
            Ok(Self { dependency, initialized: true })
        }
    }
    }

Message Routing Errors

Problem: Messages not reaching destination actors

Warning: Message dropped - no route to 'processor.input'

Solutions:

  1. Verify connections:

    #![allow(unused)]
    fn main() {
    // Check connection exists
    network.connect("source", "output", "processor", "input").await?;
    
    // Verify actor and port names
    let actors = network.list_actors();
    println!("Available actors: {:?}", actors);
    }
  2. Check port compatibility:

    #![allow(unused)]
    fn main() {
    // Ensure port types match
    source_actor.get_output_ports(); // Returns Vec<PortDefinition>
    processor_actor.get_input_ports(); // Should have compatible types
    }
  3. Monitor message flow:

    #![allow(unused)]
    fn main() {
    network.enable_message_tracing(true);
    // Check logs for message routing information
    }

Memory Issues

Problem: Out of memory errors

Error: Memory allocation failed: E005

Solutions:

  1. Configure memory limits:

    [memory]
    max_heap_size = "2GB"
    gc_frequency = 50
    enable_memory_pooling = true
    
  2. Implement proper cleanup:

    #![allow(unused)]
    fn main() {
    impl Drop for MyActor {
        fn drop(&mut self) {
            // Clean up resources
            self.cleanup_connections();
            self.release_buffers();
        }
    }
    }
  3. Use memory profiling:

    #![allow(unused)]
    fn main() {
    use memory_stats::memory_stats;
    
    if let Some(usage) = memory_stats() {
        println!("Memory usage: {} bytes", usage.physical_mem);
    }
    }

Network Issues

Connection Timeouts

Problem: Network operations timeout

Error: Connection timeout after 5000ms: E003

Solutions:

  1. Increase timeout values:

    [network]
    timeout_ms = 30000  # Increase to 30 seconds
    
  2. Implement retry logic:

    #![allow(unused)]
    fn main() {
    use reflow_components::utility::RetryActor;
    
    let retry_actor = RetryActor::new()
        .max_attempts(3)
        .backoff_strategy(BackoffStrategy::Exponential)
        .base_delay(Duration::from_millis(100));
    }
  3. Check network connectivity:

    # Test connectivity
    curl -I http://your-endpoint
    ping your-server
    

WebSocket Connection Issues

Problem: WebSocket connections fail

Error: WebSocket connection failed: Connection refused

Solutions:

  1. Verify server configuration:

    [websocket]
    enable = true
    port = 8080
    bind_address = "0.0.0.0"
    
  2. Check firewall settings:

    # Linux
    sudo ufw allow 8080
    
    # macOS (reload packet filter rules after editing /etc/pf.conf)
    sudo pfctl -f /etc/pf.conf
    
  3. Test WebSocket endpoint:

    // Test in browser console
    const ws = new WebSocket('ws://localhost:8080');
    ws.onopen = () => console.log('Connected');
    ws.onerror = (error) => console.error('Error:', error);
    

Script Runtime Issues

Deno Permission Errors

Problem: Deno scripts fail due to permissions

Error: Requires read access to "./data", run again with --allow-read

Solutions:

  1. Configure permissions:

    [scripts.deno]
    allow_read = true
    allow_net = false
    allow_write = false
    
  2. Specify allowed paths:

    #![allow(unused)]
    fn main() {
    let config = ScriptConfig {
        runtime: ScriptRuntime::JavaScript,
        permissions: ScriptPermissions {
            allow_read: Some(vec!["./data".to_string(), "./config".to_string()]),
            allow_net: Some(vec!["api.example.com".to_string()]),
            allow_write: None,
        },
        ..Default::default()
    };
    }

Python Import Errors

Problem: Python modules not found

ModuleNotFoundError: No module named 'requests'

Solutions:

  1. Install dependencies:

    pip install -r requirements.txt
    
  2. Configure virtual environment:

    [scripts.python]
    virtual_env = "./venv"
    requirements = "requirements.txt"
    
  3. Verify Python path:

    import sys
    print(sys.path)
    

WebAssembly Module Loading

Problem: WASM modules fail to load

Error: Invalid WASM module format

Solutions:

  1. Verify WASM file:

    wasm-objdump -h module.wasm
    
  2. Check module exports:

    wasm-objdump -x module.wasm | grep Export
    
  3. Validate memory configuration:

    [scripts.wasm]
    max_memory = "64MB"
    stack_size = "1MB"
    

Graph Issues

Cycle Detection Errors

Problem: Graph contains cycles

Error: Cycle detected in graph: node1 -> node2 -> node3 -> node1

Solutions:

  1. Analyze graph structure:

    #![allow(unused)]
    fn main() {
    let analysis = graph.analyze_structure();
    if analysis.has_cycles {
        println!("Cycles found: {:?}", analysis.cycles);
    }
    }
  2. Remove problematic connections:

    #![allow(unused)]
    fn main() {
    // Remove cycle-causing connection
    graph.remove_connection("node3", "output", "node1", "input")?;
    }
  3. Implement cycle breaking:

    #![allow(unused)]
    fn main() {
    let cycles = graph.detect_cycles();
    for cycle in cycles {
        // Break cycle by removing weakest connection
        let weakest_connection = find_weakest_connection(&cycle);
        graph.remove_connection_by_id(&weakest_connection.id)?;
    }
    }

Port Type Mismatches

Problem: Incompatible port types

Error: Port type mismatch: Cannot connect String output to Integer input

Solutions:

  1. Add type conversion:

    #![allow(unused)]
    fn main() {
    use reflow_components::data_operations::ConverterActor;
    
    let converter = ConverterActor::new()
        .add_conversion(PortType::String, PortType::Integer, |value| {
            if let Message::String(s) = value {
                s.parse::<i64>().map(Message::Integer).ok()
            } else {
                None
            }
        });
    }
  2. Use flexible port types:

    #![allow(unused)]
    fn main() {
    PortDefinition::new("input", PortType::Any)
    }
  3. Implement custom validation:

    #![allow(unused)]
    fn main() {
    fn validate_connection(&self, output_type: &PortType, input_type: &PortType) -> bool {
        match (output_type, input_type) {
            (PortType::String, PortType::Integer) => true, // Allow with conversion
            (PortType::Any, _) => true,
            (_, PortType::Any) => true,
            (a, b) => a == b,
        }
    }
    }

Performance Issues

High CPU Usage

Problem: Actors consuming excessive CPU

Warning: Actor 'data_processor' CPU usage: 95%

Solutions:

  1. Profile actor performance:

    #![allow(unused)]
    fn main() {
    use std::time::Instant;
    
    impl Actor for MyActor {
        fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
            let start = Instant::now();
            
            // ... processing logic
            
            let duration = start.elapsed();
            if duration.as_millis() > 100 {
                log::warn!("Slow processing: {:?}", duration);
            }
            
            Ok(result)
        }
    }
    }
  2. Implement batching:

    #![allow(unused)]
    fn main() {
    use reflow_components::synchronization::BatchActor;
    
    let batcher = BatchActor::new()
        .batch_size(100)
        .timeout(Duration::from_millis(50));
    }
  3. Use async processing:

    #![allow(unused)]
    fn main() {
    async fn process_async(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
        // Use tokio::task::yield_now() to yield control
        tokio::task::yield_now().await;
        
        // CPU-intensive work
        let result = heavy_computation(inputs).await;
        
        Ok(result)
    }
    }

Memory Leaks

Problem: Memory usage continuously increases

Warning: Memory usage increased to 2.1GB (threshold: 2GB)

Solutions:

  1. Monitor memory allocation:

    #![allow(unused)]
    fn main() {
    use reflow_network::profiling::MemoryProfiler;
    
    let profiler = MemoryProfiler::new();
    profiler.start_monitoring();
    
    // ... run workflows
    
    let report = profiler.generate_report();
    println!("Memory hotspots: {:?}", report.hotspots);
    }
  2. Implement proper cleanup:

    #![allow(unused)]
    fn main() {
    impl Actor for MyActor {
        fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
            // Process inputs
            let result = self.do_processing(inputs)?;
            
            // Clean up temporary data
            self.cleanup_temp_data();
            
            Ok(result)
        }
    }
    }
  3. Use memory limits:

    #![allow(unused)]
    fn main() {
    let config = ActorConfig {
        memory_limit: Some(100 * 1024 * 1024), // 100MB limit
        ..Default::default()
    };
    }

Debugging Tips

Enable Debug Logging

[logging]
level = "debug"
targets = [
    "reflow_network=debug",
    "reflow_components=info",
    "my_app=trace"
]

Use Network Introspection

#![allow(unused)]
fn main() {
// Enable network monitoring
network.enable_monitoring(true);

// Get network statistics
let stats = network.get_statistics();
println!("Messages processed: {}", stats.total_messages);
println!("Average latency: {:?}", stats.average_latency);

// List active actors
let actors = network.list_active_actors();
for (id, status) in actors {
    println!("Actor {}: {:?}", id, status);
}
}

Actor State Inspection

#![allow(unused)]
fn main() {
// Enable actor introspection
actor.enable_introspection(true);

// Get actor state
let state = actor.get_internal_state();
println!("Actor state: {:?}", state);

// Monitor port activity
let port_stats = actor.get_port_statistics();
for (port, stats) in port_stats {
    println!("Port {}: {} messages", port, stats.message_count);
}
}

Graph Visualization

#![allow(unused)]
fn main() {
// Export graph for visualization
let dot_format = graph.export_dot();
std::fs::write("graph.dot", dot_format)?;

// Generate SVG visualization
// Use graphviz: dot -Tsvg graph.dot -o graph.svg
}

Common Error Patterns

Error E001: Actor Initialization Failed

  • Check actor dependencies
  • Verify configuration parameters
  • Ensure required resources are available

Error E002: Message Routing Error

  • Verify connection exists
  • Check port names and types
  • Ensure target actor is running

Error E003: Network Connection Failed

  • Check network connectivity
  • Verify server endpoints
  • Review firewall settings

Error G002: Cycle Detected

  • Analyze graph structure
  • Remove cycle-causing connections
  • Consider using async patterns

Error C004: Port Compatibility Error

  • Check port type definitions
  • Add type conversion actors
  • Use flexible port types

Getting Help

  1. Check logs: Enable debug logging for detailed information
  2. Use monitoring tools: Enable network and actor monitoring
  3. Review configuration: Verify all configuration parameters
  4. Test in isolation: Create minimal test cases
  5. Community support: Open issues on GitHub with detailed error information

Diagnostic Tools

Health Check Endpoint

#![allow(unused)]
fn main() {
// Add health check to your application
use warp::Filter;

let health = warp::path("health")
    .map(|| {
        let status = check_system_health();
        warp::reply::json(&status)
    });

warp::serve(health)
    .run(([127, 0, 0, 1], 8080))
    .await;
}

Metrics Collection

#![allow(unused)]
fn main() {
use prometheus::{Counter, Histogram, register_counter, register_histogram};

let message_counter = register_counter!("reflow_messages_total", "Total messages processed").unwrap();
let processing_time = register_histogram!("reflow_processing_duration_seconds", "Processing time").unwrap();

// In actor processing
message_counter.inc();
let _timer = processing_time.start_timer();
}

Performance Profiling

# CPU profiling
cargo install flamegraph
cargo flamegraph --bin reflow-app

# Memory profiling  
cargo install heaptrack
heaptrack target/release/reflow-app

For additional help, see:

Glossary

A

Actor : A fundamental unit of computation in Reflow that processes messages and can create other actors, send messages, or designate behavior for the next message.

Actor Model : A mathematical model of concurrent computation that treats actors as the universal primitives of concurrent computation.

API : Application Programming Interface - A set of protocols and tools for building software applications.

C

Component : A reusable unit of functionality in Reflow that can be connected to other components to build workflows.

Connection : A link between two components that allows data or control flow to pass from one component to another.

Concurrent Computation : Multiple computations executing at the same time, potentially interacting with each other.

D

Deno : A secure runtime for JavaScript and TypeScript built on V8, Chrome's JavaScript engine.

Deployment : The process of releasing and configuring a Reflow application to run in a production environment.

E

Edge : In graph terminology, a connection between two nodes (components) that represents data or control flow.

Event : A signal or message that indicates something has happened in the system.

F

Flow-Based Programming (FBP) : A programming paradigm that defines applications as networks of "black box" processes exchanging data across predefined connections.

G

Graph : A data structure consisting of nodes (vertices) and edges that represent the connections between them. In Reflow, graphs represent workflows.

GraphQL : A query language and runtime for APIs that provides a complete description of data in your API.

I

Inport : An input connection point on a component that receives data or control signals.

Initial Information Packet (IIP) : A data packet that provides initial values to component inputs at the start of execution.

M

Message : A unit of communication between actors containing data and metadata.

Message Passing : The primary means of communication between actors, where information is sent via discrete messages.

Metadata : Data that provides information about other data, such as component properties or connection details.

N

Node : In graph terminology, a vertex that represents a component or processing unit in a workflow.

Network : A collection of connected components that form a complete workflow or application.

O

Outport : An output connection point on a component that sends data or control signals to other components.

P

Port : A connection point on a component, either an inport (input) or outport (output).

Process : A running instance of a component that can receive and send messages.

Protocol : A set of rules that define how actors communicate with each other.

R

ReactFlow : A library for building node-based editors and interactive diagrams with React.

Runtime : The execution environment where Reflow applications run, including the JavaScript/Deno runtime.

S

Serialization : The process of converting data structures or objects into a format that can be stored or transmitted.

Subgraph : A subset of a larger graph that can be treated as a single component.

T

TypeScript : A strongly typed programming language that builds on JavaScript by adding static type definitions.

Trait : A characteristic or property that defines the behavior or type of a port or component.

V

Visual Editor : A graphical user interface that allows users to create and modify workflows by dragging and dropping components.

W

WebAssembly (WASM) : A binary instruction format for a stack-based virtual machine, designed to be fast and portable.

Workflow : A sequence of connected components that process data or perform tasks in a specific order.

Web Worker : A JavaScript API that allows web pages to run scripts in background threads separate from the main execution thread.

Contributing to Reflow

Thank you for your interest in contributing to Reflow! This document provides guidelines and information for contributors.

Getting Started

Prerequisites

Before contributing, ensure you have:

  • Rust (latest stable version)
  • Node.js (version 18 or higher)
  • Git for version control
  • mdBook for documentation (optional)

Setting Up the Development Environment

  1. Clone the repository:

    git clone https://github.com/offbit-ai/reflow.git
    cd reflow
    
  2. Install Rust dependencies:

    cargo build
    
  3. Install Node.js dependencies:

    cd examples/audio-flow
    npm install
    
  4. Run tests:

    cargo test
    

How to Contribute

Reporting Issues

  • Use the GitHub Issues page
  • Provide detailed information about the bug or feature request
  • Include relevant code examples and error messages
  • Search existing issues before creating new ones

Submitting Pull Requests

  1. Fork the repository and create a new branch
  2. Make your changes with clear, descriptive commits
  3. Add tests for new functionality
  4. Update documentation as needed
  5. Submit a pull request with a clear description

Code Style Guidelines

Rust Code

  • Follow the Rust Style Guide
  • Use cargo fmt to format code
  • Use cargo clippy to catch common mistakes
  • Write comprehensive documentation comments (///)

JavaScript/TypeScript Code

  • Use Prettier for formatting
  • Follow ESLint recommendations
  • Use meaningful variable and function names
  • Write JSDoc comments for public APIs

Documentation

  • Use clear, concise language
  • Include code examples where appropriate
  • Test all code examples to ensure they work
  • Follow the existing documentation structure

Development Workflow

Branch Naming Convention

  • feature/description - for new features
  • fix/description - for bug fixes
  • docs/description - for documentation updates
  • refactor/description - for code refactoring

Commit Message Format

type(scope): brief description

Detailed explanation of the change, if necessary.

Fixes #123

Types:

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation changes
  • style: Code style changes (formatting, etc.)
  • refactor: Code refactoring
  • test: Adding or updating tests
  • chore: Maintenance tasks

Testing

Unit Tests

cargo test

Integration Tests

cargo test --test integration

Documentation Tests

cargo test --doc

End-to-End Tests

cd examples/audio-flow
npm test

Documentation Guidelines

Writing Style

  • Use active voice
  • Write in present tense
  • Be clear and concise
  • Include practical examples
  • Explain the "why" not just the "how"

Code Examples

  • Test all code examples
  • Include imports and setup code
  • Show expected output where relevant
  • Use realistic, meaningful examples

API Documentation

  • Document all public functions and types
  • Include parameter descriptions
  • Provide return value information
  • Add usage examples

Community Guidelines

Code of Conduct

We are committed to providing a welcoming and inclusive environment for all contributors. Please:

  • Be respectful and considerate
  • Focus on constructive feedback
  • Help others learn and grow
  • Celebrate diverse perspectives

Communication Channels

  • GitHub Issues: Bug reports and feature requests
  • GitHub Discussions: General questions and community discussion
  • Discord: Real-time chat (invite link in README)

Release Process

Versioning

We follow Semantic Versioning:

  • MAJOR.MINOR.PATCH
  • Breaking changes increment MAJOR
  • New features increment MINOR
  • Bug fixes increment PATCH

Release Checklist

  1. Update version numbers in Cargo.toml
  2. Update CHANGELOG.md
  3. Run full test suite
  4. Update documentation
  5. Create GitHub release
  6. Publish to crates.io (maintainers only)

Architecture Guidelines

Actor Design Principles

When creating new actors (a sketch follows the list):

  • Single Responsibility: Each actor should have one clear purpose
  • Immutable Messages: Messages should be immutable data structures
  • Error Handling: Handle errors gracefully and provide meaningful messages
  • Documentation: Include comprehensive examples and usage guidelines
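
As a small, concrete illustration, here is an actor written against the Actor trait shown in the API reference. UppercaseActor and the InvalidInput error variant are hypothetical; the point is the shape, not the names:

use std::collections::HashMap;

// Single responsibility: uppercase one string input, nothing else.
struct UppercaseActor;

impl Actor for UppercaseActor {
    fn process(&mut self, inputs: HashMap<String, Message>) -> Result<HashMap<String, Message>, ActorError> {
        match inputs.get("input") {
            // Messages are treated as immutable: build a fresh output map.
            Some(Message::String(s)) => {
                let mut out = HashMap::new();
                out.insert("output".to_string(), Message::String(s.to_uppercase()));
                Ok(out)
            }
            // Fail with a meaningful error (InvalidInput is illustrative).
            _ => Err(ActorError::InvalidInput("expected a String on 'input'".into())),
        }
    }

    fn get_input_ports(&self) -> Vec<PortDefinition> {
        vec![PortDefinition::new("input", PortType::String)]
    }

    fn get_output_ports(&self) -> Vec<PortDefinition> {
        vec![PortDefinition::new("output", PortType::String)]
    }
}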

Component Design

For new components:

  • Composability: Components should work well with others
  • Configuration: Use clear, typed configuration options
  • Performance: Consider memory usage and execution speed
  • Testing: Include unit tests and integration tests

API Design

When designing APIs:

  • Consistency: Follow existing patterns and conventions
  • Type Safety: Use strong typing where possible
  • Documentation: Provide clear documentation and examples
  • Backwards Compatibility: Consider impact on existing users

Getting Help

If you need help with contributing:

  1. Check the documentation
  2. Search existing issues
  3. Ask in GitHub Discussions
  4. Join our Discord community

Recognition

Contributors are recognized in:

  • The project README
  • Release notes
  • Annual contributor reports

We appreciate all contributions, whether they're code, documentation, testing, or community support!

License

By contributing to Reflow, you agree that your contributions will be licensed under the same license as the project (see LICENSE file).