Pardon our dust! All this is still a work in progress.

Task

Computation nodes with automatic dependency tracking

A Task is a computation node that can depend on params or other tasks. Tasks are lazy: they only execute when you run a graph, and they’re memoized by default.

Creating a Task

import { task } from "@hello-terrain/work";

const double = task((get, work) => {
  const value = get(someParam);
  return work(() => value * 2);
});

task(compute) returns a Task<T> you can configure fluently (.cache(), .lane(), .displayName(), …) and then register with a graph.
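
For example, a task can be configured and registered end to end like this (a minimal sketch, assuming param and graph are exported from the same "@hello-terrain/work" entry point, as the later examples suggest):

import { task, param, graph } from "@hello-terrain/work";

const someParam = param(21);

const double = task((get, work) => {
  const value = get(someParam);
  return work(() => value * 2);
})
  .displayName("double")
  .lane("cpu")
  .cache("memo");

const g = graph();
g.add(double);

await g.run();
console.log(g.get(double)); // 42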

API Reference

task(compute, options?)

Creates a new task node.

| Parameter | Type | Description |
| --- | --- | --- |
| compute | (get, work, ctx) => T \| Promise<T> | The computation function |
| options | TaskOptions | Optional configuration |

Returns: Task<T>

Compute Function Parameters

The compute function receives three arguments:

get(ref)

Retrieves the value of a param or another task. This automatically registers a dependency.

const sum = task((get, work) => {
  const a = get(paramA);  // Depends on paramA
  const b = get(taskB);   // Depends on taskB
  return work(() => a + b);
});

All get() calls must happen before calling work(). Reading dependencies after starting work will throw an error.
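
For example, this sketch (reusing the basePrice and taxRate params from the examples further down) violates that rule and throws, while the corrected version reads everything up front:

// BUG: get() is called inside work(), after the work unit has started.
const broken = task((get, work) => {
  const base = get(basePrice);
  return work(() => {
    const rate = get(taxRate); // throws: dependencies must be read before work()
    return base * rate;
  });
});

// FIX: read every dependency first, then start the work unit.
const fixed = task((get, work) => {
  const base = get(basePrice);
  const rate = get(taxRate);
  return work(() => base * rate);
});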

work(fn)

Executes a “work unit”. This boundary ensures the dependencies are up to date when the callback is executed.

const processed = task((get, work) => {
  const data = get(rawData);
  return work(() => {
    // Heavy computation happens here
    return expensiveProcess(data);
  });
});

Any value returned from the compute function will be stored, but it is recommended to return work() with a callback so that your dependencies are not stale.

// This works fine, especially for tasks that read no dependencies or do only trivial work
const processed = task(async () => {
  const data = await someOperationNotRequiringDependencies();
  return data;
});

work() may be called at most once per task compute. Calling work() more than once will throw.
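
If a task needs to do several things, put them into a single work() callback (or split them into separate tasks). A sketch, reusing the input param from the cache examples below:

// BUG: work() is called twice; the second call throws.
const wrong = task((get, work) => {
  const v = get(input);
  work(() => v * 2);        // first work unit
  return work(() => v + 1); // throws: work() was already called
});

// FIX: do everything inside one work() callback.
const right = task((get, work) => {
  const v = get(input);
  return work(() => ({ doubled: v * 2, incremented: v + 1 }));
});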

ctx (TaskContext)

Provides execution context including lane info, abort signal, and timing utilities.

| Property | Type | Description |
| --- | --- | --- |
| lane | string | The lane this task is executing in |
| signal | AbortSignal | Abort signal for cancellation |
| now() | () => number | High-precision time in milliseconds |
| resources | unknown | User-supplied resources (if provided to run) |

const fetchData = task(async (get, work, ctx) => {
  const url = get(apiUrl);
  
  // Check for cancellation
  if (ctx.signal.aborted) {
    throw new Error("Cancelled");
  }
  
  return work(async () => {
    // Pass signal to fetch for automatic cancellation;
    // an abort also cancels propagation to downstream tasks
    const response = await fetch(url, { signal: ctx.signal });
    return response.json();
  });
});

Task Configuration

Tasks support fluent configuration methods:

displayName(name)

Sets a human-readable name for debugging and visualization.

const area = task((get, work) => {
  const w = get(width);
  const h = get(height);
  return work(() => w * h);
}).displayName("area");

lane(lane)

Assigns the task to a lane (a string tag).

If you pass laneConcurrency to graph.run(), the graph will enforce per-lane concurrency limits for that run. If laneConcurrency is omitted (or {}), lanes are still useful metadata (exposed via ctx.lane and task:start events), but tasks will not be throttled by lane.

const cpuBound = task((get, work) => {
  return work(() => heavyComputation());
}).lane("cpu");

const ioBound = task(async (get, work, ctx) => {
  return work(() => fetchFromNetwork(ctx.signal));
}).lane("io");
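
To actually throttle those lanes, pass laneConcurrency to graph.run(). A minimal sketch, assuming the option maps lane names to a maximum number of concurrently running tasks (check the graph documentation for the exact shape):

const g = graph();
g.add(cpuBound);
g.add(ioBound);

// Assumed shape: lane name → max concurrent tasks for this run.
// Without this option, lanes remain metadata only and nothing is throttled.
await g.run({
  laneConcurrency: { cpu: 1, io: 4 },
});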

cache(strategy)

Sets the caching strategy for the task.

| Strategy | Description |
| --- | --- |
| "memo" | Cache results; only recompute when dependencies change (default) |
| "once" | Compute once, then cache forever (ignores upstream changes) |
| "none" | Always recompute on every run |

// Default: memoized
const memoized = task((get, work) => {
  const v = get(input);
  return work(() => v * 2);
});

// Explicit memoization
const cached = task((get, work) => {
  const v = get(input);
  return work(() => v * 2);
}).cache("memo");

// Compute once and never again
const init = task((get, work) => {
  const config = get(configParam);
  return work(() => createExpensiveResource(config));
}).cache("once");

// Always recompute
const fresh = task((_get, work) => work(() => Date.now())).cache("none");

tags(tags)

Annotates the task with tags for filtering or debugging.

const critical = task((get, work) => {
  const v = get(input);
  return work(() => v);
}).tags(["critical", "user-facing"]);

Gotchas

Closure state is re-created on every run

The compute function passed to task() is re-invoked each time the task runs. Any let or mutable variable declared inside the compute closure is re-initialized on every execution. This means you cannot use local variables to persist state across runs.

// BUG: `connection` is re-declared as undefined on every run
const connectTask = task((get, work) => {
  const url = get(urlParam);
  let connection: Connection | undefined = undefined;
  return work(() => {
    if (!connection) {
      connection = createConnection(url); // runs EVERY time
    }
    return connection;
  });
});

If you need state that persists across runs, use the create / update pattern (see below) or move the variable to module scope (not recommended for multi-instance use).
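
For reference, the module-scope variant looks like the sketch below. It does persist the connection across runs, but the state is shared by every graph instance in the process and is never torn down, which is why the create / update pattern is usually the better choice:

// Module-scope state survives re-runs, but it is shared across ALL graph
// instances and never cleaned up. Prefer the create / update pattern below.
let connection: Connection | undefined;

const connectTask = task((get, work) => {
  const url = get(urlParam);
  return work(() => {
    if (!connection) {
      connection = createConnection(url); // runs only on the first execution
    }
    return connection;
  });
});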

The create / update pattern

When a task creates an object once and then updates it on subsequent runs, split it into two tasks:

  1. A create task with .cache("once") that allocates the resource.
  2. An update task with default "memo" cache that reads the resource via get() and mutates it.

// Creates the resource once; same reference is returned forever.
const createDbPool = task((get, work) => {
  const config = get(dbConfigParam);
  return work(() => new ConnectionPool(config));
})
  .displayName("createDbPool")
  .cache("once");

// Re-runs when params change; updates the existing pool.
const updateDbPool = task((get, work) => {
  const pool = get(createDbPool);       // always the same instance
  const maxConns = get(maxConnsParam);  // triggers re-run on change
  return work(() => {
    pool.setMaxConnections(maxConns);
    return pool;
  });
}).displayName("updateDbPool");

This keeps the resource identity stable while allowing reactive updates.

Stale references in cache("once") tasks

A cache("once") task runs exactly once and then ignores all upstream changes. If it captures a reference to an object that is later recreated by an upstream task, the cache("once") task will hold a stale reference.

This is especially important when tasks create GPU resources (buffers, shader nodes, uniforms) that are referenced by a downstream shader builder:

// BAD: bufferTask recreates the buffer when maxNodes changes,
// but shaderTask captured the ORIGINAL buffer and never re-runs.
const bufferTask = task((get, work) => {
  const maxNodes = get(maxNodesParam);
  return work(() => createGpuBuffer(maxNodes));
});

const shaderTask = task((get, work) => {
  const buffer = get(bufferTask);
  return work(() => buildShader(buffer)); // stale after resize!
}).cache("once");

Fix: Make the shader task use the default "memo" cache and depend on the correct upstream. It will only re-run when the buffer task produces new objects (e.g. on resize), not on every frame:

// bufferTask only re-runs when maxNodes changes → new buffer objects
const bufferTask = task((get, work) => {
  const maxNodes = get(maxNodesParam);
  return work(() => createGpuBuffer(maxNodes));
});

// dataWriteTask runs every frame but doesn't trigger shader rebuild
const dataWriteTask = task((get, work) => {
  const buffer = get(bufferTask);
  const data = get(dataSource);
  return work(() => writeToBuffer(buffer, data));
});

// shaderTask depends on bufferTask (not dataWriteTask) → rebuilds
// only when the buffer is resized, stays cached otherwise.
const shaderTask = task((get, work) => {
  const buffer = get(bufferTask);
  return work(() => buildShader(buffer));
});

Separate your creation and data-write dependencies

When a downstream task only needs a reference to an object (not its contents), depend on the task that creates the object, not the task that writes data into it. This prevents unnecessary re-computation:

maxNodesParam → bufferTask → shaderTask   (rebuilds on resize only)

dataSource → dataWriteTask                 (runs every frame, no shader rebuild)

If shaderTask depended on dataWriteTask instead, it would rebuild the shader every frame.

Examples

Task Dependencies

Tasks can depend on other tasks, forming a computation DAG:

const basePrice = param(100);
const quantity = param(3);
const taxRate = param(0.08);

const subtotal = task((get, work) => {
  const base = get(basePrice);
  const qty = get(quantity);
  return work(() => base * qty);
}).displayName("subtotal");

const tax = task((get, work) => {
  const s = get(subtotal);
  const r = get(taxRate);
  return work(() => s * r);
}).displayName("tax");

const total = task((get, work) => {
  const s = get(subtotal);
  const t = get(tax);
  return work(() => s + t);
}).displayName("total");

const g = graph();
g.add(subtotal);
g.add(tax);
g.add(total);

await g.run();
console.log(g.get(subtotal)); // 300
console.log(g.get(tax));      // 24
console.log(g.get(total));    // 324

Async Tasks

Tasks can be asynchronous:

const userId = param("user-123");

const userData = task(async (get, work, ctx) => {
  const id = get(userId);  
  return work(async () => {
    const response = await fetch(`/api/users/${id}`, {
      signal: ctx.signal,
    });
    
    if (!response.ok) {
      throw new Error(`Failed to fetch user: ${response.status}`);
    }
    return response.json()
  });
}).displayName("userData").lane("network");

const userName = task((get, work) => {
  const user = get(userData);
  return work(() => user.name);
}).displayName("userName");

Using the Work Function

The work() function isolates the actual computation from dependency resolution:

const processedImage = task((get, work) => {
  // Read all dependencies first
  const image = get(rawImage);
  const filters = get(filterSettings);
  const quality = get(outputQuality);
  
  // Then do the heavy work
  return work(() => {
    let result = image;
    for (const filter of filters) {
      result = applyFilter(result, filter);
    }
    return compress(result, quality);
  });
}).displayName("processedImage").lane("cpu");

With Resources

Pass shared resources to tasks via the run options:

interface Resources {
  db: Database;
  cache: CacheClient;
}

const userRecord = task(async (get, work, ctx) => {
  const id = get(userId);
  const { db, cache } = ctx.resources as Resources;

  return work(async () => {
    // Check cache first
    const cached = await cache.get(`user:${id}`);
    if (cached) return cached;
    
    // Fetch from database
    const record = await db.users.findById(id);
    await cache.set(`user:${id}`, record);
    return record;
  })
});

// Run with resources
await g.run({
  resources: { db, cache },
});

Task States

During execution, tasks transition through states:

| State | Description |
| --- | --- |
| idle | Not started or waiting to be scheduled |
| running | Currently executing |
| ready | Completed successfully; result is cached |
| error | Failed with an error |

Type Inference

Task output types are inferred from the compute function:

// Inferred as Task<number>
const count = task((get, work) => {
  const xs = get(items);
  return work(() => xs.length);
});

// Inferred as Task<User>
const user = task(async (_get, work) => {
  return work(async () => {
    const response = await fetch("/api/user");
    return response.json() as User
  });
});

// Explicit type (rarely needed)
const typed = task<string>((get, work) => {
  const v = get(value);
  return work(() => String(v));
});