The Context Object (ctx) is a crucial component in Blok that facilitates data sharing and state management across different Nodes within a single Workflow execution instance. It acts as a shared memory space, allowing Nodes to pass information to subsequent Nodes, access initial trigger data, and maintain state throughout the lifecycle of a workflow run.
When a Trigger initiates a Workflow, Blok creates a unique Context Object for that specific workflow instance. This object persists throughout the execution of that instance and is accessible to every Node within it.
Key characteristics of the Context Object:
- **Instance-Specific**: Each run of a workflow gets its own isolated Context Object. Data in one workflow instance does not interfere with another.
- **Shared State**: Nodes can read from and write to the Context Object, allowing them to share data and state dynamically.
- **Data Carrier**: It carries initial data from the trigger (e.g., HTTP request body, query parameters, message payload) and makes it available to all Nodes.
- **Configuration & Services**: It can also be used to provide access to configuration values, shared services, or utility functions that Nodes might need.
The example below shows how a Node's handle method uses the Context Object to read request data, consume a value left by a previous node, and publish its own result:

```typescript
import { type INanoServiceResponse, NanoService, NanoServiceResponse } from "@blok-ts/runner";
import { type Context, GlobalError } from "@blok-ts/shared";

type InputType = {
  message?: string;
};

export default class Node extends NanoService<InputType> {
  constructor() {
    super();
    this.inputSchema = {};
    this.outputSchema = {};
  }

  async handle(ctx: Context, inputs: InputType): Promise<INanoServiceResponse> {
    const response: NanoServiceResponse = new NanoServiceResponse();

    try {
      // 1. Get data from the request via context
      const userId = ctx.request.params.id;

      // 2. Get data set by a previous node
      const previousData = ctx.vars['previous-node-name'];

      // 3. Process the data
      const processedInfo = `Processed for ${userId} with ${previousData}`;

      // 4. Set data for subsequent nodes
      ctx.vars['current-node'] = processedInfo;

      // 5. Set the response data
      response.setSuccess({ message: inputs.message || processedInfo });
    } catch (error: unknown) {
      // Wrap the failure in a GlobalError so the workflow can handle it consistently
      const nodeError: GlobalError = new GlobalError((error as Error).message);
      nodeError.setCode(500);
      nodeError.setStack((error as Error).stack);
      nodeError.setName(this.name);
      nodeError.setJson(undefined);
      response.setError(nodeError);
    }

    return response;
  }
}
```
While the Context Object allows for shared mutable state, it's essential to be mindful of how data is modified. Some Blok implementations or best practices might encourage treating parts of the context (like initial trigger data) as immutable to prevent unintended side effects.
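For instance, rather than editing ctx.request.body in place, a Node can derive what it needs and store the result under its own key. This is a minimal sketch in the style of the example above; the customerId field and the order-summary key are illustrative assumptions, not Blok conventions.

```typescript
// Inside a Node's handle method (sketch)

// Treat the trigger data as read-only: read from ctx.request.body, but do not modify it.
const body = (ctx.request.body ?? {}) as Record<string, unknown>; // hypothetical payload shape
const customerId = typeof body.customerId === "string" ? body.customerId : "unknown";

// Publish the derived value under this node's own key; ctx.request stays untouched.
ctx.vars['order-summary'] = `Order received for customer ${customerId}`;
```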
Nodes typically read the data they need from the context (or from their inputs, which the workflow engine often populates from the context), perform their operations, and then write their results back to the context when subsequent nodes need to access them.
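A minimal sketch of that read–process–write pattern is shown below. It assumes an earlier node has stored a value under the key fetch-user; the GreetUser class and the key names are illustrative.

```typescript
import { type INanoServiceResponse, NanoService, NanoServiceResponse } from "@blok-ts/runner";
import { type Context, GlobalError } from "@blok-ts/shared";

type InputType = {
  userName?: string; // may be supplied directly as a node input by the workflow
};

export default class GreetUser extends NanoService<InputType> {
  constructor() {
    super();
    this.inputSchema = {};
    this.outputSchema = {};
  }

  async handle(ctx: Context, inputs: InputType): Promise<INanoServiceResponse> {
    const response: NanoServiceResponse = new NanoServiceResponse();

    try {
      // Read: prefer the explicit input, fall back to data a previous node left in the context.
      const fromContext = ctx.vars['fetch-user'];
      const userName = inputs.userName ?? (typeof fromContext === "string" ? fromContext : "guest");

      // Process: perform this node's own work.
      const greeting = `Hello, ${userName}!`;

      // Write: publish the result for any subsequent node that needs it.
      ctx.vars['greet-user'] = greeting;

      response.setSuccess({ message: greeting });
    } catch (error: unknown) {
      const nodeError: GlobalError = new GlobalError((error as Error).message);
      nodeError.setCode(500);
      response.setError(nodeError);
    }

    return response;
  }
}
```

If the workflow passes userName explicitly as a node input, the context fallback is never needed; for simple, linear flows that is usually the clearer option.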
Keep the following practices in mind when working with the Context Object:

- **Clear Naming Conventions**: Use consistent, descriptive keys when setting data in the context to avoid collisions and improve readability (e.g., `nodeName.outputKey`, `shared.config.apiKey`); see the sketch after this list.
- **Minimize Global State**: Although the context is shared, avoid treating it as a global dumping ground. Prefer passing data explicitly through Node inputs and outputs when the data flow is linear and simple.
- **Schema Awareness**: Know the data types you are storing and retrieving. The context itself is flexible, but Nodes often expect specific types as defined by their `inputSchema`.
- **Error Handling Data**: The context can also store error information when a Node fails, which error-handling Nodes or branches in the workflow can then pick up.
- **Security Considerations**: Be cautious about storing sensitive information in the context, especially if logs or full context dumps are generated for debugging. Sanitize sensitive values, or avoid storing them in raw form altogether.
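The sketch below illustrates the naming and sanitization points from this list; the dotted key names, the API_KEY environment variable, and the masking approach are illustrative assumptions rather than Blok conventions.

```typescript
// Inside a Node's handle method (sketch)

// Namespaced, descriptive keys make it obvious which node produced each value.
ctx.vars['fetch-user.displayName'] = "Ada Lovelace";
ctx.vars['shared.config.apiBaseUrl'] = "https://api.example.com";

// Avoid putting raw secrets into the context: store a masked form (or nothing at all).
const apiKey = process.env.API_KEY ?? ""; // hypothetical secret
ctx.vars['shared.config.apiKeyMasked'] = apiKey ? `${apiKey.slice(0, 4)}****` : "";
```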
In short:

✅ Nodes can dynamically read from and write to `ctx.response.data`.
✅ `ctx.request.params`, `ctx.request.body`, and `ctx.response` drive workflow execution.
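For example, a node placed after another step can read that step's output from ctx.response.data and build its own response from it. This is a sketch; the message field on the previous output is an assumption.

```typescript
// Inside a Node's handle method (sketch)
const response = new NanoServiceResponse();

// ctx.response.data carries the output of the previous node in the workflow.
const previous = (ctx.response.data ?? {}) as Record<string, unknown>; // hypothetical shape
const previousMessage = typeof previous.message === "string" ? previous.message : "no message";

// Use it to drive this node's own output.
response.setSuccess({ message: `Previous step said: ${previousMessage}` });
```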
The Context Object is a powerful mechanism that enables dynamic and flexible workflows in Blok. Understanding how to use it effectively is key to building robust, maintainable nanoservice applications.
Next, learn about what initiates workflows: Triggers.