This repo demonstrates the use of HotMesh in a JavaScript/TypeScript environment. The demos are structured to run like unit tests, and reveal the full lifecycle of a HotMesh transactional workflow.
The repo also includes a Dashboard/WebApp which surfaces all engines, workers, and workflows. The Web App provides a real-time view of the network's health and performance, linking to the OpenTelemetry dashboard for more detailed information. Refer to the meshdata directory for details on the namespaces and entities surfaced in the example dashboard.
- Quickstart
- Install
- Learn
- MeshCall
- MeshFlow
- MeshData
- HotMesh
- Run the Demos
- Visualize | Open Telemetry
- Visualize | Redis Insight
- Visualize | HotMesh Dashboard
- Docker (Desktop 4.24/Docker Compose 2.22.0)
- Node
- `npm run docker:up` - Build
- `npm run docker:reset` - Reset All Databases / Clear All Data
HotMesh works with SQL/Postgres. Postgres is included in the docker-compose file. External port mappings are:

- `5432` - Postgres
HotMesh works with any Redis-like backend. ValKey, DragonflyDB, and Redis are all included in the docker-compose. External Port mappings are:
- `6399` - Redis
- `6398` - ValKey
- `6397` - DragonflyDB
All demos will work with all DB variants except for the MeshData demo, which uses the Redis `FT.SEARCH` module (unsupported in ValKey). The demo will still successfully execute workflows, but they will not be searchable using `FT.SEARCH` commands.
The Dashboard Web App is available at http://localhost:3010. It provides a visual representation of the network, including the number of engines, workers, and workflows. It also provides a real-time view of the network's health and performance, linking to the OpenTelemetry dashboard for more detailed information.
An LLM is also included to simplify querying and analyzing workflow data for those deployments that include the Redis FT.SEARCH module.
All demos run against the Redis backend by default, but can be configured to target others. For example: `DEMO_DB=postgres npm run docker:demo:js:hotmesh howdy`.
Run from outside the Docker container.
- `npm run docker:demo:js:hotmesh howdy` - Run the HotMesh lifecycle example (JavaScript)
- `npm run docker:demo:js:meshcall` - Run the MeshCall lifecycle example (JavaScript)
- `npm run docker:demo:js:meshflow` - Run the MeshFlow lifecycle example (JavaScript)
- `npm run docker:demo:js:meshdata bronze silver gold` - Run the MeshData lifecycle example (JavaScript)
Run from outside the Docker container.
- `npm run docker:demo:ts:hotmesh howdy` - Run the HotMesh lifecycle example (TypeScript)
- `npm run docker:demo:ts:meshcall` - Run the MeshCall lifecycle example (TypeScript)
- `npm run docker:demo:ts:meshflow` - Run the MeshFlow lifecycle example (TypeScript)
- `npm run docker:demo:ts:meshdata bronze silver gold` - Run the MeshData lifecycle example (TypeScript)
```sh
npm install @hotmeshio/hotmesh
```

🏠 Home | 📄 SDK Docs | 💼 General Examples | 💼 Temporal Examples
MeshCall connects any function to the mesh
Run an idempotent cron job [more]
This example demonstrates an idempotent cron that runs daily at midnight. The `id` makes each cron job unique and ensures that only one instance runs, despite repeated invocations. The `cron` method returns `false` if a workflow is already running with the same `id`.

Optionally set a `delay` and/or `maxCycles` to limit the number of cycles. The `interval` can be any human-readable time format (e.g., `1 day`, `2 hours`, `30 minutes`, etc.) or a standard cron expression.
- Define the cron function.

  ```ts
  //cron.ts
  import { MeshCall } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  export const runMyCron = async (id: string, interval = '0 0 * * *'): Promise<boolean> => {
    return await MeshCall.cron({
      topic: 'my.cron.function',
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      },
      callback: async () => {
        //your code here...
      },
      options: { id, interval, maxCycles: 24 }
    });
  };
  ```
- Call `runMyCron` at server startup (or call as needed to run multiple crons).

  ```ts
  //server.ts
  import { runMyCron } from './cron';

  runMyCron('myNightlyCron123');
  ```
Interrupt a cron job [more]
This example demonstrates how to cancel a running cron job.
- Use the same `id` and `topic` that were used to create the cron to cancel it.

  ```ts
  import { MeshCall } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  MeshCall.interrupt({
    topic: 'my.cron.function',
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    options: { id: 'myNightlyCron123' }
  });
  ```
Call any function in any service [more]
Make interservice calls that behave like HTTP but without the setup and performance overhead. This example demonstrates how to connect and call a function.
- Call `MeshCall.connect` and provide a `topic` to uniquely identify the function.

  ```ts
  //myFunctionWrapper.ts
  import { MeshCall, Types } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  export const connectMyFunction = async () => {
    return await MeshCall.connect({
      topic: 'my.demo.function',
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      },
      callback: async (input: string) => {
        //your code goes here; response must be JSON serializable
        return { hello: input };
      },
    });
  };
  ```
- Call `connectMyFunction` at server startup to connect your function to the mesh.

  ```ts
  //server.ts
  import { connectMyFunction } from './myFunctionWrapper';

  connectMyFunction();
  ```
- Call your function from anywhere on the network (or even from the same service). Send any payload as long as it's JSON serializable.

  ```ts
  import { MeshCall } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  const result = await MeshCall.exec({
    topic: 'my.demo.function',
    args: ['something'],
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
  }); //returns `{ hello: 'something' }`
  ```
Call and cache a function [more]
This solution builds upon the previous example, caching the response. The linked function will only be re/called when the cached result expires. Everything remains the same, except the caller, which specifies an `id` and a `ttl`.
- Make the call from another service (or even the same service). Include an `id` and `ttl` to cache the result for the specified duration.

  ```ts
  import { MeshCall } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  const result = await MeshCall.exec({
    topic: 'my.demo.function',
    args: ['anything'],
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    options: { id: 'myid123', ttl: '15 minutes' },
  }); //returns `{ hello: 'anything' }`
  ```
- Flush the cache at any time, using the same `topic` and cache `id`.

  ```ts
  import { MeshCall } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  await MeshCall.flush({
    topic: 'my.demo.function',
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    options: { id: 'myid123' },
  });
  ```
MeshFlow is a serverless replacement for Temporal.io
Orchestrate unpredictable activities [more]
When an endpoint is unpredictable, use proxyActivities. HotMesh will retry as necessary until the call succeeds. This example demonstrates a workflow that greets a user in both English and Spanish. Even though both activities throw random errors, the workflow always returns a successful result.
- Start by defining activities. Note how each throws an error 50% of the time.

  ```ts
  //activities.ts
  export async function greet(name: string): Promise<string> {
    if (Math.random() > 0.5) throw new Error('Random error');
    return `Hello, ${name}!`;
  }

  export async function saludar(nombre: string): Promise<string> {
    if (Math.random() > 0.5) throw new Error('Random error');
    return `¡Hola, ${nombre}!`;
  }
  ```
- Define the workflow logic. Include conditional branching, loops, etc. to control activity execution. It's vanilla JavaScript written in your own coding style. The only requirement is to use `proxyActivities`, ensuring your activities are executed with HotMesh's durability wrapper.

  ```ts
  //workflows.ts
  import { workflow } from '@hotmeshio/hotmesh';
  import * as activities from './activities';

  const { greet, saludar } = workflow
    .proxyActivities<typeof activities>({ activities });

  export async function example(name: string): Promise<[string, string]> {
    return Promise.all([
      greet(name),
      saludar(name)
    ]);
  }
  ```
- Instance a HotMesh client to invoke the workflow.

  ```ts
  //client.ts
  import { Client, HotMesh } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  async function run(): Promise<[string, string]> {
    const client = new Client({
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      }
    });

    const handle = await client.workflow.start<[string, string]>({
      args: ['HotMesh'],
      taskQueue: 'default',
      workflowName: 'example',
      workflowId: HotMesh.guid()
    });

    return await handle.result();
    //returns ['Hello, HotMesh!', '¡Hola, HotMesh!']
  }
  ```
- Finally, create a worker and link the workflow function. Workers listen for tasks on their assigned task queue and invoke the workflow function each time they receive an event.

  ```ts
  //worker.ts
  import { Worker } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import * as workflows from './workflows';

  async function run() {
    const worker = await Worker.create({
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      },
      taskQueue: 'default',
      workflow: workflows.example,
    });

    await worker.run();
  }
  ```
Pause and wait for a signal [more]
Pause a function and only awaken it when a matching signal is received from the outside.
- Define the workflow logic. This one waits for the `my-sig-nal` signal, returning the signal payload (`{ hello: 'world' }`) when it eventually arrives. Interleave additional logic to meet your use case.

  ```ts
  //waitForWorkflow.ts
  import { workflow } from '@hotmeshio/hotmesh';

  export async function waitForExample(): Promise<{ hello: string }> {
    //continue processing, use the payload, etc...
    return await workflow.waitFor<{ hello: string }>('my-sig-nal');
  }
  ```
- Instance a HotMesh client and start a workflow. Use a custom workflow ID (`myWorkflow123`).

  ```ts
  //client.ts
  import { Client, HotMesh } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  async function run(): Promise<void> {
    const client = new Client({
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      }
    });

    //start a workflow; it will immediately pause
    await client.workflow.start({
      args: ['HotMesh'],
      taskQueue: 'default',
      workflowName: 'waitForExample',
      workflowId: 'myWorkflow123',
      await: false,
    });
  }
  ```
- Create a worker and link the `waitForExample` workflow function.

  ```ts
  //worker.ts
  import { Worker } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import * as workflows from './waitForWorkflow';

  async function run() {
    const worker = await Worker.create({
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      },
      taskQueue: 'default',
      workflow: workflows.waitForExample,
    });

    await worker.run();
  }
  ```
- Send a signal to awaken the paused function; await the function result.

  ```ts
  import { Client } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  const client = new Client({
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    }
  });

  //awaken the function by sending a signal
  await client.signal('my-sig-nal', { hello: 'world' });

  //get the workflow handle and await the result
  const handle = await client.getHandle({
    taskQueue: 'default',
    workflowId: 'myWorkflow123'
  });
  const result = await handle.result(); //returns { hello: 'world' }
  ```
Wait for multiple signals (collation) [more]
Use a standard Promise to collate and cache multiple signals. HotMesh awakens the function only once all signals have arrived, and it tracks up to 25 concurrent signals.
- Update the workflow logic to await two signals using a promise: `my-sig-nal-1` and `my-sig-nal-2`. Add additional logic to meet your use case.

  ```ts
  //waitForWorkflows.ts
  import { workflow } from '@hotmeshio/hotmesh';

  export async function waitForExample(): Promise<[boolean, number]> {
    const [s1, s2] = await Promise.all([
      workflow.waitFor<boolean>('my-sig-nal-1'),
      workflow.waitFor<number>('my-sig-nal-2')
    ]);
    //do something with the signal payloads (s1, s2)
    return [s1, s2];
  }
  ```
- Send two signals to awaken the paused function.

  ```ts
  import { Client } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  const client = new Client({
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    }
  });

  //send 2 signals to awaken the function; order is unimportant
  await client.signal('my-sig-nal-2', 12345);
  await client.signal('my-sig-nal-1', true);

  //get the workflow handle and await the collated result
  const handle = await client.getHandle({
    taskQueue: 'default',
    workflowId: 'myWorkflow123'
  });
  const result = await handle.result(); //returns [true, 12345]
  ```
Create a recurring, cyclical workflow [more]
This example calls an activity and then sleeps for a week. It runs indefinitely until it's manually stopped. It takes advantage of durable execution and can safely sleep for months or years.
Container restarts have no impact on actively executing workflows as all state is retained in the backend.
- Define the workflow logic. This one calls a legacy `statusDiagnostic` function once a week.

  ```ts
  //recurringWorkflow.ts
  import { workflow } from '@hotmeshio/hotmesh';
  import * as activities from './activities';

  const { statusDiagnostic } = workflow
    .proxyActivities<typeof activities>({ activities });

  export async function recurringExample(someValue: number): Promise<void> {
    do {
      await statusDiagnostic(someValue);
    } while (await workflow.sleepFor('1 week'));
  }
  ```
- Instance a HotMesh client and start a workflow. Assign a custom workflow ID (e.g., `myRecurring123`) if the workflow should be idempotent.

  ```ts
  //client.ts
  import { Client, HotMesh } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  async function run(): Promise<void> {
    const client = new Client({
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      }
    });

    //start the recurring workflow; it runs until interrupted
    await client.workflow.start({
      args: [55],
      taskQueue: 'default',
      workflowName: 'recurringExample',
      workflowId: 'myRecurring123',
      await: false,
    });
  }
  ```
- Create a worker and link the `recurringExample` workflow function.

  ```ts
  //worker.ts
  import { Worker } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import * as workflows from './recurringWorkflow';

  async function run() {
    const worker = await Worker.create({
      connection: {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      },
      taskQueue: 'default',
      workflow: workflows.recurringExample,
    });

    await worker.run();
  }
  ```
- Cancel the recurring workflow (`myRecurring123`) by calling `interrupt`.

  ```ts
  import { Client } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';

  const client = new Client({
    connection: {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    }
  });

  //get the workflow handle and interrupt it
  const handle = await client.getHandle({
    taskQueue: 'default',
    workflowId: 'myRecurring123'
  });
  const result = await handle.interrupt();
  ```
MeshData adds analytics to running workflows
Create a search index [more]
This example demonstrates how to define a schema and deploy an index for a 'user' entity type.
- Define the schema for the `user` entity. This one includes the 3 field types supported by the FT.SEARCH module: `TEXT`, `TAG` and `NUMERIC`.

  ```ts
  //schema.ts
  import { Types } from '@hotmeshio/hotmesh';

  export const schema: Types.WorkflowSearchOptions = {
    schema: {
      id: { type: 'TAG', sortable: false },
      first: { type: 'TEXT', sortable: false, nostem: true },
      active: { type: 'TAG', sortable: false },
      created: { type: 'NUMERIC', sortable: true },
    },
    index: 'user',
    prefix: ['user'],
  };
  ```
- Create the index upon server startup. This one initializes the 'user' index, using the schema defined in the previous step. It's OK to call `createSearchIndex` multiple times; it will only create the index if it doesn't already exist.

  ```ts
  //server.ts
  import { MeshData } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import { schema } from './schema';

  const meshData = new MeshData(
    {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    schema,
  );
  await meshData.createSearchIndex('user', { namespace: 'meshdata' });
  ```
Create an indexed, searchable record [more]
This example demonstrates how to create a 'user' workflow backed by the searchable schema from the prior example.
- Call MeshData `connect` to initialize a 'user' entity worker. It references a target worker function which will run the workflow. Data fields that are documented in the schema (like `active`) will be automatically indexed when set on the workflow record.

  ```ts
  //connect.ts
  import { MeshData } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import { schema } from './schema';

  export const connectUserWorker = async (): Promise<void> => {
    const meshData = new MeshData(
      {
        class: Postgres,
        options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
      },
      schema,
    );
    await meshData.connect({
      entity: 'user',
      target: async function (name: string): Promise<string> {
        //add custom, searchable data (`active`) and return
        const search = await MeshData.workflow.search();
        await search.set('active', 'yes');
        return `Welcome, ${name}.`;
      },
      options: { namespace: 'meshdata' },
    });
  };
  ```
- Wire up the worker at server startup, so it's ready to process incoming requests.

  ```ts
  //server.ts
  import { connectUserWorker } from './connect';

  await connectUserWorker();
  ```
- Call MeshData `exec` to create a 'user' workflow. Searchable data can be set throughout the workflow's lifecycle. This one initializes the workflow with 3 data fields: `id`, `name` and `timestamp`. An additional data field (`active`) is set within the workflow function in order to demonstrate both mechanisms for reading/writing data to a workflow.

  ```ts
  //exec.ts
  import { MeshData } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import { schema } from './schema';

  const meshData = new MeshData(
    {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    schema,
  );

  export const newUser = async (id: string, name: string): Promise<string> => {
    const response = await meshData.exec({
      entity: 'user',
      args: [name],
      options: {
        ttl: 'infinity',
        id,
        search: {
          data: { id, name, timestamp: Date.now() }
        },
        namespace: 'meshdata',
      },
    });
    return response;
  };
  ```
- Call the `newUser` function to create a searchable 'user' record.

  ```ts
  import { newUser } from './exec';

  const response = await newUser('jim123', 'James');
  ```
Fetch record data [more]
This example demonstrates how to read data fields directly from a workflow.
- Read data fields directly from the 'jim123' user record.

  ```ts
  //read.ts
  import { MeshData } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import { schema } from './schema';

  const meshData = new MeshData(
    {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    schema,
  );

  const data = await meshData.get(
    'user',
    'jim123',
    {
      fields: ['id', 'name', 'timestamp', 'active'],
      namespace: 'meshdata'
    },
  );
  ```
Search record data [more]
This example demonstrates how to search for those workflows where a given condition exists in the data. This one searches for active users. NOTE: The native Redis FT.SEARCH syntax and SQL are currently supported. The JSON abstraction shown here is a convenience method for straightforward, one-dimensional queries.
- Search for active users (where the value of the `active` field is `yes`).

  ```ts
  //read.ts
  import { MeshData } from '@hotmeshio/hotmesh';
  import { Client as Postgres } from 'pg';
  import { schema } from './schema';

  const meshData = new MeshData(
    {
      class: Postgres,
      options: { connectionString: 'postgresql://usr:pwd@localhost:5432/db' }
    },
    schema,
  );

  const results = await meshData.findWhere('user', {
    query: [{ field: 'active', is: '=', value: 'yes' }],
    limit: { start: 0, size: 100 },
    return: ['id', 'name', 'timestamp', 'active']
  });
  ```
HotMesh is pluggable and ships with support for Postgres (pg) and Redis (ioredis/redis). NATS is also supported for extended PubSub (patterned subscriptions).
Postgres [more]
```ts
import { Client as PostgresClient } from 'pg';

//provide these credentials to HotMesh
const connection = {
  class: PostgresClient,
  options: {
    connectionString: 'postgresql://usr:pwd@localhost:5432/db'
  }
};
```

Pool connections are recommended for high-throughput applications; the pool manages a fixed set of connections and reuses them across calls.
```ts
import { Pool as PostgresPool } from 'pg';

const PostgresPoolClient = new PostgresPool({
  connectionString: 'postgresql://usr:pwd@localhost:5432/db',
  max: 20,
});

//provide these credentials to HotMesh
const connection = {
  class: PostgresPoolClient,
  options: {}
};
```

Redis [more]
```ts
import * as Redis from 'redis';
//OR `import Redis from 'ioredis';`

const connection = {
  class: Redis,
  options: {
    url: 'redis://:your_password@localhost:6379',
  }
};
```

NATS [more]
Add NATS for improved PubSub support, including patterned subscriptions. Note the explicit channel subscription in the example below. The NATS provider supports version 2.0 of the NATS client (the latest version). See ./package.json for details.
```ts
import { Client as Postgres } from 'pg';
import { connect as NATS } from 'nats';

const connection = {
  store: {
    class: Postgres,
    options: {
      connectionString: 'postgresql://usr:pwd@localhost:5432/db',
    }
  },
  stream: {
    class: Postgres,
    options: {
      connectionString: 'postgresql://usr:pwd@localhost:5432/db',
    }
  },
  sub: {
    class: NATS,
    options: { servers: ['nats:4222'] }
  },
};
```

HotMesh is a distributed modeling and orchestration system capable of encapsulating existing systems, such as Business Process Management (BPM) and Enterprise Application Integration (EAI). The central innovation is its ability to compile its models into Distributed Executables, replacing a traditional Application Server with a network of Decentralized Message Routers.
The following depicts the mechanics of the approach and describes what is essentially a sequence engine. Time is an event source in the system, while sequence is the final arbiter. With the app server out of the way, the database behaves like a balloon, flexibly expanding and deflating as it adjusts to evolving workloads. The system promises at-least-once delivery and at-most-once processing.
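The delivery guarantee above can be made concrete with a self-contained sketch: a stream may deliver the same message more than once (at-least-once delivery), so the consumer journals processed message ids and skips duplicates (at-most-once processing). The queue, ids, and payloads below are invented for illustration; this is not HotMesh's implementation.

```ts
// Minimal sketch of at-least-once delivery with at-most-once processing.
type Message = { id: string; body: string };

// The "stream" redelivers msg-1, simulating an ack that was lost in transit.
const deliveries: Message[] = [
  { id: 'msg-1', body: 'charge card' },
  { id: 'msg-1', body: 'charge card' }, // redelivery
  { id: 'msg-2', body: 'send receipt' },
];

const processed = new Set<string>(); // journal of completed message ids
const effects: string[] = [];        // side effects actually executed

for (const msg of deliveries) {
  if (processed.has(msg.id)) continue; // duplicate: skip, don't re-execute
  effects.push(msg.body);              // execute the side effect once
  processed.add(msg.id);               // journal completion
}

console.log(effects); // ['charge card', 'send receipt']
```

The journal is what converts the redundant delivery into a single processing event; in a real system it must be persisted atomically with the side effect.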
The modeling system is based on a canonical set of 9 message types (and corresponding transitions) that guarantee the coordinated flow in the absence of a central controller.
Here, for example, is the worker activity type. It's reentrant (most activities are), which allows your linked functions to emit in-process messages as they complete a task. The messaging system goes beyond basic request/response and fan-out/fan-in patterns, providing real-time progress updates.
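The progress-update idea can be sketched generically: while completing a task, a worker emits interim messages in addition to its final response. The `emit` callback and function names below are invented stand-ins for illustration, not the HotMesh API.

```ts
// Generic sketch of a worker that reports progress while it runs.
// `emit` is an invented stand-in for in-process messaging.
type Emit = (update: { progress: number; note: string }) => void;

function resizeImages(count: number, emit: Emit): string {
  for (let i = 1; i <= count; i++) {
    // ...do one unit of work here...
    emit({ progress: Math.round((i / count) * 100), note: `image ${i} done` });
  }
  return `resized ${count} images`; // final response
}

// The caller collects interim updates in addition to the final result.
const updates: number[] = [];
const result = resizeImages(3, (u) => updates.push(u.progress));

console.log(updates); // [33, 67, 100]
console.log(result);  // 'resized 3 images'
```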
Process orchestration is emergent within HotMesh and occurs naturally as a result of processing stream messages. While the reference implementation targets Redis+TypeScript, any language (Rust, Go, Python, Java) and multimodal database (ValKey, DragonFly, etc) can take advantage of the sequence engine design pattern.
HotMesh is designed as a distributed quorum of engines where each member adheres to the principles of CQRS. According to CQRS, consumers are instructed to read events from assigned topic queues while producers write to said queues. This division of labor is essential to the smooth running of the system. HotMesh leverages this principle to drive the perpetual behavior of engines and workers (along with other advantages described here).
As long as their assigned topic queue has items, consumers will read exactly one item and then journal the result to another queue. And as long as all consumers (engines and workers) adhere to this principle, sophisticated workflows emerge.
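The consumer contract described above can be sketched with plain in-memory queues: each consumer reads exactly one item from its assigned topic queue and journals the result to another queue, and chaining two such consumers yields a two-step flow with no central controller. Topic names and payloads below are invented for illustration.

```ts
// Minimal sketch of the CQRS loop: read one item, journal the result downstream.
const queues: Record<string, string[]> = {
  'topic.a': ['task-1', 'task-2'],
  'topic.b': [],
  'topic.done': [],
};

// A consumer bound to an input topic; it writes results to an output topic.
const makeConsumer = (from: string, to: string, work: (s: string) => string) =>
  (): boolean => {
    const item = queues[from].shift(); // read exactly one item
    if (item === undefined) return false;
    queues[to].push(work(item));      // journal the result downstream
    return true;
  };

const stepA = makeConsumer('topic.a', 'topic.b', (s) => `${s}:processed`);
const stepB = makeConsumer('topic.b', 'topic.done', (s) => `${s}:finalized`);

// As long as the queues have items, consumers keep taking one at a time.
while (stepA() || stepB()) { /* loop until both queues drain */ }

console.log(queues['topic.done']);
// ['task-1:processed:finalized', 'task-2:processed:finalized']
```

In HotMesh the queues are durable streams in the backing database rather than in-memory arrays, which is why the same loop survives process restarts.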
The following YAML represents a HotMesh workflow; it includes a trigger and a linked worker function. The Sequence Engine will step through these instructions one step at a time once deployed.
```yaml
# myfirstapp.1.yaml
app:
  id: myfirstapp
  version: '1'
  graphs:
    - subscribes: myfirstapp.test

      activities:
        t1:
          type: trigger
        w1:
          type: worker
          topic: work.do

      transitions:
        t1:
          - to: w1
```

The `work.do` topic identifies the worker function to execute. This name is arbitrary and should match the semantics of your use case or your own personal preference.
Call `HotMesh.init` to register a worker and engine. As HotMesh is a distributed orchestration platform, initializing a point of presence like this adds another distributed node. If spare CPU is available on the host machine, the engine role will be called upon to coordinate the overall workflow. Similarly, invoking the linked worker function involves the worker role.
```ts
import * as Redis from 'redis';
import { HotMesh, Types } from '@hotmeshio/hotmesh';

const hotMesh = await HotMesh.init({
  appId: 'myfirstapp',

  engine: {
    connection: {
      class: Redis,
      options: { url: 'redis://:key_admin@redis:6379' }
    }
  },

  workers: [
    {
      topic: 'work.do',
      connection: {
        class: Redis,
        options: { url: 'redis://:key_admin@redis:6379' }
      },
      callback: async (data: Types.StreamData) => {
        return {
          metadata: { ...data.metadata },
          data: {} //optionally process inputs, return output
        };
      }
    }
  ]
});

await hotMesh.deploy('./myfirstapp.1.yaml');
await hotMesh.activate('1');

const response = await hotMesh.pubsub('myfirstapp.test', {});
//returns {}
```

Redis is the default test runner for the demos, but you can specify any target backend (postgres, valkey, redis, dragonfly). For example, `DEMO_DB=postgres npm run demo:js:hotmesh greetings`.
This demo deploys a custom YAML model and a linked worker function to demonstrate the full end-to-end lifecycle. From within the container:
```sh
npm run demo:js:hotmesh greetings
```

From outside the container:

```sh
npm run docker:demo:js:hotmesh greetings
```

This demo shows interservice function calls with caching support. From within the container:

```sh
npm run demo:js:meshcall
```

From outside the container:

```sh
npm run docker:demo:js:meshcall
```

This demo shows a basic durable workflow with support for retry. It throws errors 50% of the time and eventually recovers, logging the result of the workflow execution to the log. The retry cycle is set at 5 seconds. From within the container:

```sh
npm run demo:js:meshflow
```

From outside the container:

```sh
npm run docker:demo:js:meshflow
```

This demo runs a few workflows (one for every term you add when starting the test). The app auto-deploys, creates an index, and searches for workflow records based upon terms. The following will create 3 searchable workflows: bronze, silver, gold. The last term entered will be used to drive the FT.SEARCH query. (The following would search for the 'gold' record after all workflows (bronze, silver, and gold) have started.) From within the container:

```sh
npm run demo:js:meshdata bronze silver gold
```

From outside the container:

```sh
npm run docker:demo:js:meshdata bronze silver gold
```

This demo deploys a custom YAML model and a linked worker function to demonstrate the full end-to-end lifecycle. From within the container:

```sh
npm run demo:ts:hotmesh greetings
```

From outside the container:

```sh
npm run docker:demo:ts:hotmesh greetings
```

This demo shows interservice function calls with caching support. From within the container:

```sh
npm run demo:ts:meshcall
```

From outside the container:

```sh
npm run docker:demo:ts:meshcall
```

This demo shows a basic durable workflow with support for retry. It throws errors 50% of the time and eventually recovers, logging the result of the workflow execution to the log. The retry cycle is set at 5 seconds. From within the container:

```sh
npm run demo:ts:meshflow
```

From outside the container:

```sh
npm run docker:demo:ts:meshflow
```

This demo runs a few workflows (one for every term you add when starting the test). The app auto-deploys, creates an index, and searches for workflow records based upon terms. The following will create 3 searchable workflows: bronze, silver, gold. The last term entered will be used to drive the FT.SEARCH query. (The following would search for the 'gold' record after all workflows (bronze, silver, and gold) have started.) From within the container:

```sh
npm run demo:ts:meshdata bronze silver gold
```

From outside the container:

```sh
npm run docker:demo:ts:meshdata bronze silver gold
```

Add your Honeycomb credentials to .env, and view the full OpenTelemetry execution tree organized as a DAG.
View commands, streams, data, etc using RedisInsight.
The HotMesh dashboard provides a visual representation of the network, including the number of engines, workers, and workflows. It also provides a real-time view of the network's health and performance, linking to the OpenTelemetry dashboard for more detailed information.
An LLM is also included to simplify querying and analyzing workflow data for those deployments that include the Redis FT.SEARCH module.





