Skip to content

Conversation

@jumski
Copy link
Contributor

@jumski jumski commented Nov 2, 2025

Summary

This PR removes the pgmq 1.4.x compatibility layer and requires pgmq 1.5.1 or higher. It eliminates deprecated functions, removes backported SQL code, and leverages new PGMQ features including message headers support.

⚠️ BREAKING CHANGE: This version requires pgmq 1.5.0 or higher and will NOT work with pgmq 1.4.x.

Changes

PGMQ Version Requirement

  • Minimum version: pgmq 1.5.0+ (tested with 1.5.1)
  • Migration guard: Added compatibility check in migration that fails early with clear error message if pgmq < 1.5.0 is detected
  • Affected packages: @pgflow/core, @pgflow/edge-worker

Removed Compatibility Layer

Backported Functions Removed

  • pgflow.read_with_poll (68 lines) - Removed backport, now uses native pgmq.read_with_poll directly
  • pgflow.create_realtime_partition - Removed deprecated partitioning function
  • Test file removed: supabase/tests/realtime/create_realtime_partition.test.sql (83 lines)

Function Updates

  • pgflow.set_vt_batch - Updated to return headers column from pgmq 1.5.0+:
    • Changed return type from SETOF to TABLE with explicit column definitions
    • Now returns: msg_id, read_ct, enqueued_at, vt, message, headers
    • Added comprehensive pgTAP test for headers handling

Migration

File: pkgs/core/supabase/migrations/20251102201302_pgflow_upgrade_pgmq_1_5_1.sql

The migration includes:

  1. Compatibility check - Verifies pgmq.message_record has headers column
  2. Clear error message - Guides users to upgrade pgmq if version is incompatible
  3. Function replacement - Drops and recreates set_vt_batch with new signature

Code Updates

TypeScript

  • PgflowSqlClient.ts - Changed queue polling to use pgmq.read_with_poll instead of pgflow.read_with_poll
  • Queue.ts - Updated queue reading logic
  • Database types - Regenerated to reflect new function signatures and removed deprecated functions
  • Test fixtures - Added headers field to message record types

SQL Tests

Updated 11 test files to remove calls to deprecated functions:

  • Removed pgflow.create_realtime_partition() calls from all realtime tests
  • Simplified test setup by using native PGMQ functions
  • Added new test: supabase/tests/set_vt_batch/headers_handling.test.sql

Documentation

  • README.md - Enhanced documentation with:
    • Clarified root vs dependent map step behavior
    • Improved formatting and examples
    • Added edge case documentation
    • Better explanation of map step input handling
  • BREAKING CHANGE notice in changeset explaining:
    • Version requirement
    • Migration path for Supabase users
    • Self-hosting upgrade requirements

Migration Management Skill

  • Updated .claude/skills/migration-management/SKILL.md with additional troubleshooting guidance

Migration Guide

For Supabase Users

Recent Supabase versions include pgmq 1.5.0+ by default. Simply upgrade pgflow:

# Upgrade will apply the migration automatically
pnpm nx migrate @pgflow/core

The migration will verify pgmq compatibility and fail with a clear message if your pgmq version is too old.

For Self-Hosted Users

Before upgrading pgflow:

  1. Verify your pgmq version:
  2. If extversion < 1.5.0, upgrade pgmq first:
  3. Then upgrade pgflow:

Benefits

  • Simplified codebase: Removed 200+ lines of compatibility code
  • Native PGMQ features: Direct access to new pgmq 1.5.0+ capabilities
  • Message headers: Support for metadata propagation (foundation for future features)
  • Cleaner tests: Removed deprecated function calls across test suite
  • Better error messages: Migration fails early with actionable guidance

Verified Fixes

This PR verifies that upstream issues in Supabase Realtime have been resolved:

  • Janitor's 10-minute delay causes silent failures in local development supabase/realtime#1369 - The Janitor's 10-minute startup delay that caused silent failures with realtime.send() after database resets has been fixed in recent Supabase versions
  • With the fix in place, pgflow no longer needs the create_realtime_partition workaround function that manually created partitions before tests
  • Removal of pgflow.create_realtime_partition and its test file confirms the upstream fix is stable and working as expected

Testing

  • All pgTAP tests updated and passing
  • New test added for headers handling
  • Migration tested on both compatible (1.5.1) and incompatible (1.4.4) versions
  • Edge worker integration tests updated with headers support

Dependencies

Related Files

Changed:

  • 37 files changed, 629 insertions(+), 926 deletions(-)
  • Net reduction of ~300 lines of code

Key files:

  • Migration: pkgs/core/supabase/migrations/20251102201302_pgflow_upgrade_pgmq_1_5_1.sql
  • Schema: pkgs/core/schemas/0110_function_set_vt_batch.sql
  • Client: pkgs/core/src/PgflowSqlClient.ts
  • Worker: pkgs/edge-worker/src/queue/Queue.ts

@changeset-bot
Copy link

changeset-bot bot commented Nov 2, 2025

🦋 Changeset detected

Latest commit: 3af7bb9

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 8 packages
Name Type
@pgflow/core Minor
@pgflow/edge-worker Minor
pgflow Minor
@pgflow/client Minor
@pgflow/example-flows Minor
@pgflow/demo Minor
@pgflow/dsl Minor
@pgflow/website Minor

Not sure what this means? Click here to learn what changesets are.

Click here if you're a maintainer who wants to add another changeset to this PR

Copy link
Contributor Author

jumski commented Nov 2, 2025

Warning

This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more


How to use the Graphite Merge Queue

Add either label to this PR to merge it via the merge queue:

  • merge:queue - adds this PR to the back of the merge queue
  • hotfix:queue - for urgent hot fixes, skip the queue and merge this PR next

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@nx-cloud
Copy link

nx-cloud bot commented Nov 2, 2025

🤖 Nx Cloud AI Fix Eligible

An automatically generated fix could have helped fix failing tasks for this run, but Self-healing CI is disabled for this workspace. Visit workspace settings to enable it and get automatic fixes in future runs.

To disable these notifications, a workspace admin can disable them in workspace settings.


View your CI Pipeline Execution ↗ for commit 3af7bb9

Command Status Duration Result
nx affected -t lint typecheck test --parallel -... ❌ Failed 7m 53s View ↗
nx affected -t test:e2e --parallel --base=c35fa... ✅ Succeeded 5m 28s View ↗

☁️ Nx Cloud last updated this comment at 2025-11-12 23:20:26 UTC

@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from b531d64 to c9b5c0e Compare November 2, 2025 23:01
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch 2 times, most recently from 4a21116 to 361b043 Compare November 3, 2025 10:09
@jumski jumski force-pushed the chore-update-supabase-versions branch from 9f66fe6 to dfb9cbe Compare November 3, 2025 11:10
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from 361b043 to 6bf098a Compare November 3, 2025 11:10
Comment on lines +488 to +496
Args: { msg_ids: number[]; queue_name: string; vt_offsets: number[] }
Returns: {
enqueued_at: string
headers: Json
message: Json
msg_id: number
read_ct: number
vt: string
}[]
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The headers field in the return type is typed as Json but should be Json | null since the JSONB column in PostgreSQL can be NULL. This type mismatch could cause runtime errors when TypeScript code receives null headers from messages that were sent without headers.

set_vt_batch: {
  Args: { msg_ids: number[]; queue_name: string; vt_offsets: number[] }
  Returns: {
    enqueued_at: string
    headers: Json | null  // Should allow null
    message: Json
    msg_id: number
    read_ct: number
    vt: string
  }[]
}

This aligns with how headers is typed elsewhere in the codebase (line 784) and matches the actual database behavior where headers can be NULL for messages sent without headers.

Suggested change
Args: { msg_ids: number[]; queue_name: string; vt_offsets: number[] }
Returns: {
enqueued_at: string
headers: Json
message: Json
msg_id: number
read_ct: number
vt: string
}[]
Args: { msg_ids: number[]; queue_name: string; vt_offsets: number[] }
Returns: {
enqueued_at: string
headers: Json | null
message: Json
msg_id: number
read_ct: number
vt: string
}[]

Spotted by Graphite Agent

Fix in Graphite


Is this helpful? React 👍 or 👎 to let us know.

@jumski jumski force-pushed the chore-update-supabase-versions branch from dfb9cbe to e5e3dfa Compare November 4, 2025 08:14
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from 6bf098a to 53aaf5c Compare November 4, 2025 08:14
@jumski jumski force-pushed the chore-update-supabase-versions branch from e5e3dfa to 8f99711 Compare November 6, 2025 09:24
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch 2 times, most recently from 968c2ae to 05b808d Compare November 6, 2025 12:04
Comment on lines 1 to 47
-- Drop "set_vt_batch" function
DROP FUNCTION "pgflow"."set_vt_batch";
-- Create "set_vt_batch" function
CREATE FUNCTION "pgflow"."set_vt_batch" ("queue_name" text, "msg_ids" bigint[], "vt_offsets" integer[]) RETURNS TABLE ("msg_id" bigint, "read_ct" integer, "enqueued_at" timestamptz, "vt" timestamptz, "message" jsonb, "headers" jsonb) LANGUAGE plpgsql AS $$
DECLARE
qtable TEXT := pgmq.format_table_name(queue_name, 'q');
sql TEXT;
BEGIN
/* ---------- safety checks ---------------------------------------------------- */
IF msg_ids IS NULL OR vt_offsets IS NULL OR array_length(msg_ids, 1) = 0 THEN
RETURN; -- nothing to do, return empty set
END IF;

IF array_length(msg_ids, 1) IS DISTINCT FROM array_length(vt_offsets, 1) THEN
RAISE EXCEPTION
'msg_ids length (%) must equal vt_offsets length (%)',
array_length(msg_ids, 1), array_length(vt_offsets, 1);
END IF;

/* ---------- dynamic statement ------------------------------------------------ */
/* One UPDATE joins with the unnested arrays */
sql := format(
$FMT$
WITH input (msg_id, vt_offset) AS (
SELECT unnest($1)::bigint
, unnest($2)::int
)
UPDATE pgmq.%I q
SET vt = clock_timestamp() + make_interval(secs => input.vt_offset),
read_ct = read_ct -- no change, but keeps RETURNING list aligned
FROM input
WHERE q.msg_id = input.msg_id
RETURNING q.msg_id,
q.read_ct,
q.enqueued_at,
q.vt,
q.message,
q.headers
$FMT$,
qtable
);

RETURN QUERY EXECUTE sql USING msg_ids, vt_offsets;
END;
$$;
-- Drop "read_with_poll" function
DROP FUNCTION "pgflow"."read_with_poll";
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Critical: Missing pgmq version compatibility check

The core migration is missing the compatibility check that exists in the playground migration. Without this check, the migration will succeed on pgmq < 1.5.0 but fail at runtime when set_vt_batch tries to return the headers column that doesn't exist in older pgmq versions.

The playground migration (lines 1-32) includes this check:

DO $$
DECLARE
    has_headers BOOLEAN;
BEGIN
    SELECT EXISTS (
        SELECT 1 FROM pg_type t
        JOIN pg_namespace n ON t.typnamespace = n.oid
        JOIN pg_attribute a ON a.attrelid = t.typrelid
        WHERE n.nspname = 'pgmq'
          AND t.typname = 'message_record'
          AND a.attname = 'headers'
          AND a.attnum > 0
          AND NOT a.attisdropped
    ) INTO has_headers;
    
    IF NOT has_headers THEN
        RAISE EXCEPTION 'INCOMPATIBLE PGMQ VERSION DETECTED...';
    END IF;
END $$;

This check must be added at the beginning of the core migration to prevent silent failures in production.

Suggested change
-- Drop "set_vt_batch" function
DROP FUNCTION "pgflow"."set_vt_batch";
-- Create "set_vt_batch" function
CREATE FUNCTION "pgflow"."set_vt_batch" ("queue_name" text, "msg_ids" bigint[], "vt_offsets" integer[]) RETURNS TABLE ("msg_id" bigint, "read_ct" integer, "enqueued_at" timestamptz, "vt" timestamptz, "message" jsonb, "headers" jsonb) LANGUAGE plpgsql AS $$
DECLARE
qtable TEXT := pgmq.format_table_name(queue_name, 'q');
sql TEXT;
BEGIN
/* ---------- safety checks ---------------------------------------------------- */
IF msg_ids IS NULL OR vt_offsets IS NULL OR array_length(msg_ids, 1) = 0 THEN
RETURN; -- nothing to do, return empty set
END IF;
IF array_length(msg_ids, 1) IS DISTINCT FROM array_length(vt_offsets, 1) THEN
RAISE EXCEPTION
'msg_ids length (%) must equal vt_offsets length (%)',
array_length(msg_ids, 1), array_length(vt_offsets, 1);
END IF;
/* ---------- dynamic statement ------------------------------------------------ */
/* One UPDATE joins with the unnested arrays */
sql := format(
$FMT$
WITH input (msg_id, vt_offset) AS (
SELECT unnest($1)::bigint
, unnest($2)::int
)
UPDATE pgmq.%I q
SET vt = clock_timestamp() + make_interval(secs => input.vt_offset),
read_ct = read_ct -- no change, but keeps RETURNING list aligned
FROM input
WHERE q.msg_id = input.msg_id
RETURNING q.msg_id,
q.read_ct,
q.enqueued_at,
q.vt,
q.message,
q.headers
$FMT$,
qtable
);
RETURN QUERY EXECUTE sql USING msg_ids, vt_offsets;
END;
$$;
-- Drop "read_with_poll" function
DROP FUNCTION "pgflow"."read_with_poll";
-- Check for pgmq version compatibility
DO $$
DECLARE
has_headers BOOLEAN;
BEGIN
SELECT EXISTS (
SELECT 1 FROM pg_type t
JOIN pg_namespace n ON t.typnamespace = n.oid
JOIN pg_attribute a ON a.attrelid = t.typrelid
WHERE n.nspname = 'pgmq'
AND t.typname = 'message_record'
AND a.attname = 'headers'
AND a.attnum > 0
AND NOT a.attisdropped
) INTO has_headers;
IF NOT has_headers THEN
RAISE EXCEPTION 'INCOMPATIBLE PGMQ VERSION DETECTED: This migration requires pgmq 1.5.0 or later with headers support';
END IF;
END $$;
-- Drop "set_vt_batch" function
DROP FUNCTION "pgflow"."set_vt_batch";
-- Create "set_vt_batch" function
CREATE FUNCTION "pgflow"."set_vt_batch" ("queue_name" text, "msg_ids" bigint[], "vt_offsets" integer[]) RETURNS TABLE ("msg_id" bigint, "read_ct" integer, "enqueued_at" timestamptz, "vt" timestamptz, "message" jsonb, "headers" jsonb) LANGUAGE plpgsql AS $$
DECLARE
qtable TEXT := pgmq.format_table_name(queue_name, 'q');
sql TEXT;
BEGIN
/* ---------- safety checks ---------------------------------------------------- */
IF msg_ids IS NULL OR vt_offsets IS NULL OR array_length(msg_ids, 1) = 0 THEN
RETURN; -- nothing to do, return empty set
END IF;
IF array_length(msg_ids, 1) IS DISTINCT FROM array_length(vt_offsets, 1) THEN
RAISE EXCEPTION
'msg_ids length (%) must equal vt_offsets length (%)',
array_length(msg_ids, 1), array_length(vt_offsets, 1);
END IF;
/* ---------- dynamic statement ------------------------------------------------ */
/* One UPDATE joins with the unnested arrays */
sql := format(
$FMT$
WITH input (msg_id, vt_offset) AS (
SELECT unnest($1)::bigint
, unnest($2)::int
)
UPDATE pgmq.%I q
SET vt = clock_timestamp() + make_interval(secs => input.vt_offset),
read_ct = read_ct -- no change, but keeps RETURNING list aligned
FROM input
WHERE q.msg_id = input.msg_id
RETURNING q.msg_id,
q.read_ct,
q.enqueued_at,
q.vt,
q.message,
q.headers
$FMT$,
qtable
);
RETURN QUERY EXECUTE sql USING msg_ids, vt_offsets;
END;
$$;
-- Drop "read_with_poll" function
DROP FUNCTION "pgflow"."read_with_poll";

Spotted by Graphite Agent

Fix in Graphite


Is this helpful? React 👍 or 👎 to let us know.

@github-actions
Copy link
Contributor

github-actions bot commented Nov 6, 2025

🔍 Preview Deployment: Website

Deployment successful!

🔗 Preview URL: https://pr-301.pgflow.pages.dev

📝 Details:

  • Branch: chore-remove-backports-and-update-to-new-pgmq
  • Commit: aa2172e77216125d7be0176ad934283e07041837
  • View Logs

_Last updated: _

@jumski jumski force-pushed the chore-update-supabase-versions branch from 8f99711 to b05ba56 Compare November 9, 2025 16:06
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from 05b808d to b05b2ae Compare November 9, 2025 16:06
@jumski jumski force-pushed the chore-update-supabase-versions branch from b05ba56 to 06a26d1 Compare November 10, 2025 22:17
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch 2 times, most recently from e6a446e to 1d22df3 Compare November 10, 2025 23:27
@jumski jumski force-pushed the chore-update-supabase-versions branch from 06a26d1 to e1462be Compare November 10, 2025 23:27
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from 1d22df3 to ff95e03 Compare November 10, 2025 23:44
@jumski jumski force-pushed the chore-update-supabase-versions branch 2 times, most recently from 3eb2f52 to 1deb956 Compare November 11, 2025 11:38
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from ff95e03 to 00b1670 Compare November 11, 2025 11:38
@jumski jumski force-pushed the chore-update-supabase-versions branch from 1deb956 to eba7284 Compare November 11, 2025 14:39
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from 00b1670 to 508c6ac Compare November 11, 2025 14:39
@jumski jumski force-pushed the chore-update-supabase-versions branch from eba7284 to f8447bd Compare November 12, 2025 22:58
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from 508c6ac to b1b578b Compare November 12, 2025 22:58
…test references

- Deleted the create_realtime_partition SQL function and related comments
- Updated test scripts to no longer call create_realtime_partition
- Replaced pgflow.read_with_poll with pgmq.read_with_poll in client code
- Ensured tests focus on existing functionality without partition creation
- Minor adjustments to test setup for consistency and clarity
@jumski jumski force-pushed the chore-update-supabase-versions branch from f8447bd to c35fab0 Compare November 12, 2025 23:10
@jumski jumski force-pushed the chore-remove-backports-and-update-to-new-pgmq branch from b1b578b to 3af7bb9 Compare November 12, 2025 23:10
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants