Compare commits

...

428 Commits

Author SHA1 Message Date
Albert Alises
22f2e34fe6
fix(core): Stop workflow builder after terminal remediation (#30289)
2026-05-12 13:53:18 +00:00
Jon
d06110ba9d
feat(Facebook Graph API Node): Add OAuth2 support (#27112)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-05-12 13:32:08 +00:00
Jaakko Husso
0ce820de73
fix(core): Abort orchestrator run after repeated plan-guard rejections (no-changelog) (#30274) 2026-05-12 12:11:17 +00:00
Alex Grozav
f0649e0a3d
refactor(editor): Add executionData store for per-execution state (no-changelog) (#29757) 2026-05-12 11:53:43 +00:00
Matsu
28df864aab
chore: Bump fast-uri override to 3.1.2 (#30307) 2026-05-12 11:41:40 +00:00
Michael Kret
27d72acae5
feat: Track n8n Connect credential toggle in telemetry (no-changelog) (#30245) 2026-05-12 11:09:29 +00:00
Declan Carroll
cd0519f360
chore: Skip scaffolding templates in code-health and swap to catalog refs (no-changelog) (#30297) 2026-05-12 10:58:00 +00:00
Matsu
c158771d5f
ci: Allow removal of deprecated release candidate branches (#30058) 2026-05-12 10:48:35 +00:00
Matsu
c0be06f9ff
ci: Migrate from actions/attest-sbom to actions/attest (#30304) 2026-05-12 10:45:58 +00:00
Charlie Kolb
d5d51731d2
fix(editor): Sanitize workflow created during sub-workflow conversion (#30208) 2026-05-12 10:27:52 +00:00
Dawid Myslak
fb78047d9a
fix(core): Add origin-only fallback to MCP OAuth discovery for path-bearing server URLs (#30231)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-12 10:10:16 +00:00
Matsu
b3760c776f
ci: Skip quality checks on Bot PRs (#30284) 2026-05-12 10:08:40 +00:00
Andreas Fitzek
d2e5db258c
feat(core): Add encrypted secureArtifacts slot to IExecutionContext (no-changelog) (#30125) 2026-05-12 09:58:45 +00:00
bjorger
744bb92c2f
feat(core): Add observational memory runtime, builder, and read path (#29815) 2026-05-12 09:55:52 +00:00
Fendy H
d06bbe4f32
feat(NocoDB Node): Add new data apis and use new api version (#18626) 2026-05-12 09:32:46 +00:00
Milorad FIlipović
54d62bb4a1
fix(core): Update instance-ai evaluator to include pinned subnodes and allow all mcp tools (#30292) 2026-05-12 09:13:01 +00:00
Declan Carroll
a60ef7dbb5
ci: Gate PRs on code-health and janitor checks (#30091)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-12 09:02:53 +00:00
Konstantin Tieber
111d403aa7
fix(core): Member role getting read permissions for insights (#30291) 2026-05-12 08:53:38 +00:00
Albert Alises
5059ce7e3d
feat(ai-builder): Expose generated workflow IDs on LangSmith trace root metadata (#30262) 2026-05-12 08:37:57 +00:00
Bernhard Wittmann
b445221c6a
feat: Computer-use evaluation harness (no-changelog) (#29797)
Co-authored-by: Elias Meire <elias@meire.dev>
2026-05-12 08:36:12 +00:00
Bernhard Wittmann
dc7dcaf1b1
fix: Show friendly message in computer use cli when connection token is invalid (no-changelog) (#30288) 2026-05-12 08:34:33 +00:00
Marc Littlemore
ab8475b4cf
chore: Revert to old CODEOWNERS (#30290) 2026-05-12 08:19:58 +00:00
RomanDavydchuk
980f3c8461
fix(editor): Improve dedicated MCP tools connection experience (no-changelog) (#30200) 2026-05-12 08:05:48 +00:00
Andreas Fitzek
2b7e313430
feat(core): Add redaction enforcement feature-flag helpers (no-changelog) (#30253) 2026-05-12 08:03:43 +00:00
Andreas Fitzek
0bde73c42f
feat(core): Scaffold inbound-secrets module (no-changelog) (#30093) 2026-05-12 08:03:29 +00:00
Sandra Zollner
1e685062c3
refactor(core): Combine insights by workflow count and page query (#29787)
Co-authored-by: Ali Elkhateeb <ali.elkhateeb@n8n.io>
2026-05-12 08:01:47 +00:00
Eugene
df6e39bddf
feat(editor): Disable Agent editing UI when user lacks agent:update (no-changelog) (#30201)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: yehorkardash <yehor.kardash@n8n.io>
2026-05-12 07:58:12 +00:00
Mutasem Aldmour
3297536011
refactor(core): Move node-specific builder guidance to per-node @builderHint (no-changelog) (#29992)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 07:45:33 +00:00
José Braulio González Valido
95cf41c37c
chore(core): Enable Daytona sandbox in Instance AI evals (no-changelog) (#29931)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 07:43:04 +00:00
n8n-release-tag-merge[bot]
74fb4110c4 Merge tag 'n8n@2.21.0' 2026-05-12 08:02:43 +00:00
n8n-assistant[bot]
61be42c7bb
🚀 Release 2.21.0 (#30283)
Co-authored-by: Matsuuu <16068444+Matsuuu@users.noreply.github.com>
2026-05-12 07:29:34 +00:00
Ricardo Espinoza
b5bafc861e
feat(core): Add update_partial_workflow MCP tool (#29739) 2026-05-12 07:24:49 +00:00
Jon
3dd134ab3c
fix(core): Preserve AxiosHeaders instance when applying OpenAI vendor defaults (#29860)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Shireen Missi <94372015+ShireenMissi@users.noreply.github.com>
2026-05-12 07:24:26 +00:00
yehorkardash
e98c1e5fe6
fix(editor): Set document title on agent pages (no-changelog) (#30243) 2026-05-12 07:13:33 +00:00
yehorkardash
ae81d1bac0
fix(core): Resolve global credentials for agents (no-changelog) (#30233) 2026-05-12 07:13:30 +00:00
Matsu
cb019eb253
ci: Add artifact prefix to e2e runs to prevent clashing (#30281) 2026-05-12 07:03:09 +00:00
Ali Elkhateeb
8b0a3ae3d3
feat(core): Enrich agent execution telemetry (no-changelog) (#29914) 2026-05-12 06:25:26 +00:00
Romeo Balta
7fdd98aa72
feat(editor): Add proactive starter experiment (no-changelog) (#30252)
2026-05-11 21:32:14 +00:00
Dawid Myslak
133a5aa0ad
feat(Onfleet Trigger Node): Add webhook request verification (#29485) 2026-05-11 21:27:33 +00:00
Dawid Myslak
da41470311
feat(Acuity Scheduling Trigger Node): Add webhook request verification (#29261)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 20:54:07 +00:00
Dawid Myslak
94e403300b
feat(Asana Trigger Node): Add webhook request verification (#29258) 2026-05-11 20:04:15 +00:00
aikido-autofix[bot]
267fe49d51
fix: Fix 15 security issues in fast-xml-builder, basic-ftp, fast-uri and 5 more (#30169)
Co-authored-by: aikido-autofix[bot] <119856028+aikido-autofix[bot]@users.noreply.github.com>
Co-authored-by: Declan Carroll <declan@n8n.io>
2026-05-11 19:15:47 +00:00
Michael Drury
e968723808
chore(core): Langsmith OTel telemetry for agent builder (#30259) 2026-05-11 18:29:33 +00:00
Albert Alises
bb73952fcc
fix(core): Defer credential setup during workflow builds (#30181) 2026-05-11 15:46:44 +00:00
Michael Drury
9072ee3beb
fix(core): Agents called from workflows use the workflows owner/user ID for calling further workflows through the agent (#30242) 2026-05-11 14:51:59 +00:00
Eugene
1749801508
fix(core): Gate agent node tools behind node-tools-searcher module (no-changelog) (#30237) 2026-05-11 14:49:40 +00:00
Alexander Gekov
a8aa95551e
fix(Git Node): Restore Clone and other operations on simple-git 3.36+ (#30223) 2026-05-11 14:46:30 +00:00
Declan Carroll
c75a45ba15
chore: Sync quarantine list and add hanging instance-ai tests (#30248) 2026-05-11 14:31:17 +00:00
Tomi Turtiainen
0a761355c4
fix(core): Prevent proxy layer accumulation in ObservableObject (#30129) 2026-05-11 14:29:28 +00:00
Raúl Gómez Morales
bad43d0c81
test(editor): Move Instance AI runtime coverage (no-changelog) (#30240) 2026-05-11 14:21:31 +00:00
Rob Hough
b168523254
refactor(editor): Fix small style nits in Agents (#30222) 2026-05-11 13:34:27 +00:00
Svetoslav Dekov
3df6611fb3
chore(editor): Refactoring instance-ai workflow setup FE code (no-changelog) (#30012)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Charlie Kolb <charlie@n8n.io>
2026-05-11 13:33:15 +00:00
Michael Kret
2e046d5b7f
fix(RSS Feed Read Node): Respect proxy settings (#30059) 2026-05-11 13:28:15 +00:00
Milorad FIlipović
0494f24967
feat(core): Track no results in code-builder search tool (no-changelog) (#30165)
2026-05-11 13:08:21 +00:00
Milorad FIlipović
e8827cd6e8
fix(core): Improve documentation usage in mcp tools (#30210) 2026-05-11 12:52:56 +00:00
Matsu
b64a84159d
ci: Use cla-signed labels with CLA automations (#30234)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 12:48:41 +00:00
José Braulio González Valido
5bf5f03453
fix(core): Avoid Agent.close() deadlock in instance-ai web-research fetch (no-changelog) (#30105)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 12:40:49 +00:00
José Braulio González Valido
3123f2551b
fix(core): Allow same-domain redirects in instance-ai web research (TRUST-73) (#30107)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 12:40:43 +00:00
Milorad FIlipović
127544ae5b
fix(core): Fix Resource Mapper types in SDK (no-changelog) (#30213) 2026-05-11 12:35:53 +00:00
Bernhard Wittmann
ea98243c2b
feat: Add deeplinkpairing and connection handling for native computer use (no-changelog) (#29445) 2026-05-11 12:35:08 +00:00
Dawid Myslak
2e21c5fcf8
feat(Microsoft Outlook Node): Add location and attendees fields to calendar events (#29844)
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Michael Kret <88898367+michael-radency@users.noreply.github.com>
2026-05-11 12:29:49 +00:00
Stephen Wright
7635131bd3
feat(editor): Show locked state and permission notice on data redaction workflow settings (#30022)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-05-11 12:02:59 +00:00
Raúl Gómez Morales
0d571c05e4
refactor(editor): Add Instance AI thread provider (no-changelog) (#30090) 2026-05-11 11:45:19 +00:00
Arvin A
6f9b99a3cf
feat(editor): Eval run detail loading + error states (TRUST-70 follow-up) (#29817)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-05-11 11:36:03 +00:00
Mutasem Aldmour
0feec2fea6
fix(core): Make placeholder() return string (no-changelog) (#30100)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 11:32:35 +00:00
Garrit Franke
e3e70d6068
feat(Figma Trigger Node): Add OAuth2 authentication support (#30079)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 11:30:49 +00:00
Matsu
410b75c3d0
ci: Add in-house CLA check workflow (#30209)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 11:29:11 +00:00
bjorger
75646c4527
fix(core): Clarify agent builder prompt guidance (#30127) 2026-05-11 11:11:51 +00:00
Mutasem Aldmour
d0367a00e8
chore: Align pairwise eval builder with production handover (#30019)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 11:00:37 +00:00
Michael Drury
7094b48c94
fix(core): Persist agent chat draft across modes and hide unfinished tool-approval toggle (#30123) 2026-05-11 10:53:59 +00:00
Michael Kret
582b6ae9ea
fix(MongoDB Node): Resolve collection parameter per item in write operations (#29956) 2026-05-11 10:16:14 +00:00
Irénée
26beabb445
refactor(core): Split SSO loader (no-changelog) (#30065) 2026-05-11 10:16:02 +00:00
Dawid Myslak
96b018d356
fix(YouTube Node): Fix misspelled "unlisted" privacy status value in Video Update operation (#30203)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-11 10:00:29 +00:00
Suguru Inoue
d5d290d706
refactor(editor): Migrate workflow document store init (#30077) 2026-05-11 09:57:07 +00:00
Albert Alises
40ffbfa5ab
feat(ai-builder): Add n8n and workflow SDK versions to LangSmith trace metadata (no-changelog) (#30202) 2026-05-11 09:36:53 +00:00
Daria
94d91e13bf
fix(core): Export boolean CSV values as true/false for Data Tables (#30007) 2026-05-11 09:30:00 +00:00
Yuliia Pominchuk
515ae7ced4
feat(core): Add IP rate limiting to dynamic credential authentication endpoints (#30199) 2026-05-11 09:25:26 +00:00
Albert Alises
52a4bcb23a
fix(core): Add liveness timeouts for Instance AI (#30145) 2026-05-11 09:13:57 +00:00
bjorger
be4ef22533
feat(core): Add observational memory storage foundation (#29814) 2026-05-11 09:01:44 +00:00
Guillaume Jacquart
f4e8088cb8
fix(core): Stop applying node-defined sensitive output fields to runtime data (#30198)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 08:57:42 +00:00
bjorger
174f0f805e
fix(core): Scope credential resolution (#30156) 2026-05-11 08:53:22 +00:00
oleg
c94a403682
feat(core): Add agents SDK telemetry hooks (no-changelog) (#30014) 2026-05-11 08:48:23 +00:00
Alexander Gekov
a30772c933
fix(core): Skip unknown fixedCollection keys instead of throwing (#29689)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-11 08:33:56 +00:00
RomanDavydchuk
86170674b7
feat(core): Generate service-specific OAuth2 credentials for dedicated MCP tools (#29884)
Co-authored-by: Elias Meire <elias@meire.dev>
2026-05-11 07:29:37 +00:00
Michael Kret
1a22c76270
fix(Schedule Node): Fix hourly intervals that don't divide evenly into 24h (#29778) 2026-05-11 07:28:37 +00:00
Michael Kret
7c1a77154c
fix(Wait Node): Resolve expressions inside Custom HTML form fields (#30060) 2026-05-11 06:51:45 +00:00
Jaakko Husso
f63567b1ec
feat(editor): Land users to instance AI on root if the module is enabled (no-changelog) (#30121) 2026-05-11 06:46:31 +00:00
Raúl Gómez Morales
dd7555d277
refactor(editor): Split Instance AI view into route-driven empty + thread leaves (no-changelog) (#29877) 2026-05-11 06:38:39 +00:00
n8n-assistant[bot]
3bf5d4ac91
chore: Update node popularity data (#30191)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-05-11 06:33:04 +00:00
Declan Carroll
3a33a448b0
test(benchmark): Question-driven Playwright benchmark suite with tiered topology and rich diagnostics (#29024)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 21:14:08 +00:00
Ricardo Espinoza
60e23e10e0
fix(core): Avoid MCP get_execution hang on circular references (#30051)
2026-05-08 19:02:34 +00:00
José Braulio González Valido
5e88748334
fix(core): Always create instance-ai sandbox workspace dirs (TRUST-79) (#30106)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 16:52:44 +00:00
Dawid Myslak
fbf89bde11
feat(GitLab Trigger Node): Add webhook request verification (#29260) 2026-05-08 16:50:00 +00:00
Iván Ovejero
3702ff8eb3
fix(core): Propagate waitTill from worker to main in scaling mode (#30099) 2026-05-08 16:45:58 +00:00
Mike Repeć
d3a3441be2
chore: assign instance-ai cli module to instance-ai team (#30120) 2026-05-08 15:45:18 +00:00
bjorger
8171cf0b32
fix(editor): Disable chat during interactive agent choices (#30111) 2026-05-08 15:14:38 +00:00
Eugene
523fd85e45
feat(editor): Add "New agent" to the universal add menu (no-changelog) (#29978)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 15:10:27 +00:00
Dimitri Lavrenük
bda1782de6
fix: Remove confirmation from browser connect and disconnect tools (no-changelog) (#30096) 2026-05-08 14:43:06 +00:00
Dimitri Lavrenük
1e8f89bd5a
feat: Allow late browser connection after timeout (no-changelog) (#30092) 2026-05-08 14:42:53 +00:00
Jaakko Husso
f709e53824
fix(core): Inline AI_NODE_SDK_VERSION to save memory by not loading @n8n/ai-utilities on boot (#30113) 2026-05-08 14:33:31 +00:00
Rob Hough
f87094cf6e
fix(editor): Add expand/collapse to chat panel in Agents (#30069) 2026-05-08 14:27:46 +00:00
mfsiega
cd5b2b3762
chore(core): Add @n8n/engine HTTP server and harness (no-changelog) (#29913)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 14:08:11 +00:00
Dimitri Lavrenük
8a6e779c6d
fix: Fix browser use tool context in AI Assistant (no-changelog) (#30080) 2026-05-08 13:50:07 +00:00
Daria
277431b88b
fix(editor): Match input height with mode selector in resource locator (#30075) 2026-05-08 13:49:24 +00:00
Danny Martini
9931c4d055
refactor(core): Skip redundant extend helpers in VM mode (no-changelog) (#30098)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 13:40:15 +00:00
Mutasem Aldmour
72eca2f398
refactor: Rename node-level builderHint.message to searchHint and propertyHint (#30062)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 13:32:50 +00:00
Jaakko Husso
7e6bca1f13
feat(core): Make instance AI enabled by default for users on the test group, drop opt-in modal (no-changelog) (#30097) 2026-05-08 13:19:49 +00:00
Mike Repeć
e15c091c75
fix(editor): Refactor archive workflow spec to use API helpers for stability (no-changelog) (#30089) 2026-05-08 12:40:59 +00:00
Bernhard Wittmann
ecf96ad30c
fix: Add warning to Computer Use install modal (#30094) 2026-05-08 12:25:54 +00:00
Benjamin Schroth
8116e0a485
feat(core): Add multi-config evaluations backend (#29784) 2026-05-08 12:24:17 +00:00
Mutasem Aldmour
2ece58eee5
chore: Assign workflow-sdk and instance-ai to instance-ai team (#30087)
Co-authored-by: Claude <noreply@anthropic.com>
2026-05-08 12:19:55 +00:00
Dawid Myslak
0cc163b7dc
fix(EditImage Node): Fix composite operation failing with stream empty buffer (#30088)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-08 12:14:25 +00:00
Albert Alises
ceaebc6cbe
fix(core): Validate AI builder credential IDs before save (#30070)
2026-05-08 11:29:12 +00:00
Mutasem Aldmour
afe119be14
fix(core): Improve AI chat file upload handling and error states (#29701)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 11:26:58 +00:00
Marc Littlemore
cfec60de6a
chore: Clarify decorators ownership (#30085) 2026-05-08 10:59:26 +00:00
Charlie Kolb
2b2fa0aaa3
chore: Move stylelint-config codeowners to qa-dx (no-changelog) (#30084) 2026-05-08 10:28:49 +00:00
Charlie Kolb
4b89faa707
chore: Reassign i18n package codeownership to frontend team (no-changelog) (#30082) 2026-05-08 10:20:36 +00:00
Dawid Myslak
910822fb09
feat(Figma Trigger Node): Add webhook request verification (#29262)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 09:34:11 +00:00
Marc Littlemore
b3a806125b
chore: Improve CODEOWNERS file for automatic team review (#27883) 2026-05-08 09:06:37 +00:00
Suguru Inoue
149bdebf37
refactor(editor): Delete workflow ref from workflows.store.ts (#29531) 2026-05-08 08:54:35 +00:00
Declan Carroll
33c3598e66
ci: Remove unused bin fields to fix pnpm install warnings (no-changelog) (#29586)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-08 08:47:25 +00:00
Tomi Turtiainen
7c57843cf6
refactor(ai-builder): Replace hand-rolled sandbox client with @n8n/sandbox-client SDK (no-changelog) (#29879) 2026-05-08 08:32:02 +00:00
Eugene
6f4f0a0303
fix(core): Activate agent chat integrations on every main (#30029)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Michael Drury <michael.drury@n8n.io>
2026-05-08 08:11:57 +00:00
Declan Carroll
e7b353cabc
ci: Shard weekly E2E coverage run across cached docker image (no-changelog) (#29337)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 08:08:39 +00:00
Rob Hough
478d4998a8
fix(editor): Fix Agents styling issues from merge regression (#30032) 2026-05-08 08:06:40 +00:00
Csaba Tuncsik
5cbd2dd1e9
fix(editor): Polish encryption keys settings page (#30008) 2026-05-08 07:44:29 +00:00
Alexander Gekov
d318bc1e33
fix(Notion Node): Paginate Get Many operations beyond 100-item API cap (#29690)
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Co-authored-by: Michael Kret <88898367+michael-radency@users.noreply.github.com>
2026-05-08 07:03:37 +00:00
Declan Carroll
6b893b45a0
fix: Align undici override across major versions (#30028)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-08 05:51:34 +00:00
Declan Carroll
75ed71c001
fix(core): Add ESLint rule to prevent error instances in toThrow assertions (#29889)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-08 05:51:05 +00:00
Jaakko Husso
73dae68663
fix(core): Handle browser not being available on computer use gracefully, better pause-for-user tool (no-changelog) (#29995)
2026-05-07 22:09:29 +00:00
Michael Drury
820128196c
fix(core): Simplify Slack redirect URL verification process for agents (#30033)
2026-05-07 18:38:32 +00:00
Jaakko Husso
8e0f37d100
fix(core): Support type filters on global credential lookups (#30002) 2026-05-07 17:50:58 +00:00
Guillaume Jacquart
75053fec93
feat(editor): Add envFeatureFlag and copyButton property options (#29733)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 16:44:17 +00:00
oleg
ffcf63691f
feat(agents): Add reusable workspace edit tools (no-changelog) (#30013) 2026-05-07 16:03:13 +00:00
Irénée
730c3e12a5
feat(core): Define community packages with environment variables (#29961) 2026-05-07 15:56:14 +00:00
Jaakko Husso
e6b37ef06d
fix(core): Make instance AI use the correct instance URL for OAuth callbacks (no-changelog) (#30024) 2026-05-07 15:37:09 +00:00
Jaakko Husso
43438f0361
fix(core): Tighten instance ai mutation and approval gates (no-changelog) (#29750) 2026-05-07 15:36:36 +00:00
Arvin A
9014baea7e
feat(editor): Redesign evaluation run detail page (#29592) 2026-05-07 15:02:59 +00:00
Charlie Kolb
ca33060e0b
fix(core): Advance Postgres IDENTITY sequences after entity import (#29762)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:00:04 +00:00
Eugene
1a270f2f35
fix(editor): Make agent publish indicator dot use correct color (no-changelog) (#29979)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:05:07 +00:00
Marc Littlemore
ba5b3d13b1
fix(editor): Render tooltips above popovers (#29997)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 13:02:16 +00:00
Michael Drury
9f92005938
fix(core): Agent sessions correctly quoting columns in queries for Postgres (#29999) 2026-05-07 12:30:12 +00:00
Benjamin Schroth
f7c7acc244
fix(editor): Make sure trimmed placeholder never reaches backend (#29842) 2026-05-07 12:15:27 +00:00
Jon
f871d44cab
fix(Salesforce Node): Fix trigger not firing on repeated record updates (#29107)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Michael Kret <88898367+michael-radency@users.noreply.github.com>
2026-05-07 12:13:57 +00:00
Konstantin Tieber
01300e9b9b
fix(core): Simple-git update broke https connection (#29998) 2026-05-07 12:01:41 +00:00
aikido-autofix[bot]
972d8d4ec7
chore: Bump Axios, hono, vm2 and fast-xml-parser (#29829)
Co-authored-by: aikido-autofix[bot] <119856028+aikido-autofix[bot]@users.noreply.github.com>
Co-authored-by: Matsuuu <huhta.matias@gmail.com>
2026-05-07 11:54:50 +00:00
José Braulio González Valido
30d9a168bc
feat(ai-builder): Add --prebuilt-workflows flag for eval CLI (no-changelog) (#29830)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 11:47:29 +00:00
Dawid Myslak
dab3653f80
feat(Microsoft Outlook Node): Add support for recurring event instances (#29802)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-07 11:45:52 +00:00
Declan Carroll
8573197aef
ci: Scope path-filter and janitor diff to PR-only changes (#29993)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 11:41:35 +00:00
Michael Kret
0edcdcfe85
fix(Calendly Trigger Node): Use API v2 for webhook subscriptions (#29771) 2026-05-07 11:29:34 +00:00
Jaakko Husso
a316742c92
fix(core): Gate web search tool use behind approval checks correctly (no-changelog) (#29685)
Co-authored-by: Albert Alises <albert.alises@gmail.com>
2026-05-07 11:06:51 +00:00
Matsu
ad0a6e9d46
ci: Use a configurable json file for safechain options (#29960) 2026-05-07 10:43:24 +00:00
Matsu
db0097c57f
ci: Make Chromatic visual checks non-blocking (no-changelog) (#29965)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 10:43:14 +00:00
Mike Repeć
5c7921f71c
fix(core): Filter WaitTracker to only poll waiting executions (#29898)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-07 10:10:05 +00:00
Garrit Franke
15105610f6
docs: Correct rationale for no-overrides-field ESLint rule (#29973)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 10:07:14 +00:00
Rob Hough
8474f1e6f3
fix(editor): Change read-only background color so it's visible (no-changelog) (#29971) 2026-05-07 09:58:41 +00:00
Garrit Franke
5abcae686c
feat(Strava Node): Allow custom OAuth2 scopes (#29972)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 09:57:03 +00:00
Mutasem Aldmour
1cb7c591b3
chore: Match production builder step cap in pairwise eval (#29977)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 09:53:36 +00:00
Michael Drury
ebafde7f85
feat(core): Show workflow-triggered runs in agent session history (no-changelog) (#29932)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 09:48:47 +00:00
Dawid Myslak
8f1f42d180
feat(Trello Trigger Node): Add webhook request verification (#29252) 2026-05-07 09:42:45 +00:00
Elias Meire
2dbf02e63e
fix(core): Harden axios error handling against non-string error stack (#29100) 2026-05-07 09:38:13 +00:00
Guillaume Jacquart
7fdc7788d5
test(core): Cover JWE decryption on dynamic-credential OAuth2 callback (#29808)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 09:33:55 +00:00
Dawid Myslak
acc9643811
feat(Twilio Trigger Node): Add webhook request verification (#29259) 2026-05-07 09:00:56 +00:00
Michael Kret
29a864ca9b
fix(HTTP Request Node): Validate URL type in older node versions (#29886) 2026-05-07 08:46:16 +00:00
Guillaume Jacquart
e71afedfab
fix(editor): Rename encryption keys "Type" column to "Status" (#29966)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 08:31:30 +00:00
Arvin A
6232de4d47
feat(editor): Cap eval concurrency slider at admin-set limit (#29807)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 08:31:13 +00:00
Albert Alises
273db4be75
fix(ai-builder): Validate MCP tool names and schemas (no-changelog) (#29871) 2026-05-07 08:25:04 +00:00
Albert Alises
8dd6d12918
fix(ai-builder): Improve filesystem read handling (no-changelog) (#29870) 2026-05-07 08:24:47 +00:00
Albert Alises
be90f9f873
fix(ai-builder): Use expiring Computer Use setup tokens (no-changelog) (#29872) 2026-05-07 08:24:38 +00:00
Albert Alises
5e3aa1a726
fix(ai-builder): Preserve collected planning context (#29916) 2026-05-07 08:24:00 +00:00
Michael Kret
55df7cbd06
fix(Google Chat Node): Clarify message resource name field (#29964) 2026-05-07 08:16:20 +00:00
Elias Meire
9b3b29b505
fix: Correct connect.html path in browser extension (#29714) 2026-05-07 08:11:53 +00:00
Dawid Myslak
4e2865206c
feat(Formstack Trigger Node): Add webhook request verification (#29495)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 08:07:07 +00:00
Bernhard Wittmann
68560fbb9a
refactor: Extract shared eval helpers (no-changelog) (#29800) 2026-05-07 08:05:01 +00:00
Mutasem Aldmour
34f2107071
feat(core): Accept merge.input(n) inside ifElse/switch branch targets in workflow-sdk (#29716)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Jaakko Husso <jaakko@n8n.io>
2026-05-07 07:46:06 +00:00
Mutasem Aldmour
ac993e8859
chore(core): Add CLI to print Instance AI agent prompts (no-changelog) (#29759) 2026-05-07 07:45:49 +00:00
Michael Drury
4b67c31896
feat(core): Add get_environment tool for runtime date and timezone (no-changelog) (#29930)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 07:36:09 +00:00
Dimitri Lavrenük
9255311491
feat: Use agent-browser within Computer Use (no-changelog) (#29863) 2026-05-07 07:27:49 +00:00
Matsu
d247f61096
ci: Flip order of npm releases for idempotency (#29958) 2026-05-07 06:44:17 +00:00
Dawid Myslak
3276edce10
feat(Cal Trigger Node): Add webhook request verification (#29484) 2026-05-07 05:55:23 +00:00
Dawid Myslak
e929f9fbe7
feat(Calendly Trigger Node): Add webhook request verification (#29482) 2026-05-07 05:55:20 +00:00
Dawid Myslak
a772016e36
feat(Customer.io Trigger Node): Add webhook request verification (#29480) 2026-05-07 05:55:17 +00:00
Dawid Myslak
eaadf190b8
feat(Mautic Trigger Node): Add webhook request verification (#29658)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-07 05:55:10 +00:00
Dawid Myslak
3c97c49d63
feat(Taiga Trigger Node): Add webhook request verification (#29487) 2026-05-07 05:50:38 +00:00
Dawid Myslak
12b7cc6739
feat(MailerLite Trigger Node): Add webhook request verification (#29491)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-07 05:37:51 +00:00
bjorger
1faa3b1f2a
fix(core): Stop agent builder from hallucinating LLM model ids (no-changelog) (#29922) 2026-05-07 05:11:11 +00:00
Matsu
9d3fb2ba26
ci: Exclude all monorepo packages from safechain minimum age (#29953) 2026-05-07 09:05:22 +03:00
José Braulio González Valido
2164afc5df
chore(ai-builder): Improve eval comparison alert clarity (no-changelog) (#29929)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 21:20:49 +00:00
Yuliia Pominchuk
dd812c5010
fix(core): Emit missing auth audit events for OIDC and SSO-restricted login (#29856)
2026-05-06 19:00:20 +00:00
Stephen Wright
ae57e606b4
fix(core): Initialise encryption key proxy on worker and webhook instances (#29912) 2026-05-06 18:12:48 +00:00
Rob Hough
1e52b14b99
fix(editor-ui): Fix ChatHub prompt background to surface token (no-changelog) (#29892) 2026-05-06 17:47:56 +00:00
Garrit Franke
31f577a39f
feat: Add cred-class-name-suffix lint rule (no-changelog) (#29801) 2026-05-06 16:00:49 +00:00
yehorkardash
64079ad98c
feat(core): Agents as first class entities support (no-changelog) (#28017)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Michael Drury <michael.drury@n8n.io>
Co-authored-by: Arvin A <51036481+DeveloperTheExplorer@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Arvin Ansari <arvin.ansari@n8n.io>
Co-authored-by: bjorger <50590409+bjorger@users.noreply.github.com>
Co-authored-by: Eugene <eugene@n8n.io>
Co-authored-by: Michael Drury <me@michaeldrury.co.uk>
Co-authored-by: Robin Braumann <robin.braumann@n8n.io>
Co-authored-by: Rob Hough <robhough180@gmail.com>
2026-05-06 15:44:44 +00:00
Tuukka Kantola
6b1061386e
feat(editor): Add button to open workflow from Instance AI preview (no-changelog) (#29880)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-05-06 14:22:22 +00:00
Itay
bc8d196931
fix(core): Stop logging password reset token values (#29405)
Co-authored-by: Garrit Franke <32395585+garritfra@users.noreply.github.com>
2026-05-06 14:02:22 +00:00
Ricardo Espinoza
d6cc3bedd1
feat(core): Add MCP tool to list credentials (#29438) 2026-05-06 13:42:53 +00:00
Ricardo Espinoza
60a51229e0
fix(core): Throw on bare OutputSelector passed to .add()/.to() (#29736) 2026-05-06 13:33:30 +00:00
Andreas Fitzek
04e9b258a8
fix(core): Add support for context establishment hooks in webhook mode (#29893) 2026-05-06 13:22:27 +00:00
Daria
f42be9030e
fix(core): Allow GIT_SSH_COMMAND in simple-git after 3.36.0 upgrade (#29894) 2026-05-06 13:08:25 +00:00
Matsu
de3a98f58f
ci: Apply SafeChain @n8n/* exclusion to all setup-nodejs steps (#29891)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 13:01:53 +00:00
Rob Hough
8e30d78939
refactor(editor): Migrate v2 Dropdown (no-changelog) (#29466) 2026-05-06 12:24:48 +00:00
Alex Grozav
81a621e3d8
refactor(editor): Add injectNDVStore helper and migrate consumers (no-changelog) (#29794) 2026-05-06 12:16:49 +00:00
Michael Kret
35931319b5
fix(Notion Node): Update UI URLs from notion.so to notion.com ahead of domain migration (#29861) 2026-05-06 12:10:45 +00:00
Charlie Kolb
49e7b056b4
fix(editor): Rename canvas header dropdown action to Description (#29719)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-06 12:04:00 +00:00
Tomi Turtiainen
188ee6d704
chore: Clean up min release age exclude list (no-changelog) (#29882) 2026-05-06 12:01:39 +00:00
Declan Carroll
b6cc694ef5
ci: Exclude @n8n packages from SafeChain minimum package age check (no-changelog) (#29881)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-06 11:53:23 +00:00
Ricardo Espinoza
ed9471a532
fix(ai-builder): Resolve HitlTool variants to base node in get_node_types (#29731) 2026-05-06 11:46:52 +00:00
Daria
bec74aeb4f
fix(core): Add workflow structure validation (#29699) 2026-05-06 11:42:12 +00:00
Daria
d6bae35e8f
fix(editor): Resolve expressions in 'Go to Sub-workflow' navigation (#29843) 2026-05-06 11:41:17 +00:00
Raúl Gómez Morales
a3ae1d8556
fix(editor): Suppress all toasts in Instance AI workflow preview iframe (no-changelog) (#29876) 2026-05-06 11:31:25 +00:00
Garrit Franke
8b54333739
fix(core): Lint package.json in community node tooling (no-changelog) (#29864)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 10:56:31 +00:00
Michael Kret
d63e1ae84e
fix(Google Sheets Node): Reduce duplicate API calls in append operation to avoid quota limits (#29444) 2026-05-06 10:18:18 +00:00
Elias Meire
4dce41f795
feat(core): Transform MCP server configs into dedicated MCP tools (#29493)
Co-authored-by: RomanDavydchuk <roman.davydchuk@n8n.io>
2026-05-06 10:17:43 +00:00
Jon
4d5bafc146
feat(Jira Node): Add OAuth2 (3LO) support (#29414) 2026-05-06 09:49:30 +00:00
Bernhard Wittmann
b6127d8722
feat: Add fully dynamic disclaimer to Quick Connect offer (#29852) 2026-05-06 09:39:52 +00:00
Garrit Franke
e99e6afb49
feat: Add cred-class-oauth2-naming ESLint rule for community nodes (no-changelog) (#29858)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 09:37:52 +00:00
Marc Littlemore
ff41613533
fix(editor): Refresh node icon when diff sidebar selection changes (#29816)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-06 09:23:06 +00:00
Csaba Tuncsik
9afbe13b81
feat(core): Server-side pagination, sorting, and filtering for encryption keys (#29708) 2026-05-06 09:20:14 +00:00
Devendra Reddy Pennabadi
08a36d7515
fix(editor): Preserve decimal suffix when duplicating a node (#29541)
Co-authored-by: Garrit Franke <32395585+garritfra@users.noreply.github.com>
2026-05-06 09:13:14 +00:00
mfsiega
f3a21e14a1
chore(core): Scaffold @n8n/engine package (no-changelog) (#29838)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 09:04:24 +00:00
Suguru Inoue
8aea190659
refactor(editor): Migrate workflow-related workflows store methods (#29853) 2026-05-06 08:54:38 +00:00
Csaba Tuncsik
d4e9705749
test: Add Playwright smoke spec for Vite dev-server boot (#29539) 2026-05-06 08:49:25 +00:00
Garrit Franke
701f9a4627
feat(core): Add n8n-object-validation ESLint rule for community nodes (#29698) 2026-05-06 08:36:53 +00:00
Michael Kret
46d52ffc7e
fix: Handle IMAP fetch errors to prevent instance crash and stuck workflows (#29469) 2026-05-06 08:34:41 +00:00
Iván Ovejero
80c8a6c2fd
fix(core): Fix duplicate task request on runner defer (#28315) 2026-05-06 08:32:48 +00:00
Matsu
61c8895f63
ci: Fix flaky test error assertion (#29848) 2026-05-06 08:31:07 +00:00
Raúl Gómez Morales
f2764f04c0
fix(core): Preserve node positions on AI workflow updates (#29850) 2026-05-06 08:30:10 +00:00
Albert Alises
869dc32c15
feat(ai-builder): Speed up Instance AI eval by parallelizing iterations and trimming mock handler (no-changelog) (#29839) 2026-05-06 08:15:33 +00:00
Albert Alises
a33a89a215
fix(ai-builder): Allow restoring archived workflows from Instance AI (#29813) 2026-05-06 08:15:16 +00:00
José Braulio González Valido
bbe3e2d148
feat(ai-builder): Add per-PR eval regression detection vs LangSmith baseline (#29456)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 08:15:08 +00:00
Raúl Gómez Morales
5b01cba8b2
refactor(editor): Extract per-thread runtime from Instance AI store (no-changelog) (#29773) 2026-05-06 08:13:46 +00:00
Andreas Fitzek
2714f00121
fix(core): Allow profile edits when SSO is no longer active (#29765) 2026-05-06 07:59:18 +00:00
Rob Hough
ee847d1624
fix(editor): Fix collapse/expand for Chat sidebar (#29378) 2026-05-06 07:45:51 +00:00
Tomi Turtiainen
b6ee2b93ed
refactor(core): Extract event bus startup recovery helpers (no-changelog) (#29558) 2026-05-06 07:34:12 +00:00
Ali Elkhateeb
07f6de6ba0
refactor(API): Use PublicAPIEndpoint type in all public API handlers (no-changelog) (#29752) 2026-05-06 07:32:52 +00:00
Bernhard Wittmann
57ae85785d
fix: Use /form base URL for Form Trigger production links (no-changelog) (#29766)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-05-06 07:17:56 +00:00
Albert Alises
34b92b1623
fix(core): Add workflow details to builder telemetry (no-changelog) (#29821) 2026-05-06 07:06:04 +00:00
Bernhard Wittmann
50e8218ce8
fix: Replay sub-agent conversation on credential-setup nudge (no-changelog) (#29760) 2026-05-06 06:52:42 +00:00
Garrit Franke
c4056b255e
feat(core): Add no-template-placeholders ESLint rule for community nodes (#29796) 2026-05-06 06:20:37 +00:00
Matsu
5af9d0729f
chore: Bump simple-git to 3.36.0 (#29834) 2026-05-06 06:03:49 +00:00
Jaakko Husso
82354742d3
feat(core): Use McpManagerClient and enforce whether MCP server connections are allowed (#29694)
2026-05-05 17:53:01 +00:00
Albert Alises
4d9e624b41
feat(ai-builder): Guarantee user-visible output on terminal states (#29636) 2026-05-05 16:32:45 +00:00
Konstantin Tieber
283071e611
feat(core): Add flag to import workflow cli to activate workflow on import (#29770) 2026-05-05 16:29:00 +00:00
Iván Ovejero
e2576ca25b
fix(core): Add configurable retries and error details to S3 (#28309) 2026-05-05 15:55:23 +00:00
Jon
4c369e83f2
fix(Snowflake Node): Fix issue with Insert and Update operations not working (#29339)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-05 15:47:52 +00:00
Benjamin Schroth
bd7eeb7bc8
fix(core): Skip disabled tool nodes when mapping AI Agent tool sources (#29460) 2026-05-05 15:35:13 +00:00
Jon
3a967fc041
chore: Add community PR review skill (no-changelog) (#29626) 2026-05-05 15:25:55 +00:00
Dimitri Lavrenük
ed12bcb58e
feat: Improve computer-use prompt in Instance AI (no-changelog) (#29450) 2026-05-05 14:59:30 +00:00
Matsu
bfc7775ab3
ci: Fix flaky error assertion in tests (#29798) 2026-05-05 14:55:33 +00:00
Matsu
1ca4dd3fa5
ci: Validate required pr quality checks with ci-filter (#29786) 2026-05-05 14:00:45 +00:00
Tomi Turtiainen
e3ff671448
refactor(core): Extract leader election client and improve robustness (no-changelog) (#29696) 2026-05-05 13:44:29 +00:00
Michael Kret
0cafc717a2
fix(Airtable Node): Fix typecast option dropping attachment field updates (#29556) 2026-05-05 13:24:58 +00:00
Ricardo Espinoza
fba873c37e
fix(core): Clarify 0-based indexing in workflow SDK prompts and JSDoc (#29734) 2026-05-05 13:03:47 +00:00
Matsu
c742a85b3b
chore: Update CODEOWNERS to reflect the new group name (#29788) 2026-05-05 12:36:34 +00:00
Declan Carroll
67f621519e
ci: Scope RELEASE env to editor-ui turbo task (no-changelog) (#29585)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-05 12:21:45 +00:00
Alexander Gekov
d2e1eb30f1
fix(Notion Node): Serialize staticData as ISO string in NotionTrigger (#29688)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-05 12:18:31 +00:00
Danny Martini
9c4ac76255
fix(core): Log errors from fire-and-forget test webhook deactivation (no-changelog) (#29767) 2026-05-05 12:04:16 +00:00
Iván Ovejero
a7864762ca
fix: Restore broken stdlib calls in Python Code node (#29776) 2026-05-05 11:53:14 +00:00
Charlie Kolb
d5af542f25
fix(editor): Improve sidebar new resource menu UX (#29597)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-05 11:47:40 +00:00
Matsu
6ace86e0eb
chore: Refactor and add tests for bump-versions.mjs (#29662) 2026-05-05 11:42:10 +00:00
RomanDavydchuk
78aa0e70f2
fix(Supabase Node): Don't display RPCs in an RLC for the table (#28146)
Co-authored-by: Michael Kret <88898367+michael-radency@users.noreply.github.com>
2026-05-05 11:20:33 +00:00
Raúl Gómez Morales
a408257ebe
fix(editor): Stabilize Instance AI workflow preview rendering (no-changelog) (#29408) 2026-05-05 10:55:33 +00:00
Matsu
ec514da099
ci: Fix race condition between npm releases and daytona snapshots (#29768) 2026-05-05 10:48:46 +00:00
Sudarshan Soma
0697562ac9
fix(Oracle DB Node): Handle the test failures (#28341) 2026-05-05 10:19:14 +00:00
Albert Alises
dc749e0423
refactor(core): Remove global builder node guides (#29582) 2026-05-05 09:27:00 +00:00
Garrit Franke
804f51cf0d
fix(core): Check npm provenance in community package scanner (#29667)
2026-05-05 09:26:23 +00:00
n8n-release-tag-merge[bot]
74c256c1c1 Merge tag 'n8n@2.20.0' 2026-05-05 09:42:12 +00:00
n8n-assistant[bot]
b970d259c4
🚀 Release 2.20.0 (#29761)
Co-authored-by: Matsuuu <16068444+Matsuuu@users.noreply.github.com>
2026-05-05 09:14:22 +00:00
Matsu
9ab58df394
chore: Migrate @n8n/nodes-langchain from Jest to Vitest (#28950) 2026-05-05 08:27:59 +00:00
Garrit Franke
4e0f8b5018
feat(core): Add node-operation-error-itemindex ESLint rule (no-changelog) (#29462) 2026-05-05 08:27:04 +00:00
Garrit Franke
c6c6f8ff38
feat: Add valid-credential-references ESLint rule (#29452) 2026-05-05 08:26:50 +00:00
Garrit Franke
8aace75535
feat: Add no-runtime-dependencies ESLint rule (#29366) 2026-05-05 08:26:14 +00:00
Alexander Gekov
0f7776e972
feat(editor): Hide model selector for unsupported AI Gateway actions (#29588)
Co-authored-by: Michael Kret <88898367+michael-radency@users.noreply.github.com>
2026-05-05 08:14:54 +00:00
Mike Repeć
34c49b9c23
fix(editor): Ignore paste events on read-only canvas (#29673)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-05 07:51:59 +00:00
Michael Kret
c724dace38
fix: Skip AI tool generation for community trigger nodes (#29453) 2026-05-05 07:50:52 +00:00
Rob Hough
c6cbc49016
refactor(editor): Add motion.scss utilities to standardise animations and transitions (#29704) 2026-05-05 07:34:43 +00:00
Matsu
7c0d3ccb39
ci: Ignore .md & .mdx files on check-pr-size (#29744) 2026-05-05 07:32:48 +00:00
Michael Kret
f401f9101d
fix(Microsoft Outlook Trigger Node): Use per-folder endpoints for folder-scoped message polling (#29663) 2026-05-05 07:07:35 +00:00
Michael Kret
a65e181a22
fix(Postgres Node): Output Large-Format Numbers As option ignored after pool is cached (#29477) 2026-05-05 06:50:35 +00:00
oleg
b41f1a06ab
fix(core): Defer Instance AI temporary workflow cleanup (no-changelog) (#29700)
2026-05-04 18:28:27 +00:00
Alex Grozav
17b1206790
refactor(editor): Add executionData store for per-execution state (no-changelog) (#29687)
2026-05-04 17:22:18 +00:00
moseoh
b72bd1987c
fix(DeepL Node): Update credentials to use header-based authentication (#24614)
Co-authored-by: RomanDavydchuk <roman.davydchuk@n8n.io>
2026-05-04 17:10:45 +00:00
Andreas Fitzek
4b9e975ca0
feat(editor): Surface cluster information in debug data (no-changelog) (#29583) 2026-05-04 16:27:07 +00:00
Guillaume Jacquart
ad7cdcc04f
feat(core): Add JWE decryption to OAuth2 credential flow (#29497)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-04 16:14:50 +00:00
Iván Ovejero
568e5a24bf
fix(core): Isolate expressions on chat resumption and test webhook deactivation (#29703) 2026-05-04 15:08:59 +00:00
oleg
96fabbafad
feat(instance-ai): Reuse workflow builder sandboxes (no-changelog) (#29598)
Signed-off-by: Oleg Ivaniv <me@olegivaniv.com>
2026-05-04 14:51:55 +00:00
Jaakko Husso
63d59d48c5
fix(core): Wrap web-search snippets in untrusted data boundaries (no-changelog) (#29695) 2026-05-04 14:19:25 +00:00
Ricardo Espinoza
dad423155f
fix(core): Make MCP client registration cap tunable and surface a proper limit error (#29429) 2026-05-04 13:54:59 +00:00
Mutasem Aldmour
dc6bd68de3
fix(core): Accept placeholder() inside node credentials slot (#29691)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 13:52:48 +00:00
Daria
1d9548c81f
feat(core): Add MCP tool search executions (#29161) 2026-05-04 13:41:47 +00:00
Jaakko Husso
f69aea3899
refactor(core): Use the common SSRF service on instance AI and harden web fetch (#29674) 2026-05-04 13:37:21 +00:00
Mutasem Aldmour
fdceec21b9
feat: Add pairwise workflow eval pipeline (#29123)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Jaakko Husso <jaakko@n8n.io>
2026-05-04 13:26:27 +00:00
Arvin A
4c76aa1467
feat(core): Run evaluation test cases in parallel behind PostHog rollout flag (#29412) 2026-05-04 13:18:01 +00:00
Ali Elkhateeb
e35042999f
fix(core): Add timeout to external secrets provider refresh (#29679) 2026-05-04 13:10:05 +00:00
Andreas Fitzek
45effb8959
feat(core): Add configurable event log path per process (#29403) 2026-05-04 12:49:29 +00:00
Albert Alises
2259f32de8
fix(ai-builder): Add boundaries on the workflow builder remediation loops (#29430) 2026-05-04 12:05:20 +00:00
Alex Grozav
d422d2bafb
refactor(editor): Introduce setter facades for workflow execution state (no-changelog) (#29675) 2026-05-04 11:46:38 +00:00
Michael Kret
62ddc5c443
fix(Compare Datasets Node): Preserve falsy values in mix mode except fields (#29666)
Co-authored-by: RomanDavydchuk <roman.davydchuk@n8n.io>
2026-05-04 11:42:57 +00:00
Charlie Kolb
9fda7332c4
fix(editor): Make textarea resize handle accessible in NDV (#29676)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-04 11:35:08 +00:00
Charlie Kolb
f775604c25
refactor: Split up instance-ai confirmation endpoint DTO by action (#29179) 2026-05-04 10:47:38 +00:00
Albert Alises
c28d501ba1
fix(ai-builder): Stop builder from adding auth to inbound trigger nodes by default (#29648) 2026-05-04 10:25:17 +00:00
Alexander Gekov
418f1f2edb
fix(core): Acquire expression isolate for dynamic node parameter requests (#29671)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-04 10:20:05 +00:00
Albert Alises
dc52bbd532
fix(core): Show AI Builder draft workflows in workflow list (#29670) 2026-05-04 10:15:39 +00:00
Stephen Wright
be22095646
feat(editor): Add reveal redacted data permission to custom roles execution section (#29526) 2026-05-04 09:10:12 +00:00
Luca Mattiazzi
cf8887f9ea
fix(editor): Resolve "Workflow not found" error on evaluations tab (no-changelog) (#29593) 2026-05-04 09:05:10 +00:00
Irénée
baf5bb8e91
refactor: Share SSO provisioning mode types between frontend and backend (no-changelog) (#29384)
Co-authored-by: Konstantin Tieber <46342664+konstantintieber@users.noreply.github.com>
2026-05-04 08:54:25 +00:00
uppinote
a2afc47c22
feat(editor): Add environment variable to disable workflow autosave (#25144)
Co-authored-by: Daria Staferova <daria.staferova@n8n.io>
2026-05-04 08:33:49 +00:00
Jaakko Husso
595aae498c
fix(editor): Don't paint main sidebar on top of instance AI workflow artifact NDVs (no-changelog) (#29584) 2026-05-04 08:30:39 +00:00
Jean Ibarz
9decb1e2a9
fix(Salesforce Node): Allow overriding JWT audience with My Domain URL (#29016) 2026-05-04 07:53:09 +00:00
Rob Hough
b4d898e4ae
chore: Fix skills so they work with non-Claude harnesses (#29644) 2026-05-04 07:46:03 +00:00
Rob Hough
07b53430f9
feat(editor): Add transition on Sidebar collapsed (#29650) 2026-05-04 07:45:41 +00:00
Tuukka Kantola
8c0faa27c4
feat(editor): Polish Instance AI chat list sidebar (no-changelog) (#29463)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-04 07:17:47 +00:00
Chris Z
34d7a02df7
fix(core): Reject empty webhookMethods in community lint rule (#29474) 2026-05-04 07:11:24 +00:00
Sandra Zollner
45c18fb09c
feat(core): Decouple insights pruning max age from license (#29527) 2026-05-04 07:03:47 +00:00
n8n-assistant[bot]
88b3a0b3c6
chore: Update node popularity data (#29659)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-05-04 01:04:08 +00:00
Charlie Kolb
6bca1fa26f
fix(core): Recreate data table backing tables on entity import (#29454)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 00:31:38 +00:00
Rob Hough
94bf3db438
fix(editor): Use text-color to stop color-scheme override on N8nButton (no-changelog) (#29520)
2026-05-02 16:45:15 +00:00
Stephen Wright
243f665e60
fix(editor): Fix OAuth2 credential showing "Needs first setup" after connecting (#29617)
2026-05-01 12:25:55 +00:00
mfsiega
86f47ee6dc
fix(Schedule Node): Cap day-of-month jitter at 28 (#29614)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 10:34:37 +00:00
Suguru Inoue
bdf06fa8dd
refactor(editor): Migrate whole workflow object consumers (#29395) 2026-05-01 10:04:55 +00:00
Andreas Fitzek
e17b6864be
feat(core): Add built-in cluster health checks (no-changelog) (#29506)
Co-authored-by: Stephen Wright <sjw948@gmail.com>
2026-05-01 09:28:51 +00:00
Csaba Tuncsik
56412bcce2
fix(editor): Polish encryption keys date range filter (#29569) 2026-05-01 09:03:00 +00:00
Mutasem Aldmour
9b00ccbfd1
fix: Drop template search tools from builder (#29573) 2026-05-01 08:44:21 +00:00
Stephen Wright
ee7260c495
fix(core): Wire EncryptionKeyProxy provider on bootstrap (#29581) 2026-05-01 08:37:38 +00:00
Jon
221c7f7410
fix(Notion Node): Support app.notion.com URL format for page and block ID extraction (#29554)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:25:26 +00:00
Stephen Wright
ef3c3e0f80
chore: Upgrade nock to 14.0.13 to fix Node.js 24+ compatibility (#29595) 2026-04-30 15:40:23 +00:00
Rob Hough
6698c42e4e
fix(editor): Add proper bg color for hover state with color-mix() (#29590) 2026-04-30 15:28:15 +00:00
Jaakko Husso
bd130a071f
fix(core): Make instance AI test workflows without publishing them (no-changelog) (#29557)
2026-04-30 15:04:02 +00:00
Jaakko Husso
b97ca36a99
fix(editor): Make instance ai resource link chips open resources (#29577) 2026-04-30 15:02:03 +00:00
Benjamin Schroth
90d875ce3e
fix(Anthropic Chat Model Node): Add adaptive thinking mode for Claude Opus 4.7+ (#29467) 2026-04-30 13:23:49 +00:00
Mike Repeć
a7ef7416b1
fix(editor): Restore read-only mode for archived workflows on canvas (#29559)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2026-04-30 13:22:39 +00:00
Guillaume Jacquart
473d49c9b1
feat(core): Add preAuthentication support to requestOAuth2 pipeline (#29418)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-30 12:38:03 +00:00
Bernhard Wittmann
12275c86d9
fix(Merge Node): Improve SQL Query mode memory efficiency and error reporting (#28993) 2026-04-30 12:26:38 +00:00
Declan Carroll
c04ea7fae9
ci: Bump safe-chain to v1.5.1 and use safe-mode retry (no-changelog) (#29486)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-30 12:11:35 +00:00
Albert Alises
625ed5e95a
fix(core): Harden Set node workflow SDK contract (#29568) 2026-04-30 12:10:44 +00:00
Albert Alises
661f9908bc
fix(ai-builder): Allow skipping final ask-user question (#29563) 2026-04-30 10:58:45 +00:00
Om Chimurkar
44579d6d3a
fix(editor): Fix sub-workflow folder placement and connection loss (#28770)
Co-authored-by: Charlie Kolb <charlie@n8n.io>
2026-04-30 10:49:02 +00:00
Rob Hough
cdfa7fe4da
refactor(editor): Re-style N8nTooltip (#29509)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-30 10:25:45 +00:00
Garrit Franke
f77dfd1a11
fix(editor): Surface unofficial verified community node tools in AI Tools picker (#28985) 2026-04-30 10:03:47 +00:00
Albert Alises
6175fd6f7b
fix(core): Gate Instance AI edits to pre-existing workflows (#29501) 2026-04-30 08:29:11 +00:00
lif
896461bee3
fix(core): Use editor base URL for workflow and execution links (#23630)
Signed-off-by: majiayu000 <majiayu000@gmail.com>
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Matsu <huhta.matias@gmail.com>
2026-04-30 08:25:26 +00:00
Michael Kret
83250c1710
chore: Add tests for SettingsCommunityNodesView (#29461) 2026-04-30 08:17:15 +00:00
Michael Kret
d18f183b21
fix: Allow 5-field cron expressions with step values in polling nodes (#29447)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2026-04-30 08:15:37 +00:00
Tomi Turtiainen
8b7b4f575d
fix(core): Handle missing runData during execution recovery (#29513) 2026-04-30 08:12:02 +00:00
José Braulio González Valido
e7f3e6f771
feat(ai-builder): Add three new workflow eval test cases (no-changelog) (#29351)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 08:11:04 +00:00
Alexander Gekov
5799481d1c
fix(Todoist Node): Migrate to Todoist unified API v1 endpoints (#29532) 2026-04-30 08:05:44 +00:00
Csaba Tuncsik
656f9c2d7f
feat(editor): Add data encryption keys settings page (#29068) 2026-04-30 08:02:40 +00:00
Daria
5a56459129
fix(editor): Never block publishing on node execution issues (#29479) 2026-04-30 07:54:43 +00:00
Jon
b2ac67f154
fix(Snowflake Node): Avoid call stack overflow on large result sets (#29200)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 07:53:48 +00:00
Stephen Wright
83c400e8d4
fix(editor): Show permission-aware message on redacted input/output panels (#29521)
2026-04-30 07:37:29 +00:00
Albert Alises
139b803dae
fix: Use explicit node references for AI memory session keys (#29473) 2026-04-30 07:26:36 +00:00
José Braulio González Valido
4fd68bfc99
ci(ai-builder): Parallelize Instance AI eval CI across multiple n8n containers (#29545)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 07:22:28 +00:00
Tomi Turtiainen
0dbe6c533e
refactor(core): Simplify message event bus recovery (no-changelog) (#29505) 2026-04-30 07:11:09 +00:00
Ali Elkhateeb
74d55b9c68
fix(core): Round fractional time saved values before inserting into insights BIGINT column (#29553) 2026-04-30 07:09:02 +00:00
Jaakko Husso
ef56501d47
fix(core): Force saving executions when instance AI executes WFs (#29515) 2026-04-30 06:47:32 +00:00
Declan Carroll
ab16e197a7
ci: Adjusts Docker builds for Colima compatibility (#29343) 2026-04-30 06:10:27 +00:00
Andreas Fitzek
2a0e2fb47a
fix(core): Restore peer project discovery in share dropdowns (#29537)
2026-04-29 19:45:07 +00:00
Nikhil Kuriakose
5f93b48e79
feat(editor): Update copy for mcp settings (#29399) 2026-04-29 19:21:04 +00:00
Sandra Zollner
484cb2efba
feat(core): Fix user access control logic (#29481) 2026-04-29 15:42:09 +00:00
Mike Repeć
3791db782b
fix(core): Add missing @n8n/tournament alias to Vite config (no-changelog) (#29530) 2026-04-29 15:25:20 +00:00
Svetoslav Dekov
0e07dedc08
fix: Add sequence prefix to proxy expectation recordings (no-changelog) (#29524) 2026-04-29 15:21:48 +00:00
Iván Ovejero
334ce39f65
test: Retry SSE webhook setup on 404 (#28961)
Co-authored-by: Danny Martini <danny@n8n.io>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 15:15:49 +00:00
Guillaume Jacquart
40da23f688
feat(editor): Track IdP role mapping in provisioning telemetry (#29416)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 15:12:48 +00:00
Alex Grozav
9a91c83a27
refactor(editor): Scope NDV store per workflow document (no-changelog) (#29392) 2026-04-29 15:08:36 +00:00
Jaakko Husso
594c60b497
fix(core): Make instance AI see workflow runtime error messages correctly (no-changelog) (#29371)
2026-04-29 15:06:13 +00:00
Tuukka Kantola
e075f859f9
feat(editor): Add dev-panel for DOM-annotated feedback prompts (no-changelog) (#28761)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 14:42:12 +00:00
Mutasem Aldmour
0a80722dcb
feat: Validate workflow-sdk output topology against mode (#29363)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 14:39:35 +00:00
Jaakko Husso
388cd79908
fix(core): Pass nodeTypesProvider to validate workflows fully at instance AI (#29333) 2026-04-29 14:25:20 +00:00
Jaakko Husso
84ac8110f8
fix(ai-builder): Handle properties with contradicting displayOptions as OR alternatives instead of AND (#29500) 2026-04-29 14:24:43 +00:00
phyllis-noester
c4bb5ae8df
fix(core): Persist execution context before writing to db (#28973)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 14:15:13 +00:00
RomanDavydchuk
4358f1d51c
fix(Telegram Trigger Node): Drop pending updates when creating a new webhook (#29103) 2026-04-29 13:57:52 +00:00
Dawid Myslak
1516ec7c06
feat(Netlify Trigger Node): Add webhook request verification (#29256) 2026-04-29 13:45:41 +00:00
Sandra Zollner
898ba5ae25
feat(core): Add migration for postgres variable values (#29489) 2026-04-29 13:45:24 +00:00
Charlie Kolb
d9d1e7c44a
fix(core): Respect global admin scope when listing favorites (#29472)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 13:38:17 +00:00
José Braulio González Valido
54d9286d92
fix(ai-builder): Filter LangSmith eval dataset by local file slugs (#29507)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 13:30:29 +00:00
Andreas Fitzek
794334cd79
feat: Add instance-level JWKS URI endpoint for JWE public key distribution (#29498) 2026-04-29 13:12:43 +00:00
Rob Hough
758f89c9ef
feat(editor): Move Switch component to core design system (#27322) 2026-04-29 13:11:24 +00:00
Rob Hough
5361257a80
fix(editor): Remove clipping for focus panel textarea (#28677) 2026-04-29 13:11:17 +00:00
Suguru Inoue
39a9ac2a14
refactor(editor): Migrate deprecated workflow-related methods on workflows store (#29362) 2026-04-29 13:04:19 +00:00
Tomi Turtiainen
16d1461858
fix(core): Include stack trace in error logs for non-ApplicationError errors (#29496) 2026-04-29 12:58:56 +00:00
Rob Hough
bc315d087f
fix(editor): Align message box button radius with N8nButton (#29397) 2026-04-29 12:32:08 +00:00
Julian van der Horst
4ea1153dfb
fix: Fix ollama node url path and thinking tokens (#23963)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
Co-authored-by: Dimitri Lavrenük <20122620+dlavrenuek@users.noreply.github.com>
Co-authored-by: Dimitri Lavrenük <dimitri.lavrenuek@n8n.io>
2026-04-29 12:30:03 +00:00
Andreas Fitzek
ec2e2f11dc
feat(core): Add cluster check reconciliation cycle (no-changelog) (#28936) 2026-04-29 12:06:31 +00:00
Stephen Wright
9576ab907c
feat(core): Bootstrap legacy CBC and initial GCM encryption keys on startup (#29400) 2026-04-29 11:50:59 +00:00
Irénée
05e10e2680
feat(core): Manage MCP settings via environment variables (#29368)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 11:29:18 +00:00
Benjamin Schroth
1c8f4ec67b
chore: Update langchain packages (#29342) 2026-04-29 11:23:15 +00:00
Mutasem Aldmour
308d0b42b3
feat(core): Use versioned prebuilt Daytona snapshots for Instance AI sandboxes (#29359)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 11:10:16 +00:00
Tomi Turtiainen
ecd0ba8eba
fix(core): Validate workflow import URL requests (#29178) 2026-04-29 10:52:35 +00:00
Milorad FIlipović
9cb160585c
feat(core): Broadcast workflow settings updates (#29459) 2026-04-29 10:33:53 +00:00
Marc Littlemore
a273a9d3f4
fix(editor): Load more executions on tall screens (#29407)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 10:31:58 +00:00
Jen Y.
d92ec168aa
fix(core): Fix MCP OAuth discovery URL construction and grant type selection (#27283)
Co-authored-by: yehorkardash <yehor.kardash@n8n.io>
2026-04-29 10:21:05 +00:00
Michael Kret
47a6658b2d
fix: Validate sql (#24706) 2026-04-29 10:18:10 +00:00
mfsiega
b8b75719ba
feat(core): Warn and skip on duplicate scheduled executions (#28649)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 10:16:12 +00:00
Mike Repeć
7277566c64
fix(core): Add file path validation to localFile source (#29464) 2026-04-29 10:15:25 +00:00
Declan Carroll
bfc3f88a8b
ci: Fix licensed test filter for fork PR e2e runs (#29451)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 09:24:34 +00:00
Andreas Fitzek
32dd7433b7
fix(core): Correct LDAP search filter construction (#29388) 2026-04-29 09:13:27 +00:00
Ali Elkhateeb
f5132b9e9a
feat(core): Add --include and --exclude flags to import:credentials command (#29364) 2026-04-29 09:08:17 +00:00
Eugene
a4806ce068
chore: Add protect-endpoints skill (no-changelog) (#29385)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 09:02:04 +00:00
Matsu
98e24baf64
chore: Move @n8n/tournament to monorepo (#29358) 2026-04-29 08:32:50 +00:00
Garrit Franke
ca5320a7ed
fix(core): Widen zod peer dependency range in published packages (no-changelog) (#29376) 2026-04-29 08:24:36 +00:00
Stephen Wright
569f94bb82
feat: Include updatedAt in encryption key response DTO (#29424) 2026-04-29 08:16:01 +00:00
oleg
fb65c6155e
fix(core): Generate array types for properties with multipleValues (#29410) 2026-04-29 07:54:15 +00:00
Tomi Turtiainen
328f4b8b96
fix(core): Increase default task runner grant token TTL to 30s (#29443) 2026-04-29 07:43:40 +00:00
Raúl Gómez Morales
e8a79d3f5c
feat(editor): Expand Instance AI agent step timeline by default on cloud (no-changelog) (#29446) 2026-04-29 07:40:18 +00:00
Matsu
b9a8b578c6
chore: Gitignore .claude/worktrees (#29440)
2026-04-29 05:48:04 +00:00
Ricardo Espinoza
4ae0322ef2
fix(core): Add GET handler to MCP endpoint for Streamable HTTP spec compliance (#28787) 2026-04-28 22:08:57 +00:00
Ricardo Espinoza
2beb0062a5
fix(editor): Mark workflow dirty after debug pinData changes (#28886) 2026-04-28 22:07:53 +00:00
Declan Carroll
d461ec3e9b
test: Use identity-based assertion in node search test (no-changelog) (#29426)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-28 19:16:30 +00:00
Luca Mattiazzi
ded94a5124
fix(Simple Memory Node): Scope memory nodes session id to single memory node (no-changelog) (#28675) 2026-04-28 16:32:14 +00:00
Bernhard Wittmann
c2749768aa
fix(Google Drive Node): Resolve original file name when copying with empty name (#28896) 2026-04-28 15:13:49 +00:00
oleg
ad359b5e2c
feat(instance-ai): Orchestrator-executed checkpoint tasks for planned workflow verification (#29049)
Signed-off-by: Oleg Ivaniv <me@olegivaniv.com>
2026-04-28 14:58:49 +00:00
Milorad FIlipović
0d907d6794
feat(core): Add endpoint to toggle mcp access for multiple workflows (#29007) 2026-04-28 14:25:39 +00:00
Guillaume Jacquart
e90397627d
feat(core): Add instance-level JWE key infrastructure (no-changelog) (#29071)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-28 14:25:05 +00:00
Rob Hough
c65fa28e1c
fix(editor): Keep publish actions menu enabled for published workflows (#29396) 2026-04-28 13:56:35 +00:00
Alex Grozav
39154b9037
refactor(editor): Move node issues to workflow document store (no-changelog) (#29390) 2026-04-28 13:53:59 +00:00
Bernhard Wittmann
e04f027b5d
fix(Zammad Node): Add To and CC fields for email articles (#28860) 2026-04-28 13:16:45 +00:00
Jon
aa0daf9fb6
feat(Slack Node): Allow users to configure OAuth2 scopes (#28728)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-28 12:30:40 +00:00
Iván Ovejero
7722023abd
fix(core): Reset Redis retry counter on successful reconnect (#29377) 2026-04-28 12:07:54 +00:00
Guillaume Jacquart
8551b1b90c
fix(core): Apply credential allowed domains in declarative node requests (#29082)
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-28 12:04:27 +00:00
Tomi Turtiainen
3f350a8577
fix(core): Make task runner grant token TTL configurable (#29357) 2026-04-28 12:04:02 +00:00
Declan Carroll
16a36186f2
ci: Tighten n8n testcontainer wait strategy and add sequential service start (#29352)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 11:36:03 +00:00
Tomi Turtiainen
7bd3532f07
fix(core): Fix code node executions hanging when idle timer overlaps with task acceptance (#29239) 2026-04-28 11:07:43 +00:00
n8n-release-tag-merge[bot]
d6618f3c68 Merge tag 'n8n@2.19.0' 2026-04-28 10:25:56 +00:00
Rob Hough
d332fe9c84
refactor(editor): Align @n8n/design-system with DS3 (no-changelog) (#28428) 2026-04-28 09:53:05 +00:00
Michael Kret
47ad39777f
fix: No Credits state for n8n Connect badge (#29375) 2026-04-28 09:47:36 +00:00
Stephen Wright
258b9703c5
refactor: Migrate all cipher call sites to encryptV2/decryptV2 (#29096) 2026-04-28 09:24:01 +00:00
Matsu
6ec8144914
chore: Enable OxLint in editor-ui (#29360) 2026-04-28 09:22:05 +00:00
Suguru Inoue
eb053180b0
refactor(editor): Migrate allNodes in workflow store (#29070) 2026-04-28 09:11:32 +00:00
3367 changed files with 268143 additions and 60920 deletions

@@ -1,62 +0,0 @@
# Design System Style Review Rules
Use these rules when reviewing CSS/SCSS/Vue style changes, especially in
`packages/frontend/` and `packages/frontend/@n8n/design-system/`.
## 1) Token source priority
Prefer this order when choosing visual values:
1. Semantic tokens from
`packages/frontend/@n8n/design-system/src/css/_tokens.scss`
2. Primitives from
`packages/frontend/@n8n/design-system/src/css/_primitives.scss`
3. Hard-coded values only when no suitable token exists
If no token exists, request a short rationale in the PR.
## 2) Hard-coded visual values
Flag hard-coded visual values and suggest token alternatives. This includes:
- Colors (`#fff`, `rgb()`, `hsl()`, `oklch()`)
- Spacing and sizing (`px`, `rem`, numeric layout constants in styles)
- Radius, border widths/styles, and shadows
- Typography values (font size, weight, line-height)
- Motion values (durations and easing like `cubic-bezier(...)`)
Severity: strong warning (expected migration to tokens/primitives when possible).
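For instance, a review comment under this rule might suggest a change like the following sketch (the spacing and radius token names are illustrative; check `_tokens.scss` for the exact variables):

```css
/* Flagged: hard-coded visual values */
.card {
  color: #666;
  padding: 12px;
  border-radius: 4px;
}

/* Suggested: semantic tokens (names illustrative) */
.card {
  color: var(--text-color--subtle);
  padding: var(--spacing--sm);
  border-radius: var(--radius);
}
```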
## 3) Legacy token usage
In `_tokens.scss`, the compatibility section labeled
"Legacy tokens (kept for compatibility)" is considered legacy usage.
When new or modified code uses these legacy token families, flag it as a
migration opportunity and recommend semantic token usage where available.
Severity: strong warning (discourage new usage, allow compatibility paths).
## 4) Deprecated style and component surfaces
Flag new usage of deprecated/legacy style surfaces in design-system components,
for example:
- `Button.legacy.scss` and legacy button override classes
- Legacy button variants/types (for example `highlight`, `highlight-fill`)
- Legacy component variants that exist for compatibility (for example legacy
tabs variant)
Severity: strong warning (prefer modern semantic variants/components).
## 5) Token substitution changes
If a PR changes one token reference to another (for example
`--text-color` -> `--text-color--subtle`), flag it as a soft warning.
Ask for intent in the PR description/comment:
- Intentional design adjustment, or
- Potential accidental visual regression
Do not treat token substitution as a hard failure by default.

.agents/skills Symbolic link
@@ -0,0 +1 @@
../.claude/plugins/n8n/skills

@@ -0,0 +1,150 @@
---
description: >-
Checks if a community pull request is ready for human review. Verifies CLA
signature, PR title format, description completeness, test coverage, and
cubic-dev-ai issues. Use when given a PR number or branch name to review,
or when the user says /community-pr-review, /pr-review, or asks to check if
a PR is ready for review.
allowed-tools: Bash(gh:*), Bash(git:*), Read, Glob, Grep
---
# Community PR Review
Given a PR number or branch name, determine whether it is ready for human review.
## Steps
### 1. Resolve the PR
If given a branch name, find the PR number first:
```bash
gh pr view <branch> --repo n8n-io/n8n --json number --jq .number
```
### 2. Fetch PR data
```bash
gh pr view <number> --repo n8n-io/n8n \
--json number,title,body,author,headRefName,headRefOid,files,isDraft,state
```
Fetch in parallel:
```bash
# CLA commit status (primary signal) — statuses are newest-first; use the first returned entry
gh api --paginate "repos/n8n-io/n8n/commits/<headRefOid>/statuses" \
--jq '[.[] | select(.context == "license/cla") | {state, description}] | first'
# CLAassistant issue comment (fallback when no commit status) — use the last returned entry
gh api --paginate "repos/n8n-io/n8n/issues/<number>/comments" \
--jq '[.[] | select(.user.login == "CLAassistant") | .body] | last'
# cubic-dev-ai PR review comments (streamed so results concatenate cleanly across pages)
gh api --paginate "repos/n8n-io/n8n/pulls/<number>/comments" \
--jq '.[] | select(.user.login == "cubic-dev-ai[bot]") | {body: .body, path: .path}'
```
### 3. Run the five checks
#### A. CLA signed
Check the `license/cla` commit status first; fall back to the CLAassistant comment if no status exists.
**Commit status** (`context == "license/cla"`):
- `state: "success"` → ✅ signed
- `state: "failure"` or `state: "error"` → ❌ not signed
- `state: "pending"` → ⏳ pending
- Not present → fall back to comment
**CLAassistant issue comment** (fallback):
- Body contains `"All committers have signed the CLA."` → ✅ signed
- Body contains `"not signed"` or a link to sign → ❌ not signed
- No comment → ❌ treat as not signed
#### B. PR title format
For all types except `revert`, the title must match:
```
^(feat|fix|perf|test|docs|refactor|build|ci|chore)(\([a-zA-Z0-9 ]+( Node)?\))?!?: [A-Z].+[^.]$
```
For `revert` titles, the summary is the original commit header (which starts with a lowercase type), so capitalization is not enforced:
```
^revert(\([a-zA-Z0-9 ]+( Node)?\))?!?: .+[^.]$
```
- Type must be one of: `feat fix perf test docs refactor build ci chore revert`
- Scope is optional, in parentheses e.g. `(editor)` or `(Slack Node)`
- Breaking changes: `!` before the colon
- Summary: starts with capital letter (lowercase allowed for `revert:`), no trailing period
- No Linear ticket IDs in the title (e.g. `N8N-1234`)
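As a quick sanity check, the non-revert pattern can be exercised with `grep -E` (the sample title below is hypothetical, used only for illustration):

```shell
# Hypothetical sample title; the pattern is the non-revert regex above.
title='fix(editor): Align message box button radius with N8nButton'
pattern='^(feat|fix|perf|test|docs|refactor|build|ci|chore)(\([a-zA-Z0-9 ]+( Node)?\))?!?: [A-Z].+[^.]$'
if printf '%s\n' "$title" | grep -Eq "$pattern"; then
  echo "title ok"
else
  echo "title invalid"
fi
```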
#### C. PR description completeness
1. **Summary** (`## Summary`) — must have non-empty content below the heading (not just the HTML comment).
2. **Related tickets** (`## Related Linear tickets, Github issues, and Community forum posts`) — acceptable content: a URL (`http`), a GitHub closing keyword (`closes #N`, `fixes #N`, `resolves #N`, etc.), or empty. Only flag if the section heading is missing entirely.
3. **Checklist** (`## Review / Merge checklist`) — all four items must be present. Unchecked checkboxes are expected for community PRs; do **not** flag them as missing.
#### D. Tests
Skip this check if the PR type (from the title) is `docs`, `ci`, `chore`, or `build`.
Otherwise:
1. Identify source files changed: non-test files under `packages/` from the `files` list.
2. If there are source file changes, check out the PR in a temporary worktree:
```bash
git fetch origin pull/<number>/head:pr/<number>
git worktree add /tmp/pr-<number>-review pr/<number>
```
3. Read the changed source files from the worktree to understand whether the changes introduce logic that warrants tests (new functions, bug fixes, behaviour changes, data transformations). Pure config changes, type-only changes, and trivial renames do not require tests.
4. Look for matching test files (`*.test.ts`, `*.spec.ts`, files inside `__tests__/`) among the changed files.
5. **Always clean up the worktree**, even if a previous check failed:
```bash
git worktree remove /tmp/pr-<number>-review --force
git branch -D pr/<number>
```
Report:
- ✅ Tests present, or change does not require tests
- ❌ Source logic changed but no test files found
#### E. cubic-dev-ai issues
Review the PR review comments fetched in step 2. `cubic-dev-ai[bot]` leaves comments for every issue it finds.
- No comments from `cubic-dev-ai[bot]`, or every comment explicitly states no issues were found → ✅
- Any other comment → ❌ report the total count and priority breakdown (e.g. "3 issues: 1× P1, 1× P2, 1× P3")
### 4. Output
Always output valid JSON in this exact shape:
```json
{
"readyForReview": <true if all passing checks allow merge, false otherwise>,
"messageForUser": "<Human-readable summary of what needs to change, written as if posted directly to the PR contributor. 'N/A' if nothing is needed.>",
"checks": {
"CLA": <true if signed, false if not signed or pending>,
"Title": <true if title matches convention, false otherwise>,
"Description": <true if all three template sections are complete, false otherwise>,
"TestsNeeded": <true if the code changes require tests, false if not applicable>,
"TestsIncluded": <true if test files are present in the PR, false otherwise>,
"CubicIssues": <true if cubic-dev-ai raised issues, false if no issues>
}
}
```
`readyForReview` is `true` only when: `CLA`, `Title`, and `Description` are all `true`; `CubicIssues` is `false`; and either `TestsNeeded` is `false` or `TestsIncluded` is `true`.
`messageForUser` should be a short, friendly message directed at the contributor listing exactly what they need to address. If `readyForReview` is `true`, set it to `"N/A"`.
Output nothing other than the JSON block.
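The readiness rule can be sketched in plain bash (the check values below are hypothetical results for a single PR):

```shell
# Hypothetical check results for one PR
CLA=true; Title=true; Description=true
CubicIssues=false; TestsNeeded=true; TestsIncluded=true

readyForReview=false
if [ "$CLA" = true ] && [ "$Title" = true ] && [ "$Description" = true ] \
  && [ "$CubicIssues" = false ] \
  && { [ "$TestsNeeded" = false ] || [ "$TestsIncluded" = true ]; }; then
  readyForReview=true
fi
echo "$readyForReview"
```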
## Notes
- Draft PRs — report all findings but note the PR is a draft.
- If the PR is already merged or closed, say so and skip the checks.
- Always remove the worktree even if earlier checks failed.

@@ -1,4 +1,5 @@
---
name: n8n:content-design
description: >
Product content designer for UI copy. Use when writing, reviewing, or auditing
user-facing text: button labels, error messages, tooltips, empty states, modal copy,

@@ -1,4 +1,5 @@
---
name: n8n:conventions
description: Quick reference for n8n patterns. Full docs /AGENTS.md
---

@@ -1,4 +1,5 @@
---
name: n8n:create-community-node-lint-rule
description: >-
Create new ESLint rules for the @n8n/eslint-plugin-community-nodes package.
Use when adding a lint rule, creating a community node lint, or working on

@@ -1,4 +1,5 @@
---
name: n8n:create-issue
description: Create Linear tickets or GitHub issues following n8n conventions. Use when the user asks to create a ticket, file a bug, open an issue, or says /create-issue.
argument-hint: "[linear|github] <description of the issue>"
compatibility:

@@ -1,4 +1,5 @@
---
name: n8n:create-pr
description: Creates GitHub pull requests with properly formatted titles that pass the check-pr-title CI validation. Use when creating PRs, submitting changes for review, or when the user says /pr or asks to create a pull request.
allowed-tools: Bash(git:*), Bash(gh:*), Read, Grep, Glob
---

@@ -1,4 +1,5 @@
---
name: n8n:create-skill
description: >-
Guides users through creating effective Agent Skills. Use when the user wants to
create, write, or author a new skill, or asks about skill structure, best

@@ -0,0 +1,33 @@
---
name: n8n:design-system
description: Guidelines on using Design System styles and components. Use when working on .vue files in packages/frontend. Triggers for tasks that include component architecture, styling, UI changes, or feature work.
---
# Design System
Comprehensive guide for building, styling, and using components in the frontend.
## When to Apply
Reference these guidelines when:
- Working on `.{vue|css|scss}` files in `packages/frontend`
- Adding new components to `packages/frontend/@n8n/design-system`
- Refactoring styles for Vue components
- Implementing new UI components or features
- Reviewing changes to UI
## Rules
- Follow guidelines in `packages/frontend/@n8n/design-system/src/styleguide/*.mdx`
- ALWAYS use CSS variables for styles from `packages/frontend/@n8n/design-system/src/css/_tokens.scss` or `packages/frontend/@n8n/design-system/src/css/_primitives.scss`. Use hard-coded values only when no suitable token exists.
- ALWAYS prefer using existing components from `packages/frontend/@n8n/design-system/src/components`. Prefer components that aren't marked `@deprecated`.
- Use `light-dark()` when alternating colors for light/dark mode
- When working with animations or transitions, ALWAYS prefer using mixins from `packages/frontend/@n8n/design-system/src/css/mixins/motion.scss`
- When reviewing animations, follow the guides in `rules/web-animation-guidelines.md`
- When reviewing UI changes or adding new components, follow `rules/web-interface-guidelines.md`
## Examples
- "Add a modal dialog for confirming workflow deletion" → Use `N8nDialog`
- "Add a dropdown to select workflow status" → Use `N8nDropdown` or `N8nSelect`
- "Add button with + icon to add new item" → Use `N8nButton` with the `iconOnly` prop and `N8nIcon`, wrap it in `N8nTooltip`, and set a proper aria-label
- "Add a destructive action button" → Use `N8nButton` with `variant="destructive"`
- "Make background color white/black" → Use `var(--background--surface)`, which resolves to white in light mode and black in dark mode
- "Animate the title in gracefully" → Use the `fade-in-up` mixin from `motion.scss` with `var(--duration--base)`
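A minimal sketch of how these rules combine in a component style block (`--radius` and the neutral color tokens are illustrative; check `_tokens.scss` for the exact variables):

```css
/* Sketch only: some token names assumed, not confirmed in _tokens.scss */
.workflow-card {
  /* semantic tokens instead of hard-coded values */
  background: var(--background--surface);
  border-radius: var(--radius);
  /* light-dark() for colors that alternate between modes */
  border-color: light-dark(var(--color--neutral-200), var(--color--neutral-700));
  /* motion: keep durations on tokens, animate transform/opacity */
  transition: opacity var(--duration--base) ease-out;
}
```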

@@ -0,0 +1,93 @@
# Web Motion Guidelines
Design and implement web animations that feel natural and purposeful.
## Timing and Duration
### Duration Guidelines
| Element Type | Duration |
| --------------------------------- | --------- |
| Micro-interactions | 100-150ms |
| Standard UI (tooltips, dropdowns) | 150-250ms |
| Modals, drawers | 200-300ms |
**Rules:**
- UI animations should stay under 300ms
- Larger elements animate slower than smaller ones
- Exit animations can be ~20% faster than entrance
- Match duration to distance - longer travel = longer duration
### The Frequency
Determine how often users will see the animation:
- **100+ times/day** → No animation (or drastically reduced)
- **Occasional use** → Standard animation
- **Rare/first-time** → Can be more special
**Example:** Raycast never animates because users open it hundreds of times a day.
## When to Animate
**Do animate:**
- Enter/exit transitions for spatial consistency
- State changes that benefit from visual continuity
- Responses to user actions (feedback)
- Rarely-used interactions where delight adds value
**Don't animate:**
- Keyboard-initiated actions
- Hover effects on frequently-used elements
- Anything users interact with 100+ times daily
- When speed matters more than smoothness
## Performance
Prefer animating `transform` and `opacity`. These properties can be composited, skipping the layout and paint stages and running on the GPU.
**Avoid animating:**
- `padding`, `margin`, `height`, `width` (trigger layout)
- `blur` filters above 20px (expensive, especially Safari)
- CSS variables in deep component trees
### Optimization Techniques
```css
/* Force GPU acceleration */
.animated-element {
will-change: transform;
}
```
## Practical Tips
Quick reference for common scenarios. See [PRACTICAL-TIPS.md](PRACTICAL-TIPS.md) for detailed implementations.
| Scenario | Solution |
| ------------------------------- | ----------------------------------------------- |
| Make buttons feel responsive | Add `transform: scale(0.97)` on `:active` |
| Element appears from nowhere | Start from `scale(0.95)`, not `scale(0)` |
| Shaky/jittery animations | Add `will-change: transform` |
| Hover causes flicker | Animate child element, not parent |
| Popover scales from wrong point | Set `transform-origin` to trigger location |
| Sequential tooltips feel slow | Skip delay/animation after first tooltip |
| Small buttons hard to tap | Use 44px minimum hit area (pseudo-element) |
| Something still feels off | Add subtle blur (under 20px) to mask it |
| Hover triggers on mobile | Use `@media (hover: hover) and (pointer: fine)` |
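
One common way to get the 44px minimum hit area from the table without changing the visual size is a pseudo-element (a generic sketch, not an n8n-specific utility):

```css
/* Expand the tap target while keeping the rendered button size unchanged */
.icon-button {
  position: relative;
}
.icon-button::before {
  content: '';
  position: absolute;
  /* Center a 44x44px hit area over the button */
  top: 50%;
  left: 50%;
  width: 44px;
  height: 44px;
  transform: translate(-50%, -50%);
}
```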
## Easing Decision Flowchart
Is the element entering or exiting the viewport?
├── Yes → ease-out
└── No
├── Is it moving/morphing on screen?
│ └── Yes → ease-in-out
└── Is it a hover change?
├── Yes → ease
└── Is it constant motion?
├── Yes → linear
└── Default → ease-out
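
In CSS, those branches map to timing functions roughly like this (selectors are illustrative):

```css
/* Entering/exiting the viewport → ease-out */
.menu-enter { transition: opacity 200ms ease-out; }

/* Moving or morphing on screen → ease-in-out */
.panel-move { transition: transform 250ms ease-in-out; }

/* Hover change → ease */
.button:hover { transition: background-color 150ms ease; }

/* Constant motion → linear */
.spinner { animation: spin 1s linear infinite; }
@keyframes spin {
  to { transform: rotate(360deg); }
}
```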

View File

@ -0,0 +1,98 @@
# Web Interface Guidelines
<!-- credit to https://github.com/raunofreiberg/interfaces -->
This document outlines a non-exhaustive list of details that make a good (web) interface. It is a living document, periodically updated based on learnings. Some of these may be subjective, but most apply to all websites.
The [WAI-ARIA](https://www.w3.org/TR/wai-aria-1.1/) spec is deliberately not duplicated in this document. However, some accessibility guidelines may be pointed out. Contributions are welcome. Edit [this file](https://github.com/raunofreiberg/interfaces/blob/main/README.md) and submit a pull request.
## Interactivity
- Clicking the input label should focus the input field
- Inputs should be wrapped with a `<form>` to submit by pressing Enter
- Inputs should have an appropriate `type` like `password`, `email`, etc.
- Inputs should disable `spellcheck` and `autocomplete` attributes most of the time
- Inputs should leverage HTML form validation by using the `required` attribute when appropriate
- Input prefix and suffix decorations, such as icons, should be absolutely positioned on top of the text input with padding, not next to it, and trigger focus on the input
- Toggles should immediately take effect, not require confirmation
- Buttons should be disabled after submission to avoid duplicate network requests
- Interactive elements should disable `user-select` for inner content
- Decorative elements (glows, gradients) should disable `pointer-events` to not hijack events
- Interactive elements in a vertical or horizontal list should have no dead areas between elements; instead, increase their `padding`
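
The last point can be sketched in plain CSS (class names are hypothetical):

```css
/* No dead zones between list items: grow the clickable area with padding,
   not margin, so the pointer never falls between interactive rows */
.menu-item {
  padding: 8px 12px;
  margin: 0;
}
```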
## Typography
- Fonts should have `-webkit-font-smoothing: antialiased` applied for better legibility
- Fonts should have `text-rendering: optimizeLegibility` applied for better legibility
- Fonts should be subset based on the content, alphabet or relevant language(s)
- Font weight should not change on hover or selected state to prevent layout shift
- Font weights below 400 should not be used
- Medium sized headings generally look best with a font weight between 500-600
- Adjust values fluidly by using CSS [`clamp()`](https://developer.mozilla.org/en-US/docs/Web/CSS/clamp), e.g. `clamp(48px, 5vw, 72px)` for the `font-size` of a heading
- Where available, tabular figures should be applied with `font-variant-numeric: tabular-nums`, particularly in tables or when layout shifts are undesirable, like in timers
- Prevent text resizing unexpectedly in landscape mode on iOS with `-webkit-text-size-adjust: 100%`
## Motion
- Switching themes should not trigger transitions and animations on elements [^1]
- Animation duration should not be more than 200ms for interactions to feel immediate
- Animation values should be proportional to the trigger size:
- Don't animate dialog scale in from 0 → 1, fade opacity and scale from ~0.8
- Don't scale buttons on press from 1 → 0.8, but ~0.96, ~0.9, or so
- Actions that are frequent and low in novelty should avoid extraneous animations: [^2]
- Opening a right click menu
- Deleting or adding items from a list
- Hovering trivial buttons
- Looping animations should pause when not visible on the screen to offload CPU and GPU usage
- Use `scroll-behavior: smooth` for navigating to in-page anchors, with an appropriate offset
## Touch
- Hover states should not be visible on touch press, use `@media (hover: hover)` [^3]
- Font size for inputs should not be smaller than 16px to prevent iOS zooming on focus
- Inputs should not auto focus on touch devices as it will open the keyboard and cover the screen
- Apply `muted` and `playsinline` to `<video />` tags to auto play on iOS
- Disable `touch-action` for custom components that implement pan and zoom gestures to prevent interference from native behavior like zooming and scrolling
- Disable the default iOS tap highlight with `-webkit-tap-highlight-color: rgba(0,0,0,0)`, but always replace it with an appropriate alternative
## Optimizations
- Large `blur()` values for `filter` and `backdrop-filter` may be slow
- Scaling and blurring filled rectangles will cause banding, use radial gradients instead
- Sparingly enable GPU rendering with `transform: translateZ(0)` for unperformant animations
- Toggle `will-change` on unperformant scroll animations for the duration of the animation [^4]
- Auto-playing too many videos on iOS will choke the device, pause or even unmount off-screen videos
- Bypass React's render lifecycle with refs for real-time values that can commit to the DOM directly [^5]
- [Detect and adapt](https://github.com/GoogleChromeLabs/react-adaptive-hooks) to the hardware and network capabilities of the user's device
## Accessibility
- Disabled buttons should not have tooltips, they are not accessible [^6]
- Focusable elements in a sequential list should be navigable with <kbd>↑</kbd> <kbd>↓</kbd>
- Focusable elements in a sequential list should be deletable with <kbd>⌘</kbd> <kbd>Backspace</kbd>
- To open immediately on press, dropdown menus should trigger on `mousedown`, not `click`
- Use an SVG favicon with a style tag that adheres to the system theme based on `prefers-color-scheme`
- Icon only interactive elements should define an explicit `aria-label`
- Tooltips triggered by hover should not contain interactive content
- Images should always be rendered with `<img>` for screen readers and ease of copying from the right click menu
- Illustrations built with HTML should have an explicit `aria-label` instead of announcing the raw DOM tree to people using screen readers
- Gradient text should unset the gradient on `::selection` state
- When using nested menus, use a "prediction cone" to prevent the pointer from accidentally closing the menu when moving across other elements.
## Design
- Optimistically update data locally and roll back on server error with feedback
- Authentication redirects should happen on the server before the client loads to avoid janky URL changes
- Style the document selection state with `::selection`
- Display feedback relative to its trigger:
- Show a temporary inline checkmark on a successful copy, not a notification
- Highlight the relevant input(s) on form error(s)
- Empty states should prompt to create a new item, with optional templates
[^1]: Switching between dark mode or light mode will trigger transitions on elements that are meant for explicit interactions like hover. We can [disable transitions temporarily](https://paco.me/writing/disable-theme-transitions) to prevent this. For Next.js, use [next-themes](https://github.com/pacocoursey/next-themes) which prevents transitions out of the box.
[^2]: This is a matter of taste but some interactions just feel better with no motion. For example, the native macOS right click menu only animates out, not in, due to the frequent usage of it.
[^3]: Most touch devices on press will temporarily flash the hover state, unless explicitly only defined for pointer devices with [`@media (hover: hover)`](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/hover).
[^4]: Use [`will-change`](https://developer.mozilla.org/en-US/docs/Web/CSS/will-change) as a last resort to improve performance. Pre-emptively throwing it on elements for better performance may have the opposite effect.
[^5]: This might be controversial but sometimes it can be beneficial to manipulate the DOM directly. For example, instead of relying on React re-rendering on every wheel event, we can track the delta in a ref and update relevant elements directly in the callback.
[^6]: Disabled buttons do not appear in tab order in the DOM so the tooltip will never be announced for keyboard users and they won't know why the button is disabled.
[^7]: As of 2023, Safari will not take the border radius of an element into account when defining custom outline styles. [Safari 16.4](https://developer.apple.com/documentation/safari-release-notes/safari-16_4-release-notes) has added support for `outline` following the curve of border radius. However, keep in mind that not everyone updates their OS immediately.

View File

@ -1,4 +1,5 @@
---
name: n8n:linear-issue
description: Fetch and analyze Linear issue with all related context. Use when starting work on a Linear ticket, analyzing issues, or gathering context about a Linear issue.
argument-hint: "[issue-id]"
compatibility:

View File

@ -1,4 +1,5 @@
---
name: n8n:loom-transcript
description: Fetch and display the full transcript from a Loom video URL. Use when the user wants to get or read a Loom transcript.
argument-hint: [loom-url]
---
@ -101,4 +102,4 @@ Format and present the full transcript to the user:
- No authentication or cookies are required — Loom's transcript API is publicly accessible.
- Only English transcripts are available through this API.
- Transcripts are auto-generated and may contain minor errors.
- Transcripts are auto-generated and may contain minor errors.

View File

@ -1,4 +1,5 @@
---
name: n8n:node-add-oauth
description: Add OAuth2 credential support to an existing n8n node — creates the credential file, updates the node, adds tests, and keeps the CLI constant in sync. Use when the user says /node-add-oauth.
argument-hint: "[node-name] [optional: custom-scopes flag or scope list]"
---

View File

@ -0,0 +1,139 @@
---
name: n8n:protect-endpoints
description: Applies n8n's RBAC scope decorators to REST endpoints. Use when creating a new @RestController, adding any @Get/@Post/@Put/@Patch/@Delete route to an existing controller, or reviewing endpoint authorization. Every authenticated endpoint must be gated by @ProjectScope or @GlobalScope.
---
# Protect REST endpoints with RBAC
**Rule:** every authenticated route on a `@RestController` MUST carry an access-scope decorator. If you add a route without one, the IDOR/permission bypass is on you.
## Decision
```
URL has :projectId → @ProjectScope('<resource>:<op>')
URL has no project → @GlobalScope('<resource>:<op>')
skipAuth: true → no decorator + comment explaining alternate auth
```
`@ProjectScope` succeeds if the user has the scope **globally OR in the project named in the URL**. `@GlobalScope` ignores project relations entirely.
Both decorators come from `@n8n/decorators`. The middleware lives in `packages/cli/src/controller.registry.ts` (`createScopedMiddleware`) and resolves access via `userHasScopes` in `packages/cli/src/permissions.ee/check-access.ts`.
## Apply the decorator
```ts
import { Delete, Get, Patch, Post, ProjectScope, RestController } from '@n8n/decorators';
@RestController('/projects/:projectId/widgets')
export class WidgetsController {
	@Post('/')
	@ProjectScope('widget:create') // create
	async create(...) { ... }

	@Get('/:widgetId')
	@ProjectScope('widget:read') // read one
	async get(...) { ... }

	@Get('/')
	@ProjectScope('widget:list') // list
	async list(...) { ... }

	@Patch('/:widgetId')
	@ProjectScope('widget:update') // update
	async update(...) { ... }

	@Delete('/:widgetId')
	@ProjectScope('widget:delete') // delete
	async delete(...) { ... }
}
```
Conventions:
- One decorator per route, placed directly under the HTTP-method decorator.
- Use the most specific scope that fits. Reuse `*:update` for state-changing actions like `publish`/`unpublish`/`build` unless the resource needs to gate them separately (see `workflow:publish` for the precedent).
- Routes without `:projectId` and not global-only operations are usually a design smell — flag it.
## When the scope doesn't exist yet
Add the resource and ops in `packages/@n8n/permissions/`:
1. **`src/constants.ee.ts`** — add to `RESOURCES` (alphabetical):
```ts
widget: [...DEFAULT_OPERATIONS, 'execute'] as const,
```
The `Scope` union (`<resource>:<op>` template-literal type) auto-derives.
2. **`src/scope-information.ts`** — add a display name + description per scope.
3. **`src/roles/scopes/project-scopes.ee.ts`** — add to project roles. Match the `workflow` precedent unless product says otherwise:
- `REGULAR_PROJECT_ADMIN_SCOPES`, `PERSONAL_PROJECT_OWNER_SCOPES`, `PROJECT_EDITOR_SCOPES` → all CRUDL+execute scopes.
- `PROJECT_VIEWER_SCOPES` → read/list/execute only.
- `PROJECT_CHAT_USER_SCOPES` → execute only (if applicable).
4. **`src/roles/scopes/global-scopes.ee.ts`** — add to `GLOBAL_OWNER_SCOPES` (admin inherits via `concat()`). Do **not** add to member/chat-user globals — they get scopes via project relations.
5. **Personal-space publishing**: if you add a `<resource>:publish` scope, also append it to `PERSONAL_SPACE_PUBLISHING_SETTING.scopes` in `constants.ee.ts` so personal-owner gating matches `workflow:publish`.
6. **Frontend wiring** — three files in the editor; skipping any of them means the new scopes will not appear in the project-role configuration UI:
- `packages/frontend/editor-ui/src/app/stores/rbac.store.ts` — add `<resource>: {}` to `scopesByResourceId` (typecheck will fail otherwise).
- `packages/frontend/editor-ui/src/features/project-roles/projectRoleScopes.ts` — add the resource to `UI_OPERATIONS` (operations to render in the permissions matrix, in display order) **and** to `SCOPE_TYPES` (the order the resource group appears on the page).
- `packages/frontend/@n8n/i18n/src/locales/en.json` — add `projectRoles.<resource>:<op>` (column label) and `projectRoles.<resource>:<op>.tooltip` (hover description) for every op, plus `projectRoles.type.<resource>` (the group header).
7. **Snapshot** — update `packages/@n8n/permissions/src/__tests__/__snapshots__/scope-information.test.ts.snap` to include the new `<resource>:*` entries.
No DB migration needed — `AuthRolesService.init()` syncs scopes/roles on every startup. Custom team roles created in the UI are **not** auto-updated; mention this in the PR description.
## Public / unauthenticated routes
`{ skipAuth: true }` skips the auth middleware → `req.user` is undefined → adding `@ProjectScope` would 401 every call. Public routes (third-party webhooks, signed callbacks) must:
1. **Omit the scope decorator.**
2. Authenticate via signature/HMAC verification inside the handler (or another route-specific mechanism).
3. Carry a comment explaining why no scope is applied, so the next reviewer doesn't try to "fix" it.
Example:
```ts
// Third-party webhook callback: do not add @ProjectScope. Auth happens
// via per-platform signature verification inside webhookHandler, and
// :projectId is unused in the (agentId, platform) lookup.
@Post('/:agentId/webhooks/:platform', { skipAuth: true, allowBots: true })
async handleWebhook(...) { ... }
```
## Verify with a route-metadata test
Add a regression test that fails when a future route is added without a scope. Iterate every route on the controller via `ControllerRegistryMetadata` and assert the gate.
```ts
import { ControllerRegistryMetadata } from '@n8n/decorators';
import { Container } from '@n8n/di';
import { WidgetsController } from '../widgets.controller';
const UNAUTHENTICATED_HANDLERS = new Set<string>(); // add public handler names here
const metadata = Container.get(ControllerRegistryMetadata).getControllerMetadata(
WidgetsController as never,
);
const routeCases = Array.from(metadata.routes.entries()).map(([handlerName, route]) => ({
handlerName, route,
}));
describe('WidgetsController route access scopes', () => {
it.each(routeCases)(
'$handlerName is gated by a project-scoped widget:* check',
({ handlerName, route }) => {
if (UNAUTHENTICATED_HANDLERS.has(handlerName)) {
expect(route.accessScope).toBeUndefined();
expect(route.skipAuth).toBe(true);
return;
}
expect(route.accessScope).toBeDefined();
expect(route.accessScope?.globalOnly).toBe(false);
expect(route.accessScope?.scope.startsWith('widget:')).toBe(true);
},
);
});
```
## Defense in depth (still required)
Decorator alone is not enough when handlers leak data via downstream calls. Service/repository methods should still **filter by `projectId`** (or user-scoped helpers like `findByUser`). The decorator gates *who can call this URL*; the service gates *what they can read*. Both, always.
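
As a minimal illustration of that split (the `Widget` type and helper below are hypothetical stand-ins, not actual n8n service code):

```typescript
// Hypothetical sketch: even after @ProjectScope gates the route, the data
// layer must still restrict rows to the project named in the URL.
interface Widget {
	id: string;
	projectId: string;
	name: string;
}

// Stands in for a repository query such as `find({ where: { projectId } })`.
function findWidgetsForProject(all: Widget[], projectId: string): Widget[] {
	return all.filter((w) => w.projectId === projectId);
}
```

A caller with a valid scope on project `p1` still only ever receives `p1` rows, regardless of what else is in the table.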
## Reference patterns
- Project-scoped CRUD: `packages/cli/src/workflows/workflows.controller.ts`, `packages/cli/src/credentials/credentials.controller.ts`, `packages/cli/src/modules/data-table/data-table.controller.ts`.
- Mixed global + project: `packages/cli/src/controllers/project.controller.ts`.

View File

@ -1,4 +1,5 @@
---
name: n8n:reproduce-bug
description: Reproduce a bug from a Linear ticket with a failing test. Expects the full ticket context (title, description, comments) to be provided as input.
---

View File

@ -1,4 +1,5 @@
---
name: n8n:setup-mcps
description: >-
Configure MCP servers for n8n development. Use when the user says /setup-mcps
or asks to set up MCP servers for n8n.

View File

@ -1,4 +1,5 @@
---
name: n8n:spec-driven-development
description: Keeps implementation and specs in sync. Use when working on a feature that has a spec in .claude/specs/, when the user says /spec, or when starting implementation of a documented feature. Also use when the user asks to verify implementation against a spec or update a spec after changes.
---

1
.claude/skills Symbolic link
View File

@ -0,0 +1 @@
plugins/n8n/skills

View File

@ -1,32 +1,12 @@
{
"version": 1,
"generated": "2026-04-23T08:42:21.615Z",
"totalViolations": 102,
"generated": "2026-05-12T09:37:31.489Z",
"totalViolations": 82,
"violations": {
"packages/@n8n/agents/package.json": [
{
"rule": "catalog-violations",
"line": 40,
"message": "langsmith@>=0.3.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "193bb785d0b4"
},
{
"rule": "catalog-violations",
"line": 27,
"message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "b58f03d0d5c1"
},
{
"rule": "catalog-violations",
"line": 41,
"message": "@opentelemetry/sdk-trace-node appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "a77ced903cdf"
}
],
"packages/@n8n/ai-workflow-builder.ee/package.json": [
{
"rule": "catalog-violations",
"line": 72,
"line": 73,
"message": "langsmith@^0.4.6 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "6ee5e003d795"
},
@ -39,154 +19,110 @@
{
"rule": "catalog-violations",
"line": 70,
"message": "csv-parse appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "94f80b083b76"
},
{
"rule": "catalog-violations",
"line": 71,
"message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9c770d66baf2"
},
{
"rule": "catalog-violations",
"line": 76,
"line": 77,
"message": "turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "85c311d87491"
},
{
"rule": "catalog-violations",
"line": 82,
"line": 83,
"message": "@types/turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "407c8d1b3428"
}
],
"packages/@n8n/cli/package.json": [
{
"rule": "catalog-violations",
"line": 79,
"message": "@types/node@24.10.1 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "a5a872807ede"
},
{
"rule": "catalog-violations",
"line": 74,
"message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "733c3960022e"
}
],
"packages/@n8n/eslint-config/package.json": [
{
"rule": "catalog-violations",
"line": 56,
"message": "eslint@>= 9 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "82841e89293f"
}
],
"packages/@n8n/eslint-plugin-community-nodes/package.json": [
{
"rule": "catalog-violations",
"line": 46,
"message": "eslint@>= 9 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "46d3130cf108"
},
{
"rule": "catalog-violations",
"line": 47,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "589f90baeece"
}
],
"packages/@n8n/json-schema-to-zod/package.json": [
{
"rule": "catalog-violations",
"line": 63,
"message": "zod@^3.0.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "436de7cbc5ea"
}
],
"packages/@n8n/node-cli/package.json": [
{
"rule": "catalog-violations",
"line": 76,
"message": "eslint@>= 9 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "1b5deae544ea"
},
{
"rule": "catalog-violations",
"line": 52,
"message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "da74ed210d07"
},
{
"rule": "catalog-violations",
"line": 51,
"message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9711a9b00bf9"
},
{
"rule": "catalog-violations",
"line": 55,
"message": "eslint-plugin-n8n-nodes-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "6a9e12780943"
},
{
"rule": "catalog-violations",
"line": 59,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "d536f5a9c3f8"
"message": "zod@^3.25.76 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "0e18482e8781"
}
],
"packages/@n8n/nodes-langchain/package.json": [
{
"rule": "catalog-violations",
"line": 289,
"message": "openai@^6.9.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "b9b214e61fdc"
"line": 292,
"message": "openai@^6.34.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "3c1f53f0afe3"
},
{
"rule": "catalog-violations",
"line": 299,
"message": "zod-to-json-schema@3.23.3 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "081b5d0b5ca5"
},
{
"rule": "catalog-violations",
"line": 296,
"message": "tmp-promise appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "88d67e2ef747"
},
{
"rule": "catalog-violations",
"line": 254,
"line": 259,
"message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "69d6fa7e46f9"
},
{
"rule": "catalog-violations",
"line": 270,
"line": 274,
"message": "cheerio appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "8cd029bb871e"
},
{
"rule": "catalog-violations",
"line": 280,
"line": 284,
"message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "26f20ebea4b1"
},
{
"rule": "catalog-violations",
"line": 286,
"line": 289,
"message": "mongodb appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "46cb48884e22"
},
{
"rule": "catalog-violations",
"line": 290,
"line": 293,
"message": "pdf-parse appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "0c7d44a9c2e4"
}
],
"packages/testing/janitor/package.json": [
"packages/@n8n/tournament/package.json": [
{
"rule": "catalog-violations",
"line": 39,
"message": "ts-morph@>=20.0.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "4a2907301983"
"line": 44,
"message": "@types/node@^18.13.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "6368b5d3b924"
},
{
"rule": "catalog-violations",
"line": 52,
"message": "typescript@^5.0.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "f668021a144e"
},
{
"rule": "catalog-violations",
"line": 55,
"message": "ast-types appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "27edcbb2b4f8"
},
{
"rule": "catalog-violations",
"line": 56,
"message": "esprima-next appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "75058f9a4d30"
},
{
"rule": "catalog-violations",
"line": 57,
"message": "recast appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "5f2b50fef19d"
}
],
"packages/frontend/@n8n/chat/package.json": [
@ -195,12 +131,6 @@
"line": 56,
"message": "unplugin-icons@^0.19.0 should use \"catalog:frontend\" (exists in pnpm-workspace.yaml [frontend])",
"hash": "a0d24d761026"
},
{
"rule": "catalog-violations",
"line": 59,
"message": "vite-plugin-dts@^4.5.3 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "37ac4b34bc06"
}
],
"packages/frontend/@n8n/design-system/package.json": [
@ -211,268 +141,128 @@
"hash": "237e9d17c4ba"
}
],
"packages/frontend/@n8n/storybook/package.json": [
{
"rule": "catalog-violations",
"line": 31,
"message": "@types/node@^24.10.1 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "50fb70481f8f"
}
],
"packages/@n8n/node-cli/src/template/templates/declarative/custom/template/package.json": [
{
"rule": "catalog-violations",
"line": 40,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "c55e0c75d586"
},
{
"rule": "catalog-violations",
"line": 43,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "999c932ac3ae"
},
{
"rule": "catalog-violations",
"line": 46,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "2f772d0b5a09"
},
{
"rule": "catalog-violations",
"line": 41,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "6ded3ee6fafe"
}
],
"packages/@n8n/node-cli/src/template/templates/declarative/github-issues/template/package.json": [
{
"rule": "catalog-violations",
"line": 43,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "c3815ab2677d"
},
{
"rule": "catalog-violations",
"line": 46,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "11608ee90ba9"
},
{
"rule": "catalog-violations",
"line": 49,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "4514689aef5c"
},
{
"rule": "catalog-violations",
"line": 44,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "ce8e04a67c4c"
}
],
"packages/@n8n/node-cli/src/template/templates/programmatic/example/template/package.json": [
{
"rule": "catalog-violations",
"line": 40,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "cd90d70b3ce4"
},
{
"rule": "catalog-violations",
"line": 43,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "d0998542352d"
},
{
"rule": "catalog-violations",
"line": 46,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "fd2577d9c87b"
},
{
"rule": "catalog-violations",
"line": 41,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "a931f101c8a0"
}
],
"packages/@n8n/node-cli/src/template/templates/programmatic/ai/memory-custom/template/package.json": [
{
"rule": "catalog-violations",
"line": 41,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "298daa052478"
},
{
"rule": "catalog-violations",
"line": 44,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "9d70bb26b233"
},
{
"rule": "catalog-violations",
"line": 47,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "42aefb6c9989"
},
{
"rule": "catalog-violations",
"line": 42,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "cf4f2ca88b59"
}
],
"packages/@n8n/node-cli/src/template/templates/programmatic/ai/model-ai-custom/template/package.json": [
{
"rule": "catalog-violations",
"line": 43,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "3c8b4977fd8a"
},
{
"rule": "catalog-violations",
"line": 46,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "9d31f8f7537c"
},
{
"rule": "catalog-violations",
"line": 49,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "e1734c74601d"
},
{
"rule": "catalog-violations",
"line": 44,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "2a2dea670608"
}
],
"packages/@n8n/node-cli/src/template/templates/programmatic/ai/model-ai-custom-example/template/package.json": [
{
"rule": "catalog-violations",
"line": 43,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "91ea1dbe7d4e"
},
{
"rule": "catalog-violations",
"line": 46,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "72d08eab5625"
},
{
"rule": "catalog-violations",
"line": 49,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "91b58c718e73"
},
{
"rule": "catalog-violations",
"line": 44,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "83b610ec607a"
}
],
"packages/@n8n/node-cli/src/template/templates/programmatic/ai/model-openai-compatible/template/package.json": [
{
"rule": "catalog-violations",
"line": 43,
"message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "082bc9c01097"
},
{
"rule": "catalog-violations",
"line": 46,
"message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
"hash": "1b9d2910ce91"
},
{
"rule": "catalog-violations",
"line": 49,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "6b5e714159dc"
},
{
"rule": "catalog-violations",
"line": 44,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "ba672d26d64d"
}
],
"packages/cli/package.json": [
{
"rule": "catalog-violations",
"line": 97,
"line": 98,
"message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "1e3686e1923b"
},
{
"rule": "catalog-violations",
"line": 132,
"line": 139,
"message": "@opentelemetry/sdk-trace-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "1cf7f6bcf5d1"
},
{
"rule": "catalog-violations",
"line": 140,
"message": "@opentelemetry/sdk-trace-node appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "a3dad0b8dc21"
},
{
"rule": "catalog-violations",
"line": 142,
"line": 150,
"message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "949e802528f7"
},
{
"rule": "catalog-violations",
"line": 193,
"line": 202,
"message": "prettier appears in 3 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "3cab98902302"
},
{
"rule": "catalog-violations",
"line": 209,
"message": "semver appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "5b7e9b03fb10"
},
{
"rule": "catalog-violations",
"line": 200,
"line": 217,
"message": "undici appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "91c29775e961"
},
{
"rule": "catalog-violations",
"line": 203,
"line": 220,
"message": "ws appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "cd07242e8163"
},
{
"rule": "catalog-violations",
"line": 75,
"message": "@types/psl appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "6e62e0076b0a"
}
],
"packages/@n8n/agents/package.json": [
{
"rule": "catalog-violations",
"line": 28,
"message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "b58f03d0d5c1"
},
{
"rule": "catalog-violations",
"line": 50,
"message": "@opentelemetry/sdk-trace-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "c5c495ac3508"
},
{
"rule": "catalog-violations",
"line": 51,
"message": "@opentelemetry/sdk-trace-node appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "a77ced903cdf"
}
],
"packages/@n8n/instance-ai/package.json": [
{
"rule": "catalog-violations",
"line": 56,
"line": 80,
"message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "5b2153508e47"
},
{
"rule": "catalog-violations",
"line": 37,
"line": 86,
"message": "@types/psl appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "56dabb51b433"
},
{
"rule": "catalog-violations",
"line": 56,
"message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "8fa6b9a8fc91"
},
{
"rule": "catalog-violations",
"line": 47,
"line": 64,
"message": "csv-parse appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "8f082fc2e8b6"
},
{
"rule": "catalog-violations",
"line": 71,
"message": "turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9a9d97065952"
},
{
"rule": "catalog-violations",
"line": 59,
"line": 87,
"message": "@types/turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "12e346c47b39"
},
{
"rule": "catalog-violations",
"line": 31,
"line": 50,
"message": "@joplin/turndown-plugin-gfm appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "a3cf1504b5c2"
},
{
"rule": "catalog-violations",
"line": 46,
"line": 68,
"message": "pdf-parse appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "283fa9114c03"
}
@@ -500,59 +290,91 @@
"packages/nodes-base/package.json": [
{
"rule": "catalog-violations",
"line": 908,
"line": 911,
"message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "2d1fab7a5b05"
},
{
"rule": "catalog-violations",
"line": 958,
"line": 961,
"message": "semver appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "2daf37aa14e4"
},
{
"rule": "catalog-violations",
"line": 963,
"line": 966,
"message": "tmp-promise appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "3f93c404ae9c"
},
{
"rule": "catalog-violations",
"line": 897,
"line": 900,
"message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "ca4ac788adc6"
},
{
"rule": "catalog-violations",
"line": 909,
"line": 912,
"message": "cheerio appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "1a1b5bbc50c9"
},
{
"rule": "catalog-violations",
"line": 914,
"line": 915,
"message": "csv-parse appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "781db4a1e068"
},
{
"rule": "catalog-violations",
"line": 917,
"message": "eventsource appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9795e6c6d9e9"
},
{
"rule": "catalog-violations",
"line": 927,
"line": 930,
"message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "02341f2b5e3e"
},
{
"rule": "catalog-violations",
"line": 938,
"line": 941,
"message": "mongodb appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "f688907d087a"
},
{
"rule": "catalog-violations",
"line": 889,
"line": 892,
"message": "eslint-plugin-n8n-nodes-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "ac254baa61f9"
}
],
"packages/@n8n/node-cli/package.json": [
{
"rule": "catalog-violations",
"line": 52,
"message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "da74ed210d07"
},
{
"rule": "catalog-violations",
"line": 59,
"message": "prettier appears in 3 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "188baf266f61"
},
{
"rule": "catalog-violations",
"line": 51,
"message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9711a9b00bf9"
},
{
"rule": "catalog-violations",
"line": 55,
"message": "eslint-plugin-n8n-nodes-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "6a9e12780943"
}
],
"packages/frontend/editor-ui/package.json": [
{
"rule": "catalog-violations",
@@ -560,6 +382,12 @@
"message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "bd9a2eeb072b"
},
{
"rule": "catalog-violations",
"line": 90,
"message": "prettier appears in 3 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9e9c7ec09a0b"
},
{
"rule": "catalog-violations",
"line": 92,
@@ -568,15 +396,15 @@
},
{
"rule": "catalog-violations",
"line": 90,
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "8a66e00b94fa"
"line": 77,
"message": "esprima-next appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "62156c2613b2"
}
],
"packages/@n8n/scan-community-package/package.json": [
{
"rule": "catalog-violations",
"line": 15,
"line": 20,
"message": "semver appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "ac0e4301d694"
}
@@ -584,57 +412,57 @@
"packages/@n8n/ai-utilities/package.json": [
{
"rule": "catalog-violations",
"line": 57,
"line": 69,
"message": "undici appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "c14cd05614e8"
},
{
"rule": "catalog-violations",
"line": 53,
"line": 65,
"message": "tmp-promise appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "884a45bdbcf2"
},
{
"rule": "catalog-violations",
"line": 60,
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "717de3a58c50"
"line": 72,
"message": "n8n-workflow appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "ea4fbfff30ba"
}
],
"packages/@n8n/mcp-browser/package.json": [
{
"rule": "catalog-violations",
"line": 37,
"line": 36,
"message": "ws appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "9650c1b55f3c"
},
{
"rule": "catalog-violations",
"line": 31,
"line": 28,
"message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "0c97891a24f4"
},
{
"rule": "catalog-violations",
"line": 32,
"line": 30,
"message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "8466b03b1044"
},
{
"rule": "catalog-violations",
"line": 36,
"line": 35,
"message": "turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "f23a9d3d7aa2"
},
{
"rule": "catalog-violations",
"line": 44,
"line": 42,
"message": "@types/turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "3f9e46e56803"
},
{
"rule": "catalog-violations",
"line": 29,
"line": 26,
"message": "@joplin/turndown-plugin-gfm appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "743e3a7dbb32"
}
@@ -655,14 +483,50 @@
"hash": "67f9d81d9528"
}
],
"packages/@n8n/cli/package.json": [
{
"rule": "catalog-violations",
"line": 74,
"message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "733c3960022e"
}
],
"packages/workflow/package.json": [
{
"rule": "catalog-violations",
"line": 58,
"message": "ast-types appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "1c7d7cf0b0fe"
},
{
"rule": "catalog-violations",
"line": 60,
"message": "esprima-next appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "627a716b5d23"
},
{
"rule": "catalog-violations",
"line": 68,
"message": "recast appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "b660317b5f6f"
}
],
"packages/@n8n/computer-use/package.json": [
{
"rule": "catalog-violations",
"line": 44,
"line": 47,
"message": "eventsource appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "f50c1eee2ed6"
}
],
"packages/@n8n/eslint-plugin-community-nodes/package.json": [
{
"rule": "catalog-violations",
"line": 47,
"message": "n8n-workflow appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
"hash": "c5830b76ff8e"
}
],
"packages/@n8n/stylelint-config/package.json": [
{
"rule": "catalog-violations",


@@ -38,3 +38,4 @@
!packages/@n8n/benchmark/**
!packages/@n8n/typescript-config
!packages/@n8n/typescript-config/**

.github/CODEOWNERS (9 changes)

@@ -1,6 +1,5 @@
packages/@n8n/db/src/migrations/ @n8n-io/migrations-review
.github/workflows @n8n-io/ci-admins
.github/scripts @n8n-io/ci-admins
.github/actions @n8n-io/ci-admins
.github/poutine-rules @n8n-io/ci-admins
.github/workflows @n8n-io/qa-dx
.github/scripts @n8n-io/qa-dx
.github/actions @n8n-io/qa-dx
.github/poutine-rules @n8n-io/qa-dx

.github/OWNERS (new file, 232 lines)

@@ -0,0 +1,232 @@
# n8n CODEOWNERS
#
# Last-match-wins: specific rules MUST come AFTER general rules.
# Default catch-all (ensures every file gets at least one reviewer)
* @n8n-io/catalysts
# Catalysts
packages/core/ @n8n-io/catalysts
packages/workflow/ @n8n-io/catalysts
packages/@n8n/config/ @n8n-io/catalysts
packages/@n8n/backend-common/ @n8n-io/catalysts
packages/@n8n/backend-test-utils/ @n8n-io/catalysts
packages/@n8n/di/ @n8n-io/catalysts
packages/@n8n/errors/ @n8n-io/catalysts
packages/@n8n/constants/ @n8n-io/catalysts
packages/@n8n/utils/ @n8n-io/catalysts
packages/@n8n/api-types/ @n8n-io/catalysts
packages/@n8n/workflow-sdk/ @n8n-io/instance-ai
packages/@n8n/task-runner/ @n8n-io/catalysts
packages/@n8n/task-runner-python/ @n8n-io/catalysts
packages/@n8n/expression-runtime/ @n8n-io/catalysts
packages/@n8n/db/ @n8n-io/catalysts
packages/@n8n/json-schema-to-zod/ @n8n-io/catalysts
packages/@n8n/crdt/ @n8n-io/catalysts
packages/@n8n/extension-sdk/ @n8n-io/catalysts
packages/@n8n/eslint-config/ @n8n-io/qa-dx
packages/@n8n/typescript-config/ @n8n-io/qa-dx
packages/@n8n/db/src/migrations/ @n8n-io/migrations-review
# Top-level paths
scripts/ @n8n-io/qa-dx
patches/ @n8n-io/qa-dx
assets/ @n8n-io/adore
security/ @n8n-io/qa-dx
# @n8n/cli
packages/@n8n/cli/ @n8n-io/adore
packages/@n8n/cli/src/commands/credential/ @n8n-io/iam
packages/@n8n/cli/src/commands/user/ @n8n-io/iam
packages/@n8n/cli/src/commands/data-table/ @n8n-io/adore
packages/@n8n/cli/src/commands/tag/ @n8n-io/adore
packages/@n8n/cli/src/commands/project/ @n8n-io/ligo
packages/@n8n/cli/src/commands/source-control/ @n8n-io/ligo
packages/@n8n/cli/src/commands/variable/ @n8n-io/ligo
packages/@n8n/cli/src/commands/skill/ @n8n-io/ai
# packages/cli
packages/cli/ @n8n-io/catalysts
packages/cli/src/scaling/ @n8n-io/catalysts
packages/cli/src/concurrency/ @n8n-io/catalysts
packages/cli/src/execution-lifecycle/ @n8n-io/catalysts
packages/cli/src/executions/ @n8n-io/catalysts
packages/cli/src/task-runners/ @n8n-io/catalysts
packages/cli/src/webhooks/ @n8n-io/catalysts
packages/cli/src/push/ @n8n-io/catalysts
packages/cli/src/commands/ @n8n-io/catalysts
packages/cli/src/config/ @n8n-io/catalysts
packages/cli/src/eventbus/ @n8n-io/catalysts
packages/cli/src/events/ @n8n-io/catalysts
packages/cli/src/security-audit/ @n8n-io/catalysts
packages/cli/src/modules/workflow-index/ @n8n-io/catalysts
packages/cli/src/modules/breaking-changes/ @n8n-io/catalysts
packages/cli/src/modules/otel/ @n8n-io/ligo
packages/cli/src/auth/ @n8n-io/iam
packages/cli/src/credentials/ @n8n-io/iam
packages/cli/src/mfa/ @n8n-io/iam
packages/cli/src/oauth/ @n8n-io/iam
packages/cli/src/permissions.ee/ @n8n-io/iam
packages/cli/src/sso.ee/ @n8n-io/iam
packages/cli/src/user-management/ @n8n-io/iam
packages/cli/src/license/ @n8n-io/iam
packages/cli/src/modules/ldap.ee/ @n8n-io/iam
packages/cli/src/modules/log-streaming.ee/ @n8n-io/iam
packages/cli/src/modules/sso-oidc/ @n8n-io/iam
packages/cli/src/modules/sso-saml/ @n8n-io/iam
packages/cli/src/modules/provisioning.ee/ @n8n-io/iam
packages/cli/src/modules/dynamic-credentials.ee/ @n8n-io/iam
packages/cli/src/modules/redaction/ @n8n-io/iam
packages/cli/src/modules/instance-registry/ @n8n-io/iam
packages/cli/src/modules/token-exchange/ @n8n-io/iam
packages/cli/src/environments.ee/ @n8n-io/ligo
packages/cli/src/public-api/ @n8n-io/ligo
packages/cli/src/modules/source-control.ee/ @n8n-io/ligo
packages/cli/src/modules/external-secrets.ee/ @n8n-io/ligo
packages/cli/src/modules/insights/ @n8n-io/ligo
packages/cli/src/collaboration/ @n8n-io/catalysts
packages/cli/src/binary-data/ @n8n-io/catalysts
packages/cli/src/posthog/ @n8n-io/adore
packages/cli/src/modules/data-table/ @n8n-io/adore
packages/cli/src/evaluation.ee/ @n8n-io/ai
packages/cli/src/chat/ @n8n-io/ai
packages/cli/src/tool-generation/ @n8n-io/ai
packages/cli/src/modules/workflow-builder/ @n8n-io/ai
packages/cli/src/modules/mcp/ @n8n-io/ai
packages/cli/src/modules/quick-connect/ @n8n-io/ai
packages/cli/src/modules/chat-hub/ @n8n-io/ai
packages/cli/src/modules/instance-ai/ @n8n-io/instance-ai
packages/cli/src/modules/community-packages/ @n8n-io/nodes
# CLI controllers
packages/cli/src/controllers/auth.controller.ts @n8n-io/iam
packages/cli/src/controllers/invitation.controller.ts @n8n-io/iam
packages/cli/src/controllers/me.controller.ts @n8n-io/iam
packages/cli/src/controllers/mfa.controller.ts @n8n-io/iam
packages/cli/src/controllers/owner.controller.ts @n8n-io/iam
packages/cli/src/controllers/password-reset.controller.ts @n8n-io/iam
packages/cli/src/controllers/role.controller.ts @n8n-io/iam
packages/cli/src/controllers/users.controller.ts @n8n-io/iam
packages/cli/src/controllers/user-settings.controller.ts @n8n-io/iam
packages/cli/src/controllers/api-keys.controller.ts @n8n-io/iam
packages/cli/src/controllers/security-settings.controller.ts @n8n-io/iam
packages/cli/src/controllers/oauth/ @n8n-io/iam
packages/cli/src/controllers/ai.controller.ts @n8n-io/ai
packages/cli/src/controllers/annotation-tags.controller.ee.ts @n8n-io/ai
packages/cli/src/controllers/cta.controller.ts @n8n-io/adore
packages/cli/src/controllers/folder.controller.ts @n8n-io/adore
packages/cli/src/controllers/tags.controller.ts @n8n-io/adore
packages/cli/src/controllers/binary-data.controller.ts @n8n-io/adore
packages/cli/src/controllers/dynamic-templates.controller.ts @n8n-io/adore
packages/cli/src/controllers/posthog.controller.ts @n8n-io/adore
packages/cli/src/controllers/translation.controller.ts @n8n-io/adore
packages/cli/src/controllers/project.controller.ts @n8n-io/ligo
packages/cli/src/controllers/workflow-statistics.controller.ts @n8n-io/ligo
packages/cli/src/controllers/node-types.controller.ts @n8n-io/nodes
packages/cli/src/controllers/dynamic-node-parameters.controller.ts @n8n-io/nodes
packages/cli/src/controllers/e2e.controller.ts @n8n-io/qa-dx
# CLI services
packages/cli/src/services/jwt.service.ts @n8n-io/iam
packages/cli/src/services/user.service.ts @n8n-io/iam
packages/cli/src/services/role.service.ts @n8n-io/iam
packages/cli/src/services/role-cache.service.ts @n8n-io/iam
packages/cli/src/services/password.utility.ts @n8n-io/iam
packages/cli/src/services/public-api-key.service.ts @n8n-io/iam
packages/cli/src/services/security-settings.service.ts @n8n-io/iam
packages/cli/src/services/ssrf/ @n8n-io/catalysts
packages/cli/src/services/static-auth-service.ts @n8n-io/iam
packages/cli/src/services/access.service.ts @n8n-io/iam
packages/cli/src/services/ai.service.ts @n8n-io/ai
packages/cli/src/services/ai-usage.service.ts @n8n-io/ai
packages/cli/src/services/ai-workflow-builder.service.ts @n8n-io/ai
packages/cli/src/services/annotation-tag.service.ee.ts @n8n-io/ai
packages/cli/src/services/folder.service.ts @n8n-io/adore
packages/cli/src/services/tag.service.ts @n8n-io/adore
packages/cli/src/services/cta.service.ts @n8n-io/adore
packages/cli/src/services/dynamic-templates.service.ts @n8n-io/adore
packages/cli/src/services/frontend.service.ts @n8n-io/adore
packages/cli/src/services/banner.service.ts @n8n-io/adore
packages/cli/src/services/project.service.ee.ts @n8n-io/ligo
packages/cli/src/services/workflow-statistics.service.ts @n8n-io/ligo
packages/cli/src/services/export.service.ts @n8n-io/ligo
packages/cli/src/services/import.service.ts @n8n-io/ligo
packages/cli/src/services/ownership.service.ts @n8n-io/ligo
packages/cli/src/services/dynamic-node-parameters.service.ts @n8n-io/nodes
# Adore
packages/frontend/editor-ui/ @n8n-io/frontend
packages/frontend/editor-ui/src/features/ai/ @n8n-io/ai
packages/frontend/editor-ui/src/features/credentials/ @n8n-io/iam
packages/frontend/editor-ui/src/features/execution/ @n8n-io/ligo
packages/frontend/editor-ui/src/features/project-roles/ @n8n-io/iam
packages/frontend/editor-ui/src/features/integrations/ @n8n-io/nodes
packages/frontend/@n8n/design-system/ @n8n-io/design
packages/frontend/@n8n/stores/ @n8n-io/frontend
packages/frontend/@n8n/composables/ @n8n-io/frontend
packages/frontend/@n8n/rest-api-client/ @n8n-io/frontend
packages/frontend/@n8n/storybook/ @n8n-io/design
packages/frontend/@n8n/i18n/ @n8n-io/frontend
packages/@n8n/stylelint-config/ @n8n-io/qa-dx
# AI
packages/@n8n/instance-ai/ @n8n-io/instance-ai
packages/@n8n/nodes-langchain/ @n8n-io/ai
packages/@n8n/ai-utilities/ @n8n-io/ai
packages/@n8n/ai-node-sdk/ @n8n-io/ai
packages/@n8n/ai-workflow-builder.ee/ @n8n-io/ai
packages/@n8n/agents/ @n8n-io/ai
packages/frontend/@n8n/chat/ @n8n-io/ai
# Chat
packages/@n8n/chat-hub/ @n8n-io/ai
# Nodes
packages/@n8n/codemirror-lang/ @n8n-io/nodes
packages/@n8n/codemirror-lang-html/ @n8n-io/nodes
packages/@n8n/codemirror-lang-sql/ @n8n-io/nodes
packages/nodes-base/ @n8n-io/nodes
packages/@n8n/decorators/ @n8n-io/catalysts
packages/node-dev/ @n8n-io/nodes
packages/@n8n/create-node/ @n8n-io/nodes
packages/@n8n/node-cli/ @n8n-io/nodes
packages/@n8n/imap/ @n8n-io/iam
packages/@n8n/syslog-client/ @n8n-io/iam
packages/@n8n/scan-community-package/ @n8n-io/nodes
packages/@n8n/eslint-plugin-community-nodes/ @n8n-io/nodes
packages/@n8n/computer-use/ @n8n-io/nodes
packages/@n8n/local-gateway/ @n8n-io/nodes
packages/@n8n/mcp-browser/ @n8n-io/nodes
packages/@n8n/mcp-browser-extension/ @n8n-io/nodes
# IAM
packages/@n8n/permissions/ @n8n-io/iam
packages/@n8n/client-oauth2/ @n8n-io/iam
# LiGo
packages/extensions/insights/ @n8n-io/ligo
# CI/CD
.github/ @n8n-io/qa-dx
docker/ @n8n-io/qa-dx
# QA
packages/testing/ @n8n-io/qa-dx
packages/@n8n/benchmark/ @n8n-io/qa-dx
packages/@n8n/vitest-config/ @n8n-io/qa-dx
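The `Last-match-wins` rule at the top of this file is easy to get backwards; a toy resolver makes the ordering concrete. This is a sketch with hypothetical rule data and prefix matching only — real CODEOWNERS globbing is richer:

```javascript
// Toy last-match-wins resolver for CODEOWNERS-style rules.
// Assumption: patterns are treated as simple path prefixes ('*' = catch-all).
function owners(rules, file) {
  let match = [];
  for (const [pattern, ...teams] of rules) {
    // Later rules overwrite earlier ones, so specific rules must come AFTER
    // general ones — exactly the ordering constraint the file comments on.
    if (pattern === '*' || file.startsWith(pattern)) match = teams;
  }
  return match;
}

const rules = [
  ['*', '@n8n-io/catalysts'],
  ['packages/cli/', '@n8n-io/catalysts'],
  ['packages/cli/src/auth/', '@n8n-io/iam'],
];
console.log(owners(rules, 'packages/cli/src/auth/jwt.ts')); // → [ '@n8n-io/iam' ]
console.log(owners(rules, 'packages/cli/src/scaling/x.ts')); // → [ '@n8n-io/catalysts' ]
```

Reversing the rule order would leave the catch-all winning for every path, which is why the default `*` rule sits first in the file above.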


@@ -487,7 +487,7 @@ Team ownership mappings in `CODEOWNERS`:
| `ubuntu-latest` | 2 | Simple jobs, fork PR E2E |
| `blacksmith-2vcpu-ubuntu-2204` | 2 | Standard builds, E2E shards |
| `blacksmith-4vcpu-ubuntu-2204` | 4 | Unit tests, typecheck, lint |
| `blacksmith-8vcpu-ubuntu-2204` | 8 | E2E coverage (weekly) |
| `blacksmith-8vcpu-ubuntu-2204` | 8 | Heavy parallel workloads |
| `blacksmith-4vcpu-ubuntu-2204-arm` | 4 | ARM64 Docker builds |
### Selection Guidelines
@@ -500,7 +500,7 @@ Team ownership mappings in `CODEOWNERS`:
**`blacksmith-4vcpu-ubuntu-2204`** - Unit tests (parallelized), linting (parallel file processing), typechecking (CPU-intensive), E2E test shards
**`blacksmith-8vcpu-ubuntu-2204`** - Heavy parallel workloads, full E2E coverage runs
**`blacksmith-8vcpu-ubuntu-2204`** - Heavy parallel workloads
### Runner Provider Toggle


@@ -1,6 +1,10 @@
import { describe, it } from 'node:test';
import { describe, it, before, after } from 'node:test';
import assert from 'node:assert/strict';
import { matchGlob, parseFilters, evaluateFilter, runValidate } from '../ci-filter.mjs';
import { execFileSync } from 'node:child_process';
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { matchGlob, parseFilters, evaluateFilter, runValidate, getChangedFiles, getMergeBase } from '../ci-filter.mjs';
// --- matchGlob ---
@@ -172,6 +176,70 @@ describe('evaluateFilter', () => {
});
});
// --- getChangedFiles + getMergeBase (integration, exercises real git) ---
describe('getChangedFiles', () => {
const repoDir = mkdtempSync(join(tmpdir(), 'ci-filter-'));
const remoteDir = mkdtempSync(join(tmpdir(), 'ci-filter-remote-'));
const originalCwd = process.cwd();
const git = (args: string[], cwd: string = repoDir) =>
execFileSync('git', args, { cwd, stdio: 'pipe' }).toString().trim();
before(() => {
// Bare remote so the action's `git fetch origin <ref>` works
execFileSync('git', ['init', '--bare', '-b', 'main', remoteDir], { stdio: 'pipe' });
git(['init', '-b', 'main'], repoDir);
git(['config', 'user.email', 'test@test.local']);
git(['config', 'user.name', 'test']);
git(['remote', 'add', 'origin', remoteDir]);
// Common ancestor commit
writeFileSync(join(repoDir, 'shared.ts'), 'shared\n');
git(['add', '.']);
git(['commit', '-m', 'root']);
git(['push', 'origin', 'main']);
// PR branches off main, adds a file
git(['checkout', '-b', 'pr-branch']);
writeFileSync(join(repoDir, 'pr-only.ts'), 'pr\n');
git(['add', '.']);
git(['commit', '-m', 'PR change']);
// Master drifts forward, modifying shared.ts (the pre-fix bug surface)
git(['checkout', 'main']);
writeFileSync(join(repoDir, 'shared.ts'), 'shared\ndrift-from-master\n');
git(['commit', '-am', 'master moves']);
git(['push', 'origin', 'main']);
// Sit on the PR branch as if running CI
git(['checkout', 'pr-branch']);
process.chdir(repoDir);
});
after(() => {
process.chdir(originalCwd);
rmSync(repoDir, { recursive: true, force: true });
rmSync(remoteDir, { recursive: true, force: true });
});
it('returns only PR-introduced files (master drift does not pollute)', () => {
const changed = getChangedFiles('main');
assert.deepEqual(changed, ['pr-only.ts']);
});
it('getMergeBase returns the common ancestor commit', () => {
const mergeBase = getMergeBase();
assert.match(mergeBase, /^[a-f0-9]{40}$/);
const expected = git(['merge-base', 'FETCH_HEAD', 'HEAD']);
assert.equal(mergeBase, expected);
});
it('rejects unsafe base refs', () => {
assert.throws(() => getChangedFiles('main; rm -rf /'), /Unsafe/);
assert.throws(() => getChangedFiles('main$evil'), /Unsafe/);
});
});
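The `rejects unsafe base refs` cases boil down to an allow-list check on the ref before it is ever interpolated into a git command. A minimal sketch of such a guard — the exact `SAFE_REF` pattern in `ci-filter.mjs` is assumed here, not copied:

```javascript
// Hypothetical re-creation of the ref guard the tests above exercise;
// the real SAFE_REF in ci-filter.mjs may be stricter or looser.
const SAFE_REF = /^[A-Za-z0-9][A-Za-z0-9._/-]*$/;

function assertSafeRef(ref) {
  // Anything outside the allow-list (spaces, ';', '$', backticks, …) is
  // rejected before the ref reaches a shell command line.
  if (!SAFE_REF.test(ref)) throw new Error(`Unsafe base ref: "${ref}"`);
  return ref;
}

console.log(assertSafeRef('main'));
console.log(assertSafeRef('release/1.2'));
```

An allow-list beats trying to blacklist shell metacharacters: the safe alphabet for branch names is small and known, while the set of dangerous characters depends on how the string is later quoted.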
// --- runValidate ---
describe('runValidate', () => {


@@ -30,6 +30,9 @@ outputs:
base-ref:
description: 'Resolved base ref used for the diff (filter mode only)'
value: ${{ steps.run.outputs.base-ref }}
merge-base:
description: 'Merge-base SHA between FETCH_HEAD and HEAD (filter mode only)'
value: ${{ steps.run.outputs.merge-base }}
runs:
using: 'composite'


@@ -98,14 +98,30 @@ export function getChangedFiles(baseRef) {
if (!SAFE_REF.test(baseRef)) {
throw new Error(`Unsafe base ref: "${baseRef}"`);
}
execSync(`git fetch --depth=1 origin ${baseRef}`, { stdio: 'pipe' });
const output = execSync('git diff --name-only FETCH_HEAD HEAD', { encoding: 'utf-8' });
// Deepen the fetch so the merge base is reachable from this shallow clone.
// A 2-dot diff (FETCH_HEAD HEAD) reports anything that differs in either
// direction, so files added to base-branch after the PR diverged show up as
// "changed" — spuriously triggering path-filtered jobs. The merge base
// scopes the diff to PR-only changes.
execSync(`git fetch --no-tags --prune --deepen=200 origin ${baseRef}`, { stdio: 'pipe' });
const output = execSync('git diff --name-only --merge-base FETCH_HEAD HEAD', {
encoding: 'utf-8',
});
return output
.split('\n')
.map((f) => f.trim())
.filter(Boolean);
}
/**
* Resolve the merge-base SHA between FETCH_HEAD and HEAD.
* Used to give downstream tools (e.g. janitor's AST diff) a stable, PR-only
* comparison point that doesn't drift when the base branch moves forward.
*/
export function getMergeBase() {
return execSync('git merge-base FETCH_HEAD HEAD', { encoding: 'utf-8' }).trim();
}
// --- Filter evaluation ---
/**
@@ -155,7 +171,9 @@ export function runFilter() {
const filters = parseFilters(filtersInput);
const changedFiles = getChangedFiles(baseRef);
const mergeBase = getMergeBase();
console.log(`Merge base: ${mergeBase}`);
console.log(`Changed files (${changedFiles.length}):`);
for (const f of changedFiles) {
console.log(` ${f}`);
@@ -172,6 +190,7 @@ export function runFilter() {
setOutput('results', JSON.stringify(results));
setOutput('changed-files', changedFiles.join('\n'));
setOutput('base-ref', baseRef);
setOutput('merge-base', mergeBase);
}
// --- Mode: validate ---
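The two-dot vs merge-base distinction described in the `getChangedFiles` comment can be reproduced with plain git in a throwaway repo. A sketch with hypothetical file names, using local branches in place of `origin`; `main...HEAD` is the long-standing three-dot spelling of a merge-base diff, equivalent to the `--merge-base` flag on git ≥ 2.30:

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email ci@example.invalid
git config user.name ci
echo shared > shared.ts
git add . && git commit -qm root
# PR branch adds a file...
git checkout -qb pr-branch
echo pr > pr-only.ts
git add . && git commit -qm 'pr change'
# ...while main drifts forward, touching shared.ts
git checkout -q main
echo drift >> shared.ts
git commit -qam 'main moves'
git checkout -q pr-branch
echo '--- two-dot diff (picks up main-side drift too):'
git diff --name-only main HEAD
echo '--- merge-base diff (PR-only changes):'
git diff --name-only main...HEAD
```

The two-dot form lists both `shared.ts` and `pr-only.ts`, so path-filtered jobs would fire for files the PR never touched; the merge-base form lists only `pr-only.ts`.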


@@ -45,13 +45,19 @@ runs:
mkdir -p "$PNPM_STORE_PATH"
fi
- name: Install Aikido SafeChain
if: runner.os != 'Windows'
- name: Configure SafeChain
shell: bash
run: |
VERSION="1.4.1"
EXPECTED_SHA256="628235987175072a4255aa3f5f0128f31795b63970f1970ae8a04d07bf8527b0"
node .github/scripts/retry.mjs --attempts 3 --delay 10 \
"curl -fsSL -o install-safe-chain.sh https://github.com/AikidoSec/safe-chain/releases/download/${VERSION}/install-safe-chain.sh"
# SafeChain only reads configs from this directory https://github.com/AikidoSec/safe-chain#configuration-options-1
mkdir -p "$HOME/.safe-chain"
cp "${{ github.action_path }}/safe-chain.config.json" "$HOME/.safe-chain/config.json"
- name: Install Aikido SafeChain
run: |
VERSION="1.5.1"
EXPECTED_SHA256="7c910fff717649c86cc8ca960e6c054d3734da2d660050e3bcfc54029e3b485b"
node .github/scripts/retry.mjs --attempts 3 --delay 10 -- \
curl -fsSL -o install-safe-chain.sh "https://github.com/AikidoSec/safe-chain/releases/download/${VERSION}/install-safe-chain.sh"
echo "${EXPECTED_SHA256} install-safe-chain.sh" | sha256sum -c -
sh install-safe-chain.sh --ci
rm install-safe-chain.sh
@@ -60,16 +66,11 @@
- name: Install Dependencies
if: ${{ inputs.install-command != '' }}
env:
INSTALL_COMMAND: ${{ inputs.install-command }}
INSTALL_COMMAND: ${{ inputs.install-command }}
run: |
$INSTALL_COMMAND
shell: bash
- name: Disable safe-chain
if: runner.os != 'Windows'
run: safe-chain teardown
shell: bash
- name: Configure Turborepo Cache
uses: rharkor/caching-for-turbo@0abc2381e688c4d2832f0665a68a01c6e82f0d6c # v2.3.11
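The pin-and-verify pattern in the SafeChain install step generalizes to any downloaded script: record the digest once, then let `sha256sum -c` fail the job on any mismatch. A self-contained sketch with a stand-in file and digest, not the real installer values:

```shell
set -eu
cd "$(mktemp -d)"
printf 'pretend installer\n' > install-safe-chain.sh
# In the real workflow the digest is hard-coded ahead of time; here we
# compute it once to stand in for the pinned value.
EXPECTED_SHA256=$(sha256sum install-safe-chain.sh | cut -d' ' -f1)
# Two spaces between digest and filename is the sha256sum check format.
echo "${EXPECTED_SHA256}  install-safe-chain.sh" | sha256sum -c -
# Any tampering after the digest was pinned makes the check exit non-zero.
printf 'evil\n' >> install-safe-chain.sh
if echo "${EXPECTED_SHA256}  install-safe-chain.sh" | sha256sum -c - >/dev/null 2>&1; then
  echo 'tamper NOT detected'
else
  echo 'tamper detected'
fi
```

Because the check runs under `set -e` in the workflow, a digest mismatch aborts the step before `sh install-safe-chain.sh` ever executes.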


@@ -0,0 +1,16 @@
{
"npm": {
"minimumPackageAgeExclusions": [
"@n8n/*",
"@n8n_io/*",
"n8n",
"n8n-containers",
"n8n-core",
"n8n-editor-ui",
"n8n-node-dev",
"n8n-nodes-base",
"n8n-playwright",
"n8n-workflow"
]
}
}


@@ -11,7 +11,7 @@ const exec = promisify(child_process.exec);
/**
* @param {string | semver.SemVer} currentVersion
*/
function generateExperimentalVersion(currentVersion) {
export function generateExperimentalVersion(currentVersion) {
const parsed = semver.parse(currentVersion);
if (!parsed) throw new Error(`Invalid version: ${currentVersion}`);
@@ -28,84 +28,31 @@
return `${parsed.major}.${parsed.minor}.${parsed.patch}-exp.0`;
}
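For illustration, `generateExperimentalVersion` can be sketched without the `semver` dependency. The bump-an-existing-`-exp.N` branch below is an assumption — only the `-exp.0` path is visible in this hunk, and the exported function parses with `semver` rather than a regex:

```javascript
// Hypothetical dependency-free sketch of generateExperimentalVersion.
// Assumption: an existing `-exp.N` prerelease bumps to `-exp.N+1`;
// any plain x.y.z version gets `-exp.0` appended.
function generateExperimentalVersion(currentVersion) {
  const m = /^(\d+)\.(\d+)\.(\d+)(?:-exp\.(\d+))?$/.exec(currentVersion);
  if (!m) throw new Error(`Invalid version: ${currentVersion}`);
  const [, major, minor, patch, exp] = m;
  return exp === undefined
    ? `${major}.${minor}.${patch}-exp.0`
    : `${major}.${minor}.${patch}-exp.${Number(exp) + 1}`;
}

console.log(generateExperimentalVersion('1.112.4'));       // → 1.112.4-exp.0
console.log(generateExperimentalVersion('1.112.4-exp.0')); // → 1.112.4-exp.1
```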
const rootDir = process.cwd();
const releaseType = /** @type { import('semver').ReleaseType | "experimental" } */ (
process.env.RELEASE_TYPE
);
assert.match(releaseType, /^(patch|minor|major|experimental|premajor)$/, 'Invalid RELEASE_TYPE');
// TODO: if releaseType is `auto` determine release type based on the changelog
const lastTag = (await exec('git describe --tags --match "n8n@*" --abbrev=0')).stdout.trim();
const packages = JSON.parse(
(
await exec(
`pnpm ls -r --only-projects --json | jq -r '[.[] | { name: .name, version: .version, path: .path, private: .private}]'`,
)
).stdout,
);
const packageMap = {};
for (let { name, path, version, private: isPrivate } of packages) {
if (isPrivate && path !== rootDir) {
continue;
}
if (path === rootDir) {
name = 'monorepo-root';
}
const isDirty = await exec(`git diff --quiet HEAD ${lastTag} -- ${path}`)
.then(() => false)
.catch((error) => true);
packageMap[name] = { path, isDirty, version };
/**
* @param {{ pnpm?: { overrides?: Record<string, string> }, overrides?: Record<string, string> }} pkg
* @returns {Record<string, string>}
*/
export function getOverrides(pkg) {
return { ...pkg.pnpm?.overrides, ...pkg.overrides };
}
assert.ok(
Object.values(packageMap).some(({ isDirty }) => isDirty),
'No changes found since the last release',
);
// Propagate isDirty transitively: if a package's dependency will be bumped,
// that package also needs a bump (e.g. design-system → editor-ui → cli).
// Detect root-level changes that affect resolved dep versions without touching individual
// package.json files: pnpm.overrides (applies to all specifiers)
// and pnpm-workspace.yaml catalog entries (applies only to deps using a "catalog:…" specifier).
const rootPkgJson = JSON.parse(await readFile(resolve(rootDir, 'package.json'), 'utf-8'));
const rootPkgJsonAtTag = await exec(`git show ${lastTag}:package.json`)
.then(({ stdout }) => JSON.parse(stdout))
.catch(() => ({}));
const getOverrides = (pkg) => ({ ...pkg.pnpm?.overrides, ...pkg.overrides });
const currentOverrides = getOverrides(rootPkgJson);
const previousOverrides = getOverrides(rootPkgJsonAtTag);
const changedOverrides = new Set(
Object.keys({ ...currentOverrides, ...previousOverrides }).filter(
(k) => currentOverrides[k] !== previousOverrides[k],
),
);
const parseWorkspaceYaml = (content) => {
/**
* @param {string} content
* @returns {Record<string, unknown>}
*/
export function parseWorkspaceYaml(content) {
try {
return /** @type {Record<string, unknown>} */ (parse(content) ?? {});
} catch {
return {};
}
};
const workspaceYaml = parseWorkspaceYaml(
await readFile(resolve(rootDir, 'pnpm-workspace.yaml'), 'utf-8').catch(() => ''),
);
const workspaceYamlAtTag = parseWorkspaceYaml(
await exec(`git show ${lastTag}:pnpm-workspace.yaml`)
.then(({ stdout }) => stdout)
.catch(() => ''),
);
const getCatalogs = (ws) => {
}
/**
* @param {Record<string, unknown>} ws
* @returns {Map<string, Record<string, string>>}
*/
export function getCatalogs(ws) {
const result = new Map();
if (ws.catalog) {
result.set('default', /** @type {Record<string,string>} */ (ws.catalog));
@@ -116,98 +63,232 @@ const getCatalogs = (ws) => {
}
return result;
};
/**
* @param {Record<string, string>} currentOverrides
* @param {Record<string, string>} previousOverrides
* @returns {Set<string>}
*/
export function computeChangedOverrides(currentOverrides, previousOverrides) {
return new Set(
Object.keys({ ...currentOverrides, ...previousOverrides }).filter(
(k) => currentOverrides[k] !== previousOverrides[k],
),
);
}
/**
 * @param {Map<string, Record<string, string>>} currentCatalogs
 * @param {Map<string, Record<string, string>>} previousCatalogs
 * @returns {Map<string, Set<string>>}
 */
export function computeChangedCatalogEntries(currentCatalogs, previousCatalogs) {
	const changedCatalogEntries = new Map();
	for (const catalogName of new Set([...currentCatalogs.keys(), ...previousCatalogs.keys()])) {
		const current = currentCatalogs.get(catalogName) ?? {};
		const previous = previousCatalogs.get(catalogName) ?? {};
		const changedDeps = new Set(
			Object.keys({ ...current, ...previous }).filter((dep) => current[dep] !== previous[dep]),
		);
		if (changedDeps.size > 0) {
			changedCatalogEntries.set(catalogName, changedDeps);
		}
	}
	return changedCatalogEntries;
}
/**
* Mark packages as dirty if any dep had a root-level override or catalog version change.
* Mutates packageMap in place.
*
* @param {Record<string, { isDirty: boolean }>} packageMap
* @param {Record<string, Record<string, string>>} depsByPackage
* @param {Set<string>} changedOverrides
* @param {Map<string, Set<string>>} changedCatalogEntries
*/
export function markDirtyByRootChanges(
packageMap,
depsByPackage,
changedOverrides,
changedCatalogEntries,
) {
for (const [packageName, deps] of Object.entries(depsByPackage)) {
if (packageMap[packageName].isDirty) continue;
for (const [dep, specifier] of Object.entries(deps)) {
if (changedOverrides.has(dep)) {
packageMap[packageName].isDirty = true;
break;
}
if (typeof specifier === 'string' && specifier.startsWith('catalog:')) {
const catalogName = specifier === 'catalog:' ? 'default' : specifier.slice(8);
if (changedCatalogEntries.get(catalogName)?.has(dep)) {
packageMap[packageName].isDirty = true;
break;
}
}
}
}
}
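The `specifier.slice(8)` above works because the prefix `'catalog:'` is exactly 8 characters long. A standalone sketch of the same mapping (the helper name is illustrative, not part of the script):

```javascript
// Maps a pnpm "catalog:" specifier to the catalog name it refers to:
// 'catalog:' alone means the default catalog; 'catalog:<name>' names one.
const catalogNameOf = (specifier) =>
	specifier === 'catalog:' ? 'default' : specifier.slice('catalog:'.length);

console.log(catalogNameOf('catalog:')); // → 'default'
console.log(catalogNameOf('catalog:react18')); // → 'react18'
```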
/**
* Propagate isDirty transitively: if a package's dependency will be bumped,
* that package also needs a bump. Mutates packageMap in place.
*
* @param {Record<string, { isDirty: boolean }>} packageMap
* @param {Record<string, Record<string, string>>} depsByPackage
*/
export function propagateDirtyTransitively(packageMap, depsByPackage) {
let changed = true;
while (changed) {
changed = false;
for (const packageName in packageMap) {
if (packageMap[packageName].isDirty) continue;
if (Object.keys(depsByPackage[packageName]).some((dep) => packageMap[dep]?.isDirty)) {
packageMap[packageName].isDirty = true;
changed = true;
}
}
}
}
/**
 * @param {string} version
 * @param {import('semver').ReleaseType | 'experimental'} releaseType
 * @returns {string}
 */
export function computeNewVersion(version, releaseType) {
	switch (releaseType) {
		case 'experimental':
			return generateExperimentalVersion(version);
		case 'premajor':
			return /** @type {string} */ (
				semver.inc(version, version.includes('-rc.') ? 'prerelease' : 'premajor', undefined, 'rc')
			);
		default:
			return /** @type {string} */ (semver.inc(version, releaseType));
	}
}
async function bumpVersions() {
const rootDir = process.cwd();
const releaseType = /** @type { import('semver').ReleaseType | "experimental" } */ (
process.env.RELEASE_TYPE
);
assert.match(releaseType, /^(patch|minor|major|experimental|premajor)$/, 'Invalid RELEASE_TYPE');
// TODO: if releaseType is `auto` determine release type based on the changelog
const lastTag = (await exec('git describe --tags --match "n8n@*" --abbrev=0')).stdout.trim();
const packages = JSON.parse(
(
await exec(
`pnpm ls -r --only-projects --json | jq -r '[.[] | { name: .name, version: .version, path: .path, private: .private}]'`,
)
).stdout,
);
/** @type {Record<string, { path: string, isDirty: boolean, version: string, nextVersion?: string }>} */
const packageMap = {};
for (let { name, path, version, private: isPrivate } of packages) {
if (isPrivate && path !== rootDir) {
continue;
}
if (path === rootDir) {
name = 'monorepo-root';
}
const isDirty = await exec(`git diff --quiet HEAD ${lastTag} -- ${path}`)
.then(() => false)
.catch(() => true);
packageMap[name] = { path, isDirty, version };
}
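For context, a hedged sketch of the JSON shape the `pnpm ls … | jq` pipeline above is assumed to produce, and how the loop filters it. Package names, versions, and paths here are illustrative only:

```javascript
// Illustrative output of `pnpm ls -r --only-projects --json` after the jq
// projection: one entry per workspace project with the four fields read above.
const rootDir = '/repo';
const examplePackages = [
	{ name: 'n8n', version: '1.95.0', path: '/repo/packages/cli', private: false },
	{ name: 'some-private-pkg', version: '0.0.1', path: '/repo/packages/internal', private: true },
	{ name: 'n8n-monorepo', version: '1.95.0', path: '/repo', private: true },
];

// Private packages are skipped unless they are the repo root, which is
// renamed to 'monorepo-root', mirroring the loop above.
const names = examplePackages
	.filter((p) => !p.private || p.path === rootDir)
	.map((p) => (p.path === rootDir ? 'monorepo-root' : p.name));

console.log(names); // → ['n8n', 'monorepo-root']
```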
assert.ok(
Object.values(packageMap).some(({ isDirty }) => isDirty),
'No changes found since the last release',
);
// Propagate isDirty transitively: if a package's dependency will be bumped,
// that package also needs a bump (e.g. design-system → editor-ui → cli).
// Detect root-level changes that affect resolved dep versions without touching individual
// package.json files: pnpm.overrides (applies to all specifiers)
// and pnpm-workspace.yaml catalog entries (applies only to deps using a "catalog:…" specifier).
const rootPkgJson = JSON.parse(await readFile(resolve(rootDir, 'package.json'), 'utf-8'));
const rootPkgJsonAtTag = await exec(`git show ${lastTag}:package.json`)
.then(({ stdout }) => JSON.parse(stdout))
.catch(() => ({}));
const changedOverrides = computeChangedOverrides(
getOverrides(rootPkgJson),
getOverrides(rootPkgJsonAtTag),
);
const workspaceYaml = parseWorkspaceYaml(
await readFile(resolve(rootDir, 'pnpm-workspace.yaml'), 'utf-8').catch(() => ''),
);
const workspaceYamlAtTag = parseWorkspaceYaml(
await exec(`git show ${lastTag}:pnpm-workspace.yaml`)
.then(({ stdout }) => stdout)
.catch(() => ''),
);
const changedCatalogEntries = computeChangedCatalogEntries(
getCatalogs(workspaceYaml),
getCatalogs(workspaceYamlAtTag),
);
// Store full dep objects (with specifiers) so we can inspect "catalog:…" values below.
/** @type {Record<string, Record<string, string>>} */
const depsByPackage = {};
for (const packageName in packageMap) {
const packageFile = resolve(packageMap[packageName].path, 'package.json');
const packageJson = JSON.parse(await readFile(packageFile, 'utf-8'));
depsByPackage[packageName] = /** @type {Record<string,string>} */ (
packageJson.dependencies ?? {}
);
}
// Mark packages dirty if any dep had a root-level override or catalog version change.
markDirtyByRootChanges(packageMap, depsByPackage, changedOverrides, changedCatalogEntries);
propagateDirtyTransitively(packageMap, depsByPackage);
// Keep the monorepo version up to date with the released version
packageMap['monorepo-root'].version = packageMap['n8n'].version;
for (const packageName in packageMap) {
const { path, version, isDirty } = packageMap[packageName];
const packageFile = resolve(path, 'package.json');
const packageJson = JSON.parse(await readFile(packageFile, 'utf-8'));
const dependencyIsDirty = Object.keys(packageJson.dependencies || {}).some(
(dependencyName) => packageMap[dependencyName]?.isDirty,
);
let newVersion = version;
if (isDirty || dependencyIsDirty) {
newVersion = computeNewVersion(version, releaseType);
}
packageJson.version = packageMap[packageName].nextVersion = newVersion;
await writeFile(packageFile, JSON.stringify(packageJson, null, 2) + '\n');
}
console.log(packageMap['n8n'].nextVersion);
}
// only run when executed directly, not when imported by tests
if (import.meta.url === `file://${process.argv[1]}`) {
bumpVersions();
}

.github/scripts/bump-versions.test.mjs (new file)
/**
* Run these tests with:
*
* node --test ./.github/scripts/bump-versions.test.mjs
*/
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import {
generateExperimentalVersion,
getOverrides,
parseWorkspaceYaml,
getCatalogs,
computeChangedOverrides,
computeChangedCatalogEntries,
markDirtyByRootChanges,
propagateDirtyTransitively,
computeNewVersion,
} from './bump-versions.mjs';
describe('generateExperimentalVersion', () => {
it('creates -exp.0 from a stable version', () => {
assert.equal(generateExperimentalVersion('1.2.3'), '1.2.3-exp.0');
});
it('increments the exp prerelease counter when already at exp.0', () => {
assert.equal(generateExperimentalVersion('1.2.3-exp.0'), '1.2.3-exp.1');
});
it('increments the exp prerelease counter when already at exp.5', () => {
assert.equal(generateExperimentalVersion('1.2.3-exp.5'), '1.2.3-exp.6');
});
it('creates -exp.0 from a version with a different pre-release tag', () => {
assert.equal(generateExperimentalVersion('1.2.3-beta.1'), '1.2.3-exp.0');
});
it('handles multi-digit version numbers', () => {
assert.equal(generateExperimentalVersion('10.20.30'), '10.20.30-exp.0');
});
it('throws on an invalid version string', () => {
assert.throws(() => generateExperimentalVersion('not-a-version'), /Invalid version/);
});
});
describe('getOverrides', () => {
it('returns empty object when no overrides exist', () => {
assert.deepEqual(getOverrides({}), {});
});
it('returns pnpm.overrides when only pnpm.overrides is set', () => {
assert.deepEqual(getOverrides({ pnpm: { overrides: { lodash: '^4.0.0' } } }), {
lodash: '^4.0.0',
});
});
it('returns overrides when only top-level overrides is set', () => {
assert.deepEqual(getOverrides({ overrides: { lodash: '^4.0.0' } }), { lodash: '^4.0.0' });
});
it('merges both fields with top-level overrides taking precedence for the same key', () => {
assert.deepEqual(
getOverrides({
pnpm: { overrides: { lodash: '^3.0.0', underscore: '^1.0.0' } },
overrides: { lodash: '^4.0.0' },
}),
{ lodash: '^4.0.0', underscore: '^1.0.0' },
);
});
});
describe('parseWorkspaceYaml', () => {
it('parses valid YAML into an object', () => {
assert.deepEqual(parseWorkspaceYaml('catalog:\n lodash: "^4.0.0"'), {
catalog: { lodash: '^4.0.0' },
});
});
it('returns empty object for an empty string', () => {
assert.deepEqual(parseWorkspaceYaml(''), {});
});
it('returns empty object for invalid YAML', () => {
assert.deepEqual(parseWorkspaceYaml(': - invalid: [yaml}'), {});
});
});
describe('getCatalogs', () => {
it('returns empty map when no catalog or catalogs field exists', () => {
assert.equal(getCatalogs({}).size, 0);
});
it('returns a "default" entry for the top-level catalog field', () => {
const result = getCatalogs({ catalog: { lodash: '^4.0.0' } });
assert.equal(result.size, 1);
assert.deepEqual(result.get('default'), { lodash: '^4.0.0' });
});
it('returns named entries from the catalogs field', () => {
const result = getCatalogs({ catalogs: { react18: { react: '^18.0.0' } } });
assert.equal(result.size, 1);
assert.deepEqual(result.get('react18'), { react: '^18.0.0' });
});
it('returns both default and named catalog entries when both fields are present', () => {
const result = getCatalogs({
catalog: { lodash: '^4.0.0' },
catalogs: { react18: { react: '^18.0.0' } },
});
assert.equal(result.size, 2);
assert.deepEqual(result.get('default'), { lodash: '^4.0.0' });
assert.deepEqual(result.get('react18'), { react: '^18.0.0' });
});
});
describe('computeChangedOverrides', () => {
it('returns empty set when nothing changed', () => {
assert.equal(computeChangedOverrides({ lodash: '^4' }, { lodash: '^4' }).size, 0);
});
it('detects an added override', () => {
const result = computeChangedOverrides({ lodash: '^4' }, {});
assert.ok(result.has('lodash'));
});
it('detects a removed override', () => {
const result = computeChangedOverrides({}, { lodash: '^4' });
assert.ok(result.has('lodash'));
});
it('detects a changed override value', () => {
const result = computeChangedOverrides({ lodash: '^4' }, { lodash: '^3' });
assert.ok(result.has('lodash'));
});
it('does not include unchanged overrides', () => {
const result = computeChangedOverrides(
{ lodash: '^4', underscore: '^1' },
{ lodash: '^4', underscore: '^1' },
);
assert.equal(result.size, 0);
});
it('handles mixed changed and unchanged overrides', () => {
const result = computeChangedOverrides(
{ lodash: '^4', underscore: '^2' },
{ lodash: '^4', underscore: '^1' },
);
assert.equal(result.size, 1);
assert.ok(result.has('underscore'));
assert.ok(!result.has('lodash'));
});
});
describe('computeChangedCatalogEntries', () => {
it('returns empty map when nothing changed', () => {
const current = new Map([['default', { lodash: '^4' }]]);
const previous = new Map([['default', { lodash: '^4' }]]);
assert.equal(computeChangedCatalogEntries(current, previous).size, 0);
});
it('detects an added dep in a catalog', () => {
const current = new Map([['default', { lodash: '^4' }]]);
const previous = new Map([['default', {}]]);
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.get('default')?.has('lodash'));
});
it('detects a removed dep from a catalog', () => {
const current = new Map([['default', {}]]);
const previous = new Map([['default', { lodash: '^4' }]]);
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.get('default')?.has('lodash'));
});
it('detects a changed dep version in a catalog', () => {
const current = new Map([['default', { lodash: '^4' }]]);
const previous = new Map([['default', { lodash: '^3' }]]);
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.get('default')?.has('lodash'));
});
it('detects changes in a named catalog', () => {
const current = new Map([['react18', { react: '^18' }]]);
const previous = new Map([['react18', { react: '^17' }]]);
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.get('react18')?.has('react'));
});
it('detects a newly added catalog', () => {
const current = new Map([['newCatalog', { lodash: '^4' }]]);
const previous = new Map();
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.get('newCatalog')?.has('lodash'));
});
it('detects a removed catalog', () => {
const current = new Map();
const previous = new Map([['oldCatalog', { lodash: '^4' }]]);
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.get('oldCatalog')?.has('lodash'));
});
it('does not include a catalog that has no changed entries', () => {
const current = new Map([
['default', { lodash: '^4' }],
['react18', { react: '^18' }],
]);
const previous = new Map([
['default', { lodash: '^3' }],
['react18', { react: '^18' }],
]);
const result = computeChangedCatalogEntries(current, previous);
assert.ok(result.has('default'));
assert.ok(!result.has('react18'));
});
});
describe('markDirtyByRootChanges', () => {
it('marks a package dirty when its dep appears in changedOverrides', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { lodash: '^4' } };
markDirtyByRootChanges(packageMap, depsByPackage, new Set(['lodash']), new Map());
assert.ok(packageMap['pkg-a'].isDirty);
});
it('skips already-dirty packages', () => {
const packageMap = { 'pkg-a': { isDirty: true } };
// No deps, but package is already dirty — should not throw or change state
const depsByPackage = { 'pkg-a': {} };
markDirtyByRootChanges(packageMap, depsByPackage, new Set(['lodash']), new Map());
assert.ok(packageMap['pkg-a'].isDirty);
});
it('marks a package dirty when its dep uses "catalog:" (default catalog) and that entry changed', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { lodash: 'catalog:' } };
const changedCatalogEntries = new Map([['default', new Set(['lodash'])]]);
markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
assert.ok(packageMap['pkg-a'].isDirty);
});
it('marks a package dirty when its dep uses "catalog:<name>" and that named catalog entry changed', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { react: 'catalog:react18' } };
const changedCatalogEntries = new Map([['react18', new Set(['react'])]]);
markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
assert.ok(packageMap['pkg-a'].isDirty);
});
it('does not mark a package dirty when none of its deps changed', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { lodash: '^4' } };
markDirtyByRootChanges(packageMap, depsByPackage, new Set(['underscore']), new Map());
assert.ok(!packageMap['pkg-a'].isDirty);
});
it('does not mark a package dirty when a catalog: dep is in a catalog with no changes', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { lodash: 'catalog:' } };
const changedCatalogEntries = new Map([['default', new Set(['underscore'])]]);
markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
assert.ok(!packageMap['pkg-a'].isDirty);
});
it('does not mark a package dirty when a catalog: dep is in a different catalog than the one that changed', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { react: 'catalog:react18' } };
const changedCatalogEntries = new Map([['default', new Set(['react'])]]);
markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
assert.ok(!packageMap['pkg-a'].isDirty);
});
});
describe('propagateDirtyTransitively', () => {
it('does nothing when no packages are dirty', () => {
const packageMap = {
'pkg-a': { isDirty: false },
'pkg-b': { isDirty: false },
};
const depsByPackage = {
'pkg-a': { 'pkg-b': 'workspace:*' },
'pkg-b': {},
};
propagateDirtyTransitively(packageMap, depsByPackage);
assert.ok(!packageMap['pkg-a'].isDirty);
assert.ok(!packageMap['pkg-b'].isDirty);
});
it('propagates dirty state one level up the dependency chain', () => {
const packageMap = {
'pkg-a': { isDirty: false },
'pkg-b': { isDirty: true },
};
const depsByPackage = {
'pkg-a': { 'pkg-b': 'workspace:*' },
'pkg-b': {},
};
propagateDirtyTransitively(packageMap, depsByPackage);
assert.ok(packageMap['pkg-a'].isDirty);
});
it('propagates dirty state through multiple levels', () => {
const packageMap = {
'pkg-a': { isDirty: false },
'pkg-b': { isDirty: false },
'pkg-c': { isDirty: true },
};
const depsByPackage = {
'pkg-a': { 'pkg-b': 'workspace:*' },
'pkg-b': { 'pkg-c': 'workspace:*' },
'pkg-c': {},
};
propagateDirtyTransitively(packageMap, depsByPackage);
assert.ok(packageMap['pkg-b'].isDirty, 'pkg-b should be dirty (depends on dirty pkg-c)');
assert.ok(packageMap['pkg-a'].isDirty, 'pkg-a should be dirty (depends on dirty pkg-b)');
});
it('does not mark packages dirty when their deps are external (not in packageMap)', () => {
const packageMap = { 'pkg-a': { isDirty: false } };
const depsByPackage = { 'pkg-a': { lodash: '^4' } };
propagateDirtyTransitively(packageMap, depsByPackage);
assert.ok(!packageMap['pkg-a'].isDirty);
});
it('handles diamond dependency graphs without infinite loops', () => {
// pkg-a depends on pkg-b and pkg-c; both depend on pkg-d (dirty)
const packageMap = {
'pkg-a': { isDirty: false },
'pkg-b': { isDirty: false },
'pkg-c': { isDirty: false },
'pkg-d': { isDirty: true },
};
const depsByPackage = {
'pkg-a': { 'pkg-b': 'workspace:*', 'pkg-c': 'workspace:*' },
'pkg-b': { 'pkg-d': 'workspace:*' },
'pkg-c': { 'pkg-d': 'workspace:*' },
'pkg-d': {},
};
propagateDirtyTransitively(packageMap, depsByPackage);
assert.ok(packageMap['pkg-b'].isDirty);
assert.ok(packageMap['pkg-c'].isDirty);
assert.ok(packageMap['pkg-a'].isDirty);
});
});
describe('computeNewVersion', () => {
it('increments patch version', () => {
assert.equal(computeNewVersion('1.2.3', 'patch'), '1.2.4');
});
it('increments minor version (resets patch)', () => {
assert.equal(computeNewVersion('1.2.3', 'minor'), '1.3.0');
});
it('increments major version (resets minor and patch)', () => {
assert.equal(computeNewVersion('1.2.3', 'major'), '2.0.0');
});
it('creates -exp.0 from a stable version for experimental', () => {
assert.equal(computeNewVersion('1.2.3', 'experimental'), '1.2.3-exp.0');
});
it('increments the exp prerelease counter for experimental when already an exp version', () => {
assert.equal(computeNewVersion('1.2.3-exp.0', 'experimental'), '1.2.3-exp.1');
});
it('creates a premajor rc version from a stable version', () => {
assert.equal(computeNewVersion('1.2.3', 'premajor'), '2.0.0-rc.0');
});
it('increments the rc prerelease number for premajor when already an rc version', () => {
assert.equal(computeNewVersion('2.0.0-rc.0', 'premajor'), '2.0.0-rc.1');
});
it('increments rc correctly across multiple premajor calls', () => {
assert.equal(computeNewVersion('2.0.0-rc.4', 'premajor'), '2.0.0-rc.5');
});
});

.github/scripts/cla/check-signatures.mjs (new file)
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Collects unique commit authors for the PR (or for the commits a merge
// queue is about to land) and asks the n8n CLA service whether each one
// has signed. Surfaces three buckets to subsequent steps:
// - signed : verified contributors
// - unsigned : verified non-contributors (block the merge)
// - errored : CLA lookup failed (block the merge — fail-closed so we
// never green-light an unverified contribution)
//
// Commits whose author email is not linked to a GitHub account can't be
// looked up by login; they're surfaced separately as `unlinked`.
/**
* @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
* @typedef { import("@actions/github/lib/context").Context } Context
* @typedef { typeof import("@actions/core") } Core
*/
/**
* @param {{ github: GitHubInstance, context: Context, core: Core }} params
*/
export default async function checkSignatures({ github, context, core }) {
const { owner, repo } = context.repo;
const prNumber = process.env.PR_NUMBER;
const headSha = process.env.HEAD_SHA;
const baseSha = process.env.BASE_SHA;
const isMergeGroup = process.env.IS_MERGE_GROUP === 'true';
/** @type {Set<string>} */
const authors = new Set();
/** @type {Array<{sha: string, name: string, email: string}>} */
const unlinkedCommits = [];
/**
* @param {Array<any>} commits
*/
const collect = (commits) => {
for (const c of commits) {
// Bot-authored commits don't need a CLA; skip before the linked/unlinked split
// so they don't fall through to `unlinkedCommits` and fail `all_signed`.
if (c.author && c.author.type === 'Bot') continue;
if (c.author && c.author.login) {
authors.add(c.author.login);
} else if (c.commit && c.commit.author) {
unlinkedCommits.push({
sha: c.sha,
name: c.commit.author.name,
email: c.commit.author.email,
});
}
}
};
if (isMergeGroup) {
const { data: comparison } = await github.rest.repos.compareCommitsWithBasehead({
owner,
repo,
basehead: `${baseSha}...${headSha}`,
});
collect(comparison.commits || []);
} else if (prNumber) {
const commits = await github.paginate(github.rest.pulls.listCommits, {
owner,
repo,
pull_number: Number(prNumber),
per_page: 100,
});
collect(commits);
}
const loginList = [...authors];
core.info(`Contributors to check: ${loginList.join(', ') || '(none)'}`);
if (unlinkedCommits.length > 0) {
core.warning(
`${unlinkedCommits.length} commit(s) have an author email not linked to a GitHub account ` +
'and cannot be verified against the CLA service.',
);
}
/** @type {string[]} */
const signed = [];
/** @type {string[]} */
const unsigned = [];
/** @type {string[]} */
const errored = [];
for (const login of loginList) {
const url = `${process.env.CLA_API}?checkContributor=${encodeURIComponent(login)}`;
try {
const res = await fetch(url);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const data = await res.json();
if (data && data.isContributor === true) {
signed.push(login);
} else {
unsigned.push(login);
}
} catch (e) {
core.warning(`CLA lookup failed for @${login}: ${e instanceof Error ? e.message : String(e)}`);
errored.push(login);
}
}
const blocking = [...unsigned, ...errored];
const allSigned = blocking.length === 0 && unlinkedCommits.length === 0;
core.setOutput('signed', signed.join(','));
core.setOutput('unsigned', unsigned.join(','));
core.setOutput('errored', errored.join(','));
core.setOutput('unlinked', JSON.stringify(unlinkedCommits));
core.setOutput('all_signed', String(allSigned));
}
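The fail-closed split above can be summarized as a small classification sketch. The function name and the result shape are illustrative, not part of the script:

```javascript
// Classifies one contributor lookup: only an explicit isContributor === true
// counts as 'signed'; any lookup failure lands in 'errored', which blocks the
// merge just like 'unsigned' does (fail-closed).
function classify(result) {
	if (result.error) return 'errored';
	return result.isContributor === true ? 'signed' : 'unsigned';
}

console.log(classify({ isContributor: true })); // → 'signed'
console.log(classify({ isContributor: false })); // → 'unsigned'
console.log(classify({ error: 'HTTP 500' })); // → 'errored'
```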

.github/scripts/cla/manage-label.mjs (new file)
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Adds the `cla-signed` label when every contributor has signed, and
// removes it otherwise. Idempotent: re-runs safely without duplicating
// the label or erroring if it's already in the desired state. Creates
// the label on first use so the workflow is self-contained.
/**
* @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
* @typedef { import("@actions/github/lib/context").Context } Context
* @typedef { typeof import("@actions/core") } Core
*/
const LABEL_NAME = 'cla-signed';
const LABEL_COLOR = '0e8a16'; // GitHub's standard green
const LABEL_DESCRIPTION = 'All contributors on this PR have signed the CLA';
/**
* @param {{ github: GitHubInstance, context: Context, core: Core }} params
*/
export default async function manageClaLabel({ github, context, core }) {
const { owner, repo } = context.repo;
const issue_number = Number(process.env.PR_NUMBER);
const allSigned = process.env.ALL_SIGNED === 'true';
if (allSigned) {
// Make sure the label exists before trying to apply it — addLabels
// errors if the label is missing from the repo.
try {
await github.rest.issues.getLabel({ owner, repo, name: LABEL_NAME });
} catch (e) {
if (errorStatus(e) === 404) {
try {
await github.rest.issues.createLabel({
owner,
repo,
name: LABEL_NAME,
color: LABEL_COLOR,
description: LABEL_DESCRIPTION,
});
} catch (createErr) {
// 422 = race with a parallel run that just created it. Fine.
if (errorStatus(createErr) !== 422) throw createErr;
}
} else {
throw e;
}
}
await github.rest.issues.addLabels({
owner,
repo,
issue_number,
labels: [LABEL_NAME],
});
core.info(`Applied "${LABEL_NAME}" label to PR #${issue_number}`);
} else {
// 404 just means the label wasn't on the PR — nothing to undo.
try {
await github.rest.issues.removeLabel({
owner,
repo,
issue_number,
name: LABEL_NAME,
});
core.info(`Removed "${LABEL_NAME}" label from PR #${issue_number}`);
} catch (e) {
if (errorStatus(e) !== 404) throw e;
}
}
}
/**
* Octokit's request errors carry an HTTP `status` field, but TypeScript
* sees catch parameters as `unknown`. This guard narrows safely.
* @param {unknown} e
* @returns {number | undefined}
*/
function errorStatus(e) {
return typeof e === 'object' && e !== null && 'status' in e && typeof e.status === 'number'
? e.status
: undefined;
}

(new file)
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Translates the buckets emitted by check-signatures.mjs into a single
// commit status on the head SHA. The status `context` name is what a
// repository ruleset gates on; description and target_url are best-effort
// human signals.
//
// State mapping:
// - success: every contributor is signed and every commit author is linked
// - error : only failures were API lookup errors (transient)
// - failure: at least one contributor is verified unsigned, or commits
// have author emails not linked to a GitHub account
/**
* @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
* @typedef { import("@actions/github/lib/context").Context } Context
* @typedef { typeof import("@actions/core") } Core
*/
/**
* @param {{ github: GitHubInstance, context: Context, core: Core }} params
*/
export default async function postFinalClaStatus({ github, context }) {
const allSigned = process.env.ALL_SIGNED === 'true';
const unsigned = (process.env.UNSIGNED ?? '').split(',').filter(Boolean);
const errored = (process.env.ERRORED ?? '').split(',').filter(Boolean);
const unlinked = JSON.parse(process.env.UNLINKED || '[]');
/** @type {'success' | 'failure' | 'error' | 'pending'} */
let state;
let description;
if (allSigned) {
state = 'success';
description = 'All contributors have signed the CLA';
} else if (errored.length > 0 && unsigned.length === 0 && unlinked.length === 0) {
state = 'error';
description = `Could not verify: ${errored.join(', ')}`;
} else {
state = 'failure';
const parts = [];
if (unsigned.length > 0) parts.push(`unsigned: ${unsigned.join(', ')}`);
if (errored.length > 0) parts.push(`errored: ${errored.join(', ')}`);
if (unlinked.length > 0) parts.push(`${unlinked.length} unlinked commit(s)`);
description = parts.join(' | ');
}
// GitHub commit status description is capped at 140 chars.
if (description.length > 140) {
description = description.slice(0, 137) + '…';
}
const prNumber = process.env.PR_NUMBER;
const target_url = prNumber
? `${context.payload.repository?.html_url}/pull/${prNumber}`
: process.env.CLA_SIGN_URL;
await github.rest.repos.createCommitStatus({
owner: context.repo.owner,
repo: context.repo.repo,
sha: /** @type {string} */ (process.env.HEAD_SHA),
state,
context: /** @type {string} */ (process.env.STATUS_CONTEXT),
description,
target_url,
});
}
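The truncation above keeps the description under GitHub's 140-character cap for commit statuses (137 characters plus a single ellipsis character). The same logic as a standalone sketch, with an illustrative helper name:

```javascript
// Clamp a commit-status description to GitHub's 140-character limit.
const clamp = (s, max = 140) => (s.length > max ? s.slice(0, max - 3) + '…' : s);

console.log(clamp('short description')); // → 'short description'
console.log(clamp('x'.repeat(150)).length); // → 138
```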

.github/scripts/cla/resolve-context.mjs (new file)
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Reads the triggering event (pull_request_target, issue_comment,
// merge_group, or workflow_dispatch) and emits the head/base SHA and PR
// number that the rest of the workflow needs. For /cla-check comments, it
// also leaves an "eyes" reaction so the commenter sees we picked it up.

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */
/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function resolveClaContext({ github, context, core }) {
  const { owner, repo } = context.repo;
  const event = context.eventName;

  let prNumber = '';
  let headSha = '';
  let baseSha = '';
  let isMergeGroup = false;

  if (event === 'pull_request_target' && context.payload.pull_request) {
    const pr = context.payload.pull_request;
    prNumber = String(pr.number);
    headSha = pr.head.sha;
    baseSha = pr.base.sha;
  } else if (event === 'issue_comment' && context.payload.issue) {
    prNumber = String(context.payload.issue.number);
    const { data: pr } = await github.rest.pulls.get({
      owner,
      repo,
      pull_number: Number(prNumber),
    });
    headSha = pr.head.sha;
    baseSha = pr.base.sha;

    // Acknowledge the command so the commenter sees we received it.
    try {
      await github.rest.reactions.createForIssueComment({
        owner,
        repo,
        comment_id: context.payload.comment?.id || -1,
        content: 'eyes',
      });
    } catch (e) {
      core.info(`Could not react to comment: ${e instanceof Error ? e.message : String(e)}`);
    }
  } else if (event === 'merge_group') {
    isMergeGroup = true;
    headSha = context.payload.merge_group.head_sha;
    baseSha = context.payload.merge_group.base_sha;
  } else if (event === 'workflow_dispatch') {
    const input = context.payload.inputs?.pr_number;
    if (!input) {
      core.setFailed('workflow_dispatch requires the pr_number input');
      return;
    }
    prNumber = String(input);
    const { data: pr } = await github.rest.pulls.get({
      owner,
      repo,
      pull_number: Number(prNumber),
    });
    headSha = pr.head.sha;
    baseSha = pr.base.sha;
  }

  core.setOutput('pr_number', prNumber);
  core.setOutput('head_sha', headSha);
  core.setOutput('base_sha', baseSha);
  core.setOutput('is_merge_group', String(isMergeGroup));
}

.github/scripts/cla/update-pr-comment.mjs

@@ -0,0 +1,104 @@
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Maintains a single CLA comment per PR, keyed by an HTML marker so the
// same comment is edited in place across re-runs instead of spammed.
// A clean PR that has never been flagged gets no comment at all — only
// PRs that needed a nudge get the eventual "thanks" follow-up.

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */

/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function updatePRComment({ github, context }) {
  const { owner, repo } = context.repo;
  const issue_number = Number(process.env.PR_NUMBER);

  const allSigned = process.env.ALL_SIGNED === 'true';
  const unsigned = (process.env.UNSIGNED ?? '').split(',').filter(Boolean);
  const errored = (process.env.ERRORED ?? '').split(',').filter(Boolean);
  const unlinked = JSON.parse(process.env.UNLINKED || '[]');
  const MARKER = /** @type {string} */ (process.env.COMMENT_MARKER);

  const comments = await github.paginate(github.rest.issues.listComments, {
    owner,
    repo,
    issue_number,
    per_page: 100,
  });

  // Only adopt the comment as ours if it's bot-authored — otherwise a user
  // who copies our marker into their own comment would either hijack the
  // thread or make updateComment 403 with insufficient permissions.
  const existing = comments.find(
    (c) => c.body && c.body.includes(MARKER) && c.user && c.user.type === 'Bot',
  );

  let body;
  if (allSigned) {
    // Only leave a "thanks" trail if we already nudged once. Avoids
    // pinging every clean PR with a CLA comment.
    if (!existing) {
      return;
    }
    body = [
      MARKER,
      '✅ **CLA Check passed.** All contributors on this PR have signed the n8n CLA — thank you!',
    ].join('\n');
  } else {
    const lines = [MARKER, '## CLA signatures required', ''];
    lines.push(`Thank you for your submission! We really appreciate it.
Like many open source projects, we ask that you sign our [Contributor License Agreement](${process.env.CLA_SIGN_URL}) before we can accept your contribution.`);
    lines.push('');
    if (unsigned.length > 0) {
      lines.push('**Contributors who still need to sign:**');
      for (const u of unsigned) {
        lines.push(`- @${u}`);
      }
      lines.push('');
    }
    if (errored.length > 0) {
      lines.push('**Could not verify (will retry on next push):**');
      for (const u of errored) {
        lines.push(`- @${u}`);
      }
      lines.push('');
    }
    if (unlinked.length > 0) {
      lines.push('**Commits authored by an email not linked to a GitHub account:**');
      for (const c of unlinked) {
        lines.push(`- \`${c.sha.slice(0, 7)}\` ${c.name} <${c.email}>`);
      }
      lines.push('');
      lines.push(
        'Add the email to your GitHub account ' +
          '([instructions](https://docs.github.com/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-email-preferences/adding-an-email-address-to-your-github-account)) ' +
          'or amend the commits to use a linked email, then push again.',
      );
      lines.push('');
    }
    lines.push('Once signed, comment `/cla-check` on this PR to re-run verification.');
    body = lines.join('\n');
  }

  if (existing) {
    await github.rest.issues.updateComment({
      owner,
      repo,
      comment_id: existing.id,
      body,
    });
  } else {
    await github.rest.issues.createComment({
      owner,
      repo,
      issue_number,
      body,
    });
  }
}


@@ -40,6 +40,8 @@ export const EXCLUDE_PATTERNS = [
	'packages/testing/**',
	// Lock file (can produce massive diffs on dependency changes)
	'pnpm-lock.yaml',
	'**/*.md',
	'**/*.mdx'
];
const BOT_MARKER = '<!-- pr-size-check -->';

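The new markdown exclusions follow the same glob semantics as the existing entries: `**/` matches any directory prefix, including none, so both nested docs and root-level files are skipped. A minimal approximation of that rule (hand-rolled and illustrative only — the actual glob matcher used by the PR size check isn't shown in this diff):

```javascript
// Illustrative stand-in for matching '**/*.md' and '**/*.mdx':
// a file is excluded when its final path segment ends in .md or .mdx,
// at any directory depth (including the repository root).
function isExcludedDoc(filename) {
  return /(^|\/)[^/]+\.mdx?$/.test(filename);
}

console.log(isExcludedDoc('packages/cli/AGENTS.md')); // true
console.log(isExcludedDoc('packages/frontend/STORIES.mdx')); // true
console.log(isExcludedDoc('README.md')); // true
console.log(isExcludedDoc('packages/cli/src/service.ts')); // false
```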

@@ -203,4 +203,13 @@ describe('countFilteredAdditions', () => {
		];
		assert.equal(countFilteredAdditions(files, EXCLUDE_PATTERNS), 50);
	});

	it('applies EXCLUDE_PATTERNS to markdown files', () => {
		const files = [
			{ filename: 'packages/cli/src/service.ts', additions: 50 },
			{ filename: 'packages/cli/AGENTS.md', additions: 100 },
			{ filename: 'packages/frontend/STORIES.mdx', additions: 100 },
		];
		assert.equal(countFilteredAdditions(files, EXCLUDE_PATTERNS), 50);
	});
});


@@ -1,19 +1,24 @@
{
	"updatedAt": "2026-04-23T14:38:52.015Z",
	"updatedAt": "2026-05-11T14:16:56.139Z",
	"source": "currents",
	"projectId": "LRxcNt",
	"quarantined": [
		"Canvas Actions > Node hover actions > should execute node",
		"Chat user role @capability:proxy > use chat as chat user @auth:chat",
		"Code node > Code editor > should execute the placeholder successfully in both modes",
		"Data Mapping > maps expressions to updated fields correctly @fixme",
		"Data pinning > Advanced pinning scenarios > should be able to reference paired items in node before pinned data",
		"Executions Filter > should reset filter and remove badge",
		"Debug mode > should enter debug mode for failed executions",
		"HITL for Tools @capability:proxy > should add a HITL tool node and run it",
		"Langchain Integration @capability:proxy > Advanced Workflow Features > should render runItems for sub-nodes and allow switching between them",
		"Inject previous execution > can map keys from previous execution",
		"Instance AI remediation guard @capability:proxy > should preserve a submitted workflow when mocked credential verification needs setup",
		"Instance AI sidebar @capability:proxy > should delete thread via action menu",
		"Instance AI workflow setup actions @capability:proxy > should apply parameter and credential edits and persist them to the workflow",
		"Instance AI workflow setup actions @capability:proxy > should partially apply completed cards when Later is clicked on the last step",
		"Loads template setup modal correctly",
		"Resource Locator > should retrieve list options when other params throw errors",
		"NDV Data Display > Schema View > should not display pagination for schema",
		"Settings @capability:proxy > set global credentials for a provider",
		"Tools usage @capability:proxy > use web search tool in conversation",
		"Workflow Executions > when new workflow is not saved > should open executions tab",
		"Workflow agent @capability:proxy > sharing workflow agent with project chat user",
		"can configure, connect, and sync secrets from LocalStack",
		"can create a connection pointing to LocalStack",
		"manage workflow agents @auth:admin",
.github/workflows/ci-cla-check.yml

@@ -0,0 +1,184 @@
name: 'CI: CLA Check'

# In-house replacement for the GitHub App "CLA Bot".
#
# Triggers
# - pull_request_target (opened/synchronize/reopened): re-checks signatures
#   whenever a PR is opened or new commits are pushed.
# - issue_comment (`/cla-check` on a PR): manual re-check after a contributor
#   signs the CLA, without needing a push.
# - merge_group: re-checks at merge-queue time so a ruleset can hard-block
#   unsigned merges even if the PR check went stale.
#
# Output
# - A commit status named "CLA Check" on the head SHA. Add this name to a
#   ruleset's required-checks list to gate merges on it.
# - A single, edited-in-place PR comment listing unsigned contributors.
#
# Implementation
# The heavy lifting lives in .github/scripts/cla/*.mjs. Each step below
# loads its corresponding module and invokes its default export.

on:
  pull_request_target:
    types: [opened, synchronize, reopened]
  issue_comment:
    types: [created]
  merge_group:
  workflow_dispatch:
    inputs:
      pr_number:
        description: 'Pull request number to re-verify'
        required: true
        type: string

permissions:
  contents: read
  pull-requests: write
  issues: write
  statuses: write

concurrency:
  group: cla-check-${{ github.event.pull_request.number || github.event.issue.number || github.event.merge_group.head_sha || github.event.inputs.pr_number || github.ref }}
  cancel-in-progress: true

env:
  STATUS_CONTEXT: 'CLA Check'
  CLA_API: 'https://cla-bot-prod.users.n8n.cloud/webhook/cla/check'
  CLA_SIGN_URL: 'https://cla-bot-prod.users.n8n.cloud/webhook/cla'
  COMMENT_MARKER: '<!-- n8n-cla-check -->'

jobs:
  cla-check:
    name: Verify CLA signatures
    # Skip issue_comment unless it's on a PR and the body starts with /cla-check.
    if: >-
      github.event_name != 'issue_comment' ||
      (github.event.issue.pull_request != null &&
      startsWith(github.event.comment.body, '/cla-check'))
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Generate GitHub App Token
        id: generate-token
        uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
        with:
          app-id: ${{ secrets.N8N_ASSISTANT_APP_ID }}
          private-key: ${{ secrets.N8N_ASSISTANT_PRIVATE_KEY }}

      - name: Checkout CLA scripts
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          sparse-checkout: .github/scripts/cla
          sparse-checkout-cone-mode: false

      - name: Resolve PR context
        id: context
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/resolve-context.mjs');
            await mod.default({ github, context, core });

      - name: Post pending commit status
        if: steps.context.outputs.head_sha != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          HEAD_SHA: ${{ steps.context.outputs.head_sha }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            await github.rest.repos.createCommitStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: process.env.HEAD_SHA,
              state: 'pending',
              context: process.env.STATUS_CONTEXT,
              description: 'Verifying CLA signatures…',
            });

      - name: Check CLA signatures
        id: check
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          HEAD_SHA: ${{ steps.context.outputs.head_sha }}
          BASE_SHA: ${{ steps.context.outputs.base_sha }}
          IS_MERGE_GROUP: ${{ steps.context.outputs.is_merge_group }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/check-signatures.mjs');
            await mod.default({ github, context, core });

      - name: Post final commit status
        if: always() && steps.context.outputs.head_sha != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          HEAD_SHA: ${{ steps.context.outputs.head_sha }}
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
          UNSIGNED: ${{ steps.check.outputs.unsigned }}
          ERRORED: ${{ steps.check.outputs.errored }}
          UNLINKED: ${{ steps.check.outputs.unlinked }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/post-final-status.mjs');
            await mod.default({ github, context, core });

      - name: Update PR comment
        # Don't comment from merge_group (no PR context) or when the check
        # failed to produce a result.
        if: >-
          always() &&
          steps.context.outputs.pr_number != '' &&
          steps.check.outputs.all_signed != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
          UNSIGNED: ${{ steps.check.outputs.unsigned }}
          ERRORED: ${{ steps.check.outputs.errored }}
          UNLINKED: ${{ steps.check.outputs.unlinked }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/update-pr-comment.mjs');
            await mod.default({ github, context, core });

      - name: Manage cla-signed label
        # Skip on merge_group (no PR) and when the check produced no result.
        if: >-
          always() &&
          steps.context.outputs.pr_number != '' &&
          steps.check.outputs.all_signed != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/manage-label.mjs');
            await mod.default({ github, context, core });

      - name: React to /cla-check comment
        if: always() && github.event_name == 'issue_comment' && steps.check.outputs.all_signed != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            try {
              await github.rest.reactions.createForIssueComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: context.payload.comment.id,
                content: process.env.ALL_SIGNED === 'true' ? '+1' : '-1',
              });
            } catch (e) {
              core.info(`Could not react to comment: ${e instanceof Error ? e.message : String(e)}`);
            }


@@ -0,0 +1,23 @@
# .github/workflows/ci-codeowners-validation.yml
name: "CI: Validate CODEOWNERS"

# Only run when CODEOWNERS or packages change
on:
  pull_request:
    paths:
      - ".github/CODEOWNERS"
      - "packages/**"

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - uses: mszostok/codeowners-validator@7f3f5e28c6d7b8dfae5731e54ce2272ca384592f # v0.7.4
        with:
          # Start with safe checks only. Add "owners" and
          # experimental_checks: "notowned" once the file has settled
          # and skip patterns are configured.
          checks: "files,duppatterns,syntax"
          github_access_token: "${{ secrets.GITHUB_TOKEN }}"


@@ -1,6 +1,7 @@
name: 'CI: PR Quality Checks'

on:
  merge_group:
  pull_request:
    types:
      - opened
@@ -46,11 +47,14 @@ jobs:
    name: Ownership Acknowledgement
    # Checks that the author has acknowledged the ownership of their code
    # by checking the checkbox in the PR summary.
    # Skipped for bot-authored PRs (Dependabot, Renovate, github-actions, Aikido, etc.).
    # The required aggregator `required-pr-quality-checks` treats skipped as success.
    if: |
      github.event_name == 'pull_request' &&
      github.event.pull_request.head.repo.full_name == github.repository &&
      !contains(github.event.pull_request.labels.*.name, 'automation:backport') &&
      !contains(github.event.pull_request.title, '(backport to')
      !contains(github.event.pull_request.title, '(backport to') &&
      github.event.pull_request.user.type != 'Bot'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
@@ -74,12 +78,15 @@ jobs:
  check-pr-size:
    name: PR Size Limit
    # Checks that the PR size doesn't exceed the limit (currently 1000 lines)
    # Allows for override via '/size-limit-override' comment
    # Allows for override via '/size-limit-override' comment.
    # Skipped for bot-authored PRs — dep bumps from Dependabot/Renovate/Aikido
    # routinely exceed the size limit and shouldn't be gated on it.
    if: |
      github.event_name == 'pull_request' &&
      github.event.pull_request.head.repo.full_name == github.repository &&
      !contains(github.event.pull_request.labels.*.name, 'automation:backport') &&
      !contains(github.event.pull_request.title, '(backport to')
      !contains(github.event.pull_request.title, '(backport to') &&
      github.event.pull_request.user.type != 'Bot'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
@@ -99,3 +106,76 @@ jobs:
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      run: node .github/scripts/quality/check-pr-size.mjs

  changes:
    name: Detect Changes
    if: github.event_name == 'pull_request' || github.event_name == 'merge_group'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
    outputs:
      janitor: ${{ fromJSON(steps.filter.outputs.results).janitor == true }}
      code-health: ${{ fromJSON(steps.filter.outputs.results)['code-health'] == true }}
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Detect changed paths
        id: filter
        uses: ./.github/actions/ci-filter
        with:
          mode: filter
          filters: |
            janitor:
              packages/testing/playwright/**
              packages/testing/janitor/**
            code-health:
              **/package.json
              pnpm-workspace.yaml
              .code-health-baseline.json
              packages/testing/code-health/**

  check-static-analysis:
    name: Static Analysis
    needs: changes
    if: |
      github.event_name == 'merge_group' ||
      needs.changes.outputs.code-health == 'true' ||
      needs.changes.outputs.janitor == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Setup Node.js
        uses: ./.github/actions/setup-nodejs
        with:
          build-command: pnpm turbo run build --filter=@n8n/code-health --filter=@n8n/playwright-janitor

      - name: Run code-health
        if: github.event_name == 'merge_group' || needs.changes.outputs.code-health == 'true'
        run: pnpm --filter=@n8n/code-health check

      - name: Run janitor
        if: ${{ !cancelled() && (github.event_name == 'merge_group' || needs.changes.outputs.janitor == 'true') }}
        run: pnpm --filter=n8n-playwright janitor

  required-pr-quality-checks:
    name: Required PR Quality Checks
    needs: [check-ownership-checkbox, check-pr-size, check-static-analysis]
    if: always()
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          sparse-checkout: .github/actions/ci-filter
          sparse-checkout-cone-mode: false

      - name: Validate required checks
        uses: ./.github/actions/ci-filter
        with:
          mode: validate
          job-results: ${{ toJSON(needs) }}


@@ -41,7 +41,12 @@ jobs:
  chromatic:
    name: Chromatic
    needs: filter
    if: needs.filter.outputs.design_system == 'true'
    # Skip on fork PRs — they don't have access to the Chromatic secret.
    # This job is intentionally not in `required-review-checks` needs, so it
    # is non-blocking and won't gate merging.
    if: >-
      needs.filter.outputs.design_system == 'true' &&
      github.event.pull_request.head.repo.full_name == github.repository
    uses: ./.github/workflows/test-visual-chromatic.yml
    with:
      ref: ${{ needs.filter.outputs.commit_sha }}
@@ -51,7 +56,7 @@ jobs:
  # PRs cannot be merged unless this job passes.
  required-review-checks:
    name: Required Review Checks
    needs: [filter, chromatic]
    needs: [filter]
    if: always()
    runs-on: ubuntu-slim
    steps:


@@ -22,6 +22,7 @@ jobs:
      ci: ${{ fromJSON(steps.ci-filter.outputs.results).ci == true }}
      unit: ${{ fromJSON(steps.ci-filter.outputs.results).unit == true }}
      e2e: ${{ fromJSON(steps.ci-filter.outputs.results).e2e == true }}
      dev_server_smoke: ${{ fromJSON(steps.ci-filter.outputs.results)['dev-server-smoke'] == true }}
      workflows: ${{ fromJSON(steps.ci-filter.outputs.results).workflows == true }}
      workflow_scripts: ${{ fromJSON(steps.ci-filter.outputs.results)['workflow-scripts'] == true }}
      db: ${{ fromJSON(steps.ci-filter.outputs.results).db == true }}
@@ -29,6 +30,7 @@ jobs:
      e2e_performance: ${{ fromJSON(steps.ci-filter.outputs.results)['e2e-performance'] == true }}
      instance_ai_workflow_eval: ${{ fromJSON(steps.ci-filter.outputs.results)['instance-ai-workflow-eval'] == true }}
      commit_sha: ${{ steps.commit-sha.outputs.sha }}
      merge_base: ${{ steps.ci-filter.outputs.merge-base }}
      matrix: ${{ steps.generate-matrix.outputs.matrix }}
      skip_tests: ${{ steps.generate-matrix.outputs.skip-tests }}
    steps:
@@ -63,6 +65,15 @@ jobs:
              .github/actions/load-n8n-docker/**
              packages/testing/playwright/**
              packages/testing/containers/**
            dev-server-smoke:
              packages/frontend/editor-ui/vite.config.mts
              pnpm-workspace.yaml
              packages/@n8n/*/package.json
              packages/testing/playwright/tests/dev-server-smoke/**
              packages/testing/playwright/playwright.config.ts
              packages/testing/playwright/playwright-projects.ts
              packages/testing/playwright/package.json
              .github/workflows/test-dev-server-smoke-reusable.yml
            workflows: .github/**
            workflow-scripts: .github/scripts/**
            performance:
@@ -109,9 +120,10 @@ jobs:
        if: fromJSON(steps.ci-filter.outputs.results).ci || fromJSON(steps.ci-filter.outputs.results).e2e
        env:
          CHANGED_FILES: ${{ steps.ci-filter.outputs.changed-files }}
          MERGE_BASE: ${{ steps.ci-filter.outputs.merge-base }}
        run: |
          FILES_CSV=$(echo "$CHANGED_FILES" | tr '\n' ',' | sed 's/,$//')
          MATRIX=$(node packages/testing/playwright/scripts/distribute-tests.mjs --matrix 16 --orchestrate --impact "--files=$FILES_CSV" --base=FETCH_HEAD)
          MATRIX=$(node packages/testing/playwright/scripts/distribute-tests.mjs --matrix 16 --orchestrate --impact "--files=$FILES_CSV" "--base=$MERGE_BASE")
          echo "matrix=$MATRIX" >> "$GITHUB_OUTPUT"
          echo "skip-tests=$(node -e "process.stdout.write(JSON.parse(process.argv[1])[0]?.skip === true ? 'true' : 'false')" "$MATRIX")" >> "$GITHUB_OUTPUT"
@@ -199,6 +211,7 @@ jobs:
      test-mode: docker-artifact
      test-command: pnpm --filter=n8n-playwright test:container:sqlite:e2e tests/e2e/building-blocks/workflow-entry-points.spec.ts
      workers: '1'
      artifact-prefix: sanity
    secrets: inherit

  # Full e2e run. Internal PRs run multi-main (postgres + redis + caddy + 2 mains + 1 worker).
@@ -215,10 +228,23 @@ jobs:
    with:
      branch: ${{ needs.install-and-build.outputs.commit_sha }}
      test-mode: docker-artifact
      test-command: ${{ github.event.pull_request.head.repo.fork == true && 'pnpm --filter=n8n-playwright test:container:sqlite:e2e --grep-invert="@licensed"' || 'pnpm --filter=n8n-playwright test:container:multi-main:e2e' }}
      test-command: ${{ github.event.pull_request.head.repo.fork == true && 'pnpm --filter=n8n-playwright test:container:sqlite:e2e --grep-invert=@licensed' || 'pnpm --filter=n8n-playwright test:container:multi-main:e2e' }}
      workers: '1'
      pre-generated-matrix: ${{ needs.install-and-build.outputs.matrix }}
      upload-failure-artifacts: ${{ github.event.pull_request.head.repo.fork == true }}
      artifact-prefix: e2e
    secrets: inherit

  # Boots the editor-ui against the Vite dev server and fails on any console
  # or page error during load. Catches regressions in dev-mode module
  # resolution (missing Vite alias, broken workspace package interop) that
  # the production-bundle e2e job bundles around.
  dev-server-smoke:
    name: Dev-server boot smoke
    needs: install-and-build
    if: needs.install-and-build.outputs.dev_server_smoke == 'true' && github.event_name != 'merge_group'
    uses: ./.github/workflows/test-dev-server-smoke-reusable.yml
    with:
      ref: ${{ needs.install-and-build.outputs.commit_sha }}
    secrets: inherit

  db-tests:
@@ -266,10 +292,15 @@ jobs:
      ref: ${{ needs.install-and-build.outputs.commit_sha }}
    secrets: inherit

  # Depends on prepare-docker so the eval workflow can load the SHA-keyed image cache.
  # prepare-docker may be skipped (its filter excludes .github/**); the eval falls back to a local build.
  instance-ai-workflow-evals:
    name: Instance AI Workflow Evals
    needs: install-and-build
    needs: [install-and-build, prepare-docker]
    if: >-
      !cancelled() &&
      needs.install-and-build.result == 'success' &&
      (needs.prepare-docker.result == 'success' || needs.prepare-docker.result == 'skipped') &&
      needs.install-and-build.outputs.instance_ai_workflow_eval == 'true' &&
      github.repository == 'n8n-io/n8n' &&
      (github.event_name != 'pull_request' || !github.event.pull_request.head.repo.fork)
@@ -291,6 +322,7 @@ jobs:
        check-packaging,
        sqlite-sanity,
        e2e,
        dev-server-smoke,
        db-tests,
        performance,
        security-checks,


@@ -0,0 +1,43 @@
name: 'Release: Build Daytona snapshot'

on:
  workflow_call:
    inputs:
      n8n_version:
        description: 'n8n version to build the Daytona snapshot for'
        required: true
        type: string
    secrets:
      DAYTONA_API_KEY:
        required: true
      DAYTONA_API_URL:
        required: false

  workflow_dispatch:
    inputs:
      n8n_version:
        description: 'n8n version to build the Daytona snapshot for (e.g. 1.123.0)'
        required: true
        type: string

permissions:
  contents: read

jobs:
  build-snapshot:
    name: Build versioned Daytona snapshot
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Setup Node.js and build
        uses: ./.github/actions/setup-nodejs

      - name: Build versioned Daytona snapshot
        env:
          N8N_VERSION: ${{ inputs.n8n_version }}
          DAYTONA_API_KEY: ${{ secrets.DAYTONA_API_KEY }}
          DAYTONA_API_URL: ${{ secrets.DAYTONA_API_URL }}
        run: node packages/@n8n/instance-ai/scripts/build-snapshot.cjs --version "$N8N_VERSION"


@@ -76,11 +76,9 @@ jobs:
          cp README.md packages/cli/README.md
          sed -i "s/default: 'dev'/default: '${{ needs.determine-version-info.outputs.release_type }}'/g" packages/cli/dist/config/schema.js

      - name: Publish n8n to NPM with rc tag
        env:
          PUBLISH_BRANCH: ${{ github.event.pull_request.base.ref }}
        run: pnpm --filter n8n publish --publish-branch "$PUBLISH_BRANCH" --access public --tag rc --no-git-checks

      # Publishing via `pnpm publish -r` is idempotent, as it checks if the package exists
      # and only publishes if it doesn't. This is why we do the sub-packages before the main n8n package.
      # So if anything goes wrong, we can easily re-try the run instead of abandoning the release.
      - name: Publish other packages to NPM
        env:
          PUBLISH_BRANCH: ${{ github.event.pull_request.base.ref }}
@@ -92,6 +90,12 @@ jobs:
          fi
          pnpm publish -r --filter '!n8n' --publish-branch "$PUBLISH_BRANCH" --access public --tag "$PUBLISH_TAG" --no-git-checks

      # If we don't use the --tag rc, all releases will default to "latest".
      - name: Publish n8n to NPM with rc tag
        env:
          PUBLISH_BRANCH: ${{ github.event.pull_request.base.ref }}
        run: pnpm --filter n8n publish --publish-branch "$PUBLISH_BRANCH" --access public --tag rc --no-git-checks

      - name: Cleanup rc tag
        run: npm dist-tag rm n8n rc
        continue-on-error: true
@@ -105,6 +109,15 @@ jobs:
      release_type: ${{ needs.determine-version-info.outputs.release_type }}
    secrets: inherit

  build-daytona-snapshot:
    name: Build Daytona snapshot
    needs: [determine-version-info, publish-to-npm]
    if: github.event.pull_request.merged == true
    uses: ./.github/workflows/release-build-daytona-snapshot.yml
    with:
      n8n_version: ${{ needs.determine-version-info.outputs.version }}
    secrets: inherit

  create-github-release:
    name: Create GitHub Release
    needs: [determine-version-info, publish-to-npm, publish-to-docker-hub]
@@ -183,11 +196,13 @@ jobs:
        create-github-release,
        move-track-tag,
        promote-stable-tag,
        build-daytona-snapshot,
      ]
    if: |
      always() &&
      needs.publish-to-npm.result == 'success' &&
      needs.create-github-release.result == 'success' &&
      needs.build-daytona-snapshot.result == 'success' &&
      (needs.move-track-tag.result == 'success' || needs.move-track-tag.result == 'skipped') &&
      (needs.promote-stable-tag.result == 'success' || needs.promote-stable-tag.result == 'skipped')
    uses: ./.github/workflows/release-publish-post-release.yml


@@ -56,7 +56,7 @@ jobs:
          output-file: sbom-source.cdx.json

      - name: Attest SBOM for source release
        uses: actions/attest-sbom@07e74fc4e78d1aad915e867f9a094073a9f71527 # v4.0.0
        uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
        with:
          subject-path: './package.json'
          sbom-path: 'sbom-source.cdx.json'


@@ -0,0 +1,49 @@
name: 'Test: Dev-server boot smoke'

on:
  workflow_call:
    inputs:
      ref:
        description: 'Git ref to test'
        required: true
        type: string

env:
  NODE_OPTIONS: '--max-old-space-size=6144'
  PLAYWRIGHT_BROWSERS_PATH: packages/testing/playwright/.playwright-browsers

jobs:
  smoke:
    name: Dev-server smoke
    runs-on: ${{ vars.RUNNER_PROVIDER == 'github' && 'ubuntu-latest' || 'blacksmith-4vcpu-ubuntu-2204' }}
    timeout-minutes: 10
    permissions:
      contents: read

    steps:
      - name: Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          ref: ${{ inputs.ref }}

      - name: Setup and Build
        uses: ./.github/actions/setup-nodejs

      - name: Install Browsers
        run: pnpm turbo run install-browsers --filter=n8n-playwright

      - name: Run dev-server smoke spec
        # Run from repo root so PLAYWRIGHT_BROWSERS_PATH (relative) resolves
        # correctly. cd-ing into the playwright package double-nests it.
        run: pnpm --filter=n8n-playwright test:dev-server-smoke --reporter=list

      - name: Upload Failure Artifacts
        if: ${{ failure() }}
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: dev-server-smoke-report
          path: |
            packages/testing/playwright/test-results/
            packages/testing/playwright/playwright-report/
          retention-days: 7


@@ -5,48 +5,58 @@ on:
    - cron: '0 2 * * 1' # Every Monday at 2 AM
  workflow_dispatch: # Allow manual triggering

env:
  NODE_OPTIONS: --max-old-space-size=16384
  PLAYWRIGHT_WORKERS: 4
  PLAYWRIGHT_BROWSERS_PATH: packages/testing/playwright/.playwright-browsers

jobs:
  coverage:
    runs-on: blacksmith-8vcpu-ubuntu-2204
    name: Coverage Tests
  prepare-docker:
    name: Prepare Docker (coverage)
    uses: ./.github/workflows/prepare-docker-reusable.yml
    with:
      build-variant: coverage
      runner: blacksmith-8vcpu-ubuntu-2204
    secrets: inherit

  e2e:
    name: E2E (coverage)
    needs: prepare-docker
    uses: ./.github/workflows/test-e2e-reusable.yml
    with:
      test-mode: docker-artifact
      test-command: pnpm --filter=n8n-playwright test:container:coverage
      workers: '1'
      runner: blacksmith-4vcpu-ubuntu-2204
      timeout-minutes: 45
      pre-generated-matrix: '[{"shard":1,"images":""},{"shard":2,"images":""},{"shard":3,"images":""},{"shard":4,"images":""}]'
      artifact-prefix: coverage
    secrets: inherit

  aggregate:
    name: Aggregate Coverage
    needs: e2e
    if: always() && needs.e2e.result != 'skipped' && needs.e2e.result != 'cancelled'
    runs-on: blacksmith-4vcpu-ubuntu-2204
    steps:
      - name: Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - name: Setup Environment
        uses: ./.github/actions/setup-nodejs
        env:
          INCLUDE_TEST_CONTROLLER: 'true'
      - name: Build Docker Image with Coverage
        run: pnpm build:docker:coverage
        env:
          INCLUDE_TEST_CONTROLLER: 'true'
      - name: Download shard artifacts
        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
        with:
          pattern: coverage-shard-*
          path: /tmp/shards/
      - name: Install Browsers
        run: pnpm turbo run install-browsers --filter=n8n-playwright
      - name: Run Container Coverage Tests
        id: coverage-tests
      - name: Collect coverage JSON
        shell: bash
        run: |
          pnpm --filter n8n-playwright test:container:sqlite \
            --workers=${{ env.PLAYWRIGHT_WORKERS }}
        env:
          BUILD_WITH_COVERAGE: 'true'
          CURRENTS_RECORD_KEY: ${{ secrets.CURRENTS_RECORD_KEY }}
          CURRENTS_PROJECT_ID: 'LRxcNt'
          QA_METRICS_WEBHOOK_URL: ${{ secrets.QA_METRICS_WEBHOOK_URL }}
          QA_METRICS_WEBHOOK_USER: ${{ secrets.QA_METRICS_WEBHOOK_USER }}
          QA_METRICS_WEBHOOK_PASSWORD: ${{ secrets.QA_METRICS_WEBHOOK_PASSWORD }}
          mkdir -p packages/testing/playwright/.nyc_output/coverage
          found=$(find /tmp/shards -path '*/.nyc_output/coverage/*.json' 2>/dev/null | wc -l)
          echo "Found $found coverage JSON files across shards"
          find /tmp/shards -path '*/.nyc_output/coverage/*.json' \
            -exec cp {} packages/testing/playwright/.nyc_output/coverage/ \;
          ls -la packages/testing/playwright/.nyc_output/coverage/ || true
      - name: Generate Coverage Report
        if: always() && steps.coverage-tests.outcome != 'skipped'
        run: pnpm --filter n8n-playwright coverage:report
      - name: Upload Coverage Report Artifact
@@ -68,7 +78,7 @@ jobs:
fail_ci_if_error: false
- name: Analyse Coverage Gaps
if: always() && steps.coverage-tests.outcome != 'skipped'
if: always()
env:
CODECOV_API_TOKEN: ${{ secrets.CODECOV_API_TOKEN }}
run: |
@@ -76,7 +86,7 @@ jobs:
--md --top=15 --out-json=coverage-gaps.json >> "$GITHUB_STEP_SUMMARY"
- name: Upload Coverage Gap Report
if: always() && steps.coverage-tests.outcome != 'skipped'
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: coverage-gap-report

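The aggregate job's "Collect coverage JSON" step above fans per-shard Istanbul output into a single `.nyc_output/coverage/` directory before running the report. A standalone sketch of that collect step, with demo paths standing in for the real `/tmp/shards/coverage-shard-N/` layout:

```shell
# Sketch of the shard-coverage collect step, runnable outside CI.
# /tmp/demo-shards and /tmp/demo-merged are illustrative paths, not the
# workflow's real ones.
shards=/tmp/demo-shards
dest=/tmp/demo-merged/.nyc_output/coverage
mkdir -p "$shards/coverage-shard-1/.nyc_output/coverage" \
         "$shards/coverage-shard-2/.nyc_output/coverage" "$dest"
echo '{}' > "$shards/coverage-shard-1/.nyc_output/coverage/a.json"
echo '{}' > "$shards/coverage-shard-2/.nyc_output/coverage/b.json"
# Same find/cp pattern as the workflow step: locate every shard's
# coverage JSON, then flatten into one directory for the reporter.
found=$(find "$shards" -path '*/.nyc_output/coverage/*.json' | wc -l)
echo "Found $found coverage JSON files across shards"
find "$shards" -path '*/.nyc_output/coverage/*.json' -exec cp {} "$dest/" \;
```

Filenames must be unique across shards for this flattening to be lossless; identically named JSON files from two shards would overwrite each other in `$dest`.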

@@ -23,21 +23,20 @@ jobs:
benchmark:
needs: [prepare-docker]
name: ${{ matrix.profile }}
name: benchmarking
strategy:
fail-fast: false
matrix:
include:
- profile: benchmark-direct
runner: blacksmith-4vcpu-ubuntu-2204
- profile: benchmark-queue
runner: blacksmith-8vcpu-ubuntu-2204
- profile: benchmark-queue-tuned
runner: blacksmith-8vcpu-ubuntu-2204
- runner: blacksmith-8vcpu-ubuntu-2204
uses: ./.github/workflows/test-e2e-reusable.yml
with:
test-mode: docker-artifact
test-command: pnpm --filter=n8n-playwright test:all --project='${{ matrix.profile }}:infrastructure' --workers=1
# Runs the full benchmark suite. Each spec brings its own container via
# `test.use({ capability })`, so workers must be 1 (one container at a time).
test-command: 'pnpm --filter=n8n-playwright test:benchmark'
workers: '1'
runner: ${{ matrix.runner }}
timeout-minutes: 60
timeout-minutes: 120
artifact-prefix: benchmark
secrets: inherit


@@ -19,4 +19,5 @@ jobs:
test-mode: docker-artifact
test-command: pnpm --filter=n8n-playwright test:performance
currents-project-id: 'O9BJaN'
artifact-prefix: performance
secrets: inherit


@@ -32,11 +32,6 @@ on:
required: false
default: 30
type: number
upload-failure-artifacts:
description: 'Upload test failure artifacts (screenshots, traces, videos). Enable for community PRs without Currents access.'
required: false
default: false
type: boolean
currents-project-id:
description: 'Currents project ID for reporting'
required: false
@@ -52,6 +47,11 @@ on:
required: false
default: ''
type: string
artifact-prefix:
description: 'Prefix for uploaded shard artifacts'
required: false
default: 'e2e'
type: string
env:
NODE_OPTIONS: ${{ contains(inputs.runner, '2vcpu') && '--max-old-space-size=6144' || '' }}
@@ -121,15 +121,17 @@ jobs:
N8N_ENCRYPTION_KEY: ${{ secrets.N8N_ENCRYPTION_KEY }}
N8N_TEST_ENV: ${{ inputs.n8n-env }}
- name: Upload Failure Artifacts
if: ${{ failure() && inputs.upload-failure-artifacts }}
- name: Upload Shard Artifacts
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: playwright-report-shard-${{ matrix.shard }}
name: ${{ inputs.artifact-prefix }}-shard-${{ matrix.shard }}
path: |
packages/testing/playwright/test-results/
packages/testing/playwright/playwright-report/
retention-days: 7
packages/testing/playwright/.nyc_output/
retention-days: 1
if-no-files-found: ignore
- name: Cancel Currents run if workflow is cancelled
if: ${{ cancelled() }}

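The reusable workflow's `NODE_OPTIONS: ${{ contains(inputs.runner, '2vcpu') && '--max-old-space-size=6144' || '' }}` line uses GitHub's `&&`/`||` ternary idiom to give the Node heap more room only on the small 2vcpu runners. The same selection in plain shell, with runner names taken from the workflow matrices as examples:

```shell
# Shell analogue of the workflow's NODE_OPTIONS ternary (a sketch, not
# part of the workflow itself).
node_opts_for() {
  case "$1" in
    *2vcpu*) echo '--max-old-space-size=6144' ;;  # bigger heap on small runners
    *)       echo '' ;;                           # default heap elsewhere
  esac
}
node_opts_for blacksmith-2vcpu-ubuntu-2204
node_opts_for blacksmith-4vcpu-ubuntu-2204
```

Note the GitHub expression relies on the truthy branch (`'--max-old-space-size=6144'`) being non-empty; a falsy left operand would make `&&`/`||` fall through to `''` even on a 2vcpu runner.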

@@ -29,6 +29,7 @@ jobs:
workers: '1'
pre-generated-matrix: '[{"shard":1},{"shard":2},{"shard":3},{"shard":4},{"shard":5},{"shard":6},{"shard":7},{"shard":8},{"shard":9},{"shard":10},{"shard":11},{"shard":12},{"shard":13},{"shard":14},{"shard":15},{"shard":16}]'
n8n-env: '{"N8N_EXPRESSION_ENGINE":"vm"}'
artifact-prefix: vm-expressions
secrets: inherit
notify-on-failure:

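The 16-entry `pre-generated-matrix` JSON above is simple enough to emit from a loop rather than maintain by hand. A sketch (the generated string matches the literal in the workflow; wiring it into a workflow via step outputs is left out here):

```shell
# Build '[{"shard":1},{"shard":2},...,{"shard":16}]' programmatically.
matrix='['
for i in $(seq 1 16); do
  matrix="${matrix}{\"shard\":${i}},"
done
matrix="${matrix%,}]"   # drop trailing comma, close the array
echo "$matrix"
```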

@@ -29,6 +29,12 @@ jobs:
name: 'Run Evals'
runs-on: blacksmith-4vcpu-ubuntu-2204
timeout-minutes: 45
env:
# Each port hosts an independent n8n container. The eval CLI's
# work-stealing allocator dispatches builds across them, capped per-lane.
# 9 lanes on 4vcpu — builds are LLM-bound so CPU headroom is sufficient;
# bump back to 8vcpu if contention shows up.
LANE_PORTS: '5678,5679,5680,5681,5682,5683,5684,5685,5686'
permissions:
contents: read
pull-requests: write
@@ -45,56 +51,115 @@ jobs:
with:
build-command: 'pnpm build'
- name: Build Docker image
# Cache populated by prepare-docker; fallback covers PRs that only touch this workflow file.
- name: Load n8n Docker image
id: load-image
continue-on-error: true
uses: ./.github/actions/load-n8n-docker
- name: Build Docker image (fallback on cache miss)
if: steps.load-image.outcome == 'failure'
run: pnpm build:docker
env:
INCLUDE_TEST_CONTROLLER: 'true'
- name: Start n8n container
- name: Start n8n containers
env:
EVALS_ANTHROPIC_KEY: ${{ secrets.EVALS_ANTHROPIC_KEY }}
N8N_LICENSE_ACTIVATION_KEY: ${{ secrets.N8N_LICENSE_ACTIVATION_KEY }}
N8N_LICENSE_CERT: ${{ secrets.N8N_LICENSE_CERT }}
N8N_ENCRYPTION_KEY: ${{ secrets.N8N_ENCRYPTION_KEY }}
DAYTONA_API_KEY: ${{ secrets.DAYTONA_API_KEY }}
run: |
docker run -d --name n8n-eval \
-e E2E_TESTS=true \
-e N8N_ENABLED_MODULES=instance-ai \
-e N8N_AI_ENABLED=true \
-e N8N_INSTANCE_AI_MODEL_API_KEY=${{ secrets.EVALS_ANTHROPIC_KEY }} \
-e N8N_LICENSE_ACTIVATION_KEY=${{ secrets.N8N_LICENSE_ACTIVATION_KEY }} \
-e N8N_LICENSE_CERT=${{ secrets.N8N_LICENSE_CERT }} \
-e N8N_ENCRYPTION_KEY=${{ secrets.N8N_ENCRYPTION_KEY }} \
-p 5678:5678 \
n8nio/n8n:local
echo "Waiting for n8n to be ready..."
for i in $(seq 1 60); do
if curl -s http://localhost:5678/healthz/readiness -o /dev/null -w "%{http_code}" | grep -q 200; then
echo "n8n ready after ${i}s"
exit 0
fi
sleep 1
IFS=',' read -ra PORTS <<< "$LANE_PORTS"
for i in "${!PORTS[@]}"; do
port="${PORTS[$i]}"
docker run -d --name "n8n-eval-$((i+1))" \
-e E2E_TESTS=true \
-e N8N_ENABLED_MODULES=instance-ai \
-e N8N_AI_ENABLED=true \
-e N8N_INSTANCE_AI_MODEL_API_KEY="$EVALS_ANTHROPIC_KEY" \
-e N8N_AI_ASSISTANT_BASE_URL="" \
-e N8N_INSTANCE_AI_SANDBOX_ENABLED=true \
-e N8N_INSTANCE_AI_SANDBOX_PROVIDER=daytona \
-e DAYTONA_API_URL=https://app.daytona.io/api \
-e DAYTONA_API_KEY="$DAYTONA_API_KEY" \
-e N8N_LICENSE_ACTIVATION_KEY="$N8N_LICENSE_ACTIVATION_KEY" \
-e N8N_LICENSE_CERT="$N8N_LICENSE_CERT" \
-e N8N_ENCRYPTION_KEY="$N8N_ENCRYPTION_KEY" \
-p "$port:5678" \
n8nio/n8n:local
done
# 120s budget per port: containers booting in parallel on a shared
# 4vcpu runner contend for CPU/disk during n8n's startup (DB migrations,
# license init), so each takes longer than a solo boot.
for port in "${PORTS[@]}"; do
ready=false
for i in $(seq 1 120); do
if curl -s "http://localhost:$port/healthz/readiness" -o /dev/null -w "%{http_code}" | grep -q 200; then
echo "n8n on port $port ready after ${i}s"
ready=true
break
fi
sleep 1
done
if [ "$ready" != "true" ]; then
echo "::error::n8n on port $port failed to start within 120s"
for n in $(docker ps -aq --filter "name=n8n-eval-"); do
echo "Logs for $n:"
docker logs "$n" --tail 30 || true
done
exit 1
fi
done
echo "::error::n8n failed to start within 60s"
docker logs n8n-eval --tail 30
exit 1
- name: Create test user
- name: Create test users
run: |
curl -sf -X POST http://localhost:5678/rest/e2e/reset \
-H "Content-Type: application/json" \
-d '{
"owner":{"email":"nathan@n8n.io","password":"PlaywrightTest123","firstName":"Eval","lastName":"Owner"},
"admin":{"email":"admin@n8n.io","password":"PlaywrightTest123","firstName":"Admin","lastName":"User"},
"members":[],
"chat":{"email":"chat@n8n.io","password":"PlaywrightTest123","firstName":"Chat","lastName":"User"}
}'
IFS=',' read -ra PORTS <<< "$LANE_PORTS"
for port in "${PORTS[@]}"; do
curl -sf -X POST "http://localhost:$port/rest/e2e/reset" \
-H "Content-Type: application/json" \
-d '{
"owner":{"email":"nathan@n8n.io","password":"PlaywrightTest123","firstName":"Eval","lastName":"Owner"},
"admin":{"email":"admin@n8n.io","password":"PlaywrightTest123","firstName":"Admin","lastName":"User"},
"members":[],
"chat":{"email":"chat@n8n.io","password":"PlaywrightTest123","firstName":"Chat","lastName":"User"}
}'
done
# Belt-and-suspenders: env vars set sandbox config but persisted admin
# settings can override. Per-lane assertion catches env-injection hiccups
# or unexpected DB-side state. A single misconfigured lane would
# silently route some builds through tool mode and pollute results.
- name: Assert sandbox is enabled on every lane
run: |
IFS=',' read -ra PORTS <<< "$LANE_PORTS"
bad=0
for i in "${!PORTS[@]}"; do
port="${PORTS[$i]}"
lane="$((i+1))"
curl -sf -X POST "http://localhost:$port/rest/login" \
-H "Content-Type: application/json" \
-d '{"emailOrLdapLoginId":"nathan@n8n.io","password":"PlaywrightTest123"}' \
-c "/tmp/cookies-$port.txt" -o /dev/null
cfg=$(curl -sf -b "/tmp/cookies-$port.txt" \
"http://localhost:$port/rest/instance-ai/settings" \
| jq -r '.data | "\(.sandboxEnabled) \(.sandboxProvider)"')
if [ "$cfg" != "true daytona" ]; then
echo "::error::lane $lane (port $port): expected 'true daytona', got '$cfg'"
bad=$((bad+1))
else
echo " lane $lane: sandboxEnabled=true sandboxProvider=daytona ok"
fi
done
if [ "$bad" -gt 0 ]; then
echo "::error::$bad lane(s) misconfigured - eval would mix sandbox + tool-mode builds"
exit 1
fi
- name: Run Instance AI Evals
continue-on-error: true
working-directory: packages/@n8n/instance-ai
run: >-
pnpm eval:instance-ai
--base-url http://localhost:5678
--concurrency 4
--verbose
--iterations 3
${{ inputs.filter && format('--filter "{0}"', inputs.filter) || '' }}
env:
N8N_INSTANCE_AI_MODEL_API_KEY: ${{ secrets.EVALS_ANTHROPIC_KEY }}
LANGSMITH_TRACING: 'true'
@@ -102,32 +167,98 @@ jobs:
LANGSMITH_API_KEY: ${{ secrets.EVALS_LANGSMITH_API_KEY }}
LANGSMITH_REVISION_ID: ${{ github.sha }}
LANGSMITH_BRANCH: ${{ github.head_ref || github.ref_name }}
run: |
IFS=',' read -ra PORTS <<< "$LANE_PORTS"
URLS=()
for port in "${PORTS[@]}"; do
URLS+=("http://localhost:$port")
done
BASE_URLS=$(IFS=,; printf '%s' "${URLS[*]}")
pnpm eval:instance-ai \
--base-url "$BASE_URLS" \
--concurrency 32 \
--verbose \
--iterations 5 \
${{ inputs.filter && format('--filter "{0}"', inputs.filter) || '' }}
- name: Stop n8n container
# Captures sandbox/builder/Daytona signals that surface during the eval
# (after migrations finish). Two layers of secret-leak defense:
#
# 1. Filter to specific diagnostic patterns — never tail raw output.
# The grep allowlist scopes the log surface to lines we care
# about for debugging (sandbox lifecycle, builder, errors).
#
# 2. Re-register secrets via ::add-mask:: so any line that does
# match the allowlist has the secret values replaced with ***
# before reaching the GH Actions log. GitHub auto-masks
# ${{ secrets.X }} references, but the masking is fragile
# against transformed or split values; explicit registration
# reinforces it.
#
# Runs even on eval failure so we have the post-mortem regardless.
- name: Capture n8n container logs (debug)
if: ${{ always() }}
run: docker stop n8n-eval && docker rm n8n-eval || true
env:
EVALS_ANTHROPIC_KEY: ${{ secrets.EVALS_ANTHROPIC_KEY }}
DAYTONA_API_KEY: ${{ secrets.DAYTONA_API_KEY }}
N8N_LICENSE_ACTIVATION_KEY: ${{ secrets.N8N_LICENSE_ACTIVATION_KEY }}
N8N_LICENSE_CERT: ${{ secrets.N8N_LICENSE_CERT }}
N8N_ENCRYPTION_KEY: ${{ secrets.N8N_ENCRYPTION_KEY }}
run: |
# Layer 2 — defense in depth: explicitly mask each secret's value.
# ::add-mask:: is a single-line workflow command. Multi-line secrets
# (e.g. N8N_LICENSE_CERT is PEM-encoded) must be masked one line at
# a time, otherwise only the first line is registered.
for v in "$EVALS_ANTHROPIC_KEY" "$DAYTONA_API_KEY" \
"$N8N_LICENSE_ACTIVATION_KEY" "$N8N_LICENSE_CERT" \
"$N8N_ENCRYPTION_KEY"; do
[ -z "$v" ] && continue
while IFS= read -r line; do
[ -n "$line" ] && echo "::add-mask::$line"
done <<< "$v"
done
# Layer 1 — accuracy filter: only surface diagnostic signals.
# `tail -100` after the filter so we get the LATEST matching lines
# (post-eval failure signal), not the earliest startup-time ones.
SIGNALS='sandbox|builder|daytona|instance.?ai|error|warn|reject|exception|fail'
for c in $(docker ps -aq --filter "name=n8n-eval-"); do
name=$(docker inspect --format '{{.Name}}' "$c" | sed 's|^/||')
echo ""
echo "============================================================"
echo "=== $name (filtered diagnostic signals, last 100 lines) ==="
echo "============================================================"
docker logs "$c" 2>&1 \
| grep -ivE 'migration' \
| grep -iE "$SIGNALS" \
| tail -100 \
|| true
done
- name: Stop n8n containers
if: ${{ always() }}
run: |
mapfile -t ids < <(docker ps -aq --filter "name=n8n-eval-")
if [ "${#ids[@]}" -gt 0 ]; then
docker stop "${ids[@]}" 2>/dev/null || true
docker rm "${ids[@]}" 2>/dev/null || true
fi
- name: Post eval results to PR
if: ${{ always() && github.event_name == 'pull_request' }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
RESULTS_FILE="packages/@n8n/instance-ai/eval-results.json"
if [ ! -f "$RESULTS_FILE" ]; then
echo "No eval results file found"
# The eval CLI writes the full PR comment as eval-pr-comment.md
# (see comparison/format.ts:formatComparisonMarkdown). It includes
# the alert, aggregate, comparison sections, per-test-case results
# collapsed, and failure details collapsed. CI just relays it.
COMMENT_FILE="packages/@n8n/instance-ai/eval-pr-comment.md"
if [ ! -f "$COMMENT_FILE" ]; then
echo "No PR comment file found (eval likely cancelled before writing results)"
exit 0
fi
# Build the full comment body with jq
jq -r '
"### Instance AI Workflow Eval Results\n\n" +
"**\(.summary.built)/\(.summary.testCases) built | \(.totalRuns) run(s) | pass@\(.totalRuns): \(.summary.passAtK * 100 | floor)% | pass^\(.totalRuns): \(.summary.passHatK * 100 | floor)% | iterations: \(.summary.passRatePerIter)**\n\n" +
"| Workflow | Build | pass@\(.totalRuns) | pass^\(.totalRuns) |\n|---|---|---|---|\n" +
([.testCases[] as $tc | "| \($tc.name) | \($tc.buildSuccessCount)/\($tc.totalRuns) | \(([$tc.scenarios[] | .passAtK] | add) / ($tc.scenarios | length) * 100 | floor)% | \(([$tc.scenarios[] | .passHatK] | add) / ($tc.scenarios | length) * 100 | floor)% |"] | join("\n")) +
"\n\n<details><summary>Failure details</summary>\n\n" +
([.testCases[] as $tc | $tc.scenarios[] | select(.passHatK < 1) | "**\($tc.name) / \(.name)** — \(.passCount)/\(.totalRuns) passed" + "\n" + ([.runs[] | select(.passed == false) | "> Run\(if .failureCategory then " [\(.failureCategory)]" else "" end): \(.reasoning | .[0:200])"] | join("\n"))] | join("\n\n")) +
"\n</details>"
' "$RESULTS_FILE" > /tmp/eval-comment.md
cp "$COMMENT_FILE" /tmp/eval-comment.md
# Find and update existing eval comment, or create new one
COMMENT_ID=$(gh api "repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments" \

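The log-capture step's comment notes that `::add-mask::` is a single-line workflow command, so multi-line secrets like the PEM-encoded `N8N_LICENSE_CERT` must be registered one line at a time. An isolated sketch of that loop, using a dummy value (in the real step the `echo` output goes to the GH Actions log, where each registered line is then rendered as `***`):

```shell
# Per-line masking sketch. The secret below is a dummy stand-in for a
# multi-line value such as a PEM certificate, not a real credential.
secret='-----BEGIN CERT-----
dummy-line-1
-----END CERT-----'
masked=0
while IFS= read -r line; do
  if [ -n "$line" ]; then         # ::add-mask:: rejects empty values
    echo "::add-mask::$line"      # register one line per command
    masked=$((masked+1))
  fi
done <<< "$secret"
```

Masking only the first line (e.g. `echo "::add-mask::$secret"` on the whole value) would leave the remaining lines unredacted, which is exactly the failure mode the workflow comment warns about.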

@@ -34,4 +34,4 @@ jobs:
skip: 'release/**'
onlyChanged: true
projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
exitZeroOnChanges: false
exitZeroOnChanges: true


@@ -31,4 +31,6 @@ jobs:
install-command: pnpm install --frozen-lockfile --dir ./.github/scripts --ignore-workspace
- name: Ensure release-candidate branches
env:
GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
run: node ./.github/scripts/ensure-release-candidate-branches.mjs

.gitignore vendored

@@ -25,6 +25,7 @@ packages/**/.turbo
*.swp
CHANGELOG-*.md
*.mdx
!packages/frontend/@n8n/design-system/**/*.mdx
build-storybook.log
build.log
*.junit.xml
@@ -35,6 +36,8 @@ packages/testing/playwright/playwright-report
packages/testing/playwright/test-results
packages/testing/playwright/eval-results.json
packages/@n8n/instance-ai/eval-results.json
packages/@n8n/instance-ai/.eval-output/
packages/@n8n/instance-ai/eval-pr-comment.md
packages/testing/playwright/.playwright-browsers
packages/testing/playwright/.playwright-cli
test-results/
@@ -59,6 +62,8 @@ packages/cli/src/commands/export/outputs
.data/
.claude/settings.local.json
.claude/plans/
.claude/worktrees/
.claude/specs/
.cursor/plans/
.superset
.conductor


@@ -146,14 +146,11 @@ const children = getChildNodes(workflow.connections, 'NodeName', 'main', 1);
- Import from appropriate error classes in each package
### Frontend Development
- Refer to `packages/frontend/AGENTS.md`
- **All UI text must use i18n** - add translations to `@n8n/i18n` package
- **Use CSS variables directly** - never hardcode spacing as px values
- **data-testid must be a single value** (no spaces or multiple values)
- For style changes and design-system updates, follow
`.agents/design-system-style-rules.md`
When implementing CSS, refer to @packages/frontend/CLAUDE.md for guidelines on
CSS variables and styling conventions.
- Always use `design-system-rules` skill in reviews
### Testing Guidelines
- **Always work from within the package directory** when running tests


@@ -1,3 +1,253 @@
# [2.21.0](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.21.0) (2026-05-12)
### Bug Fixes
* Add warning to Computer Use install modal ([#30094](https://github.com/n8n-io/n8n/issues/30094)) ([ecf96ad](https://github.com/n8n-io/n8n/commit/ecf96ad30c8d29641db07cd78885ea28aff26199))
* **ai-builder:** Allow restoring archived workflows from Instance AI ([#29813](https://github.com/n8n-io/n8n/issues/29813)) ([a33a89a](https://github.com/n8n-io/n8n/commit/a33a89a215d6cef39895858bf36c00c15abfdd9d))
* **ai-builder:** Preserve collected planning context ([#29916](https://github.com/n8n-io/n8n/issues/29916)) ([5e3aa1a](https://github.com/n8n-io/n8n/commit/5e3aa1a726e903387344d3a4ed51e97811e4ff02))
* **ai-builder:** Resolve HitlTool variants to base node in get_node_types ([#29731](https://github.com/n8n-io/n8n/issues/29731)) ([ed9471a](https://github.com/n8n-io/n8n/commit/ed9471a5321747bbca003bee7d6a37d54bb79cb2))
* **Airtable Node:** Fix typecast option dropping attachment field updates ([#29556](https://github.com/n8n-io/n8n/issues/29556)) ([0cafc71](https://github.com/n8n-io/n8n/commit/0cafc717a274053f698e988d6f44a27a8b936e83))
* Align undici override across major versions ([#30028](https://github.com/n8n-io/n8n/issues/30028)) ([6b893b4](https://github.com/n8n-io/n8n/commit/6b893b45a0d05dfb08ea7b732f775c28b6ccf801))
* **Calendly Trigger Node:** Use API v2 for webhook subscriptions ([#29771](https://github.com/n8n-io/n8n/issues/29771)) ([0edcdcf](https://github.com/n8n-io/n8n/commit/0edcdcfe8529b6296f1a1f0d8b8af3841a14a466))
* **core:** Activate agent chat integrations on every main ([#30029](https://github.com/n8n-io/n8n/issues/30029)) ([6f4f0a0](https://github.com/n8n-io/n8n/commit/6f4f0a0303e1f0f0cd57a5b0dab08347010b7241))
* **core:** Add configurable retries and error details to S3 ([#28309](https://github.com/n8n-io/n8n/issues/28309)) ([e2576ca](https://github.com/n8n-io/n8n/commit/e2576ca25bc973b315bdcbff1a1b2d3309bc647d))
* **core:** Add ESLint rule to prevent error instances in toThrow assertions ([#29889](https://github.com/n8n-io/n8n/issues/29889)) ([75ed71c](https://github.com/n8n-io/n8n/commit/75ed71c00142e8bbdfb851691d5fc3de3cfada36))
* **core:** Add liveness timeouts for Instance AI ([#30145](https://github.com/n8n-io/n8n/issues/30145)) ([52a4bcb](https://github.com/n8n-io/n8n/commit/52a4bcb23a9398b1327acd0ec39df7a9e00b48b6))
* **core:** Add support for context establishment hooks in webhook mode ([#29893](https://github.com/n8n-io/n8n/issues/29893)) ([04e9b25](https://github.com/n8n-io/n8n/commit/04e9b258a887c07b62774f09e3921932038a3984))
* **core:** Add workflow structure validation ([#29699](https://github.com/n8n-io/n8n/issues/29699)) ([bec74ae](https://github.com/n8n-io/n8n/commit/bec74aeb4fda198853b3ea82ed135a1db3ba4988))
* **core:** Advance Postgres IDENTITY sequences after entity import ([#29762](https://github.com/n8n-io/n8n/issues/29762)) ([ca33060](https://github.com/n8n-io/n8n/commit/ca33060e0bd30c6d077f8dd18ca8492d50c06a92))
* **core:** Agent sessions correctly quoting columns in queries for Postgres ([#29999](https://github.com/n8n-io/n8n/issues/29999)) ([9f92005](https://github.com/n8n-io/n8n/commit/9f92005938a1b481b89558b4e82a198da6ec4e8c))
* **core:** Agents called from workflows use the workflows owner/user ID for calling further workflows through the agent ([#30242](https://github.com/n8n-io/n8n/issues/30242)) ([9072ee3](https://github.com/n8n-io/n8n/commit/9072ee3beb1789f34008cb0f85f361dcac8cae26))
* **core:** Allow GIT_SSH_COMMAND in simple-git after 3.36.0 upgrade ([#29894](https://github.com/n8n-io/n8n/issues/29894)) ([f42be90](https://github.com/n8n-io/n8n/commit/f42be9030e7f549da5ed6dc3902d058c2ebbadcb))
* **core:** Allow profile edits when SSO is no longer active ([#29765](https://github.com/n8n-io/n8n/issues/29765)) ([2714f00](https://github.com/n8n-io/n8n/commit/2714f001218d1323233c1920c94ed02a5ce8dcf1))
* **core:** Allow same-domain redirects in instance-ai web research (TRUST-73) ([#30107](https://github.com/n8n-io/n8n/issues/30107)) ([3123f25](https://github.com/n8n-io/n8n/commit/3123f2551be75fb282628b9106b060975fb983fc))
* **core:** Always create instance-ai sandbox workspace dirs (TRUST-79) ([#30106](https://github.com/n8n-io/n8n/issues/30106)) ([5e88748](https://github.com/n8n-io/n8n/commit/5e887483344daad5e11bee97d3315a9b2b38d0c9))
* **core:** Avoid MCP get_execution hang on circular references ([#30051](https://github.com/n8n-io/n8n/issues/30051)) ([60e23e1](https://github.com/n8n-io/n8n/commit/60e23e10e01f20f73fb1c61d74b5ca44a4c677f6))
* **core:** Check npm provenance in community package scanner ([#29667](https://github.com/n8n-io/n8n/issues/29667)) ([804f51c](https://github.com/n8n-io/n8n/commit/804f51cf0d8411b4d4df6f593fdea787b97fad51))
* **core:** Clarify 0-based indexing in workflow SDK prompts and JSDoc ([#29734](https://github.com/n8n-io/n8n/issues/29734)) ([fba873c](https://github.com/n8n-io/n8n/commit/fba873c37e76f01d28443c5276b2d92bd333602a))
* **core:** Clarify agent builder prompt guidance ([#30127](https://github.com/n8n-io/n8n/issues/30127)) ([75646c4](https://github.com/n8n-io/n8n/commit/75646c45271831bf8d03653baf024d201d5fae6d))
* **core:** Defer credential setup during workflow builds ([#30181](https://github.com/n8n-io/n8n/issues/30181)) ([bb73952](https://github.com/n8n-io/n8n/commit/bb73952fcc9aff4eed0af6bb99fb10f65d48df3d))
* **core:** Emit missing auth audit events for OIDC and SSO-restricted login ([#29856](https://github.com/n8n-io/n8n/issues/29856)) ([dd812c5](https://github.com/n8n-io/n8n/commit/dd812c5010ca28ca38c238bfa8c57fe39ac816d5))
* **core:** Export boolean CSV values as true/false for Data Tables ([#30007](https://github.com/n8n-io/n8n/issues/30007)) ([94d91e1](https://github.com/n8n-io/n8n/commit/94d91e13bfcaf360099a0a3816b0025502b145f4))
* **core:** Filter WaitTracker to only poll waiting executions ([#29898](https://github.com/n8n-io/n8n/issues/29898)) ([5c7921f](https://github.com/n8n-io/n8n/commit/5c7921f71c95d97f6730e6b28b06947b1cfbaa23))
* **core:** Fix duplicate task request on runner defer ([#28315](https://github.com/n8n-io/n8n/issues/28315)) ([80c8a6c](https://github.com/n8n-io/n8n/commit/80c8a6c2fdc97624c9b4b3e97b8ff20aca641552))
* **core:** Harden axios error handling against non-string error stack ([#29100](https://github.com/n8n-io/n8n/issues/29100)) ([2dbf02e](https://github.com/n8n-io/n8n/commit/2dbf02e63e5ddee8d9e4a94f2ad3cd1f5321f2a7))
* **core:** Improve AI chat file upload handling and error states ([#29701](https://github.com/n8n-io/n8n/issues/29701)) ([afe119b](https://github.com/n8n-io/n8n/commit/afe119be1409ac2cb198f7a41dc12ed25f5cf106))
* **core:** Improve documentation usage in mcp tools ([#30210](https://github.com/n8n-io/n8n/issues/30210)) ([e8827cd](https://github.com/n8n-io/n8n/commit/e8827cd6e8ff3eb03ceab6965574bacf10c719d0))
* **core:** Initialise encryption key proxy on worker and webhook instances ([#29912](https://github.com/n8n-io/n8n/issues/29912)) ([ae57e60](https://github.com/n8n-io/n8n/commit/ae57e606b4f5cf691bceb01489e5991cf31911ef))
* **core:** Inline AI_NODE_SDK_VERSION to save memory by not loading @n8n/ai-utilities on boot ([#30113](https://github.com/n8n-io/n8n/issues/30113)) ([f709e53](https://github.com/n8n-io/n8n/commit/f709e5382448926e15e36571aa9fd32db238e36d))
* **core:** Persist agent chat draft across modes and hide unfinished tool-approval toggle ([#30123](https://github.com/n8n-io/n8n/issues/30123)) ([7094b48](https://github.com/n8n-io/n8n/commit/7094b48c9444024af6c14b72b49b47b555db52ef))
* **core:** Preserve node positions on AI workflow updates ([#29850](https://github.com/n8n-io/n8n/issues/29850)) ([f2764f0](https://github.com/n8n-io/n8n/commit/f2764f04c0e663268fe40737c55c8c1a0f33173b))
* **core:** Prevent proxy layer accumulation in ObservableObject ([#30129](https://github.com/n8n-io/n8n/issues/30129)) ([0a76135](https://github.com/n8n-io/n8n/commit/0a761355c4836433c379ee8933c0198621879ae0))
* **core:** Propagate waitTill from worker to main in scaling mode ([#30099](https://github.com/n8n-io/n8n/issues/30099)) ([3702ff8](https://github.com/n8n-io/n8n/commit/3702ff8eb31547d51e3b56b484bf6a731296f9cf))
* **core:** Scope credential resolution ([#30156](https://github.com/n8n-io/n8n/issues/30156)) ([174f0f8](https://github.com/n8n-io/n8n/commit/174f0f805e0d5715d2d80e5c0282a94b79e9a390))
* **core:** Simple-git update broke https connection ([#29998](https://github.com/n8n-io/n8n/issues/29998)) ([01300e9](https://github.com/n8n-io/n8n/commit/01300e9b9b7e0f80f1852c5e1e4b3df9a42404c4))
* **core:** Simplify Slack redirect URL verification process for agents ([#30033](https://github.com/n8n-io/n8n/issues/30033)) ([8201281](https://github.com/n8n-io/n8n/commit/820128196cf550ab8cf371fbebb3457b9fd35d22))
* **core:** Skip disabled tool nodes when mapping AI Agent tool sources ([#29460](https://github.com/n8n-io/n8n/issues/29460)) ([bd7eeb7](https://github.com/n8n-io/n8n/commit/bd7eeb7bc89032b9a0db467cb53f37bfef71647e))
* **core:** Skip unknown fixedCollection keys instead of throwing ([#29689](https://github.com/n8n-io/n8n/issues/29689)) ([a30772c](https://github.com/n8n-io/n8n/commit/a30772c933544d06b560a3c66ec69cd4f7b8574f))
* **core:** Stop applying node-defined sensitive output fields to runtime data ([#30198](https://github.com/n8n-io/n8n/issues/30198)) ([f4e8088](https://github.com/n8n-io/n8n/commit/f4e8088cb8df24443eec0482e2c58346c1e30016))
* **core:** Stop logging password reset token values ([#29405](https://github.com/n8n-io/n8n/issues/29405)) ([bc8d196](https://github.com/n8n-io/n8n/commit/bc8d196931b35118ca6078a5845e8549bbba7e6b))
* **core:** Support type filters on global credential lookups ([#30002](https://github.com/n8n-io/n8n/issues/30002)) ([8e0f37d](https://github.com/n8n-io/n8n/commit/8e0f37d100b45d4105ca168bb8f62ec2c1328cf2))
* **core:** Throw on bare OutputSelector passed to .add()/.to() ([#29736](https://github.com/n8n-io/n8n/issues/29736)) ([60a5122](https://github.com/n8n-io/n8n/commit/60a51229e0db92a00788eb12586ea6376276645d))
* **core:** Validate AI builder credential IDs before save ([#30070](https://github.com/n8n-io/n8n/issues/30070)) ([ceaebc6](https://github.com/n8n-io/n8n/commit/ceaebc6cbe7cde2269aee4be6966d021f136f9c6))
* Correct connect.html path in browser extension ([#29714](https://github.com/n8n-io/n8n/issues/29714)) ([9b3b29b](https://github.com/n8n-io/n8n/commit/9b3b29b5058da42ec736c14cc8af5726b2a64e4b))
* **EditImage Node:** Fix composite operation failing with stream empty buffer ([#30088](https://github.com/n8n-io/n8n/issues/30088)) ([0cc163b](https://github.com/n8n-io/n8n/commit/0cc163b7dcccbfa68c065faa466b2b50f21c4a97))
* **editor:** Add expand/collapse to chat panel in Agents ([#30069](https://github.com/n8n-io/n8n/issues/30069)) ([f87094c](https://github.com/n8n-io/n8n/commit/f87094cf6e5efe7c89ef16c4253525091479b356))
* **editor:** Disable chat during interactive agent choices ([#30111](https://github.com/n8n-io/n8n/issues/30111)) ([8171cf0](https://github.com/n8n-io/n8n/commit/8171cf0b32ee5aa74dd240bb8f99a3250e428217))
* **editor:** Fix Agents styling issues from merge regression ([#30032](https://github.com/n8n-io/n8n/issues/30032)) ([478d499](https://github.com/n8n-io/n8n/commit/478d4998a8055a3d5f81b93120d67282546f125a))
* **editor:** Fix collapse/expand for Chat sidebar ([#29378](https://github.com/n8n-io/n8n/issues/29378)) ([ee847d1](https://github.com/n8n-io/n8n/commit/ee847d1624636914323b8b06f145ae811101528f))
* **editor:** Improve sidebar new resource menu UX ([#29597](https://github.com/n8n-io/n8n/issues/29597)) ([d5af542](https://github.com/n8n-io/n8n/commit/d5af542f254ba4846f3f393404e24bc5ec998283))
* **editor:** Make sure trimmed placeholder never reaches backend ([#29842](https://github.com/n8n-io/n8n/issues/29842)) ([f7c7acc](https://github.com/n8n-io/n8n/commit/f7c7acc2441481235d81a38ea14ed637546d3b40))
* **editor:** Match input height with mode selector in resource locator ([#30075](https://github.com/n8n-io/n8n/issues/30075)) ([277431b](https://github.com/n8n-io/n8n/commit/277431b88b195d92a32e35a7df7f8df907d9cb44))
* **editor:** Polish encryption keys settings page ([#30008](https://github.com/n8n-io/n8n/issues/30008)) ([5cbd2dd](https://github.com/n8n-io/n8n/commit/5cbd2dd1e9a66cb1d00d89191395f2b417c7a08b))
* **editor:** Preserve decimal suffix when duplicating a node ([#29541](https://github.com/n8n-io/n8n/issues/29541)) ([08a36d7](https://github.com/n8n-io/n8n/commit/08a36d7515eda29acd6c5e03f7968d4896465b3d))
* **editor:** Refresh node icon when diff sidebar selection changes ([#29816](https://github.com/n8n-io/n8n/issues/29816)) ([ff41613](https://github.com/n8n-io/n8n/commit/ff41613533980f8f2a0ff7baef5fd2a63d981636))
* **editor:** Rename canvas header dropdown action to Description ([#29719](https://github.com/n8n-io/n8n/issues/29719)) ([49e7b05](https://github.com/n8n-io/n8n/commit/49e7b056b4a21b6341ce1811a597476d37dfa42f))
* **editor:** Rename encryption keys "Type" column to "Status" ([#29966](https://github.com/n8n-io/n8n/issues/29966)) ([e71afed](https://github.com/n8n-io/n8n/commit/e71afedfab84b3b7b88fe9c4e2a36cd31ac6206b))
* **editor:** Render tooltips above popovers ([#29997](https://github.com/n8n-io/n8n/issues/29997)) ([ba5b3d1](https://github.com/n8n-io/n8n/commit/ba5b3d13b116d8e055fe3a4dce1b5349545ff540))
* **editor:** Resolve expressions in 'Go to Sub-workflow' navigation ([#29843](https://github.com/n8n-io/n8n/issues/29843)) ([d6bae35](https://github.com/n8n-io/n8n/commit/d6bae35e8f8f0399cd722606d911ae2c67b60431))
* Fix 15 security issues in fast-xml-builder, basic-ftp, fast-uri and 5 more ([#30169](https://github.com/n8n-io/n8n/issues/30169)) ([267fe49](https://github.com/n8n-io/n8n/commit/267fe49d51b7b8bcc80489b0f9f1a585986bc525))
* **Git Node:** Restore Clone and other operations on simple-git 3.36+ ([#30223](https://github.com/n8n-io/n8n/issues/30223)) ([a8aa955](https://github.com/n8n-io/n8n/commit/a8aa95551e5950fd1920c2cce21cd2739b464266))
* **Google Chat Node:** Clarify message resource name field ([#29964](https://github.com/n8n-io/n8n/issues/29964)) ([55df7cb](https://github.com/n8n-io/n8n/commit/55df7cbd0619e483e7e02207bc5084c715dcb53a))
* **Google Sheets Node:** Reduce duplicate API calls in append operation to avoid quota limits ([#29444](https://github.com/n8n-io/n8n/issues/29444)) ([d63e1ae](https://github.com/n8n-io/n8n/commit/d63e1ae84e767df33c1fc394f646e8ca093aa4a3))
* Handle IMAP fetch errors to prevent instance crash and stuck workflows ([#29469](https://github.com/n8n-io/n8n/issues/29469)) ([46d52ff](https://github.com/n8n-io/n8n/commit/46d52ffc7e719f17db56c433ee97a0b48861ba36))
* **HTTP Request Node:** Validate URL type in older node versions ([#29886](https://github.com/n8n-io/n8n/issues/29886)) ([29a864c](https://github.com/n8n-io/n8n/commit/29a864ca9bcd88e82cf5f998c9ea36d2f81a5dee))
* **MongoDB Node:** Resolve collection parameter per item in write operations ([#29956](https://github.com/n8n-io/n8n/issues/29956)) ([582b6ae](https://github.com/n8n-io/n8n/commit/582b6ae9eaaef6a616233e9bd4eda7230c36eb0a))
* **Notion Node:** Paginate Get Many operations beyond 100-item API cap ([#29690](https://github.com/n8n-io/n8n/issues/29690)) ([d318bc1](https://github.com/n8n-io/n8n/commit/d318bc1e330eeb92d84bc35a2ad9cf6931eccfdf))
* **Notion Node:** Serialize staticData as ISO string in NotionTrigger ([#29688](https://github.com/n8n-io/n8n/issues/29688)) ([d2e1eb3](https://github.com/n8n-io/n8n/commit/d2e1eb30f15c1e2380b815f4d1f62b2b98b23e9a))
* **Notion Node:** Update UI URLs from notion.so to notion.com ahead of domain migration ([#29861](https://github.com/n8n-io/n8n/issues/29861)) ([3593131](https://github.com/n8n-io/n8n/commit/35931319b5b987b7cdd7104accea407fd5390582))
* **Oracle DB Node:** Handle the test failures ([#28341](https://github.com/n8n-io/n8n/issues/28341)) ([0697562](https://github.com/n8n-io/n8n/commit/0697562ac9f1507ca0230d02f462889259a5bdcf))
* Restore broken stdlib calls in Python Code node ([#29776](https://github.com/n8n-io/n8n/issues/29776)) ([a786476](https://github.com/n8n-io/n8n/commit/a7864762ca656c8e636df1ea33750dff604b60ab))
* **RSS Feed Read Node:** Respect proxy settings ([#30059](https://github.com/n8n-io/n8n/issues/30059)) ([2e046d5](https://github.com/n8n-io/n8n/commit/2e046d5b7f2ec4a6fbf00107ee088239f87ce8c5))
* **Salesforce Node:** Fix trigger not firing on repeated record updates ([#29107](https://github.com/n8n-io/n8n/issues/29107)) ([f871d44](https://github.com/n8n-io/n8n/commit/f871d44cabc95fb102af8ba1a9e5d2e314205297))
* **Schedule Node:** Fix hourly intervals that don't divide evenly into 24h ([#29778](https://github.com/n8n-io/n8n/issues/29778)) ([1a22c76](https://github.com/n8n-io/n8n/commit/1a22c762703bed75a18de868a7bfb7c60eacc516))
* **Snowflake Node:** Fix issue with Insert and Update operations not working ([#29339](https://github.com/n8n-io/n8n/issues/29339)) ([4c369e8](https://github.com/n8n-io/n8n/commit/4c369e83f26450395a5a28b6c39a04b2c7650f1f))
* **Supabase Node:** Don't display RPCs in an RLC for the table ([#28146](https://github.com/n8n-io/n8n/issues/28146)) ([78aa0e7](https://github.com/n8n-io/n8n/commit/78aa0e70f21df2533a494c02a3e35ca3ab6ca7b0))
* **Wait Node:** Resolve expressions inside Custom HTML form fields ([#30060](https://github.com/n8n-io/n8n/issues/30060)) ([7c1a771](https://github.com/n8n-io/n8n/commit/7c1a77154ccf1a5f2a11da3cdf0949b2883c85fb))
* **YouTube Node:** Fix misspelled "unlisted" privacy status value in Video Update operation ([#30203](https://github.com/n8n-io/n8n/issues/30203)) ([96b018d](https://github.com/n8n-io/n8n/commit/96b018d3569623e1696a28981b24120a3ceb46d0))
### Features
* **Acuity Scheduling Trigger Node:** Add webhook request verification ([#29261](https://github.com/n8n-io/n8n/issues/29261)) ([da41470](https://github.com/n8n-io/n8n/commit/da41470311a03a15beb5d7361c0385b7dd9acc12))
* Add fully dynamic disclaimer to Quick Connect offer ([#29852](https://github.com/n8n-io/n8n/issues/29852)) ([b6127d8](https://github.com/n8n-io/n8n/commit/b6127d8722ff1bddd9eb5786a6cbd90ce2f98ac1))
* **ai-builder:** Add per-PR eval regression detection vs LangSmith baseline ([#29456](https://github.com/n8n-io/n8n/issues/29456)) ([bbe3e2d](https://github.com/n8n-io/n8n/commit/bbe3e2d1487e06df1e58057ec8c47edb5ad19aa7))
* **ai-builder:** Guarantee user-visible output on terminal states ([#29636](https://github.com/n8n-io/n8n/issues/29636)) ([4d9e624](https://github.com/n8n-io/n8n/commit/4d9e624b4113d06a4cc7a632aed357806349abcb))
* **Asana Trigger Node:** Add webhook request verification ([#29258](https://github.com/n8n-io/n8n/issues/29258)) ([94e4033](https://github.com/n8n-io/n8n/commit/94e403300b44d2f25f4d88dd3d9d1300adfea3bc))
* **Cal Trigger Node:** Add webhook request verification ([#29484](https://github.com/n8n-io/n8n/issues/29484)) ([3276edc](https://github.com/n8n-io/n8n/commit/3276edce10dfc7e59aa12e43fd7fc566f91723c4))
* **Calendly Trigger Node:** Add webhook request verification ([#29482](https://github.com/n8n-io/n8n/issues/29482)) ([e929f9f](https://github.com/n8n-io/n8n/commit/e929f9fbe751742da7f27658ded1ff0101af19d2))
* **core:** Accept merge.input(n) inside ifElse/switch branch targets in workflow-sdk ([#29716](https://github.com/n8n-io/n8n/issues/29716)) ([34f2107](https://github.com/n8n-io/n8n/commit/34f2107071478591a1c98b65576262c40408a157))
* **core:** Add flag to import workflow cli to activate workflow on import ([#29770](https://github.com/n8n-io/n8n/issues/29770)) ([283071e](https://github.com/n8n-io/n8n/commit/283071e6114fd8e8b5063e1ba38daf158bd762d2))
* **core:** Add IP rate limiting to dynamic credential authentication endpoints ([#30199](https://github.com/n8n-io/n8n/issues/30199)) ([515ae7c](https://github.com/n8n-io/n8n/commit/515ae7ced4b109880306788cb16977c15de92279))
* **core:** Add MCP tool to list credentials ([#29438](https://github.com/n8n-io/n8n/issues/29438)) ([d6cc3be](https://github.com/n8n-io/n8n/commit/d6cc3bedd1c4e7a2849eb5cf2acf538fb3a8f3da))
* **core:** Add multi-config evaluations backend ([#29784](https://github.com/n8n-io/n8n/issues/29784)) ([8116e0a](https://github.com/n8n-io/n8n/commit/8116e0a4858044712e45c078e06e0a36103d141c))
* **core:** Add n8n-object-validation ESLint rule for community nodes ([#29698](https://github.com/n8n-io/n8n/issues/29698)) ([701f9a4](https://github.com/n8n-io/n8n/commit/701f9a462773c204a6dc8bd15c533f9c07cd6e08))
* **core:** Add no-template-placeholders ESLint rule for community nodes ([#29796](https://github.com/n8n-io/n8n/issues/29796)) ([c4056b2](https://github.com/n8n-io/n8n/commit/c4056b255edd4420fde6cb5e1028b61f10b2bcf7))
* **core:** Add observational memory storage foundation ([#29814](https://github.com/n8n-io/n8n/issues/29814)) ([be4ef22](https://github.com/n8n-io/n8n/commit/be4ef225336166937a8847c2f2615bfd29e40765))
* **core:** Define community packages with environment variables ([#29961](https://github.com/n8n-io/n8n/issues/29961)) ([730c3e1](https://github.com/n8n-io/n8n/commit/730c3e12a55a38cdbe9090eabef508cd56d67a9e))
* **core:** Generate service-specific OAuth2 credentials for dedicated MCP tools ([#29884](https://github.com/n8n-io/n8n/issues/29884)) ([8617067](https://github.com/n8n-io/n8n/commit/86170674b72acc16d781eafd08cd762c55a7672f))
* **core:** Server-side pagination, sorting, and filtering for encryption keys ([#29708](https://github.com/n8n-io/n8n/issues/29708)) ([9afbe13](https://github.com/n8n-io/n8n/commit/9afbe13b81f00f0ea7730541b4909e31b1080249))
* **core:** Transform MCP server configs into dedicated MCP tools ([#29493](https://github.com/n8n-io/n8n/issues/29493)) ([4dce41f](https://github.com/n8n-io/n8n/commit/4dce41f79573f864fde16df622c028134d743f03))
* **core:** Use McpManagerClient and enforce whether MCP server connections are allowed ([#29694](https://github.com/n8n-io/n8n/issues/29694)) ([8235474](https://github.com/n8n-io/n8n/commit/82354742d348850d8cb6efc6ffe490c53ff0a8a0))
* **Customer.io Trigger Node:** Add webhook request verification ([#29480](https://github.com/n8n-io/n8n/issues/29480)) ([a772016](https://github.com/n8n-io/n8n/commit/a772016e36a87d1fbbacbee59ebcd80dbe3b9150))
* **editor:** Add envFeatureFlag and copyButton property options ([#29733](https://github.com/n8n-io/n8n/issues/29733)) ([75053fe](https://github.com/n8n-io/n8n/commit/75053fec9373076abfba3db01a967f54f8274e83))
* **editor:** Cap eval concurrency slider at admin-set limit ([#29807](https://github.com/n8n-io/n8n/issues/29807)) ([6232de4](https://github.com/n8n-io/n8n/commit/6232de4d477ffa56e0082d87a5b63d1c9ef00d4c))
* **editor:** Eval run detail loading + error states (TRUST-70 follow-up) ([#29817](https://github.com/n8n-io/n8n/issues/29817)) ([6f9b99a](https://github.com/n8n-io/n8n/commit/6f9b99a3cf1207ece10a6bd6239a5005c6a10540))
* **editor:** Redesign evaluation run detail page ([#29592](https://github.com/n8n-io/n8n/issues/29592)) ([9014bae](https://github.com/n8n-io/n8n/commit/9014baea7ea952aaf782c53bce03d3a8f0ae5ddf))
* **editor:** Show locked state and permission notice on data redaction workflow settings ([#30022](https://github.com/n8n-io/n8n/issues/30022)) ([7635131](https://github.com/n8n-io/n8n/commit/7635131bd396252f51d29e7407099eafa92a304f))
* **Figma Trigger Node:** Add OAuth2 authentication support ([#30079](https://github.com/n8n-io/n8n/issues/30079)) ([e3e70d6](https://github.com/n8n-io/n8n/commit/e3e70d6068a3d543b29b1bd24682101ecb2e641f))
* **Figma Trigger Node:** Add webhook request verification ([#29262](https://github.com/n8n-io/n8n/issues/29262)) ([910822f](https://github.com/n8n-io/n8n/commit/910822fb0951f6ead55fc000e7743a8ee13e82e9))
* **Formstack Trigger Node:** Add webhook request verification ([#29495](https://github.com/n8n-io/n8n/issues/29495)) ([4e28652](https://github.com/n8n-io/n8n/commit/4e2865206c72833d9fe585ed941ecc83c1bec699))
* **GitLab Trigger Node:** Add webhook request verification ([#29260](https://github.com/n8n-io/n8n/issues/29260)) ([fbf89bd](https://github.com/n8n-io/n8n/commit/fbf89bde1164a19365fe4418405ddec7108543d9))
* **Jira Node:** Add OAuth2 (3LO) support ([#29414](https://github.com/n8n-io/n8n/issues/29414)) ([4d5bafc](https://github.com/n8n-io/n8n/commit/4d5bafc146125fa22d05cf924c5e68bc51263722))
* **MailerLite Trigger Node:** Add webhook request verification ([#29491](https://github.com/n8n-io/n8n/issues/29491)) ([12b7cc6](https://github.com/n8n-io/n8n/commit/12b7cc67395bf1991235ae0f00739d9f2803cb9c))
* **Mautic Trigger Node:** Add webhook request verification ([#29658](https://github.com/n8n-io/n8n/issues/29658)) ([eaadf19](https://github.com/n8n-io/n8n/commit/eaadf190b89f21f74bc3a25b16803576f91e9618))
* **Microsoft Outlook Node:** Add location and attendees fields to calendar events ([#29844](https://github.com/n8n-io/n8n/issues/29844)) ([2e21c5f](https://github.com/n8n-io/n8n/commit/2e21c5fcf83a2fc86659c7464b2bc6672230389f))
* **Microsoft Outlook Node:** Add support for recurring event instances ([#29802](https://github.com/n8n-io/n8n/issues/29802)) ([dab3653](https://github.com/n8n-io/n8n/commit/dab3653f8016b7f9187559658ea6ef58220df2d1))
* **Onfleet Trigger Node:** Add webhook request verification ([#29485](https://github.com/n8n-io/n8n/issues/29485)) ([133a5aa](https://github.com/n8n-io/n8n/commit/133a5aa0adae69f86f1603bd9ad85c852c0ccdf5))
* **Strava Node:** Allow custom OAuth2 scopes ([#29972](https://github.com/n8n-io/n8n/issues/29972)) ([5abcae6](https://github.com/n8n-io/n8n/commit/5abcae686cf1b64e06bbbd6f62b6871bc4feec56))
* **Taiga Trigger Node:** Add webhook request verification ([#29487](https://github.com/n8n-io/n8n/issues/29487)) ([3c97c49](https://github.com/n8n-io/n8n/commit/3c97c49d63c824c2a3b4284beecf8957c44c1c16))
* **Trello Trigger Node:** Add webhook request verification ([#29252](https://github.com/n8n-io/n8n/issues/29252)) ([8f1f42d](https://github.com/n8n-io/n8n/commit/8f1f42d18056ba51e450ba90ba3be65cbf9745aa))
* **Twilio Trigger Node:** Add webhook request verification ([#29259](https://github.com/n8n-io/n8n/issues/29259)) ([acc9643](https://github.com/n8n-io/n8n/commit/acc964381189aaacbeb584a16c0155ba6f96ffa1))
# [2.20.0](https://github.com/n8n-io/n8n/compare/n8n@2.19.0...n8n@2.20.0) (2026-05-05)
### Bug Fixes
* **ai-builder:** Add boundaries on the workflow builder remediation loops ([#29430](https://github.com/n8n-io/n8n/issues/29430)) ([2259f32](https://github.com/n8n-io/n8n/commit/2259f32de88c103b088b450bf46990ad2e939942))
* **ai-builder:** Allow skipping final ask-user question ([#29563](https://github.com/n8n-io/n8n/issues/29563)) ([661f990](https://github.com/n8n-io/n8n/commit/661f9908bce51076811c76c854f165f4c5acaccf))
* **ai-builder:** Filter LangSmith eval dataset by local file slugs ([#29507](https://github.com/n8n-io/n8n/issues/29507)) ([54d9286](https://github.com/n8n-io/n8n/commit/54d9286d922e0cad17d5c5de10a052d653c1591b))
* **ai-builder:** Handle properties with contradicting displayOptions as OR alternatives instead of AND ([#29500](https://github.com/n8n-io/n8n/issues/29500)) ([84ac811](https://github.com/n8n-io/n8n/commit/84ac8110f8d70dd653b4d40cb63259522731b0d0))
* **ai-builder:** Stop builder from adding auth to inbound trigger nodes by default ([#29648](https://github.com/n8n-io/n8n/issues/29648)) ([c28d501](https://github.com/n8n-io/n8n/commit/c28d501ba1630861fa0993d0d85f08efb635a5a4))
* Allow 5-field cron expressions with step values in polling nodes ([#29447](https://github.com/n8n-io/n8n/issues/29447)) ([d18f183](https://github.com/n8n-io/n8n/commit/d18f183b211416d5b74cfdc2e740b9c663ede134))
* **Anthropic Chat Model Node:** Add adaptive thinking mode for Claude Opus 4.7+ ([#29467](https://github.com/n8n-io/n8n/issues/29467)) ([90d875c](https://github.com/n8n-io/n8n/commit/90d875ce3e5a2a004a5a3d8f28ac4e9820b109f4))
* **Compare Datasets Node:** Preserve falsy values in mix mode except fields ([#29666](https://github.com/n8n-io/n8n/issues/29666)) ([62ddc5c](https://github.com/n8n-io/n8n/commit/62ddc5c443273559c286a1d2eb19efdca345ac9a))
* **core:** Accept placeholder() inside node credentials slot ([#29691](https://github.com/n8n-io/n8n/issues/29691)) ([dc6bd68](https://github.com/n8n-io/n8n/commit/dc6bd68de3b419fb1e23806781bbc125b621ed8a))
* **core:** Acquire expression isolate for dynamic node parameter requests ([#29671](https://github.com/n8n-io/n8n/issues/29671)) ([418f1f2](https://github.com/n8n-io/n8n/commit/418f1f2edb6abfebe1085b8c3b5c1b22530f1a5c))
* **core:** Add file path validation to localFile source ([#29464](https://github.com/n8n-io/n8n/issues/29464)) ([7277566](https://github.com/n8n-io/n8n/commit/7277566c64c36f5e43c17a2e620da2408ab1dcb7))
* **core:** Add GET handler to MCP endpoint for Streamable HTTP spec compliance ([#28787](https://github.com/n8n-io/n8n/issues/28787)) ([4ae0322](https://github.com/n8n-io/n8n/commit/4ae0322ef246348892000d0539904e56c122d204))
* **core:** Add timeout to external secrets provider refresh ([#29679](https://github.com/n8n-io/n8n/issues/29679)) ([e350429](https://github.com/n8n-io/n8n/commit/e35042999f7d477ed1da59f43ef03605763ac2bf))
* **core:** Apply credential allowed domains in declarative node requests ([#29082](https://github.com/n8n-io/n8n/issues/29082)) ([8551b1b](https://github.com/n8n-io/n8n/commit/8551b1b90ce16b31a017bd07177694ef39ad226d))
* **core:** Correct LDAP search filter construction ([#29388](https://github.com/n8n-io/n8n/issues/29388)) ([32dd743](https://github.com/n8n-io/n8n/commit/32dd7433b7ef168161e32c20939859060da9827c))
* **core:** Fix code node executions hanging when idle timer overlaps with task acceptance ([#29239](https://github.com/n8n-io/n8n/issues/29239)) ([7bd3532](https://github.com/n8n-io/n8n/commit/7bd3532f07c151568634e84f3ae24f38ab8e60e4))
* **core:** Fix MCP OAuth discovery URL construction and grant type selection ([#27283](https://github.com/n8n-io/n8n/issues/27283)) ([d92ec16](https://github.com/n8n-io/n8n/commit/d92ec168aa5f984513874e2978f73d8f2cbdc80e))
* **core:** Force saving executions when instance AI executes WFs ([#29515](https://github.com/n8n-io/n8n/issues/29515)) ([ef56501](https://github.com/n8n-io/n8n/commit/ef56501d4729b5b508a4c5e60263d10a8fc9db76))
* **core:** Gate Instance AI edits to pre-existing workflows ([#29501](https://github.com/n8n-io/n8n/issues/29501)) ([6175fd6](https://github.com/n8n-io/n8n/commit/6175fd6f7b56ead0176938657085b763c1204681))
* **core:** Generate array types for properties with multipleValues ([#29410](https://github.com/n8n-io/n8n/issues/29410)) ([fb65c61](https://github.com/n8n-io/n8n/commit/fb65c6155ee9ae5b11a2c409f35e98c206aaf164))
* **core:** Handle missing runData during execution recovery ([#29513](https://github.com/n8n-io/n8n/issues/29513)) ([8b7b4f5](https://github.com/n8n-io/n8n/commit/8b7b4f575d9d9b5b02a8ddf67aaff6b3d5279d78))
* **core:** Harden Set node workflow SDK contract ([#29568](https://github.com/n8n-io/n8n/issues/29568)) ([625ed5e](https://github.com/n8n-io/n8n/commit/625ed5e95a90f30e07e88253515713056e406f5b))
* **core:** Include stack trace in error logs for non-ApplicationError errors ([#29496](https://github.com/n8n-io/n8n/issues/29496)) ([16d1461](https://github.com/n8n-io/n8n/commit/16d1461858107697eac399039c834c7296fe8868))
* **core:** Increase default task runner grant token TTL to 30s ([#29443](https://github.com/n8n-io/n8n/issues/29443)) ([328f4b8](https://github.com/n8n-io/n8n/commit/328f4b8b964d587763bf14b1980916046878f0f0))
* **core:** Isolate expressions on chat resumption and test webhook deactivation ([#29703](https://github.com/n8n-io/n8n/issues/29703)) ([568e5a2](https://github.com/n8n-io/n8n/commit/568e5a24bf8f4e73d0b134dbac1631535bba10a7))
* **core:** Make MCP client registration cap tunable and surface a proper limit error ([#29429](https://github.com/n8n-io/n8n/issues/29429)) ([dad4231](https://github.com/n8n-io/n8n/commit/dad423155f1ee105e3ed1eab0b65a8d8bc2ee3a3))
* **core:** Make task runner grant token TTL configurable ([#29357](https://github.com/n8n-io/n8n/issues/29357)) ([3f350a8](https://github.com/n8n-io/n8n/commit/3f350a85770680895be5723803ef51453476fed2))
* **core:** Pass nodeTypesProvider to validate workflows fully at instance AI ([#29333](https://github.com/n8n-io/n8n/issues/29333)) ([388cd79](https://github.com/n8n-io/n8n/commit/388cd79908418d558fff36f938969cdc79fc60c2))
* **core:** Persist execution context before writing to db ([#28973](https://github.com/n8n-io/n8n/issues/28973)) ([c4bb5ae](https://github.com/n8n-io/n8n/commit/c4bb5ae8df8e7de4c7b919a82d3cf2f492edcc5b))
* **core:** Recreate data table backing tables on entity import ([#29454](https://github.com/n8n-io/n8n/issues/29454)) ([6bca1fa](https://github.com/n8n-io/n8n/commit/6bca1fa26f0d1a23c8c7e175dc6ae590eeb2036e))
* **core:** Reject empty webhookMethods in community lint rule ([#29474](https://github.com/n8n-io/n8n/issues/29474)) ([34d7a02](https://github.com/n8n-io/n8n/commit/34d7a02df73f233ef55fc78e3ea8167bc2b32a1f))
* **core:** Reset Redis retry counter on successful reconnect ([#29377](https://github.com/n8n-io/n8n/issues/29377)) ([7722023](https://github.com/n8n-io/n8n/commit/7722023abd8ffb2f96a7dbec0ba51e4d7454ea05))
* **core:** Respect global admin scope when listing favorites ([#29472](https://github.com/n8n-io/n8n/issues/29472)) ([d9d1e7c](https://github.com/n8n-io/n8n/commit/d9d1e7c44a1bcf074cdbec234b0d8d4ddb8d7d5e))
* **core:** Restore peer project discovery in share dropdowns ([#29537](https://github.com/n8n-io/n8n/issues/29537)) ([2a0e2fb](https://github.com/n8n-io/n8n/commit/2a0e2fb47ae1d82cd2354db8c2013ea46f24f21e))
* **core:** Round fractional time saved values before inserting into insights BIGINT column ([#29553](https://github.com/n8n-io/n8n/issues/29553)) ([74d55b9](https://github.com/n8n-io/n8n/commit/74d55b9c681273ae79fbaf39693bd3b37d83b66a))
* **core:** Show AI Builder draft workflows in workflow list ([#29670](https://github.com/n8n-io/n8n/issues/29670)) ([dc52bbd](https://github.com/n8n-io/n8n/commit/dc52bbd5329a27245a5fe2a1da45d9e8efe6a549))
* **core:** Use editor base URL for workflow and execution links ([#23630](https://github.com/n8n-io/n8n/issues/23630)) ([896461b](https://github.com/n8n-io/n8n/commit/896461bee3c356e66b282763cd31427a137ebd62))
* **core:** Validate workflow import URL requests ([#29178](https://github.com/n8n-io/n8n/issues/29178)) ([ecd0ba8](https://github.com/n8n-io/n8n/commit/ecd0ba8ebabc99055441290d543f0bd87a33df31))
* **core:** Wire EncryptionKeyProxy provider on bootstrap ([#29581](https://github.com/n8n-io/n8n/issues/29581)) ([ee7260c](https://github.com/n8n-io/n8n/commit/ee7260c4959b0dff8636606aebdac10eddd76e36))
* **DeepL Node:** Update credentials to use header-based authentication ([#24614](https://github.com/n8n-io/n8n/issues/24614)) ([b72bd19](https://github.com/n8n-io/n8n/commit/b72bd1987c33b15cd658d2a038b9763c6fb83b55))
* Drop template search tools from builder ([#29573](https://github.com/n8n-io/n8n/issues/29573)) ([9b00ccb](https://github.com/n8n-io/n8n/commit/9b00ccbfd1cfb123533397126123f5d2ad34071f))
* **editor:** Add proper bg color for hover state with color-mix() ([#29590](https://github.com/n8n-io/n8n/issues/29590)) ([6698c42](https://github.com/n8n-io/n8n/commit/6698c42e4ed4706825f5d2e3bac39641e261f153))
* **editor:** Align message box button radius with N8nButton ([#29397](https://github.com/n8n-io/n8n/issues/29397)) ([bc315d0](https://github.com/n8n-io/n8n/commit/bc315d087fd772218b2f3caa047c86493c048f27))
* **editor:** Fix OAuth2 credential showing "Needs first setup" after connecting ([#29617](https://github.com/n8n-io/n8n/issues/29617)) ([243f665](https://github.com/n8n-io/n8n/commit/243f665e60bff1c2531977c3f860aa7589a321e9))
* **editor:** Fix sub-workflow folder placement and connection loss ([#28770](https://github.com/n8n-io/n8n/issues/28770)) ([44579d6](https://github.com/n8n-io/n8n/commit/44579d6d3ae59a1f4eedf9a0b49cecb006053072))
* **editor:** Ignore paste events on read-only canvas ([#29673](https://github.com/n8n-io/n8n/issues/29673)) ([34c49b9](https://github.com/n8n-io/n8n/commit/34c49b9c238de5d5ee0b9421918435c4582eb13a))
* **editor:** Keep publish actions menu enabled for published workflows ([#29396](https://github.com/n8n-io/n8n/issues/29396)) ([c65fa28](https://github.com/n8n-io/n8n/commit/c65fa28e1caac5a49e6a5e82d3354ed631be0df4))
* **editor:** Load more executions on tall screens ([#29407](https://github.com/n8n-io/n8n/issues/29407)) ([a273a9d](https://github.com/n8n-io/n8n/commit/a273a9d3f498d8112605f1277ce7848d8bd357c3))
* **editor:** Make instance ai resource link chips open resources ([#29577](https://github.com/n8n-io/n8n/issues/29577)) ([b97ca36](https://github.com/n8n-io/n8n/commit/b97ca36a99d099288cfc127df98038b2b64c03d5))
* **editor:** Make textarea resize handle accessible in NDV ([#29676](https://github.com/n8n-io/n8n/issues/29676)) ([9fda733](https://github.com/n8n-io/n8n/commit/9fda7332c4c0a8851a7482365a967ea18db2a816))
* **editor:** Mark workflow dirty after debug pinData changes ([#28886](https://github.com/n8n-io/n8n/issues/28886)) ([2beb006](https://github.com/n8n-io/n8n/commit/2beb0062a5f92c883f18abaf9ea33590a41aca49))
* **editor:** Never block publishing on node execution issues ([#29479](https://github.com/n8n-io/n8n/issues/29479)) ([5a56459](https://github.com/n8n-io/n8n/commit/5a564591291989f13ac667eed575332f7f4d2a6a))
* **editor:** Polish encryption keys date range filter ([#29569](https://github.com/n8n-io/n8n/issues/29569)) ([56412bc](https://github.com/n8n-io/n8n/commit/56412bcce2ef1d364acdbe422f5c88762319bb22))
* **editor:** Remove clipping for focus panel textarea ([#28677](https://github.com/n8n-io/n8n/issues/28677)) ([5361257](https://github.com/n8n-io/n8n/commit/5361257a80e515e1cc26cdf10e8ceb78c9ec70be))
* **editor:** Restore read-only mode for archived workflows on canvas ([#29559](https://github.com/n8n-io/n8n/issues/29559)) ([a7ef741](https://github.com/n8n-io/n8n/commit/a7ef7416b111384d250f975e718c691b2674fef6))
* **editor:** Show permission-aware message on redacted input/output panels ([#29521](https://github.com/n8n-io/n8n/issues/29521)) ([83c400e](https://github.com/n8n-io/n8n/commit/83c400e8d47c875f57dce26680358595822ce012))
* **editor:** Surface unofficial verified community node tools in AI Tools picker ([#28985](https://github.com/n8n-io/n8n/issues/28985)) ([f77dfd1](https://github.com/n8n-io/n8n/commit/f77dfd1a11591124e6db61c72ed207067bae6214))
* Fix ollama node url path and thinking tokens ([#23963](https://github.com/n8n-io/n8n/issues/23963)) ([4ea1153](https://github.com/n8n-io/n8n/commit/4ea1153dfb903346bead9e6d328ec8f543c80559))
* **Google Drive Node:** Resolve original file name when copying with empty name ([#28896](https://github.com/n8n-io/n8n/issues/28896)) ([c274976](https://github.com/n8n-io/n8n/commit/c2749768aa5d173c3354e8d31a18c438ebd5fdfb))
* **Merge Node:** Improve SQL Query mode memory efficiency and error reporting ([#28993](https://github.com/n8n-io/n8n/issues/28993)) ([12275c8](https://github.com/n8n-io/n8n/commit/12275c86d992115fef2ded4e5f172730222c5669))
* **Microsoft Outlook Trigger Node:** Use per-folder endpoints for folder-scoped message polling ([#29663](https://github.com/n8n-io/n8n/issues/29663)) ([f401f91](https://github.com/n8n-io/n8n/commit/f401f9101d08fc62eef7e051f3baa23638c80c1b))
* No Credits state for n8n Connect badge ([#29375](https://github.com/n8n-io/n8n/issues/29375)) ([47ad397](https://github.com/n8n-io/n8n/commit/47ad39777f9525324524f2595fc4506065f33a9c))
* **Notion Node:** Support app.notion.com URL format for page and block ID extraction ([#29554](https://github.com/n8n-io/n8n/issues/29554)) ([221c7f7](https://github.com/n8n-io/n8n/commit/221c7f7410d25b89b052e89d745184675b69dc53))
* **Postgres Node:** Output Large-Format Numbers As option ignored after pool is cached ([#29477](https://github.com/n8n-io/n8n/issues/29477)) ([a65e181](https://github.com/n8n-io/n8n/commit/a65e181a2213f1b984c225539302a1a12a30cc9b))
* **Salesforce Node:** Allow overriding JWT audience with My Domain URL ([#29016](https://github.com/n8n-io/n8n/issues/29016)) ([9decb1e](https://github.com/n8n-io/n8n/commit/9decb1e2a9f6d6612014354d7ca6f8b62600ce9d))
* **Schedule Node:** Cap day-of-month jitter at 28 ([#29614](https://github.com/n8n-io/n8n/issues/29614)) ([86f47ee](https://github.com/n8n-io/n8n/commit/86f47ee6dc88397b05bfb784b0092674ba3b4289))
* Skip AI tool generation for community trigger nodes ([#29453](https://github.com/n8n-io/n8n/issues/29453)) ([c724dac](https://github.com/n8n-io/n8n/commit/c724dace38ec1e3aa69de40d48e068cf36c962b0))
* **Snowflake Node:** Avoid call stack overflow on large result sets ([#29200](https://github.com/n8n-io/n8n/issues/29200)) ([b2ac67f](https://github.com/n8n-io/n8n/commit/b2ac67f15452c625d4dee146a040b6324cdfefbb))
* **Telegram Trigger Node:** Drop pending updates when creating a new webhook ([#29103](https://github.com/n8n-io/n8n/issues/29103)) ([4358f1d](https://github.com/n8n-io/n8n/commit/4358f1d51c588e76d03aa677f9b7deabbbc1af9d))
* **Todoist Node:** Migrate to Todoist unified API v1 endpoints ([#29532](https://github.com/n8n-io/n8n/issues/29532)) ([5799481](https://github.com/n8n-io/n8n/commit/5799481d1c3bf14806d11ba2928af4f7f88db29f))
* Use explicit node references for AI memory session keys ([#29473](https://github.com/n8n-io/n8n/issues/29473)) ([139b803](https://github.com/n8n-io/n8n/commit/139b803daefca44fd66a92156867d77ccdffcc66))
* Validate sql ([#24706](https://github.com/n8n-io/n8n/issues/24706)) ([47a6658](https://github.com/n8n-io/n8n/commit/47a6658b2d4cd2d4be5e59b0d61f9bd25b553007))
* **Zammad Node:** Add To and CC fields for email articles ([#28860](https://github.com/n8n-io/n8n/issues/28860)) ([e04f027](https://github.com/n8n-io/n8n/commit/e04f027b5dd008eb0c9354d166c716a93cdc48b7))
### Features
* Add instance-level JWKS URI endpoint for JWE public key distribution ([#29498](https://github.com/n8n-io/n8n/issues/29498)) ([794334c](https://github.com/n8n-io/n8n/commit/794334cd79f1ee5a05cd0d818fc801920e0fe6d9))
* Add no-runtime-dependencies ESLint rule ([#29366](https://github.com/n8n-io/n8n/issues/29366)) ([8aace75](https://github.com/n8n-io/n8n/commit/8aace75535f53ebf37c2a547849e044948c99cb8))
* Add pairwise workflow eval pipeline ([#29123](https://github.com/n8n-io/n8n/issues/29123)) ([fdceec2](https://github.com/n8n-io/n8n/commit/fdceec21b996a1456ceb44389e760a80d75d49a1))
* Add valid-credential-references ESLint rule ([#29452](https://github.com/n8n-io/n8n/issues/29452)) ([c6c6f8f](https://github.com/n8n-io/n8n/commit/c6c6f8ff3889a48ac73d5e5bb242e88818707fc0))
* **core:** Add --include and --exclude flags to import:credentials command ([#29364](https://github.com/n8n-io/n8n/issues/29364)) ([f5132b9](https://github.com/n8n-io/n8n/commit/f5132b9e9abe23eb1a2b1225d889f1dd83d83f94))
* **core:** Add configurable event log path per process ([#29403](https://github.com/n8n-io/n8n/issues/29403)) ([45effb8](https://github.com/n8n-io/n8n/commit/45effb8959e4013d46a022a5a3f901e9d0284d35))
* **core:** Add endpoint to toggle mcp access for multiple workflows ([#29007](https://github.com/n8n-io/n8n/issues/29007)) ([0d907d6](https://github.com/n8n-io/n8n/commit/0d907d67945dfd9624eda6f3fb634cee4bd2d195))
* **core:** Add JWE decryption to OAuth2 credential flow ([#29497](https://github.com/n8n-io/n8n/issues/29497)) ([ad7cdcc](https://github.com/n8n-io/n8n/commit/ad7cdcc04f47e1c34754636098ff698b7b153d05))
* **core:** Add MCP tool search executions ([#29161](https://github.com/n8n-io/n8n/issues/29161)) ([1d9548c](https://github.com/n8n-io/n8n/commit/1d9548c81f6a984882aadd7091cd649967aa7201))
* **core:** Add migration for postgres variable values ([#29489](https://github.com/n8n-io/n8n/issues/29489)) ([898ba5a](https://github.com/n8n-io/n8n/commit/898ba5ae2562542af11031b5dfdf0400afb91fbd))
* **core:** Add preAuthentication support to requestOAuth2 pipeline ([#29418](https://github.com/n8n-io/n8n/issues/29418)) ([473d49c](https://github.com/n8n-io/n8n/commit/473d49c9b18ff4d8226f54fe0c5c8a2a1c6fdca5))
* **core:** Bootstrap legacy CBC and initial GCM encryption keys on startup ([#29400](https://github.com/n8n-io/n8n/issues/29400)) ([9576ab9](https://github.com/n8n-io/n8n/commit/9576ab907cc3bdb560d1b40a1582ecf67c253d3a))
* **core:** Broadcast workflow settings updates ([#29459](https://github.com/n8n-io/n8n/issues/29459)) ([9cb1605](https://github.com/n8n-io/n8n/commit/9cb160585c05ccb1770554cd0998ea4d9b0ab3cc))
* **core:** Decouple insights pruning max age from license ([#29527](https://github.com/n8n-io/n8n/issues/29527)) ([45c18fb](https://github.com/n8n-io/n8n/commit/45c18fb09c04749063edc3545c38ad37006c0c49))
* **core:** Fix user access control logic ([#29481](https://github.com/n8n-io/n8n/issues/29481)) ([484cb2e](https://github.com/n8n-io/n8n/commit/484cb2efba8b33555c4d34bb95680d16a3328c1e))
* **core:** Manage MCP settings via environment variables ([#29368](https://github.com/n8n-io/n8n/issues/29368)) ([05e10e2](https://github.com/n8n-io/n8n/commit/05e10e268083fd7f9f1176634f0c1cab88297b94))
* **core:** Run evaluation test cases in parallel behind PostHog rollout flag ([#29412](https://github.com/n8n-io/n8n/issues/29412)) ([4c76aa1](https://github.com/n8n-io/n8n/commit/4c76aa1467d08d5f188cf8b7716b52b410f2bd65))
* **core:** Use versioned prebuilt Daytona snapshots for Instance AI sandboxes ([#29359](https://github.com/n8n-io/n8n/issues/29359)) ([308d0b4](https://github.com/n8n-io/n8n/commit/308d0b42b32a3372bac3a759b15ee410c9d095eb))
* **core:** Warn and skip on duplicate scheduled executions ([#28649](https://github.com/n8n-io/n8n/issues/28649)) ([b8b7571](https://github.com/n8n-io/n8n/commit/b8b75719ba373a27f60c6f471b170216fe7c41a9))
* **editor:** Add data encryption keys settings page ([#29068](https://github.com/n8n-io/n8n/issues/29068)) ([656f9c2](https://github.com/n8n-io/n8n/commit/656f9c2d7fc635c117efaeb40bb0fb98256f5ba3))
* **editor:** Add environment variable to disable workflow autosave ([#25144](https://github.com/n8n-io/n8n/issues/25144)) ([a2afc47](https://github.com/n8n-io/n8n/commit/a2afc47c226a716b7ae059306e684748c9d65947))
* **editor:** Add reveal redacted data permission to custom roles execution section ([#29526](https://github.com/n8n-io/n8n/issues/29526)) ([be22095](https://github.com/n8n-io/n8n/commit/be22095646c0daf2bbdc2afb7ebc4c1e4a50e349))
* **editor:** Add transition on Sidebar collapsed ([#29650](https://github.com/n8n-io/n8n/issues/29650)) ([07b5343](https://github.com/n8n-io/n8n/commit/07b53430f9e9efefaa78d90d3a613d5518ede4e5))
* **editor:** Hide model selector for unsupported AI Gateway actions ([#29588](https://github.com/n8n-io/n8n/issues/29588)) ([0f7776e](https://github.com/n8n-io/n8n/commit/0f7776e972c1d94d0f61d6d8855865802ef2a273))
* **editor:** Move Switch component to core design system ([#27322](https://github.com/n8n-io/n8n/issues/27322)) ([758f89c](https://github.com/n8n-io/n8n/commit/758f89c9ef4b936e1904c244698ccb4d92f6dd51))
* **editor:** Track IdP role mapping in provisioning telemetry ([#29416](https://github.com/n8n-io/n8n/issues/29416)) ([40da23f](https://github.com/n8n-io/n8n/commit/40da23f68899bc11240b252d417aa01dec8485a9))
* **editor:** Update copy for mcp settings ([#29399](https://github.com/n8n-io/n8n/issues/29399)) ([5f93b48](https://github.com/n8n-io/n8n/commit/5f93b48e79067251e782940489848f81f897d3a4))
* Include updatedAt in encryption key response DTO ([#29424](https://github.com/n8n-io/n8n/issues/29424)) ([569f94b](https://github.com/n8n-io/n8n/commit/569f94bb828bdd662bb291bd1d566e4e2a8ebdae))
* **instance-ai:** Orchestrator-executed checkpoint tasks for planned workflow verification ([#29049](https://github.com/n8n-io/n8n/issues/29049)) ([ad359b5](https://github.com/n8n-io/n8n/commit/ad359b5e2ceaaf2ba04559e43117d81bc5f2df25))
* **Netlify Trigger Node:** Add webhook request verification ([#29256](https://github.com/n8n-io/n8n/issues/29256)) ([1516ec7](https://github.com/n8n-io/n8n/commit/1516ec7c06ab797dbf94fd1b8a0322209e6ee0bc))
* **Slack Node:** Allow users to configure OAuth2 scopes ([#28728](https://github.com/n8n-io/n8n/issues/28728)) ([aa0daf9](https://github.com/n8n-io/n8n/commit/aa0daf9fb630661d35e8bd006ed3b749051f7a7d))
* Validate workflow-sdk output topology against mode ([#29363](https://github.com/n8n-io/n8n/issues/29363)) ([0a80722](https://github.com/n8n-io/n8n/commit/0a80722dcb3fcdbc23d9e768413b3141ec329adc))
# [2.19.0](https://github.com/n8n-io/n8n/compare/n8n@2.18.0...n8n@2.19.0) (2026-04-28)

View File

@@ -43,7 +43,7 @@ reviews:
## Step 4: Design System Style Rules
Follow `.agents/design-system-style-rules.md` for all CSS/SCSS/Vue style
Follow `.claude/plugins/n8n/skills/design-system-rules/SKILL.md` for all CSS/SCSS/Vue style
review guidance.
Enforcement level:
@@ -213,7 +213,7 @@ reviews:
humans handle edge cases.
- name: Design System Tokens
description: |-
Follow `.agents/design-system-style-rules.md`.
Follow `.claude/plugins/n8n/skills/design-system-rules/SKILL.md`.
Apply balanced enforcement:
- Strong warning: hard-coded visual values, legacy token usage, and

View File

@@ -0,0 +1,20 @@
ARG NODE_VERSION=24.14.1
FROM node:${NODE_VERSION}-alpine3.22
ENV NODE_ENV=production
RUN apk add --no-cache tini
WORKDIR /app
# `compiled/` is produced by `pnpm build:docker`. It's a `pnpm deploy --prod`
# output containing package.json, dist/, and a node_modules with only
# production dependencies — no devDeps, no workspace bloat.
COPY --chown=node:node ./compiled /app
USER node
EXPOSE 3000
ENTRYPOINT ["tini", "--"]
CMD ["node", "dist/serve.js"]

View File

@@ -49,6 +49,8 @@ const config = {
// This resolves the path mappings from the tsconfig relative to each jest.config.js
moduleNameMapper: {
'^@n8n/utils$': resolve(__dirname, 'packages/@n8n/utils/dist/index.cjs'),
// jest-resolve@29 doesn't honor `./lib/*` subpath patterns in @anthropic-ai/sdk's exports map
'^@anthropic-ai/sdk/lib/(.*)$': '@anthropic-ai/sdk/lib/$1.js',
...(compilerOptions?.paths
? pathsToModuleNameMapper(compilerOptions.paths, {
prefix: `<rootDir>${compilerOptions.baseUrl ? `/${compilerOptions.baseUrl.replace(/^\.\//, '')}` : ''}`,

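The `moduleNameMapper` entry added above can be sketched as a plain rewrite rule: jest tries each pattern as a regex against the import specifier and substitutes capture groups into the replacement. A minimal TypeScript sketch under that assumption (`applyModuleNameMapper` is an illustrative helper, not part of jest's API):

```typescript
// Illustrative sketch of how a jest-style moduleNameMapper entry rewrites
// an import specifier; applyModuleNameMapper is a hypothetical helper.
const rules: Record<string, string> = {
  '^@anthropic-ai/sdk/lib/(.*)$': '@anthropic-ai/sdk/lib/$1.js',
};

function applyModuleNameMapper(specifier: string, mapper: Record<string, string>): string {
  for (const [pattern, target] of Object.entries(mapper)) {
    const match = specifier.match(new RegExp(pattern));
    if (match) {
      // Substitute $1, $2, … with the captured groups, as jest does.
      return target.replace(/\$(\d+)/g, (_m: string, i: string) => match[Number(i)] ?? '');
    }
  }
  return specifier; // no rule matched: leave the specifier untouched
}

console.log(applyModuleNameMapper('@anthropic-ai/sdk/lib/parser', rules));
// → @anthropic-ai/sdk/lib/parser.js
```

This is why the mapping works around jest-resolve@29 ignoring the package's `./lib/*` exports subpath: the regex rewrites the bare specifier to a concrete `.js` file before resolution happens.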
View File

@@ -1,6 +1,6 @@
{
"name": "n8n-monorepo",
"version": "2.19.0",
"version": "2.21.0",
"private": true,
"engines": {
"node": ">=22.16",
@@ -73,7 +73,7 @@
"jest-mock-extended": "^3.0.4",
"lefthook": "^1.7.15",
"license-checker": "^25.0.1",
"nock": "^14.0.1",
"nock": "^14.0.14",
"nodemon": "^3.0.1",
"npm-run-all2": "^7.0.2",
"p-limit": "^3.1.0",
@@ -103,7 +103,6 @@
"@mistralai/mistralai": "^1.10.0",
"@n8n/typeorm>@sentry/node": "catalog:sentry",
"@types/node": "^20.17.50",
"axios": "1.15.0",
"chokidar": "4.0.3",
"esbuild": "^0.25.0",
"expr-eval@2.0.2": "npm:expr-eval-fork@3.0.0",
@@ -137,10 +136,10 @@
"@smithy/config-resolver": ">=4.4.0",
"@rudderstack/rudder-sdk-node@<=3.0.0": "3.0.0",
"diff": "8.0.3",
"undici@5": "^6.24.0",
"undici@6": "^6.24.0",
"undici@7": "^7.24.0",
"tar": "^7.5.11",
"hono": "4.12.14",
"ajv@6": "6.14.0",
"ajv@7": "8.18.0",
"ajv@8": "8.18.0",
@@ -167,7 +166,12 @@
"@xmldom/xmldom": "0.8.13",
"langsmith": "0.5.19",
"yaml@<=2.8.3": "2.8.3",
"fast-xml-parser": "5.7.0"
"axios": "1.16.0",
"fast-xml-parser": "5.7.2",
"hono": "4.12.18",
"@anthropic-ai/sdk@<=0.91.1": "0.91.1",
"uuid@<=13.0.1": "13.0.1",
"fast-uri": "3.1.2"
},
"patchedDependencies": {
"bull@4.16.4": "patches/bull@4.16.4.patch",

View File

@@ -70,8 +70,7 @@ docs/
```
The **`index.ts`** surface also exports `Workspace` / sandbox / filesystem types,
`SqliteMemory` / `PostgresMemory`, `LangSmithTelemetry`, and `evals` alongside the
core SDK builders.
`InMemoryMemory`, `LangSmithTelemetry`, and `evals` alongside the core SDK builders.
Optional **peer dependencies** (telemetry): `langsmith`, `@opentelemetry/sdk-trace-node`,
`@opentelemetry/sdk-trace-base`, `@opentelemetry/exporter-trace-otlp-http` — all

View File

@@ -367,7 +367,7 @@ At end of turn, `saveToMemory()` uses `list.turnDelta()` and
`saveMessagesToThread`. If **semantic recall** is configured with an embedder
and `memory.saveEmbeddings`, new messages are embedded and stored.
**Working memory:** when configured, the runtime injects an `updateWorkingMemory`
**Working memory:** when configured, the runtime injects an `update_working_memory`
tool into the agent's tool set. The current state is included in the system prompt
so the model can read it; when new information should be persisted the model calls
the tool, which validates the input and asynchronously persists via
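The working-memory flow described above (tool injection, validation, persistence) can be sketched roughly as follows; the store shape and helper names are illustrative assumptions, not the actual runtime types:

```typescript
// Rough sketch of the update_working_memory flow described above.
// WorkingMemoryStore and buildUpdateWorkingMemoryTool are illustrative,
// not the real runtime API.
interface WorkingMemoryStore {
  state: string;
}

function buildUpdateWorkingMemoryTool(store: WorkingMemoryStore) {
  return {
    name: 'update_working_memory',
    // Validate the model-supplied input, then persist the new state.
    async execute(input: unknown): Promise<string> {
      if (
        typeof input !== 'object' ||
        input === null ||
        typeof (input as { state?: unknown }).state !== 'string'
      ) {
        throw new Error('update_working_memory expects { state: string }');
      }
      store.state = (input as { state: string }).state;
      return 'working memory updated';
    },
  };
}

// The current state is included in the system prompt so the model can read it;
// when the model decides new information should persist, it calls the tool.
const store: WorkingMemoryStore = { state: 'none' };
const tool = buildUpdateWorkingMemoryTool(store);
```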
@@ -415,7 +415,7 @@ src/
tool-adapter.ts — buildToolMap, executeTool, toAiSdkTools, suspend / agent-result guards
stream.ts — convertChunk, toTokenUsage
runtime-helpers.ts — normalizeInput, usage merge, stream error helpers, …
working-memory.ts — instruction text, updateWorkingMemory tool builder
working-memory.ts — instruction text, update_working_memory tool builder
strip-orphaned-tool-messages.ts
title-generation.ts
logger.ts

View File

@@ -1,6 +1,6 @@
{
"name": "@n8n/agents",
"version": "0.6.0",
"version": "0.7.0",
"description": "AI agent SDK for n8n's code-first execution engine",
"main": "dist/index.js",
"module": "dist/index.js",
@@ -24,23 +24,32 @@
"test:integration": "vitest run --config vitest.integration.config.mjs"
},
"dependencies": {
"@ai-sdk/amazon-bedrock": "catalog:",
"@ai-sdk/anthropic": "^3.0.58",
"@ai-sdk/azure": "catalog:",
"@ai-sdk/cohere": "catalog:",
"@ai-sdk/deepseek": "catalog:",
"@ai-sdk/gateway": "catalog:",
"@ai-sdk/google": "^3.0.43",
"@ai-sdk/groq": "catalog:",
"@ai-sdk/mistral": "catalog:",
"@ai-sdk/openai": "^3.0.41",
"@ai-sdk/xai": "^3.0.67",
"@ai-sdk/provider-utils": "^4.0.21",
"@modelcontextprotocol/sdk": "1.26.0",
"ajv": "^8.18.0",
"@ai-sdk/xai": "^3.0.67",
"@libsql/client": "^0.17.0",
"@modelcontextprotocol/sdk": "1.26.0",
"@n8n/ai-utilities": "workspace:*",
"@openrouter/ai-sdk-provider": "catalog:",
"ai": "^6.0.116",
"ajv": "^8.18.0",
"pg": "catalog:",
"zod": "catalog:"
},
"peerDependencies": {
"langsmith": ">=0.3.0",
"@opentelemetry/sdk-trace-node": ">=1.0.0",
"@opentelemetry/exporter-trace-otlp-http": ">=0.50.0",
"@opentelemetry/sdk-trace-base": ">=1.0.0",
"@opentelemetry/exporter-trace-otlp-http": ">=0.50.0"
"@opentelemetry/sdk-trace-node": ">=1.0.0",
"langsmith": "catalog:"
},
"peerDependenciesMeta": {
"langsmith": {

View File

@ -1,445 +0,0 @@
/**
* Tests for the Agent builder focusing on per-run isolation guarantees introduced
* by the "shared config, per-run runtime" refactor.
*/
import { Agent } from '../sdk/agent';
import { AgentEvent } from '../types/runtime/event';
// ---------------------------------------------------------------------------
// Module mocks (same as agent-runtime.test.ts)
// ---------------------------------------------------------------------------
jest.mock('@ai-sdk/openai', () => ({
createOpenAI: () => () => ({ provider: 'openai', modelId: 'mock', specificationVersion: 'v3' }),
}));
jest.mock('@ai-sdk/anthropic', () => ({
createAnthropic: () => () => ({
provider: 'anthropic',
modelId: 'mock',
specificationVersion: 'v3',
}),
}));
// eslint-disable-next-line @typescript-eslint/consistent-type-imports
type AiImport = typeof import('ai');
jest.mock('ai', () => {
const actual = jest.requireActual<AiImport>('ai');
return {
...actual,
generateText: jest.fn(),
streamText: jest.fn(),
tool: jest.fn((config: unknown) => config),
Output: {
object: jest.fn(({ schema }: { schema: unknown }) => ({ _type: 'object', schema })),
},
};
});
// Prevent real catalog HTTP calls
jest.mock('../sdk/catalog', () => ({
getModelCost: jest.fn().mockResolvedValue(undefined),
computeCost: jest.fn(),
}));
// eslint-disable-next-line @typescript-eslint/no-require-imports
const { generateText, streamText } = require('ai') as {
generateText: jest.Mock;
streamText: jest.Mock;
};
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function makeGenerateSuccess(text = 'OK') {
return {
finishReason: 'stop',
usage: { inputTokens: 10, outputTokens: 5, totalTokens: 15 },
response: {
messages: [{ role: 'assistant', content: [{ type: 'text', text }] }],
},
toolCalls: [],
};
}
function* makeChunkStream(chunks: Array<Record<string, unknown>>) {
for (const c of chunks) yield c;
}
function makeStreamSuccess(text = 'Hello') {
return {
fullStream: makeChunkStream([{ type: 'text-delta', textDelta: text }]),
finishReason: Promise.resolve('stop'),
usage: Promise.resolve({ inputTokens: 10, outputTokens: 5, totalTokens: 15 }),
response: Promise.resolve({
messages: [{ role: 'assistant', content: [{ type: 'text', text }] }],
}),
toolCalls: Promise.resolve([]),
};
}
async function drainStream(stream: ReadableStream<unknown>): Promise<void> {
const reader = stream.getReader();
while (true) {
const { done } = await reader.read();
if (done) break;
}
}
function buildAgent() {
return new Agent('test').model('openai/gpt-4o-mini').instructions('You are a test assistant.');
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
describe('Agent — per-run isolation', () => {
beforeEach(() => {
jest.clearAllMocks();
});
describe('concurrent generate() calls', () => {
it('returns independent results for each call', async () => {
generateText
.mockResolvedValueOnce(makeGenerateSuccess('Result A'))
.mockResolvedValueOnce(makeGenerateSuccess('Result B'));
const agent = buildAgent();
const [resultA, resultB] = await Promise.all([
agent.generate('Prompt A'),
agent.generate('Prompt B'),
]);
const textA = resultA.messages
.flatMap((m) => ('content' in m ? m.content : []))
.filter((c) => c.type === 'text')
.map((c) => ('text' in c ? c.text : ''))
.join('');
const textB = resultB.messages
.flatMap((m) => ('content' in m ? m.content : []))
.filter((c) => c.type === 'text')
.map((c) => ('text' in c ? c.text : ''))
.join('');
expect(textA).toBe('Result A');
expect(textB).toBe('Result B');
expect(resultA.runId).not.toBe(resultB.runId);
});
it('aborting one generate() does not cancel the other', async () => {
const abortControllerA = new AbortController();
// Run A resolves only after a delay; we'll abort it via its signal.
// Run B resolves immediately.
let resolveA!: (v: unknown) => void;
const pendingA = new Promise((res) => {
resolveA = res;
});
generateText.mockImplementation(async ({ abortSignal }: { abortSignal?: AbortSignal }) => {
if (abortSignal === abortControllerA.signal || abortSignal?.aborted) {
// Simulate the AI SDK throwing on abort
await new Promise((_, rej) =>
abortSignal.addEventListener('abort', () => rej(new Error('aborted')), {
once: true,
}),
);
}
// Run B path — return immediately
await pendingA;
return makeGenerateSuccess('Result B');
});
const agent = buildAgent();
// Start both runs; abort run A immediately
const runAPromise = agent.generate('Prompt A', { abortSignal: abortControllerA.signal });
abortControllerA.abort();
resolveA(undefined);
const runA = await runAPromise;
expect(runA.finishReason).toBe('error');
// Run B separately (no abort)
generateText.mockResolvedValueOnce(makeGenerateSuccess('Result B'));
const runB = await agent.generate('Prompt B');
const textB = runB.messages
.flatMap((m) => ('content' in m ? m.content : []))
.filter((c) => c.type === 'text')
.map((c) => ('text' in c ? c.text : ''))
.join('');
expect(textB).toBe('Result B');
});
});
describe('concurrent stream() calls', () => {
it('returns independent streams for each call', async () => {
streamText
.mockReturnValueOnce(makeStreamSuccess('Stream A'))
.mockReturnValueOnce(makeStreamSuccess('Stream B'));
const agent = buildAgent();
const [resultA, resultB] = await Promise.all([
agent.stream('Prompt A'),
agent.stream('Prompt B'),
]);
// Both streams should be distinct ReadableStream objects
expect(resultA.stream).not.toBe(resultB.stream);
expect(resultA.runId).not.toBe(resultB.runId);
// Drain both streams to completion
await Promise.all([drainStream(resultA.stream), drainStream(resultB.stream)]);
});
it('aborting one stream does not cancel the other', async () => {
const abortControllerA = new AbortController();
streamText.mockImplementation(({ abortSignal }: { abortSignal?: AbortSignal }) => {
if (abortSignal === abortControllerA.signal) {
return {
fullStream: (async function* () {
// Wait until aborted then throw
await new Promise<void>((_, rej) => {
abortSignal.addEventListener('abort', () => rej(new Error('aborted')), {
once: true,
});
});
yield 'something';
})(),
finishReason: Promise.resolve('error'),
usage: Promise.resolve({ inputTokens: 0, outputTokens: 0, totalTokens: 0 }),
response: Promise.resolve({ messages: [] }),
toolCalls: Promise.resolve([]),
};
}
return makeStreamSuccess('Stream B');
});
const agent = buildAgent();
const [resultA, resultB] = await Promise.all([
agent.stream('Prompt A', { abortSignal: abortControllerA.signal }),
agent.stream('Prompt B'),
]);
// Abort run A
abortControllerA.abort();
// Drain stream B — it should complete successfully regardless of A being aborted
await drainStream(resultB.stream);
// Drain stream A — it will error but shouldn't affect B
await drainStream(resultA.stream).catch(() => {});
});
});
describe('event handlers (on())', () => {
it('fires registered handlers for every concurrent run', async () => {
generateText
.mockResolvedValueOnce(makeGenerateSuccess('A'))
.mockResolvedValueOnce(makeGenerateSuccess('B'));
const agent = buildAgent();
const agentStartEvents: string[] = [];
agent.on(AgentEvent.AgentStart, () => {
agentStartEvents.push('start');
});
await Promise.all([agent.generate('Prompt A'), agent.generate('Prompt B')]);
// Handler should have fired once per run
expect(agentStartEvents).toHaveLength(2);
});
it('handlers registered before first run still fire on every subsequent run', async () => {
generateText
.mockResolvedValueOnce(makeGenerateSuccess('First'))
.mockResolvedValueOnce(makeGenerateSuccess('Second'));
const agent = buildAgent();
const events: string[] = [];
agent.on(AgentEvent.AgentEnd, () => {
events.push('end');
});
await agent.generate('First');
await agent.generate('Second');
expect(events).toHaveLength(2);
});
});
describe('abort() broadcast', () => {
it('aborts all active runs when agent.abort() is called', async () => {
let resolveA!: (v: unknown) => void;
generateText.mockImplementation(async ({ abortSignal }: { abortSignal?: AbortSignal }) => {
// Each call waits until its resolver is called or the signal fires
await new Promise((res, rej) => {
abortSignal?.addEventListener('abort', () => rej(new Error('aborted')), {
once: true,
});
resolveA ??= res;
});
return makeGenerateSuccess();
});
const agent = buildAgent();
const runAPromise = agent.generate('A');
const runBPromise = agent.generate('B');
// Give both calls time to reach the mock and register abort listeners
await new Promise((res) => setTimeout(res, 10));
// Broadcast abort — both runs should be cancelled
agent.abort();
const [runA, runB] = await Promise.all([runAPromise, runBPromise]);
expect(runA.finishReason).toBe('error');
expect(runB.finishReason).toBe('error');
});
});
describe('off() — event handler removal', () => {
it('removes a specific handler so it no longer fires', async () => {
generateText
.mockResolvedValueOnce(makeGenerateSuccess('A'))
.mockResolvedValueOnce(makeGenerateSuccess('B'));
const agent = buildAgent();
const events: string[] = [];
const handler = () => events.push('end');
agent.on(AgentEvent.AgentEnd, handler);
await agent.generate('First');
agent.off(AgentEvent.AgentEnd, handler);
await agent.generate('Second');
// Handler should have fired only for the first run
expect(events).toHaveLength(1);
});
it('removing one handler does not affect other handlers for the same event', async () => {
generateText.mockResolvedValueOnce(makeGenerateSuccess('A'));
const agent = buildAgent();
const firedA: string[] = [];
const firedB: string[] = [];
const handlerA = () => firedA.push('a');
const handlerB = () => firedB.push('b');
agent.on(AgentEvent.AgentEnd, handlerA);
agent.on(AgentEvent.AgentEnd, handlerB);
agent.off(AgentEvent.AgentEnd, handlerA);
await agent.generate('Hello');
expect(firedA).toHaveLength(0);
expect(firedB).toHaveLength(1);
});
it('off() on a handler that was never registered is a no-op', () => {
const agent = buildAgent();
expect(() => agent.off(AgentEvent.AgentEnd, () => {})).not.toThrow();
});
});
describe('trackStreamBus — cleanup on stream cancel', () => {
it('removes the bus from active runs when the consumer cancels the stream', async () => {
streamText.mockReturnValueOnce(makeStreamSuccess('Hello'));
const agent = buildAgent();
// Access the private set via casting so we can assert its size
const getActiveBuses = () =>
(agent as unknown as { activeEventBuses: Set<unknown> }).activeEventBuses;
const { stream } = await agent.stream('Hello');
// Bus is registered while the stream is live
expect(getActiveBuses().size).toBe(1);
// Consumer cancels instead of draining
await stream.cancel();
// Bus must be removed immediately after cancel
expect(getActiveBuses().size).toBe(0);
});
it('removes the bus from active runs when the consumer drains the stream normally', async () => {
streamText.mockReturnValueOnce(makeStreamSuccess('Hello'));
const agent = buildAgent();
const getActiveBuses = () =>
(agent as unknown as { activeEventBuses: Set<unknown> }).activeEventBuses;
const { stream } = await agent.stream('Hello');
expect(getActiveBuses().size).toBe(1);
await drainStream(stream);
expect(getActiveBuses().size).toBe(0);
});
it('abort() after stream cancel does not throw on a disposed bus', async () => {
streamText.mockReturnValueOnce(makeStreamSuccess('Hello'));
const agent = buildAgent();
const { stream } = await agent.stream('Hello');
await stream.cancel();
// agent.abort() should be harmless — no active buses remain
expect(() => agent.abort()).not.toThrow();
});
});
describe('result.getState()', () => {
it('generate() result.getState() reports success after a clean run', async () => {
generateText.mockResolvedValueOnce(makeGenerateSuccess());
const agent = buildAgent();
const result = await agent.generate('Hello');
expect(result.getState().status).toBe('success');
});
it('generate() result.getState() reports failed after an error', async () => {
generateText.mockRejectedValueOnce(new Error('boom'));
const agent = buildAgent();
const result = await agent.generate('Hello');
expect(result.getState().status).toBe('failed');
});
it('stream() result.getState() reports success after the stream is consumed', async () => {
streamText.mockReturnValueOnce(makeStreamSuccess());
const agent = buildAgent();
const { stream, getState } = await agent.stream('Hello');
// State is running while stream is open
expect(getState().status).toBe('running');
await drainStream(stream);
expect(getState().status).toBe('success');
});
});
});

View File

@ -1,405 +0,0 @@
import { z } from 'zod';
import { Agent } from '../sdk/agent';
import { McpClient } from '../sdk/mcp-client';
import { Telemetry } from '../sdk/telemetry';
import { Tool } from '../sdk/tool';
import type { BuiltEval, BuiltGuardrail, BuiltMemory, BuiltProviderTool } from '../types';
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function makeMockMemory(): BuiltMemory {
return {
getThread: jest.fn(),
saveThread: jest.fn(),
deleteThread: jest.fn(),
getMessages: jest.fn(),
saveMessages: jest.fn(),
deleteMessages: jest.fn(),
} as unknown as BuiltMemory;
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
describe('Agent.describe()', () => {
it('returns null/empty fields for an unconfigured agent', () => {
const agent = new Agent('test-agent');
const schema = agent.describe();
expect(schema.model).toEqual({ provider: null, name: null });
expect(schema.credential).toBeNull();
expect(schema.instructions).toBeNull();
expect(schema.description).toBeNull();
expect(schema.tools).toEqual([]);
expect(schema.providerTools).toEqual([]);
expect(schema.memory).toBeNull();
expect(schema.evaluations).toEqual([]);
expect(schema.guardrails).toEqual([]);
expect(schema.mcp).toBeNull();
expect(schema.telemetry).toBeNull();
expect(schema.checkpoint).toBeNull();
expect(schema.config.structuredOutput).toEqual({ enabled: false, schemaSource: null });
expect(schema.config.thinking).toBeNull();
expect(schema.config.toolCallConcurrency).toBeNull();
expect(schema.config.requireToolApproval).toBe(false);
});
// --- Model parsing ---
it('parses two-arg model (provider, name)', () => {
const agent = new Agent('test-agent').model('anthropic', 'claude-sonnet-4-5');
const schema = agent.describe();
expect(schema.model).toEqual({ provider: 'anthropic', name: 'claude-sonnet-4-5' });
});
it('parses single-arg model with slash', () => {
const agent = new Agent('test-agent').model('anthropic/claude-sonnet-4-5');
const schema = agent.describe();
expect(schema.model).toEqual({ provider: 'anthropic', name: 'claude-sonnet-4-5' });
});
it('parses model without slash', () => {
const agent = new Agent('test-agent').model('gpt-4o');
const schema = agent.describe();
expect(schema.model).toEqual({ provider: null, name: 'gpt-4o' });
});
it('handles object model config', () => {
const agent = new Agent('test-agent').model({
id: 'anthropic/claude-sonnet-4-5',
apiKey: 'sk-test',
});
const schema = agent.describe();
expect(schema.model).toEqual({ provider: null, name: null, raw: 'object' });
});
// --- Credential ---
it('returns credential name', () => {
const agent = new Agent('test-agent').credential('my-anthropic-key');
const schema = agent.describe();
expect(schema.credential).toBe('my-anthropic-key');
});
// --- Instructions ---
it('returns instructions text', () => {
const agent = new Agent('test-agent').instructions('You are helpful.');
const schema = agent.describe();
expect(schema.instructions).toBe('You are helpful.');
});
// --- Custom tool ---
it('describes a custom tool with handler, input schema, and suspend/resume', () => {
const suspendSchema = z.object({ reason: z.string() });
const resumeSchema = z.object({ approved: z.boolean() });
const tool = new Tool('danger')
.description('A dangerous action')
.input(z.object({ target: z.string() }))
.output(z.object({ result: z.string() }))
.suspend(suspendSchema)
.resume(resumeSchema)
.handler(async ({ target }) => await Promise.resolve({ result: target }))
.build();
const agent = new Agent('test-agent').tool(tool);
const schema = agent.describe();
expect(schema.tools).toHaveLength(1);
const ts = schema.tools[0];
expect(ts.name).toBe('danger');
expect(ts.editable).toBe(true);
expect(ts.hasSuspend).toBe(true);
expect(ts.hasResume).toBe(true);
expect(ts.hasToMessage).toBe(false);
expect(ts.inputSchema).toBeTruthy();
expect(ts.outputSchema).toBeTruthy();
// handlerSource is a fallback (compiled JS), CLI overrides with real TypeScript
expect(ts.handlerSource).toContain('target');
// Source string fields are null — CLI patches with original TypeScript
expect(ts.inputSchemaSource).toBeNull();
expect(ts.outputSchemaSource).toBeNull();
expect(ts.suspendSchemaSource).toBeNull();
expect(ts.resumeSchemaSource).toBeNull();
expect(ts.toMessageSource).toBeNull();
expect(ts.requireApproval).toBe(false);
expect(ts.needsApprovalFnSource).toBeNull();
expect(ts.providerOptions).toBeNull();
});
// --- Provider tool ---
it('describes a provider tool in providerTools array', () => {
const providerTool: BuiltProviderTool = {
name: 'anthropic.web_search_20250305',
args: { maxResults: 5 },
};
const agent = new Agent('test-agent').providerTool(providerTool);
const schema = agent.describe();
// Provider tools are now in a separate array
expect(schema.tools).toHaveLength(0);
expect(schema.providerTools).toHaveLength(1);
expect(schema.providerTools[0].name).toBe('anthropic.web_search_20250305');
expect(schema.providerTools[0].source).toBe('');
});
// --- MCP servers ---
it('describes MCP servers in mcp field', () => {
const client = new McpClient([
{ name: 'browser', url: 'http://localhost:9222/mcp', transport: 'streamableHttp' },
{ name: 'fs', command: 'echo', args: ['test'] },
]);
const agent = new Agent('test-agent').mcp(client);
const schema = agent.describe();
// MCP servers are now in a separate mcp field
expect(schema.tools).toHaveLength(0);
expect(schema.mcp).toHaveLength(2);
expect(schema.mcp![0].name).toBe('browser');
expect(schema.mcp![0].configSource).toBe('');
expect(schema.mcp![1].name).toBe('fs');
expect(schema.mcp![1].configSource).toBe('');
});
it('returns null mcp when no clients are configured', () => {
const agent = new Agent('test-agent');
const schema = agent.describe();
expect(schema.mcp).toBeNull();
});
// --- Guardrails ---
it('describes input and output guardrails', () => {
const inputGuard: BuiltGuardrail = {
name: 'pii-filter',
guardType: 'pii',
strategy: 'redact',
_config: { types: ['email', 'phone'] },
};
const outputGuard: BuiltGuardrail = {
name: 'moderation-check',
guardType: 'moderation',
strategy: 'block',
_config: {},
};
const agent = new Agent('test-agent').inputGuardrail(inputGuard).outputGuardrail(outputGuard);
const schema = agent.describe();
expect(schema.guardrails).toHaveLength(2);
expect(schema.guardrails[0]).toEqual({
name: 'pii-filter',
guardType: 'pii',
strategy: 'redact',
position: 'input',
config: { types: ['email', 'phone'] },
source: '',
});
expect(schema.guardrails[1]).toEqual({
name: 'moderation-check',
guardType: 'moderation',
strategy: 'block',
position: 'output',
config: {},
source: '',
});
});
// --- Telemetry ---
it('returns telemetry schema when telemetry builder is set', () => {
const agent = new Agent('test-agent').telemetry(new Telemetry());
const schema = agent.describe();
expect(schema.telemetry).toEqual({ source: '' });
});
it('returns null telemetry when not configured', () => {
const agent = new Agent('test-agent');
const schema = agent.describe();
expect(schema.telemetry).toBeNull();
});
// --- Checkpoint ---
it('returns memory checkpoint when checkpoint is memory', () => {
const agent = new Agent('test-agent').checkpoint('memory');
const schema = agent.describe();
expect(schema.checkpoint).toBe('memory');
});
it('returns null checkpoint when not configured', () => {
const agent = new Agent('test-agent');
const schema = agent.describe();
expect(schema.checkpoint).toBeNull();
});
// --- Memory ---
it('describes memory configuration', () => {
const agent = new Agent('test-agent').memory({
memory: makeMockMemory(),
lastMessages: 20,
semanticRecall: {
topK: 5,
messageRange: { before: 2, after: 2 },
embedder: 'openai/text-embedding-3-small',
},
workingMemory: {
template: 'Current state: {{state}}',
structured: false,
scope: 'resource' as const,
},
});
const schema = agent.describe();
expect(schema.memory).toBeTruthy();
expect(schema.memory!.source).toBeNull();
expect(schema.memory!.lastMessages).toBe(20);
expect(schema.memory!.semanticRecall).toEqual({
topK: 5,
messageRange: { before: 2, after: 2 },
embedder: 'openai/text-embedding-3-small',
});
expect(schema.memory!.workingMemory).toEqual({
type: 'freeform',
template: 'Current state: {{state}}',
});
});
it('describes structured working memory', () => {
const agent = new Agent('test-agent').memory({
memory: makeMockMemory(),
lastMessages: 10,
workingMemory: {
template: '',
structured: true,
schema: z.object({ notes: z.string() }),
scope: 'resource' as const,
},
});
const schema = agent.describe();
expect(schema.memory!.workingMemory!.type).toBe('structured');
expect(schema.memory!.workingMemory!.schema).toBeTruthy();
});
// --- Evaluations ---
it('describes evaluations with evalType, modelId, and handlerSource', () => {
const checkEval: BuiltEval = {
name: 'has-greeting',
description: 'Checks for greeting',
evalType: 'check',
modelId: null,
credentialName: null,
_run: jest.fn(),
};
const judgeEval: BuiltEval = {
name: 'quality-judge',
description: undefined,
evalType: 'judge',
modelId: 'anthropic/claude-haiku-4-5',
credentialName: 'anthropic-key',
_run: jest.fn(),
};
const agent = new Agent('test-agent').eval(checkEval).eval(judgeEval);
const schema = agent.describe();
expect(schema.evaluations).toHaveLength(2);
expect(schema.evaluations[0]).toEqual({
name: 'has-greeting',
description: 'Checks for greeting',
type: 'check',
modelId: null,
hasCredential: false,
credentialName: null,
handlerSource: null,
});
expect(schema.evaluations[1]).toEqual({
name: 'quality-judge',
description: null,
type: 'judge',
modelId: 'anthropic/claude-haiku-4-5',
hasCredential: true,
credentialName: 'anthropic-key',
handlerSource: null,
});
});
// --- Thinking config ---
it('describes anthropic thinking config', () => {
const agent = new Agent('test-agent')
.model('anthropic', 'claude-sonnet-4-5')
.thinking('anthropic', { budgetTokens: 10000 });
const schema = agent.describe();
expect(schema.config.thinking).toEqual({
provider: 'anthropic',
budgetTokens: 10000,
});
});
it('describes openai thinking config', () => {
const agent = new Agent('test-agent')
.model('openai', 'o3-mini')
.thinking('openai', { reasoningEffort: 'high' });
const schema = agent.describe();
expect(schema.config.thinking).toEqual({
provider: 'openai',
reasoningEffort: 'high',
});
});
// --- requireToolApproval ---
it('reflects requireToolApproval flag', () => {
const agent = new Agent('test-agent').requireToolApproval();
const schema = agent.describe();
expect(schema.config.requireToolApproval).toBe(true);
});
// --- toolCallConcurrency ---
it('reflects toolCallConcurrency', () => {
const agent = new Agent('test-agent').toolCallConcurrency(5);
const schema = agent.describe();
expect(schema.config.toolCallConcurrency).toBe(5);
});
// --- Structured output ---
it('describes structured output with schemaSource null', () => {
const outputSchema = z.object({ code: z.string(), explanation: z.string() });
const agent = new Agent('test-agent').structuredOutput(outputSchema);
const schema = agent.describe();
expect(schema.config.structuredOutput.enabled).toBe(true);
expect(schema.config.structuredOutput.schemaSource).toBeNull();
});
});

View File

@ -1,606 +0,0 @@
import { z } from 'zod';
import { Agent } from '../sdk/agent';
import { isSuspendResult } from '../sdk/from-schema';
import type { HandlerExecutor } from '../types/sdk/handler-executor';
import type { AgentSchema, ToolSchema } from '../types/sdk/schema';
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function mockExecutor(): HandlerExecutor {
return {
executeTool: jest.fn().mockResolvedValue({ result: 'mocked' }),
executeToMessage: jest.fn().mockResolvedValue(undefined),
executeEval: jest.fn().mockResolvedValue({ score: 1 }),
evaluateSchema: jest.fn().mockResolvedValue(undefined),
evaluateExpression: jest.fn().mockResolvedValue(undefined),
};
}
function minimalSchema(overrides: Partial<AgentSchema> = {}): AgentSchema {
return {
model: { provider: 'anthropic', name: 'claude-sonnet-4-5' },
credential: 'my-credential',
instructions: 'You are helpful.',
description: null,
tools: [],
providerTools: [],
memory: null,
evaluations: [],
guardrails: [],
mcp: null,
telemetry: null,
checkpoint: null,
config: {
structuredOutput: { enabled: false, schemaSource: null },
thinking: null,
toolCallConcurrency: null,
requireToolApproval: false,
},
...overrides,
};
}
function makeToolSchema(overrides: Partial<ToolSchema> = {}): ToolSchema {
return {
name: 'test-tool',
description: 'A test tool',
type: 'custom',
editable: true,
inputSchemaSource: null,
outputSchemaSource: null,
handlerSource: null,
suspendSchemaSource: null,
resumeSchemaSource: null,
toMessageSource: null,
requireApproval: false,
needsApprovalFnSource: null,
providerOptions: null,
inputSchema: { type: 'object', properties: { query: { type: 'string' } } },
outputSchema: null,
hasSuspend: false,
hasResume: false,
hasToMessage: false,
...overrides,
};
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
describe('Agent.fromSchema()', () => {
it('reconstructs basic agent config', async () => {
const schema = minimalSchema();
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.model).toEqual({ provider: 'anthropic', name: 'claude-sonnet-4-5' });
expect(described.credential).toBe('my-credential');
expect(described.instructions).toBe('You are helpful.');
});
it('reconstructs model with only name (no provider)', async () => {
const schema = minimalSchema({
model: { provider: null, name: 'gpt-4o' },
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.model).toEqual({ provider: null, name: 'gpt-4o' });
});
it('reconstructs thinking config with correct provider arg', async () => {
const schema = minimalSchema({
config: {
structuredOutput: { enabled: false, schemaSource: null },
thinking: { provider: 'anthropic', budgetTokens: 10000 },
toolCallConcurrency: null,
requireToolApproval: false,
},
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.config.thinking).toEqual({
provider: 'anthropic',
budgetTokens: 10000,
});
});
it('reconstructs openai thinking config', async () => {
const schema = minimalSchema({
model: { provider: 'openai', name: 'o3-mini' },
config: {
structuredOutput: { enabled: false, schemaSource: null },
thinking: { provider: 'openai', reasoningEffort: 'high' },
toolCallConcurrency: null,
requireToolApproval: false,
},
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.config.thinking).toEqual({
provider: 'openai',
reasoningEffort: 'high',
});
});
it('creates proxy handlers for custom tools', async () => {
const toolSchema = makeToolSchema({
name: 'search',
description: 'Search the web',
});
const schema = minimalSchema({ tools: [toolSchema] });
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.tools).toHaveLength(1);
expect(described.tools[0].name).toBe('search');
expect(described.tools[0].description).toBe('Search the web');
expect(described.tools[0].editable).toBe(true);
});
it('adds WorkflowTool markers for non-editable tools', async () => {
const toolSchema = makeToolSchema({ name: 'Send Email', type: 'workflow', editable: false });
const schema = minimalSchema({ tools: [toolSchema] });
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
// Non-editable tools become WorkflowTool markers in declaredTools
const markers = agent.declaredTools.filter(
(t) => '__workflowTool' in t && (t as Record<string, unknown>).__workflowTool === true,
);
expect(markers).toHaveLength(1);
expect(markers[0].name).toBe('Send Email');
});
it('reconstructs memory from schema fields', async () => {
const schema = minimalSchema({
memory: {
source: null,
storage: 'memory',
lastMessages: 20,
semanticRecall: null,
workingMemory: null,
},
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.memory).toBeTruthy();
expect(described.memory!.lastMessages).toBe(20);
expect(described.memory!.storage).toBe('memory');
});
it('sets toolCallConcurrency when specified', async () => {
const schema = minimalSchema({
config: {
structuredOutput: { enabled: false, schemaSource: null },
thinking: null,
toolCallConcurrency: 5,
requireToolApproval: false,
},
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.config.toolCallConcurrency).toBe(5);
});
it('sets requireToolApproval when true', async () => {
const schema = minimalSchema({
config: {
structuredOutput: { enabled: false, schemaSource: null },
thinking: null,
toolCallConcurrency: null,
requireToolApproval: true,
},
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.config.requireToolApproval).toBe(true);
});
it('sets checkpoint when specified', async () => {
const schema = minimalSchema({ checkpoint: 'memory' });
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.checkpoint).toBe('memory');
});
it('delegates tool execution to handlerExecutor', async () => {
const executor = mockExecutor();
const toolSchema = makeToolSchema({ name: 'my-tool' });
const schema = minimalSchema({ tools: [toolSchema] });
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
// Access the built tool's handler via declaredTools
const tools = agent.declaredTools;
expect(tools).toHaveLength(1);
const result = await tools[0].handler!({ query: 'test' }, { parentTelemetry: undefined });
expect(executor.executeTool).toHaveBeenCalledWith(
'my-tool',
{ query: 'test' },
{ parentTelemetry: undefined },
);
expect(result).toEqual({ result: 'mocked' });
});
it('reconstructs guardrails with correct position', async () => {
const schema = minimalSchema({
guardrails: [
{
name: 'pii-guard',
guardType: 'pii',
strategy: 'redact',
position: 'input',
config: { detectionTypes: ['email', 'phone'] },
source: '',
},
{
name: 'mod-guard',
guardType: 'moderation',
strategy: 'block',
position: 'output',
config: {},
source: '',
},
],
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.guardrails).toHaveLength(2);
expect(described.guardrails[0].name).toBe('pii-guard');
expect(described.guardrails[0].position).toBe('input');
expect(described.guardrails[0].guardType).toBe('pii');
expect(described.guardrails[1].name).toBe('mod-guard');
expect(described.guardrails[1].position).toBe('output');
});
it('reconstructs evals with proxy _run', async () => {
const executor = mockExecutor();
const schema = minimalSchema({
evaluations: [
{
name: 'accuracy',
description: 'Check accuracy',
type: 'check',
modelId: null,
credentialName: null,
hasCredential: false,
handlerSource: null,
},
{
name: 'quality',
description: 'Judge quality',
type: 'judge',
modelId: 'anthropic/claude-sonnet-4-5',
credentialName: 'anthropic',
hasCredential: true,
handlerSource: null,
},
],
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const described = agent.describe();
expect(described.evaluations).toHaveLength(2);
expect(described.evaluations[0].name).toBe('accuracy');
expect(described.evaluations[0].type).toBe('check');
expect(described.evaluations[1].name).toBe('quality');
expect(described.evaluations[1].type).toBe('judge');
});
it('reconstructs provider tools', async () => {
const schema = minimalSchema({
providerTools: [{ name: 'anthropic.web_search_20250305', source: '' }],
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
const described = agent.describe();
expect(described.providerTools).toHaveLength(1);
expect(described.providerTools[0].name).toBe('anthropic.web_search_20250305');
});
it('evaluates provider tool source via evaluateExpression', async () => {
const executor = mockExecutor();
(executor.evaluateExpression as jest.Mock).mockResolvedValue({
name: 'anthropic.web_search_20250305',
args: { maxUses: 5 },
});
const schema = minimalSchema({
providerTools: [
{
name: 'anthropic.web_search_20250305',
source: 'providerTools.anthropicWebSearch({ maxUses: 5 })',
},
],
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const described = agent.describe();
expect(executor.evaluateExpression).toHaveBeenCalledWith(
'providerTools.anthropicWebSearch({ maxUses: 5 })',
);
expect(described.providerTools).toHaveLength(1);
expect(described.providerTools[0].name).toBe('anthropic.web_search_20250305');
});
it('evaluates structuredOutput schema via evaluateSchema', async () => {
const zodSchema = z.object({ answer: z.string() });
const executor = mockExecutor();
(executor.evaluateSchema as jest.Mock).mockResolvedValue(zodSchema);
const schema = minimalSchema({
config: {
structuredOutput: { enabled: true, schemaSource: 'z.object({ answer: z.string() })' },
thinking: null,
toolCallConcurrency: null,
requireToolApproval: false,
},
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const described = agent.describe();
expect(executor.evaluateSchema).toHaveBeenCalledWith('z.object({ answer: z.string() })');
expect(described.config.structuredOutput.enabled).toBe(true);
});
it('handles suspend result detection via isSuspendResult', () => {
const suspendMarker = Symbol.for('n8n.agent.suspend');
const suspendResult = { [suspendMarker]: true, payload: { message: 'approve?' } };
const nonSuspend = { result: 42 };
expect(isSuspendResult(suspendResult)).toBe(true);
expect(isSuspendResult(nonSuspend)).toBe(false);
expect(isSuspendResult(null)).toBe(false);
expect(isSuspendResult(undefined)).toBe(false);
});
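The test above relies on the suspend marker being a globally shared symbol. A minimal stand-in for the detection check, assuming only the `Symbol.for('n8n.agent.suspend')` key shown in the test (the real `isSuspendResult` lives in `../sdk/from-schema` and may do more):

```typescript
// Illustrative sketch only, not the shipped implementation.
// Symbol.for() returns the same symbol across modules, so any module that
// tags a result with this key can be recognized here.
const SUSPEND_MARKER = Symbol.for('n8n.agent.suspend');

function isSuspendResultSketch(value: unknown): boolean {
  return (
    typeof value === 'object' &&
    value !== null &&
    (value as Record<symbol, unknown>)[SUSPEND_MARKER] === true
  );
}
```

Because the marker is a symbol rather than a string property, ordinary tool results like `{ result: 42 }` can never collide with it.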
it('delegates interruptible tool execution with suspend detection', async () => {
const suspendMarker = Symbol.for('n8n.agent.suspend');
const executor = {
...mockExecutor(),
executeTool: jest.fn().mockResolvedValue({
[suspendMarker]: true,
payload: { message: 'Please approve' },
}),
};
const toolSchema = makeToolSchema({
name: 'suspend-tool',
hasSuspend: true,
});
const schema = minimalSchema({ tools: [toolSchema] });
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const tools = agent.declaredTools;
expect(tools).toHaveLength(1);
// Call with an interruptible context
let suspendedPayload: unknown;
const ctx = {
parentTelemetry: undefined,
resumeData: undefined,
// eslint-disable-next-line @typescript-eslint/require-await
suspend: jest.fn().mockImplementation(async (payload: unknown) => {
suspendedPayload = payload;
return { suspended: true };
}),
};
await tools[0].handler!({ query: 'test' }, ctx);
expect(ctx.suspend).toHaveBeenCalledWith({ message: 'Please approve' });
expect(suspendedPayload).toEqual({ message: 'Please approve' });
});
it('reconstructs requireApproval on individual tools', async () => {
const toolSchema = makeToolSchema({
name: 'danger-tool',
requireApproval: true,
});
const schema = minimalSchema({
tools: [toolSchema],
checkpoint: 'memory',
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: mockExecutor(),
});
// The tool should be wrapped for approval, which adds suspendSchema
const tools = agent.declaredTools;
expect(tools).toHaveLength(1);
expect(tools[0].suspendSchema).toBeDefined();
});
it('reconstructs MCP servers by evaluating configSource', async () => {
const executor = mockExecutor();
(executor.evaluateExpression as jest.Mock).mockResolvedValue({
name: 'browser',
url: 'http://localhost:9222/mcp',
transport: 'streamableHttp',
});
const schema = minimalSchema({
mcp: [
{
name: 'browser',
configSource:
'({ name: "browser", url: "http://localhost:9222/mcp", transport: "streamableHttp" })',
},
],
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
expect(executor.evaluateExpression).toHaveBeenCalledWith(
'({ name: "browser", url: "http://localhost:9222/mcp", transport: "streamableHttp" })',
);
const described = agent.describe();
expect(described.mcp).toHaveLength(1);
expect(described.mcp![0].name).toBe('browser');
});
it('reconstructs multiple MCP servers', async () => {
const executor = mockExecutor();
(executor.evaluateExpression as jest.Mock)
.mockResolvedValueOnce({
name: 'browser',
url: 'http://localhost:9222/mcp',
transport: 'streamableHttp',
})
.mockResolvedValueOnce({
name: 'fs',
command: 'npx',
args: ['@anthropic/mcp-fs', '/tmp'],
});
const schema = minimalSchema({
mcp: [
{ name: 'browser', configSource: 'browserConfig' },
{ name: 'fs', configSource: 'fsConfig' },
],
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const described = agent.describe();
expect(described.mcp).toHaveLength(2);
expect(described.mcp![0].name).toBe('browser');
expect(described.mcp![1].name).toBe('fs');
});
it('skips MCP servers with empty configSource', async () => {
const schema = minimalSchema({
mcp: [{ name: 'browser', configSource: '' }],
});
const executor = mockExecutor();
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
expect(executor.evaluateExpression).not.toHaveBeenCalled();
// No MCP configs evaluated means no client is added
const described = agent.describe();
expect(described.mcp).toBeNull();
});
it('reconstructs telemetry by evaluating source', async () => {
const executor = mockExecutor();
(executor.evaluateExpression as jest.Mock).mockResolvedValue({
enabled: true,
functionId: 'my-agent',
recordInputs: true,
recordOutputs: true,
integrations: [],
});
const schema = minimalSchema({
telemetry: { source: 'new Telemetry().functionId("my-agent").build()' },
});
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
expect(executor.evaluateExpression).toHaveBeenCalledWith(
'new Telemetry().functionId("my-agent").build()',
);
const described = agent.describe();
expect(described.telemetry).not.toBeNull();
});
it('does not set telemetry when schema has no telemetry', async () => {
const schema = minimalSchema({ telemetry: null });
const executor = mockExecutor();
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const described = agent.describe();
expect(described.telemetry).toBeNull();
expect(executor.evaluateExpression).not.toHaveBeenCalled();
});
it('evaluates suspend/resume schemas via evaluateSchema', async () => {
const suspendSchema = z.object({ reason: z.string() });
const resumeSchema = z.object({ approved: z.boolean() });
const executor = mockExecutor();
(executor.evaluateSchema as jest.Mock)
.mockResolvedValueOnce(suspendSchema)
.mockResolvedValueOnce(resumeSchema);
const toolSchema = makeToolSchema({
name: 'interruptible-tool',
hasSuspend: true,
hasResume: true,
suspendSchemaSource: 'z.object({ reason: z.string() })',
resumeSchemaSource: 'z.object({ approved: z.boolean() })',
});
const schema = minimalSchema({ tools: [toolSchema] });
const agent = await Agent.fromSchema(schema, 'test-agent', {
handlerExecutor: executor,
});
const tools = agent.declaredTools;
expect(tools).toHaveLength(1);
expect(tools[0].suspendSchema).toBe(suspendSchema);
expect(tools[0].resumeSchema).toBe(resumeSchema);
});
});


@@ -1,119 +0,0 @@
import { InMemoryMemory } from '../runtime/memory-store';
import type { AgentDbMessage } from '../types/sdk/message';
describe('InMemoryMemory working memory', () => {
it('returns null for unknown key', async () => {
const mem = new InMemoryMemory();
expect(await mem.getWorkingMemory({ threadId: 'thread-x', resourceId: 'unknown' })).toBeNull();
});
it('saves and retrieves working memory keyed by resourceId', async () => {
const mem = new InMemoryMemory();
await mem.saveWorkingMemory(
{ threadId: 'thread-1', resourceId: 'user-1' },
'# Context\n- Name: Alice',
);
expect(await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' })).toBe(
'# Context\n- Name: Alice',
);
});
it('overwrites on subsequent save', async () => {
const mem = new InMemoryMemory();
await mem.saveWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' }, 'v1');
await mem.saveWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' }, 'v2');
expect(await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' })).toBe('v2');
});
it('isolates by resourceId (resource scope)', async () => {
const mem = new InMemoryMemory();
await mem.saveWorkingMemory({ threadId: 'thread-a', resourceId: 'user-1' }, 'Alice data');
await mem.saveWorkingMemory({ threadId: 'thread-b', resourceId: 'user-2' }, 'Bob data');
expect(await mem.getWorkingMemory({ threadId: 'thread-a', resourceId: 'user-1' })).toBe(
'Alice data',
);
expect(await mem.getWorkingMemory({ threadId: 'thread-b', resourceId: 'user-2' })).toBe(
'Bob data',
);
});
it('returns null for unknown threadId (thread scope)', async () => {
const mem = new InMemoryMemory();
expect(await mem.getWorkingMemory({ threadId: 'unknown' })).toBeNull();
});
it('saves and retrieves working memory keyed by threadId', async () => {
const mem = new InMemoryMemory();
await mem.saveWorkingMemory({ threadId: 'thread-1' }, '# Thread Notes');
expect(await mem.getWorkingMemory({ threadId: 'thread-1' })).toBe('# Thread Notes');
});
it('isolates by threadId (thread scope)', async () => {
const mem = new InMemoryMemory();
await mem.saveWorkingMemory({ threadId: 'thread-1' }, 'data for thread 1');
await mem.saveWorkingMemory({ threadId: 'thread-2' }, 'data for thread 2');
expect(await mem.getWorkingMemory({ threadId: 'thread-1' })).toBe('data for thread 1');
expect(await mem.getWorkingMemory({ threadId: 'thread-2' })).toBe('data for thread 2');
});
});
// ---------------------------------------------------------------------------
// Message persistence — createdAt correctness
// ---------------------------------------------------------------------------
function makeDbMsg(id: string, createdAt: Date, text: string): AgentDbMessage {
return { id, createdAt, role: 'user', content: [{ type: 'text', text }] };
}
describe('InMemoryMemory — message createdAt', () => {
it('before filter uses each message createdAt, not a shared batch timestamp', async () => {
const mem = new InMemoryMemory();
// Use dates clearly in the past so the batch wall-clock time (≈ now)
// never accidentally falls inside the range we're filtering.
const t1 = new Date('2020-01-01T00:00:01.000Z');
const t2 = new Date('2020-01-01T00:00:02.000Z');
const t3 = new Date('2020-01-01T00:00:03.000Z');
await mem.saveMessages({
threadId: 't1',
messages: [
makeDbMsg('m1', t1, 'first'),
makeDbMsg('m2', t2, 'second'),
makeDbMsg('m3', t3, 'third'),
],
});
// before: t3 should return only the two earlier messages
const result = await mem.getMessages('t1', { before: t3 });
// Pre-fix: saveMessages stores StoredMessage.createdAt = new Date() (wall clock,
// much later than t3), so the before filter excludes all messages → length 0.
// Post-fix: each StoredMessage.createdAt = dbMsg.createdAt, so t1 and t2 pass.
expect(result).toHaveLength(2);
expect(result[0].id).toBe('m1');
expect(result[1].id).toBe('m2');
});
it('getMessages returns createdAt from the stored record (consistent with before filter)', async () => {
const mem = new InMemoryMemory();
const t1 = new Date('2020-06-01T10:00:00.000Z');
const t2 = new Date('2020-06-01T10:00:01.000Z');
await mem.saveMessages({
threadId: 't1',
messages: [makeDbMsg('a', t1, 'alpha'), makeDbMsg('b', t2, 'beta')],
});
const loaded = await mem.getMessages('t1');
// Pre-fix: getMessages returns s.message whose createdAt is from toDbMessage
// (correct), but StoredMessage.createdAt is 'now' — the two are inconsistent.
// Post-fix: both use the same authoritative value, so this is always consistent.
expect(loaded[0].createdAt).toBeInstanceOf(Date);
expect(loaded[0].createdAt.getTime()).toBe(t1.getTime());
expect(loaded[1].createdAt).toBeInstanceOf(Date);
expect(loaded[1].createdAt.getTime()).toBe(t2.getTime());
});
});
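The pre-fix/post-fix comments in these tests describe the underlying bug: `saveMessages` stamped every stored record with a shared wall-clock timestamp instead of each message's own `createdAt`, so a `before` filter against past dates matched nothing. A toy sketch of the post-fix behavior, with hypothetical names (the real `InMemoryMemory` stores full messages and scopes them by thread, not just ids):

```typescript
// Minimal in-memory store illustrating the createdAt fix.
interface StoredMessage {
  id: string;
  createdAt: Date;
}

const store: StoredMessage[] = [];

function saveMessages(messages: StoredMessage[]): void {
  for (const m of messages) {
    // Post-fix behavior: preserve each message's own timestamp
    // rather than stamping the whole batch with new Date().
    store.push({ id: m.id, createdAt: m.createdAt });
  }
}

function getMessages(opts: { before?: Date } = {}): StoredMessage[] {
  const { before } = opts;
  return store.filter(
    (s) => before === undefined || s.createdAt.getTime() < before.getTime(),
  );
}
```

With per-message timestamps, filtering `before: t3` over messages saved at t1, t2, t3 returns exactly the two earlier ones, matching the assertions above.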


@@ -0,0 +1,327 @@
/**
* Round-trip conversion tests: toAiMessages ↔ fromAiMessages
*
* These tests exercise the message split/merge logic without making real LLM
* calls. They lock down the structural invariants that the agent runtime relies
* on, including the key interim-message ordering guarantee described in the
* plan:
*
* input: [assistant{tool-call resolved}, user{x}, assistant{y}]
* output: [assistant{tool-call}, tool{tool-result}, user{x}, assistant{y}]
*
* The tool-result is inserted right after its tool-call, regardless of what
* messages follow it in the n8n list.
*/
import { describe, it, expect } from 'vitest';
import { toAiMessages, fromAiMessages } from '../../runtime/messages';
import type { Message } from '../../types/sdk/message';
describe('toAiMessages + fromAiMessages — round-trip', () => {
it('splits a resolved tool-call into assistant + tool ModelMessages', () => {
const input: Message[] = [
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'add',
input: { a: 1, b: 2 },
state: 'resolved',
output: { result: 3 },
},
],
},
];
const aiMessages = toAiMessages(input);
expect(aiMessages).toHaveLength(2);
expect(aiMessages[0].role).toBe('assistant');
expect(aiMessages[1].role).toBe('tool');
const toolCallPart = (
aiMessages[0] as { role: string; content: Array<{ type: string; toolCallId: string }> }
).content[0];
expect(toolCallPart.type).toBe('tool-call');
expect(toolCallPart.toolCallId).toBe('tc-1');
const toolResultPart = (
aiMessages[1] as {
role: string;
content: Array<{
type: string;
toolCallId: string;
output: { type: string; value: unknown };
}>;
}
).content[0];
expect(toolResultPart.type).toBe('tool-result');
expect(toolResultPart.toolCallId).toBe('tc-1');
expect(toolResultPart.output.type).toBe('json');
expect(toolResultPart.output.value).toEqual({ result: 3 });
});
it('encodes rejected tool-call as error-text in the tool ModelMessage', () => {
const input: Message[] = [
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'do_it',
input: {},
state: 'rejected',
error: 'Error: something went wrong',
},
],
},
];
const aiMessages = toAiMessages(input);
expect(aiMessages).toHaveLength(2);
const toolResultPart = (
aiMessages[1] as { role: string; content: Array<{ output: { type: string; value: string } }> }
).content[0];
expect(toolResultPart.output.type).toBe('error-text');
expect(toolResultPart.output.value).toBe('Error: something went wrong');
});
it('drops pending tool-call blocks from both assistant and tool ModelMessages', () => {
const input: Message[] = [
{
role: 'assistant',
content: [
{ type: 'text', text: 'Thinking...' },
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'do_it',
input: {},
state: 'pending',
},
],
},
];
const aiMessages = toAiMessages(input);
// Only the assistant text part remains; no tool-result emitted for pending
expect(aiMessages).toHaveLength(1);
expect(aiMessages[0].role).toBe('assistant');
const content = (aiMessages[0] as { role: string; content: Array<{ type: string }> }).content;
expect(content).toHaveLength(1);
expect(content[0].type).toBe('text');
});
it('emits nothing for an assistant message whose only blocks are all pending', () => {
const input: Message[] = [
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'do_it',
input: {},
state: 'pending',
},
{
type: 'tool-call',
toolCallId: 'tc-2',
toolName: 'do_more',
input: {},
state: 'pending',
},
],
},
];
const aiMessages = toAiMessages(input);
// No empty-content assistant message — the whole message is suppressed
expect(aiMessages).toHaveLength(0);
});
it('skips legacy tool-call blocks that have no state field and emits nothing when they are the only content', () => {
const input: Message[] = [
{
role: 'assistant',
content: [
// Simulate a DB row written before the state field was introduced
{
type: 'tool-call',
toolCallId: 'tc-legacy',
toolName: 'old_tool',
input: {},
} as unknown as Message['content'][number],
],
},
];
const aiMessages = toAiMessages(input);
// No empty-content assistant message and no spurious error-json tool message
expect(aiMessages).toHaveLength(0);
});
it('emits one tool ModelMessage per settled block in the same assistant turn', () => {
const input: Message[] = [
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'add',
input: { a: 1, b: 2 },
state: 'resolved',
output: { result: 3 },
},
{
type: 'tool-call',
toolCallId: 'tc-2',
toolName: 'mul',
input: { a: 4, b: 5 },
state: 'resolved',
output: { result: 20 },
},
],
},
];
const aiMessages = toAiMessages(input);
// assistant{tc-1, tc-2} + tool{tc-1} + tool{tc-2}
expect(aiMessages).toHaveLength(3);
expect(aiMessages[0].role).toBe('assistant');
const assistantContent = (
aiMessages[0] as { content: Array<{ type: string; toolCallId: string }> }
).content;
expect(assistantContent).toHaveLength(2);
expect(assistantContent[0].toolCallId).toBe('tc-1');
expect(assistantContent[1].toolCallId).toBe('tc-2');
expect(aiMessages[1].role).toBe('tool');
expect(aiMessages[2].role).toBe('tool');
});
it('merges role:tool ModelMessages into the preceding assistant tool-call block', () => {
// Simulate AI SDK output: [assistant{tool-call}, tool{tool-result}]
const aiMessages = [
{
role: 'assistant' as const,
content: [
{
type: 'tool-call' as const,
toolCallId: 'tc-1',
toolName: 'add',
input: { a: 1, b: 2 },
providerExecuted: undefined,
},
],
},
{
role: 'tool' as const,
content: [
{
type: 'tool-result' as const,
toolCallId: 'tc-1',
toolName: 'add',
output: { type: 'json' as const, value: { result: 3 } },
},
],
},
];
const n8nMessages = fromAiMessages(aiMessages);
// Should produce a single assistant message with the resolved block
expect(n8nMessages).toHaveLength(1);
expect((n8nMessages[0] as Message).role).toBe('assistant');
const block = (n8nMessages[0] as Message).content[0];
expect(block.type).toBe('tool-call');
expect((block as { state: string }).state).toBe('resolved');
expect((block as { output: unknown }).output).toEqual({ result: 3 });
});
it('round-trip is structurally equivalent for a resolved tool-call', () => {
const original: Message[] = [
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'echo',
input: { text: 'hello' },
state: 'resolved',
output: { echoed: 'hello' },
},
],
},
];
const aiMessages = toAiMessages(original);
const roundTripped = fromAiMessages(aiMessages);
expect(roundTripped).toHaveLength(1);
expect((roundTripped[0] as Message).role).toBe('assistant');
const block = (roundTripped[0] as Message).content[0];
expect(block.type).toBe('tool-call');
expect((block as { state: string }).state).toBe('resolved');
expect((block as { output: unknown }).output).toEqual({ echoed: 'hello' });
expect((block as { toolCallId: string }).toolCallId).toBe('tc-1');
});
it('interim-message ordering: tool-result is inserted right after its tool-call', () => {
// This is the key regression test for the interim-message scenario.
// Input n8n list: [assistant{tool-call resolved}, user{x}, assistant{y}]
// Expected AI SDK output: [assistant{tc}, tool{tr}, user{x}, assistant{y}]
const input: Message[] = [
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: 'tc-1',
toolName: 'delete_file',
input: { path: 'foo.txt' },
state: 'resolved',
output: { deleted: true },
},
],
},
{
role: 'user',
content: [{ type: 'text', text: 'Actually, what is 2+2?' }],
},
{
role: 'assistant',
content: [{ type: 'text', text: 'It is 4.' }],
},
];
const aiMessages = toAiMessages(input);
// 4 messages: assistant{tool-call}, tool{tool-result}, user, assistant
expect(aiMessages).toHaveLength(4);
expect(aiMessages[0].role).toBe('assistant');
expect(aiMessages[1].role).toBe('tool');
expect(aiMessages[2].role).toBe('user');
expect(aiMessages[3].role).toBe('assistant');
// tool-result is immediately after the assistant tool-call message
const toolResultContent = (aiMessages[1] as { content: Array<{ toolCallId: string }> })
.content[0];
expect(toolResultContent.toolCallId).toBe('tc-1');
// user interim message is after the tool-result
const userContent = (aiMessages[2] as { content: Array<{ type: string; text: string }> })
.content[0];
expect(userContent.text).toBe('Actually, what is 2+2?');
});
});
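The ordering invariant these tests lock down can be sketched in a few lines: each resolved tool-call emits a tool message directly after its assistant message, before any interim messages that follow in the list. Toy types and a hypothetical function name; the real `toAiMessages` also handles pending, rejected, and legacy state-less blocks:

```typescript
// Illustrative split step only; not the shipped conversion.
type Block = {
  type: string;
  text?: string;
  toolCallId?: string;
  state?: string;
  output?: unknown;
};
type Msg = { role: string; content: Block[] };

function splitResolvedToolCalls(messages: Msg[]): Msg[] {
  const out: Msg[] = [];
  for (const msg of messages) {
    out.push(msg);
    if (msg.role !== 'assistant') continue;
    for (const block of msg.content) {
      if (block.type === 'tool-call' && block.state === 'resolved') {
        // The tool-result lands immediately after its tool-call,
        // ahead of any interim user/assistant messages.
        out.push({
          role: 'tool',
          content: [
            { type: 'tool-result', toolCallId: block.toolCallId, output: block.output },
          ],
        });
      }
    }
  }
  return out;
}
```

Feeding in `[assistant{tool-call resolved}, user, assistant]` yields the `[assistant, tool, user, assistant]` role sequence asserted in the interim-message regression test.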


@@ -106,7 +106,7 @@ describe('batched tool execution integration', () => {
const resumedStream = await agent.resume(
'stream',
{ approved: true },
{ runId: next.runId!, toolCallId: next.toolCallId! },
{ runId: next.runId, toolCallId: next.toolCallId },
);
const resumedChunks = await collectStreamChunks(resumedStream.stream);


@@ -8,7 +8,7 @@ import {
createAgentWithConcurrentMixedTools,
collectTextDeltas,
} from './helpers';
import { isLlmMessage, type StreamChunk } from '../../index';
import type { StreamChunk } from '../../index';
const describe = describeIf('anthropic');
@@ -120,7 +120,7 @@ describe('concurrent tool execution integration', () => {
const resumedStream = await agent.resume(
'stream',
{ approved: true },
{ runId: next.runId!, toolCallId: next.toolCallId! },
{ runId: next.runId, toolCallId: next.toolCallId },
);
const resumedChunks = await collectStreamChunks(resumedStream.stream);
@@ -147,13 +147,8 @@ describe('concurrent tool execution integration', () => {
const chunks = await collectStreamChunks(fullStream);
// list_files should auto-execute — its result should appear as a message chunk
const toolResultChunks = chunks.filter(
(c) =>
c.type === 'message' &&
isLlmMessage(c.message) &&
c.message.content.some((p) => p.type === 'tool-result'),
);
// list_files should auto-execute — its result should appear as a discrete tool-result chunk
const toolResultChunks = chunksOfType(chunks, 'tool-result');
// delete_file should be suspended
const suspendedChunks = chunksOfType(chunks, 'tool-call-suspended');
@@ -170,12 +165,7 @@ describe('concurrent tool execution integration', () => {
);
// list_files result should be present even though delete_file suspended
const listResult = toolResultChunks.find(
(c) =>
c.type === 'message' &&
isLlmMessage(c.message) &&
c.message.content.some((p) => p.type === 'tool-result' && p.toolName === 'list_files'),
);
const listResult = toolResultChunks.find((c) => c.toolName === 'list_files');
expect(listResult).toBeDefined();
}
});
@@ -204,7 +194,7 @@ describe('concurrent tool execution integration', () => {
'content' in m
? m.content
.filter((c) => c.type === 'text')
.map((c) => ({ type: 'text-delta' as const, delta: c.text }))
.map((c) => ({ type: 'text-delta' as const, id: '', delta: c.text }))
: [],
),
);


@@ -175,42 +175,53 @@ describe('event system — stream', () => {
});
// ---------------------------------------------------------------------------
// result.getState()
// getState()
// ---------------------------------------------------------------------------
describe('result.getState()', () => {
it('generate() result reports success after a successful run', async () => {
describe('getState()', () => {
it('returns idle before first run', () => {
const agent = createSimpleAgent();
const result = await agent.generate('Say hello');
expect(result.getState().status).toBe('success');
const state = agent.getState();
expect(state.status).toBe('idle');
expect(state.messageList.messages).toHaveLength(0);
});
it('stream() result reports success after the stream is fully consumed', async () => {
it('returns success after a successful generate()', async () => {
const agent = createSimpleAgent();
const { stream, getState } = await agent.stream('Say hello');
await agent.generate('Say hello');
const state = agent.getState();
expect(state.status).toBe('success');
});
it('returns success after a completed stream()', async () => {
const agent = createSimpleAgent();
const { stream } = await agent.stream('Say hello');
await collectStreamChunks(stream);
expect(getState().status).toBe('success');
const state = agent.getState();
expect(state.status).toBe('success');
});
it('stream() getState() is running while the stream is being consumed', async () => {
it('state is running during the generate loop (observed via event)', async () => {
const agent = createSimpleAgent();
const { stream, getState } = await agent.stream('Say hello');
// State is running before the stream is consumed
expect(getState().status).toBe('running');
let stateWhileRunning: string | undefined;
agent.on(AgentEvent.TurnStart, () => {
stateWhileRunning = agent.getState().status;
});
await collectStreamChunks(stream);
await agent.generate('Say hello');
expect(getState().status).toBe('success');
expect(stateWhileRunning).toBe('running');
});
it('generate() result reflects resourceId and threadId from RunOptions', async () => {
it('reflects resourceId and threadId from RunOptions', async () => {
const agent = createSimpleAgent();
const result = await agent.generate('Say hello', {
await agent.generate('Say hello', {
persistence: { resourceId: 'user-123', threadId: 'thread-abc' },
});
expect(result.getState().persistence?.resourceId).toBe('user-123');
expect(result.getState().persistence?.threadId).toBe('thread-abc');
const state = agent.getState();
expect(state.persistence?.resourceId).toBe('user-123');
expect(state.persistence?.threadId).toBe('thread-abc');
});
});

View File

@@ -1,19 +1,15 @@
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { describe as _describe } from 'vitest';
import { z } from 'zod';
import {
Agent,
type ContentToolCall,
type ContentToolResult,
filterLlmMessages,
Tool,
type StreamChunk,
type AgentMessage,
} from '../../index';
import { SqliteMemory } from '../../storage/sqlite-memory';
import { InMemoryMemory } from '../../runtime/memory-store';
export type { StreamChunk };
@@ -404,10 +400,10 @@ export const findAllToolCalls = (messages: AgentMessage[]): ContentToolCall[] =>
.map((m) => m.content.filter((c) => c.type === 'tool-call'))
.flat();
};
export const findAllToolResults = (messages: AgentMessage[]): ContentToolResult[] => {
return filterLlmMessages(messages)
.filter((m) => m.content.find((c) => c.type === 'tool-result'))
.map((m) => m.content.find((c) => c.type === 'tool-result') as ContentToolResult);
export const findAllToolResults = (messages: AgentMessage[]): ContentToolCall[] => {
return filterLlmMessages(messages).flatMap((m) =>
m.content.filter((c): c is ContentToolCall => c.type === 'tool-call' && c.state !== 'pending'),
);
};
export const collectTextDeltas = (chunks: StreamChunk[]): string => {
return chunks
@@ -417,25 +413,18 @@ export const collectTextDeltas = (chunks: StreamChunk[]): string => {
};
export function createSqliteMemory(): {
memory: SqliteMemory;
memory: InMemoryMemory;
cleanup: () => void;
url: string;
} {
const dbPath = path.join(
os.tmpdir(),
`test-${Date.now()}-${Math.random().toString(36).slice(2)}.db`,
);
const url = `file:${dbPath}`;
const memory = new SqliteMemory({ url });
// In-memory backend; the `url` field is kept on the return type so existing
// integration tests that reference it (e.g. for "restart" scenarios) keep
// compiling, but it's not load-bearing — InMemoryMemory has no persistence.
return {
memory,
url,
memory: new InMemoryMemory(),
url: '',
cleanup: () => {
try {
fs.unlinkSync(dbPath);
} catch {
// File may already be removed — ignore
}
// no-op for in-memory backend
},
};
}

View File

@@ -0,0 +1,214 @@
/**
* Regression test: interim user message while a tool-call is suspended.
*
* Old architecture bug: if a user sent a new message between a tool-call
* suspension and its eventual resume, the message list would contain:
*
* assistant{tool-call} user{interim} tool{tool-result}
*
* This order is invalid for AI SDK providers (tool-result must immediately
* follow its tool-call). The new architecture stores the result ON the
* tool-call block, so toAiMessages always emits:
*
* assistant{tool-call} tool{tool-result} user{interim} assistant{reply}
*
* The tool-result is always adjacent to its tool-call regardless of what n8n
* messages come after it in the list.
*
* This test drives the full scenario end-to-end and asserts that:
* 1. The final result has finishReason 'stop' (no provider error).
* 2. The tool-call block on the originating assistant message transitions to
* state 'resolved' with the expected output.
* 3. The interim user/assistant messages are still present in memory.
*/
import { afterEach, expect, it } from 'vitest';
import { z } from 'zod';
import { describeIf, createSqliteMemory, getModel } from './helpers';
import { Agent, filterLlmMessages, Memory, Tool } from '../../index';
import type { AgentDbMessage } from '../../index';
import type { ContentToolCall, Message } from '../../types/sdk/message';
const describe = describeIf('anthropic');
describe('interim user message during tool suspension', () => {
const cleanups: Array<() => void> = [];
afterEach(() => {
for (const fn of cleanups) fn();
cleanups.length = 0;
});
function buildInterruptibleAgent(mem: Memory): Agent {
const deleteTool = new Tool('delete_file')
.description('Delete a file at the given path')
.input(z.object({ path: z.string().describe('File path to delete') }))
.output(z.object({ deleted: z.boolean(), path: z.string() }))
.suspend(z.object({ message: z.string(), severity: z.string() }))
.resume(z.object({ approved: z.boolean() }))
.handler(async ({ path }, ctx) => {
if (!ctx.resumeData) {
return await ctx.suspend({ message: `Delete "${path}"?`, severity: 'destructive' });
}
if (!ctx.resumeData.approved) return { deleted: false, path };
return { deleted: true, path };
});
return new Agent('interim-test-agent')
.model(getModel('anthropic'))
.instructions(
'You are a file manager. When asked to delete a file, use the delete_file tool. Be concise.',
)
.tool(deleteTool)
.memory(mem)
.checkpoint('memory');
}
for (const method of ['generate', 'stream'] as const) {
it(`[${method}] interim message does not break provider message ordering`, async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const threadId = `thread-interim-${method}`;
const resourceId = 'res-interim';
const persistence = { threadId, resourceId };
const mem = new Memory().storage(memory);
const agent = buildInterruptibleAgent(mem);
// ----------------------------------------------------------------
// Turn 1: trigger the tool suspension
// ----------------------------------------------------------------
const suspendResult = await agent.generate('Please delete /tmp/interim-test.txt', {
persistence,
});
expect(suspendResult.finishReason).toBe('tool-calls');
expect(suspendResult.pendingSuspend).toBeDefined();
const { runId, toolCallId } = suspendResult.pendingSuspend![0];
// ----------------------------------------------------------------
// Interim turn: send a new message while the tool is suspended.
// Build a fresh agent instance to simulate a separate request.
// ----------------------------------------------------------------
const interimAgent = new Agent('interim-agent')
.model(getModel('anthropic'))
.instructions('You are helpful. Answer questions concisely.')
.memory(mem);
const interimResult = await interimAgent.generate('What is 1 + 1?', { persistence });
expect(interimResult.finishReason).toBe('stop');
// ----------------------------------------------------------------
// Resume turn: approve the suspended tool call
// ----------------------------------------------------------------
let resumeFinishReason: string;
if (method === 'generate') {
const result = await agent.resume(
'generate',
{ approved: true },
{
runId,
toolCallId,
},
);
resumeFinishReason = result.finishReason ?? 'stop';
} else {
const { stream } = await agent.resume(
'stream',
{ approved: true },
{
runId,
toolCallId,
},
);
// Drain the stream
const reader = stream.getReader();
let finishReason = 'stop';
while (true) {
const { done, value } = await reader.read();
if (done) break;
if ((value as { type: string }).type === 'finish') {
finishReason = (value as { finishReason?: string }).finishReason ?? 'stop';
}
}
resumeFinishReason = finishReason;
}
// ----------------------------------------------------------------
// Assertions
// ----------------------------------------------------------------
// 1. No provider error — the ordering was valid
expect(resumeFinishReason).toBe('stop');
// 2. The originating assistant message's tool-call block is resolved
const allMessages = await memory.getMessages(threadId);
const llmMessages = filterLlmMessages(allMessages);
const ourBlock = llmMessages
.flatMap((m) => m.content.filter((c): c is ContentToolCall => c.type === 'tool-call'))
.find((b) => b.toolCallId === toolCallId);
expect(ourBlock).toBeDefined();
expect(ourBlock!.state).toBe('resolved');
// 3. The interim user/assistant exchange is present in memory
const userMessages = allMessages.filter(
(m): m is AgentDbMessage & Message => 'role' in m && m.role === 'user',
);
// Turn-1 user + interim user (at minimum)
expect(userMessages.length).toBeGreaterThanOrEqual(2);
});
}
it('preserves chronological ordering of messages in memory after resume', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const threadId = 'thread-interim-ordering';
const resourceId = 'res-ordering';
const persistence = { threadId, resourceId };
const mem = new Memory().storage(memory);
const agent = buildInterruptibleAgent(mem);
// Turn 1: suspend
const suspendResult = await agent.generate('Delete /tmp/order-test.txt', { persistence });
expect(suspendResult.finishReason).toBe('tool-calls');
const { runId, toolCallId } = suspendResult.pendingSuspend![0];
// Interim turn
const interimAgent = new Agent('interim-ordering')
.model(getModel('anthropic'))
.instructions('Answer concisely.')
.memory(mem);
await interimAgent.generate('Say hi', { persistence });
// Resume
const resumeResult = await agent.resume(
'generate',
{ approved: true },
{
runId,
toolCallId,
},
);
expect(resumeResult.finishReason).toBe('stop');
// The tool-call is resolved
const allMessages = await memory.getMessages(threadId);
const llmMessages = filterLlmMessages(allMessages);
const ourBlock = llmMessages
.flatMap((m) => m.content.filter((c): c is ContentToolCall => c.type === 'tool-call'))
.find((b) => b.toolCallId === toolCallId);
expect(ourBlock).toBeDefined();
expect(ourBlock!.state).toBe('resolved');
// Messages are in chronological order (createdAt ascending)
const timestamps = allMessages.map((m) => m.createdAt.getTime());
for (let i = 1; i < timestamps.length; i++) {
expect(timestamps[i]).toBeGreaterThanOrEqual(timestamps[i - 1]);
}
});
});

View File

@@ -72,12 +72,12 @@ describe('JSON Schema validation — non-MCP tools with raw JSON Schema', () =>
// The handler should have been called with valid data
expect(handler).toHaveBeenCalledWith(expect.objectContaining({ age: 25 }), expect.anything());
// No tool-result should carry an error flag
// No tool-call block should have state 'rejected'
const allMessages = filterLlmMessages(result.messages);
const toolResults = allMessages.flatMap((m) =>
m.content.filter((c) => c.type === 'tool-result'),
const toolCallBlocks = allMessages.flatMap((m) =>
m.content.filter((c) => c.type === 'tool-call'),
);
expect(toolResults.every((r) => !r.isError)).toBe(true);
expect(toolCallBlocks.every((c) => (c as { state: string }).state !== 'rejected')).toBe(true);
});
it('allows the LLM to self-correct after receiving a JSON Schema validation error', async () => {
@@ -105,12 +105,12 @@ describe('JSON Schema validation — non-MCP tools with raw JSON Schema', () =>
expect(result.finishReason).toBe('stop');
expect(result.error).toBeUndefined();
// There should be at least two tool-result messages: one error, one success
// There should be at least two tool-call messages: one rejected, one resolved
const allMessages = filterLlmMessages(result.messages);
const toolResultMessages = allMessages.filter((m) =>
m.content.some((c) => c.type === 'tool-result'),
const toolCallMessages = allMessages.filter((m) =>
m.content.some((c) => c.type === 'tool-call'),
);
expect(toolResultMessages.length).toBeGreaterThanOrEqual(2);
expect(toolCallMessages.length).toBeGreaterThanOrEqual(2);
// The successful handler call should have received a valid age
expect(callCount).toBeGreaterThanOrEqual(1);

View File

@@ -17,7 +17,7 @@ import {
chunksOfType,
} from './helpers';
import { startSseServer, type TestServer } from './mcp-server-helpers';
import { Agent, McpClient, Tool, isLlmMessage } from '../../index';
import { Agent, McpClient, Tool } from '../../index';
// ---------------------------------------------------------------------------
// McpClient constructor validation — no MCP server required
@@ -234,13 +234,10 @@ describe_llm('agent stream() with MCP tool', () => {
const { stream } = await agent.stream('Echo "stream works" using tools_echo.');
const chunks = await collectStreamChunks(stream);
const messageChunks = chunksOfType(chunks, 'message');
const messages = messageChunks.map((c) => c.message);
const hasToolCall = messages.some(
(m) => isLlmMessage(m) && m.content.some((c) => c.type === 'tool-call'),
);
expect(hasToolCall).toBe(true);
// Tool calls now ride their own discrete `tool-call` chunks rather than
// being wrapped in `message` envelopes.
const toolCallChunks = chunksOfType(chunks, 'tool-call');
expect(toolCallChunks.length).toBeGreaterThan(0);
await client.close();
});

View File

@@ -8,7 +8,7 @@
import { expect, it, beforeEach } from 'vitest';
import { Agent, Memory, type AgentDbMessage } from '../../../index';
import type { BuiltMemory, Thread } from '../../../types/sdk/memory';
import type { BuiltMemory, MemoryDescriptor, Thread } from '../../../types/sdk/memory';
import { describeIf, findLastTextContent, getModel } from '../helpers';
const describe = describeIf('anthropic');
@@ -17,6 +17,9 @@ const describe = describeIf('anthropic');
// Custom in-memory BuiltMemory implementation (simulates Redis, DynamoDB, etc.)
// ---------------------------------------------------------------------------
class CustomMapMemory implements BuiltMemory {
describe(): MemoryDescriptor {
throw new Error('Method not implemented.');
}
readonly threads = new Map<string, Thread>();
readonly messages = new Map<string, AgentDbMessage[]>();
readonly workingMemory = new Map<string, string>();

View File

@@ -1,106 +0,0 @@
import { expect, it, afterEach } from 'vitest';
import { Agent, Memory } from '../../../index';
import { SqliteMemory } from '../../../storage/sqlite-memory';
import { describeIf, findLastTextContent, getModel, createSqliteMemory } from '../helpers';
const describe = describeIf('anthropic');
const cleanups: Array<() => void> = [];
afterEach(() => {
cleanups.forEach((fn) => fn());
cleanups.length = 0;
});
describe('freeform working memory', () => {
const template = '# User Context\n- **Name**:\n- **City**:\n- **Pet**:';
it('agent recalls info via working memory across turns', async () => {
const memory = new Memory().storage('memory').lastMessages(10).freeform(template);
const agent = new Agent('freeform-test')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(memory);
const threadId = `freeform-${Date.now()}`;
const options = { persistence: { threadId, resourceId: 'test-user' } };
await agent.generate('My name is Alice and I live in Berlin.', options);
const result = await agent.generate('What city do I live in?', options);
expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('berlin');
});
it('working memory is updated when new information is provided', async () => {
const memory = new Memory().storage('memory').lastMessages(10).freeform(template);
const agent = new Agent('wm-update-test')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(memory);
const threadId = `wm-update-${Date.now()}`;
const options = { persistence: { threadId, resourceId: 'test-user' } };
const result = await agent.generate('My name is Bob.', options);
const toolCalls = result.messages.flatMap((m) =>
'content' in m ? m.content.filter((c) => c.type === 'tool-call') : [],
) as Array<{ type: 'tool-call'; toolName: string }>;
const wmToolCall = toolCalls.find((c) => c.toolName === 'updateWorkingMemory');
expect(wmToolCall).toBeDefined();
});
it('working memory persists across threads with same resourceId', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const mem = new Memory().storage(memory).lastMessages(10).freeform(template);
const agent = new Agent('cross-thread-test')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(mem);
const resourceId = `user-${Date.now()}`;
await agent.generate('My name is Charlie and I have a dog named Rex.', {
persistence: { threadId: `thread-1-${Date.now()}`, resourceId },
});
const result = await agent.generate("What's my dog's name?", {
persistence: { threadId: `thread-2-${Date.now()}`, resourceId },
});
expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('rex');
});
it('working memory survives SqliteMemory restart', async () => {
const { memory, cleanup, url } = createSqliteMemory();
cleanups.push(cleanup);
const mem = new Memory().storage(memory).lastMessages(10).freeform(template);
const agent1 = new Agent('restart-wm-1')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(mem);
const resourceId = `user-${Date.now()}`;
const threadId = `restart-wm-${Date.now()}`;
await agent1.generate('My name is Diana.', { persistence: { threadId, resourceId } });
const memory2 = new SqliteMemory({ url });
const mem2 = new Memory().storage(memory2).lastMessages(10).freeform(template);
const agent2 = new Agent('restart-wm-2')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(mem2);
const result = await agent2.generate('What is my name?', {
persistence: { threadId: `new-thread-${Date.now()}`, resourceId },
});
expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('diana');
});
});

View File

@@ -61,6 +61,18 @@ afterAll(async () => {
}
}, 30_000);
/**
* Create a PostgresMemory instance backed by the test container connection string.
* Uses a simple inline CredentialProvider that returns the raw URL.
*/
function makePostgresMemory(namespace: string): PostgresMemory {
return new PostgresMemory({
type: 'connection',
connection: { connectionType: 'url', connection: { url: connectionString } },
options: { namespace },
});
}
/** describe that requires Docker — tests are no-ops without it. */
function describeWithDocker(name: string, fn: () => void) {
describe(name, () => {
@@ -74,7 +86,7 @@ function describeWithDocker(name: string, fn: () => void) {
describeWithDocker('PostgresMemory saveThread upsert', () => {
it('preserves existing title and metadata when not provided', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'upsert_test' });
const mem = makePostgresMemory('upsert_test');
await mem.saveThread({
id: 'upsert-t1',
@@ -95,7 +107,7 @@ describeWithDocker('PostgresMemory saveThread upsert', () => {
});
it('overwrites title and metadata when explicitly provided', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'upsert_ow' });
const mem = makePostgresMemory('upsert_ow');
await mem.saveThread({
id: 'upsert-t2',
@@ -121,7 +133,7 @@ describeWithDocker('PostgresMemory saveThread upsert', () => {
describeWithDocker('PostgresMemory unit tests', () => {
it('creates tables on first use and round-trips a thread', async () => {
const mem = new PostgresMemory({ connection: connectionString });
const mem = makePostgresMemory('default');
const thread = await mem.saveThread({
id: 'thread-1',
@@ -141,7 +153,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('saves and retrieves messages with limit', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'msg_test' });
const mem = makePostgresMemory('msg_test');
await mem.saveThread({ id: 't1', resourceId: 'u1' });
@@ -180,7 +192,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('saves and retrieves working memory keyed by resourceId', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_test' });
const mem = makePostgresMemory('wm_test');
expect(
await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' }),
@@ -207,7 +219,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('saves and retrieves working memory keyed by threadId (no resourceId)', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_thread_test' });
const mem = makePostgresMemory('wm_thread_test');
expect(
await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1', scope: 'thread' }),
@@ -225,7 +237,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('isolates working memory by resourceId', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_iso_test' });
const mem = makePostgresMemory('wm_iso_test');
await mem.saveWorkingMemory(
{ threadId: 'thread-a', resourceId: 'user-a', scope: 'resource' },
@@ -247,7 +259,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('stores scope=resource when resourceId is provided', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_scope_test' });
const mem = makePostgresMemory('wm_scope_test');
await mem.saveWorkingMemory(
{ threadId: 'thread-1', resourceId: 'res-1', scope: 'resource' },
@@ -266,10 +278,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('stores scope=thread when only threadId is provided', async () => {
const mem = new PostgresMemory({
connection: connectionString,
namespace: 'wm_scope_thread_test',
});
const mem = makePostgresMemory('wm_scope_thread_test');
await mem.saveWorkingMemory(
{ threadId: 'thread-1', resourceId: 'user-1', scope: 'thread' },
@@ -288,10 +297,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('does not mix resource-scoped and thread-scoped entries with the same key value', async () => {
const mem = new PostgresMemory({
connection: connectionString,
namespace: 'wm_scope_iso_test',
});
const mem = makePostgresMemory('wm_scope_iso_test');
const sharedKey = 'same-id';
await mem.saveWorkingMemory(
@@ -318,7 +324,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('deletes thread and cascades to messages', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'del_test' });
const mem = makePostgresMemory('del_test');
await mem.saveThread({ id: 'del-t1', resourceId: 'u1' });
await mem.saveMessages({
@@ -342,7 +348,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('stores and queries embeddings with pgvector', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_test' });
const mem = makePostgresMemory('vec_test');
await mem.saveThread({ id: 'vec-t1', resourceId: 'u1' });
@@ -375,7 +381,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('filters embeddings by resourceId with scope=resource (default)', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_res' });
const mem = makePostgresMemory('vec_res');
await mem.saveEmbeddings({
threadId: 't1',
@@ -410,7 +416,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('filters embeddings by threadId with scope=thread', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_thr' });
const mem = makePostgresMemory('vec_thr');
await mem.saveEmbeddings({
threadId: 't1',
@@ -443,7 +449,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('resource scope excludes embeddings from other resources', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_iso' });
const mem = makePostgresMemory('vec_iso');
await mem.saveEmbeddings({
threadId: 't1',
@@ -470,7 +476,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('stores resourceId in the embeddings table', async () => {
const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_col' });
const mem = makePostgresMemory('vec_col');
await mem.saveEmbeddings({
threadId: 't1',
@@ -492,8 +498,8 @@ describeWithDocker('PostgresMemory unit tests', () => {
});
it('isolates namespaces', async () => {
const mem1 = new PostgresMemory({ connection: connectionString, namespace: 'ns_a' });
const mem2 = new PostgresMemory({ connection: connectionString, namespace: 'ns_b' });
const mem1 = makePostgresMemory('ns_a');
const mem2 = makePostgresMemory('ns_b');
await mem1.saveThread({ id: 'shared-id', resourceId: 'u1', title: 'From A' });
await mem2.saveThread({ id: 'shared-id', resourceId: 'u1', title: 'From B' });
@@ -520,7 +526,7 @@ function describeWithDockerAndApi(name: string, fn: () => void) {
describeWithDockerAndApi('PostgresMemory agent integration', () => {
it('recalls previous messages across turns', async () => {
const store = new PostgresMemory({ connection: connectionString, namespace: 'agent_recall' });
const store = makePostgresMemory('agent_recall');
const memory = new Memory().storage(store).lastMessages(10);
const agent = new Agent('pg-recall-test')
@@ -540,7 +546,7 @@ describeWithDockerAndApi('PostgresMemory agent integration', () => {
});
it('persists resource-scoped working memory via Postgres backend', async () => {
const store = new PostgresMemory({ connection: connectionString, namespace: 'agent_wm' });
const store = makePostgresMemory('agent_wm');
const memory = new Memory()
.storage(store)
.lastMessages(10)
@@ -574,10 +580,7 @@ describeWithDockerAndApi('PostgresMemory agent integration', () => {
});
it('persists thread-scoped working memory via Postgres backend', async () => {
const store = new PostgresMemory({
connection: connectionString,
namespace: 'agent_thread_wm',
});
const store = makePostgresMemory('agent_thread_wm');
const memory = new Memory()
.storage(store)
.lastMessages(10)
@@ -617,7 +620,7 @@ describeWithDockerAndApi('PostgresMemory agent integration', () => {
});
it('works with stream() path', async () => {
const store = new PostgresMemory({ connection: connectionString, namespace: 'agent_stream' });
const store = makePostgresMemory('agent_stream');
const memory = new Memory().storage(store).lastMessages(10);
const agent = new Agent('pg-stream-test')

View File

@@ -1,105 +0,0 @@
import { describe as _describe, expect, it, afterEach } from 'vitest';
import { Agent, Memory } from '../../../index';
import { SqliteMemory } from '../../../storage/sqlite-memory';
import { describeIf, findLastTextContent, getModel, createSqliteMemory } from '../helpers';
const describe = describeIf('anthropic');
const cleanups: Array<() => void> = [];
afterEach(() => {
cleanups.forEach((fn) => fn());
cleanups.length = 0;
});
_describe('SqliteMemory saveThread upsert', () => {
it('preserves existing title and metadata when not provided', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
await memory.saveThread({
id: 'upsert-t1',
resourceId: 'user-1',
title: 'Original Title',
metadata: { key: 'value' },
});
// Upsert without title or metadata (simulates saveMessagesToThread)
await memory.saveThread({ id: 'upsert-t1', resourceId: 'user-1' });
const thread = await memory.getThread('upsert-t1');
expect(thread).not.toBeNull();
expect(thread!.title).toBe('Original Title');
expect(thread!.metadata).toEqual({ key: 'value' });
});
it('overwrites title and metadata when explicitly provided', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
await memory.saveThread({
id: 'upsert-t2',
resourceId: 'user-1',
title: 'Old Title',
metadata: { old: true },
});
await memory.saveThread({
id: 'upsert-t2',
resourceId: 'user-1',
title: 'New Title',
metadata: { new: true },
});
const thread = await memory.getThread('upsert-t2');
expect(thread!.title).toBe('New Title');
expect(thread!.metadata).toEqual({ new: true });
});
});
describe('SQLite memory integration', () => {
it('agent recalls info from previous turn with SqliteMemory', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const mem = new Memory().storage(memory).lastMessages(10);
const agent = new Agent('sqlite-test')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(mem);
const threadId = `sqlite-${Date.now()}`;
const options = { persistence: { threadId, resourceId: 'test-user' } };
await agent.generate('My favorite number is 42. Just acknowledge.', options);
const result = await agent.generate('What is my favorite number?', options);
expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('42');
});
it('data survives a fresh SqliteMemory instance', async () => {
const { memory, cleanup, url } = createSqliteMemory();
cleanups.push(cleanup);
const mem1 = new Memory().storage(memory).lastMessages(10);
const agent1 = new Agent('persist-test-1')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(mem1);
const threadId = `persist-${Date.now()}`;
const options = { persistence: { threadId, resourceId: 'test-user' } };
await agent1.generate('My favorite animal is a dolphin. Just acknowledge.', options);
// New SqliteMemory instance, same file
const memory2 = new SqliteMemory({ url });
const mem2 = new Memory().storage(memory2).lastMessages(10);
const agent2 = new Agent('persist-test-2')
.model(getModel('anthropic'))
.instructions('You are a helpful assistant. Be concise.')
.memory(mem2);
const result = await agent2.generate('What is my favorite animal?', options);
expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('dolphin');
});
});

View File

@@ -0,0 +1,403 @@
import { generateText } from 'ai';
import { expect, it } from 'vitest';
import {
Agent,
type AgentDbMessage,
type BuiltObservationStore,
type CompactFn,
createModel,
Memory,
type Observation,
type ObservationCursor,
OBSERVATION_SCHEMA_VERSION,
type ObserveFn,
} from '../../../index';
import { InMemoryMemory } from '../../../runtime/memory-store';
import { describeIf, findLastTextContent, getModel } from '../helpers';
const describe = describeIf('anthropic');
const WORKING_MEMORY_TEMPLATE = [
'# User Memory',
'- **Location**:',
'- **Project codename**:',
].join('\n');
type ObservationCycleStore = BuiltObservationStore &
Pick<InMemoryMemory, 'getWorkingMemory' | 'saveWorkingMemory'>;
function uniqueId(prefix: string): string {
return `${prefix}-${crypto.randomUUID()}`;
}
function messageText(message: AgentDbMessage): string {
if (!('content' in message) || !Array.isArray(message.content)) {
return `${message.type}: ${JSON.stringify(message)}`;
}
const text = message.content
.map((part) => {
if (part.type === 'text' || part.type === 'reasoning') return part.text;
if (part.type === 'tool-call') return `[tool:${part.toolName}] ${JSON.stringify(part.input)}`;
if (part.type === 'invalid-tool-call') return `[invalid-tool:${part.name ?? 'unknown'}]`;
if (part.type === 'file') return `[file:${part.mediaType ?? 'unknown'}]`;
if (part.type === 'citation') return `[citation:${part.title ?? part.url ?? 'unknown'}]`;
if (part.type === 'provider') return JSON.stringify(part.value);
return '';
})
.filter(Boolean)
.join(' ');
return `${message.role}: ${text}`;
}
function observationText(observation: Observation): string {
const payload = observation.payload;
if (payload !== null && typeof payload === 'object' && !Array.isArray(payload)) {
const text = (payload as Record<string, unknown>).text;
if (typeof text === 'string') return text;
}
return JSON.stringify(payload);
}
function observeWithModel(model: string): ObserveFn {
return async ({ deltaMessages, threadId, now }) => {
const transcript = deltaMessages.map(messageText).join('\n');
const { text } = await generateText({
model: createModel(model),
temperature: 0,
system: [
'Extract durable user facts from the transcript.',
'Return one concise observation sentence.',
'Preserve exact names, places, and codes.',
'If there are no durable facts, return NONE.',
].join(' '),
prompt: transcript,
});
const content = text.trim();
if (content.toUpperCase() === 'NONE') return [];
return [
{
scopeKind: 'thread',
scopeId: threadId,
kind: 'user-fact',
payload: { text: content },
durationMs: null,
schemaVersion: OBSERVATION_SCHEMA_VERSION,
createdAt: now,
},
];
};
}
function compactWithModel(model: string): CompactFn {
return async ({ observations, currentWorkingMemory, workingMemoryTemplate }) => {
const observationList = observations.map((observation) => `- ${observationText(observation)}`);
const { text } = await generateText({
model: createModel(model),
temperature: 0,
system: [
'You maintain a concise working-memory document.',
'Return the complete updated document only.',
'Preserve exact names, places, and codes.',
].join(' '),
prompt: [
'Template:',
workingMemoryTemplate,
'',
'Current working memory:',
currentWorkingMemory ?? workingMemoryTemplate,
'',
'New observations:',
observationList.join('\n'),
].join('\n'),
});
return { content: text.trim() };
};
}
async function runObservationCycleForTest({
store,
threadId,
resourceId,
model,
}: {
store: ObservationCycleStore;
threadId: string;
resourceId: string;
model: string;
}): Promise<{
deltaMessages: AgentDbMessage[];
cursorAfter: ObservationCursor | null;
}> {
const handle = await store.acquireObservationLock('thread', threadId, {
holderId: 'observational-memory-integration-test',
ttlMs: 30_000,
});
expect(handle).not.toBeNull();
if (!handle) throw new Error('Failed to acquire observation lock');
try {
const cursor = await store.getCursor('thread', threadId);
const deltaMessages = await store.getMessagesForScope('thread', threadId, {
...(cursor && {
since: {
sinceCreatedAt: cursor.lastObservedAt,
sinceMessageId: cursor.lastObservedMessageId,
},
}),
});
expect(deltaMessages.length).toBeGreaterThan(0);
const currentWorkingMemory = await store.getWorkingMemory({
threadId,
resourceId,
scope: 'resource',
});
const now = new Date();
const observedRows = await observeWithModel(model)({
deltaMessages,
currentWorkingMemory,
cursor,
threadId,
resourceId,
now,
trigger: { type: 'per-turn' },
gap: null,
telemetry: undefined,
});
const persistedRows = await store.appendObservations(observedRows);
expect(persistedRows.length).toBeGreaterThan(0);
const lastMessage = deltaMessages[deltaMessages.length - 1];
await store.setCursor({
scopeKind: 'thread',
scopeId: threadId,
lastObservedMessageId: lastMessage.id,
lastObservedAt: lastMessage.createdAt,
updatedAt: now,
});
const queuedRows = await store.getObservations({
scopeKind: 'thread',
scopeId: threadId,
schemaVersionAtMost: OBSERVATION_SCHEMA_VERSION,
});
expect(queuedRows.length).toBeGreaterThan(0);
const compacted = await compactWithModel(model)({
observations: queuedRows,
currentWorkingMemory,
workingMemoryTemplate: WORKING_MEMORY_TEMPLATE,
structured: false,
threadId,
resourceId,
model,
compactorPrompt: 'Compact thread-scoped observations into resource-scoped working memory.',
telemetry: undefined,
});
await store.saveWorkingMemory({ threadId, resourceId, scope: 'resource' }, compacted.content);
await store.deleteObservations(queuedRows.map((row) => row.id));
const remainingRows = await store.getObservations({
scopeKind: 'thread',
scopeId: threadId,
});
expect(remainingRows).toHaveLength(0);
return {
deltaMessages,
cursorAfter: await store.getCursor('thread', threadId),
};
} finally {
await store.releaseObservationLock(handle);
}
}
function createWriterAgent(model: string, store: InMemoryMemory): Agent {
return new Agent('observational-memory-writer')
.model(model)
.instructions('You are a helpful assistant. Acknowledge briefly, and do not repeat user facts.')
.memory(new Memory().storage(store).lastMessages(10));
}
function createReaderAgent(model: string, store: InMemoryMemory): Agent {
return new Agent('observational-memory-reader')
.model(model)
.instructions('Answer only from working memory. Be concise.')
.memory(
new Memory()
.storage(store)
.lastMessages(1)
.scope('resource')
.freeform(WORKING_MEMORY_TEMPLATE),
);
}
async function rememberFact(
agent: Agent,
fact: string,
options: { persistence: { threadId: string; resourceId: string } },
) {
const result = await agent.generate(`${fact} Reply with "noted".`, options);
expect(result.finishReason).toBe('stop');
expect(findLastTextContent(result.messages)).toBeTruthy();
}
async function addNeutralTurn(
agent: Agent,
options: { persistence: { threadId: string; resourceId: string } },
forbiddenTerms: string[],
) {
const result = await agent.generate('Reply only with "ok".', options);
expect(result.finishReason).toBe('stop');
const text = findLastTextContent(result.messages)?.toLowerCase() ?? '';
expect(text).toContain('ok');
for (const term of forbiddenTerms) {
expect(text).not.toContain(term);
}
}
function expectTextToContain(text: string | null | undefined, expectedTerms: string[]) {
const normalized = text?.toLowerCase() ?? '';
for (const term of expectedTerms) {
expect(normalized).toContain(term);
}
}
describe('observational memory integration', () => {
it('compacts observed thread facts into resource working memory for another thread', async () => {
const store = new InMemoryMemory();
const model = getModel('anthropic');
const resourceId = uniqueId('obs-resource');
const sourceThreadId = uniqueId('obs-source');
const readerThreadId = uniqueId('obs-reader');
const writer = createWriterAgent(model, store);
await rememberFact(writer, 'Please remember this for later: I live in Reykjavik.', {
persistence: { threadId: sourceThreadId, resourceId },
});
await runObservationCycleForTest({
store,
threadId: sourceThreadId,
resourceId,
model,
});
const reader = createReaderAgent(model, store);
const result = await reader.generate('From memory only, where do I live?', {
persistence: {
threadId: readerThreadId,
resourceId,
},
});
expectTextToContain(findLastTextContent(result.messages), ['reykjavik']);
});
it('uses compacted working memory inside the observed thread after the fact leaves chat history', async () => {
const store = new InMemoryMemory();
const model = getModel('anthropic');
const resourceId = uniqueId('obs-resource');
const sourceThreadId = uniqueId('obs-source');
const options = {
persistence: { threadId: sourceThreadId, resourceId },
};
const writer = createWriterAgent(model, store);
await rememberFact(
writer,
'Please remember this for later: I live in Reykjavik, and my project codename is Aurora-17.',
options,
);
await addNeutralTurn(writer, options, ['reykjavik', 'aurora-17']);
await runObservationCycleForTest({
store,
threadId: sourceThreadId,
resourceId,
model,
});
const workingMemory = await store.getWorkingMemory({
threadId: sourceThreadId,
resourceId,
scope: 'resource',
});
expectTextToContain(workingMemory, ['reykjavik', 'aurora-17']);
const reader = createReaderAgent(model, store);
const result = await reader.generate(
'From memory only, where do I live and what is my project codename?',
options,
);
expectTextToContain(findLastTextContent(result.messages), ['reykjavik', 'aurora-17']);
});
it('folds later turns from the same thread into existing working memory', async () => {
const store = new InMemoryMemory();
const model = getModel('anthropic');
const resourceId = uniqueId('obs-resource');
const sourceThreadId = uniqueId('obs-source');
const options = {
persistence: { threadId: sourceThreadId, resourceId },
};
const writer = createWriterAgent(model, store);
await rememberFact(
writer,
'Please remember this for later: I live in Reykjavik, and my project codename is Aurora-17.',
options,
);
await addNeutralTurn(writer, options, ['reykjavik', 'aurora-17']);
const firstCycle = await runObservationCycleForTest({
store,
threadId: sourceThreadId,
resourceId,
model,
});
await rememberFact(writer, 'Also remember that my editor theme is Solarized Dawn.', options);
await addNeutralTurn(writer, options, ['solarized', 'dawn']);
const secondCycle = await runObservationCycleForTest({
store,
threadId: sourceThreadId,
resourceId,
model,
});
expect(firstCycle.cursorAfter).not.toBeNull();
expect(secondCycle.cursorAfter?.lastObservedAt.getTime()).toBeGreaterThan(
firstCycle.cursorAfter!.lastObservedAt.getTime(),
);
const workingMemory = await store.getWorkingMemory({
threadId: sourceThreadId,
resourceId,
scope: 'resource',
});
expectTextToContain(workingMemory, ['reykjavik', 'aurora-17', 'solarized dawn']);
const reader = createReaderAgent(model, store);
const result = await reader.generate(
'From memory only, where do I live, what is my project codename, and what is my editor theme?',
options,
);
expectTextToContain(findLastTextContent(result.messages), [
'reykjavik',
'aurora-17',
'solarized',
'dawn',
]);
});
});
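The cursor bookkeeping these tests exercise (fetch only messages past the last observed watermark, then advance it) can be sketched in isolation. The types below and the lexicographic id tie-break are assumptions for illustration, not the real store contract:

```typescript
// Hypothetical sketch of the per-cycle delta fetch: return only messages
// strictly after the (lastObservedAt, lastObservedMessageId) watermark,
// so each observation cycle sees every turn exactly once.
type Msg = { id: string; createdAt: Date };
type Cursor = { lastObservedAt: Date; lastObservedMessageId: string };

function deltaSince(messages: Msg[], cursor: Cursor | null): Msg[] {
  if (!cursor) return messages; // first cycle: everything is new
  return messages.filter(
    (m) =>
      m.createdAt.getTime() > cursor.lastObservedAt.getTime() ||
      // same-millisecond tie-break by message id (an assumption of this sketch)
      (m.createdAt.getTime() === cursor.lastObservedAt.getTime() &&
        m.id > cursor.lastObservedMessageId),
  );
}

// After observing, the cursor advances to the last message in the delta.
function advanceCursor(delta: Msg[]): Cursor | null {
  const last = delta[delta.length - 1];
  return last ? { lastObservedAt: last.createdAt, lastObservedMessageId: last.id } : null;
}
```

This mirrors why the second cycle in the "folds later turns" test sees only the turns added after the first cycle's cursor.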

View File

@@ -6,7 +6,6 @@ import {
collectStreamChunks,
getModel,
chunksOfType,
findAllToolResults,
collectTextDeltas,
} from './helpers';
import { Agent, Tool } from '../../index';
@@ -43,15 +42,14 @@ describe('multi-tool-calls integration', () => {
);
const chunks = await collectStreamChunks(fullStream);
const messageChunks = chunksOfType(chunks, 'message');
const toolCallResults = findAllToolResults(messageChunks.map((c) => c.message));
const toolCallResults = chunksOfType(chunks, 'tool-result');
// Should have called the tool multiple times
const priceCalls = toolCallResults.filter((tc) => tc.toolName === 'lookup_price');
expect(priceCalls.length).toBeGreaterThanOrEqual(2);
// Each call should have its own correct output (not all pointing to the first result)
const outputs = priceCalls.map((tc) => tc.result as { product: string; price: number });
const outputs = priceCalls.map((tc) => tc.output as { product: string; price: number });
// Verify that different products got different prices (index-based merging works)
const uniquePrices = new Set(outputs.map((o) => o.price));
@@ -90,8 +88,7 @@ describe('multi-tool-calls integration', () => {
const { stream: fullStream } = await agent.stream('What is 3 + 4 and also what is 5 * 6?');
const chunks = await collectStreamChunks(fullStream);
const messageChunks = chunksOfType(chunks, 'message');
const toolCallResults = findAllToolResults(messageChunks.map((c) => c.message));
const toolCallResults = chunksOfType(chunks, 'tool-result');
const toolCalls = toolCallResults.filter(
(tc) => tc.toolName === 'add' || tc.toolName === 'multiply',
@@ -104,8 +101,8 @@ describe('multi-tool-calls integration', () => {
expect(addCall).toBeDefined();
expect(multiplyCall).toBeDefined();
expect((addCall!.result as { result: number }).result).toBe(7);
expect((multiplyCall!.result as { result: number }).result).toBe(30);
expect((addCall!.output as { result: number }).result).toBe(7);
expect((multiplyCall!.output as { result: number }).result).toBe(30);
});
it('correctly merges results via the run() path', async () => {
@@ -126,15 +123,14 @@ describe('multi-tool-calls integration', () => {
'What are the lengths of "hello" and "world"? Look up each one separately.',
);
const chunks = await collectStreamChunks(fullStream);
const messageChunks = chunksOfType(chunks, 'message');
const toolCallResults = findAllToolResults(messageChunks.map((c) => c.message));
const toolCallResults = chunksOfType(chunks, 'tool-result');
const lengthCalls = toolCallResults.filter((tc) => tc.toolName === 'get_length');
expect(lengthCalls.length).toBeGreaterThanOrEqual(2);
// Each should have correct output
for (const call of lengthCalls) {
const output = call.result as { text: string; length: number };
const output = call.output as { text: string; length: number };
expect(output.length).toBe(output.text.length);
}
});

View File

@@ -28,95 +28,92 @@ describe('orphaned tool messages in memory', () => {
}
/**
* Seed memory with a conversation that has tool-call / tool-result pairs
* surrounded by plain user/assistant exchanges.
* Seed memory with a conversation that has settled tool-call blocks
* (state: 'resolved') surrounded by plain user/assistant exchanges.
*
* Message layout (indices 0–7):
* 0: user "How many widgets?"
* 1: assistant text + tool-call(call_1)
* 2: tool tool-result(call_1)
* 3: assistant "There are 10 widgets"
* 4: user "What about gadgets?"
* 5: assistant text + tool-call(call_2)
* 6: tool tool-result(call_2)
* 7: assistant "There are 5 gadgets"
* Message layout (indices 0–5):
* 0: user "How many widgets?"
* 1: assistant text + tool-call(call_1, state:'resolved', output:{count:10})
* 2: assistant "There are 10 widgets"
* 3: user "What about gadgets?"
* 4: assistant text + tool-call(call_2, state:'resolved', output:{count:5})
* 5: assistant "There are 5 gadgets"
*/
function buildSeedMessages(): AgentDbMessage[] {
const now = Date.now();
return [
{
id: 'm1',
createdAt: new Date(),
createdAt: new Date(now),
role: 'user',
content: [{ type: 'text', text: 'How many widgets do we have?' }],
},
{
id: 'm2',
createdAt: new Date(),
createdAt: new Date(now + 1),
role: 'assistant',
content: [
{ type: 'text', text: 'Let me look that up.' },
{ type: 'tool-call', toolCallId: 'call_1', toolName: 'lookup', input: { id: 'widgets' } },
{
type: 'tool-call',
toolCallId: 'call_1',
toolName: 'lookup',
input: { id: 'widgets' },
state: 'resolved',
output: { count: 10 },
},
],
},
{
id: 'm3',
createdAt: new Date(),
role: 'tool',
content: [
{ type: 'tool-result', toolCallId: 'call_1', toolName: 'lookup', result: { count: 10 } },
],
},
{
id: 'm4',
createdAt: new Date(),
createdAt: new Date(now + 2),
role: 'assistant',
content: [{ type: 'text', text: 'There are 10 widgets in stock.' }],
},
{
id: 'm5',
createdAt: new Date(),
id: 'm4',
createdAt: new Date(now + 3),
role: 'user',
content: [{ type: 'text', text: 'What about gadgets?' }],
},
{
id: 'm6',
createdAt: new Date(),
id: 'm5',
createdAt: new Date(now + 4),
role: 'assistant',
content: [
{ type: 'text', text: 'Let me check.' },
{ type: 'tool-call', toolCallId: 'call_2', toolName: 'lookup', input: { id: 'gadgets' } },
{
type: 'tool-call',
toolCallId: 'call_2',
toolName: 'lookup',
input: { id: 'gadgets' },
state: 'resolved',
output: { count: 5 },
},
],
},
{
id: 'm7',
createdAt: new Date(),
role: 'tool',
content: [
{ type: 'tool-result', toolCallId: 'call_2', toolName: 'lookup', result: { count: 5 } },
],
},
{
id: 'm8',
createdAt: new Date(),
id: 'm6',
createdAt: new Date(now + 5),
role: 'assistant',
content: [{ type: 'text', text: 'There are 5 gadgets in stock.' }],
},
];
}
it('handles orphaned tool results when tool-call message is truncated from history', async () => {
it('handles partial history window when earlier messages are truncated', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const threadId = 'thread-orphan-result';
// Seed 8 messages into the thread
// Seed 6 messages into the thread
await memory.saveMessages({ threadId, messages: buildSeedMessages() });
// lastMessages=6 → loads messages 2–7
// Message at index 2 is a tool-result for call_1, but the matching
// assistant+tool-call (index 1) is truncated. This is an orphaned tool result.
const mem = new Memory().storage(memory).lastMessages(6);
// lastMessages=4 → loads messages 2–5
// Each tool-call block carries its own result (state:'resolved'), so there
// are no orphan issues regardless of window boundaries.
const mem = new Memory().storage(memory).lastMessages(4);
const agent = new Agent('orphan-result-test')
.model(getModel('anthropic'))
@@ -132,7 +129,7 @@ describe('orphaned tool messages in memory', () => {
expect(result.finishReason).toBe('stop');
});
it('handles orphaned tool calls when tool-result message is truncated from history', async () => {
it('handles pending tool-call blocks (interrupted turn) in history', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
@@ -140,8 +137,9 @@
const now = Date.now();
// Store a conversation where the last saved message is an assistant
// with a tool-call but the tool-result was never persisted (simulating
// a partial save / interrupted turn).
// with a pending tool-call block (simulating a partial save / interrupted turn).
// stripOrphanedToolMessages will drop the pending block so the LLM receives
// only the user message.
const messages: AgentDbMessage[] = [
{
id: 'm1',
@@ -160,6 +158,7 @@
toolCallId: 'call_orphan',
toolName: 'lookup',
input: { id: 'widgets' },
state: 'pending',
},
],
},
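The stripping behavior the comment above describes can be sketched roughly as below. This is a hypothetical stand-in for stripOrphanedToolMessages, whose real implementation may differ (for instance, in whether a message that also carries text survives):

```typescript
// Drop unresolved (pending) tool-call blocks before the history reaches the
// LLM; messages left with no content afterwards are removed entirely, so an
// interrupted turn leaves only the user message behind.
type Block =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolCallId: string; state: 'pending' | 'resolved' };
type HistoryMessage = { role: 'user' | 'assistant'; content: Block[] };

function stripPendingToolCalls(messages: HistoryMessage[]): HistoryMessage[] {
  return messages
    .map((message) => ({
      ...message,
      content: message.content.filter(
        (block) => !(block.type === 'tool-call' && block.state === 'pending'),
      ),
    }))
    .filter((message) => message.content.length > 0);
}
```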

View File

@@ -183,7 +183,7 @@ describe('external abort signal', () => {
});
expect(result.finishReason).toBe('error');
expect(result.getState().status).toBe('cancelled');
expect(agent.getState().status).toBe('cancelled');
});
it('cancels a stream() call via external AbortSignal', async () => {

View File

@@ -55,10 +55,8 @@ describe('provider tools integration', () => {
const lastFinish = finishChunks[finishChunks.length - 1];
expect(lastFinish?.type === 'finish' && lastFinish.finishReason).toBe('stop');
// Collect tool calls from message chunks
const messageChunks = chunksOfType(chunks, 'message');
const allMessages = messageChunks.map((c) => c.message);
const toolCalls = findAllToolCalls(allMessages);
// Tool calls now ride their own discrete `tool-call` chunks
const toolCalls = chunksOfType(chunks, 'tool-call');
const webSearchCall = toolCalls.find((tc) => tc.toolName.includes('web_search'));
expect(webSearchCall).toBeDefined();
@@ -104,9 +102,8 @@ describe('provider tools integration', () => {
expect(suspended.runId).toBeTruthy();
expect(suspended.toolCallId).toBeTruthy();
// The web search provider tool call should appear in the message history
const messageChunks = chunksOfType(chunks, 'message');
const toolCalls = findAllToolCalls(messageChunks.map((c) => c.message));
// The web search provider tool call should appear as a discrete tool-call chunk
const toolCalls = chunksOfType(chunks, 'tool-call');
const webSearchCall = toolCalls.find((tc) => tc.toolName.includes('web_search'));
expect(webSearchCall).toBeDefined();
@@ -115,8 +112,8 @@ describe('provider tools integration', () => {
'stream',
{ approved: true },
{
runId: suspended.runId!,
toolCallId: suspended.toolCallId!,
runId: suspended.runId,
toolCallId: suspended.toolCallId,
},
);
const resumeChunks = await collectStreamChunks(resumeStream.stream);

View File

@@ -155,16 +155,8 @@ describe('state restore after suspension', () => {
const errorChunks = resumedChunks.filter((c) => c.type === 'error');
expect(errorChunks).toHaveLength(0);
// Stream must contain the tool result message
const toolResultChunks = resumedChunks.filter(
(c) =>
c.type === 'message' &&
'message' in c &&
'content' in (c.message as object) &&
(c.message as { content: Array<{ type: string }> }).content.some(
(part) => part.type === 'tool-result',
),
);
// Stream must contain a discrete tool-result chunk for the resumed call
const toolResultChunks = chunksOfType(resumedChunks, 'tool-result');
expect(toolResultChunks.length).toBeGreaterThan(0);
// Stream must end with a finish chunk (not error)

View File

@@ -7,7 +7,7 @@ import { Agent, Tool } from '../../index';
const describe = describeIf('anthropic');
describe('stream timing', () => {
it('tool-call-delta chunks arrive incrementally (not all buffered)', async () => {
it('tool-input-delta chunks arrive incrementally (not all buffered)', async () => {
const agent = new Agent('timing-test')
.model(getModel('anthropic'))
.instructions(
@@ -31,16 +31,21 @@ describe('stream timing', () => {
const reader = result.stream.getReader();
// Track timestamps of each reader.read() that returns a tool-call-delta
// Track timestamps of each reader.read() that returns a tool-input-delta
// for the set_code tool. We seed `setCodeToolCallId` from the matching
// tool-input-start so subsequent deltas can be filtered by toolCallId.
// This measures when the reader YIELDS each chunk, not when the agent enqueues it.
const deltaReadTimes: number[] = [];
const start = Date.now();
let setCodeToolCallId: string | undefined;
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = value;
if (chunk.type === 'tool-call-delta' && (chunk as { name?: string }).name === 'set_code') {
if (chunk.type === 'tool-input-start' && chunk.toolName === 'set_code') {
setCodeToolCallId = chunk.toolCallId;
} else if (chunk.type === 'tool-input-delta' && chunk.toolCallId === setCodeToolCallId) {
deltaReadTimes.push(Date.now() - start);
}
}
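The start/delta correlation pattern used in the loop above can be isolated into a small helper. The chunk shapes here are trimmed-down assumptions, keeping only the fields the pattern needs:

```typescript
// Collect the streamed input fragments for one named tool: capture the
// toolCallId from its tool-input-start chunk, then keep only the
// tool-input-delta chunks carrying that same id.
type StreamPart =
  | { type: 'tool-input-start'; toolCallId: string; toolName: string }
  | { type: 'tool-input-delta'; toolCallId: string; delta: string }
  | { type: 'text-delta'; text: string };

function collectToolInput(chunks: StreamPart[], toolName: string): string {
  let targetId: string | undefined;
  const fragments: string[] = [];
  for (const chunk of chunks) {
    if (chunk.type === 'tool-input-start' && chunk.toolName === toolName) {
      targetId = chunk.toolCallId; // seed the id from the matching start chunk
    } else if (chunk.type === 'tool-input-delta' && chunk.toolCallId === targetId) {
      fragments.push(chunk.delta); // deltas from other calls are filtered out
    }
  }
  return fragments.join('');
}
```

The same filtering keeps the timing test from counting deltas that belong to a different concurrent tool call.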

View File

@@ -5,10 +5,8 @@ import {
collectStreamChunks,
collectTextDeltas,
describeIf,
findAllToolResults,
getModel,
} from './helpers';
import type { StreamChunk } from '../../index';
import { Agent } from '../../index';
const describe = describeIf('anthropic');
@@ -33,10 +31,7 @@ describe('sub-agent (asTool) integration', () => {
const chunks = await collectStreamChunks(fullStream);
const text = collectTextDeltas(chunks);
const messageChunks = chunksOfType(chunks, 'message') as Array<
StreamChunk & { type: 'message' }
>;
const toolResults = findAllToolResults(messageChunks.map((c) => c.message));
const toolResults = chunksOfType(chunks, 'tool-result');
// The orchestrator should have called the sub-agent tool
expect(toolResults.length).toBeGreaterThan(0);
@@ -44,7 +39,7 @@ describe('sub-agent (asTool) integration', () => {
expect(mathCall).toBeDefined();
// The output should contain the sub-agent's response
expect(mathCall!.result).toBeDefined();
expect(mathCall!.output).toBeDefined();
// The final text should reference 60
expect(text).toBeTruthy();
@@ -80,10 +75,7 @@ describe('sub-agent (asTool) integration', () => {
'Translate "hello" to French and then make it uppercase.',
);
const chunks = await collectStreamChunks(fullStream);
const messageChunks = chunksOfType(chunks, 'message') as Array<
StreamChunk & { type: 'message' }
>;
const toolResults = findAllToolResults(messageChunks.map((c) => c.message));
const toolResults = chunksOfType(chunks, 'tool-result');
// Should have called both tools
expect(toolResults.length).toBeGreaterThanOrEqual(2);

View File

@@ -63,11 +63,12 @@ describe('toModelOutput integration', () => {
expect(rawOutput.total).toBe(3);
expect(rawOutput.records[0].data).toBe('x'.repeat(200));
// ContentToolResult in messages stores the transformed output (what the LLM saw)
// Tool-call block in messages stores the transformed output (what the LLM saw)
const toolResults = findAllToolResults(result.messages);
const searchToolResult = toolResults.find((tr) => tr.toolName === 'search_db');
expect(searchToolResult).toBeDefined();
const modelOutput = searchToolResult!.result as { summary: string };
expect(searchToolResult!.state).toBe('resolved');
const modelOutput = (searchToolResult as unknown as { output: { summary: string } }).output;
expect(modelOutput.summary).toContain('Found 3 records');
expect(modelOutput.summary).toContain('Widget A');
});
@@ -106,15 +107,14 @@ describe('toModelOutput integration', () => {
const { stream } = await agent.stream('Get report RPT-001');
const chunks = await collectStreamChunks(stream);
// The tool result messages in the stream contain the transformed output
const messageChunks = chunksOfType(chunks, 'message');
const toolResults = findAllToolResults(messageChunks.map((c) => c.message));
// The discrete tool-result chunks in the stream contain the transformed output
const toolResults = chunksOfType(chunks, 'tool-result');
const reportResult = toolResults.find((tr) => tr.toolName === 'fetch_report');
expect(reportResult).toBeDefined();
// The model output (transformed) should have the truncated fields
const modelOutput = reportResult!.result as { id: string; title: string; pageCount: number };
const modelOutput = reportResult!.output as { id: string; title: string; pageCount: number };
expect(modelOutput.id).toBe('RPT-001');
expect(modelOutput.title).toBe('Q4 Sales Report');
expect(modelOutput.pageCount).toBe(42);
@@ -140,11 +140,14 @@ describe('toModelOutput integration', () => {
const result = await agent.generate('Echo the message "hello world"');
// Without toModelOutput, tool result in messages should have the raw output
// Without toModelOutput, tool-call block in messages has the raw output
const toolResults = findAllToolResults(result.messages);
const echoResult = toolResults.find((tr) => tr.toolName === 'echo');
expect(echoResult).toBeDefined();
expect((echoResult!.result as { echoed: string }).echoed).toBe('hello world');
expect(echoResult!.state).toBe('resolved');
expect((echoResult as unknown as { output: { echoed: string } }).output.echoed).toBe(
'hello world',
);
// And toolCalls should also have the same raw output
expect(result.toolCalls).toBeDefined();
@@ -196,11 +199,14 @@ describe('toModelOutput integration', () => {
expect(multiplyEntry).toBeDefined();
expect((multiplyEntry!.output as { result: number }).result).toBe(56);
// Tool result in messages stores the transformed output for the LLM
// Tool-call block in messages stores the transformed output for the LLM
const toolResults = findAllToolResults(result.messages);
const multiplyToolResult = toolResults.find((tr) => tr.toolName === 'multiply');
expect(multiplyToolResult).toBeDefined();
const modelOutput = multiplyToolResult!.result as { answer: number; note: string };
expect(multiplyToolResult!.state).toBe('resolved');
const modelOutput = (
multiplyToolResult as unknown as { output: { answer: number; note: string } }
).output;
expect(modelOutput.answer).toBe(56);
expect(modelOutput.note).toBe('multiplication complete');

View File

@@ -0,0 +1,222 @@
/**
* Upsert contract: after a HITL suspend/resume cycle backed by SqliteMemory,
* the thread must contain exactly ONE assistant message with the tool-call
* block (no duplicate rows), and that block must have state: 'resolved'.
*
* The upsert matters because on resume the runtime calls saveToMemory with
* turnDelta(), which includes the now-resolved assistant message restored from
* the checkpoint. Without upsert-by-id, a second row would be inserted for
* the same message, breaking the thread ordering contract.
*
* Note: messages with state:'pending' are transient and are NOT written to
* memory during suspension; they only live in the checkpoint. Memory only
* receives the final settled state after resume completes.
*/
import { afterEach, expect, it } from 'vitest';
import { z } from 'zod';
import { describeIf, createSqliteMemory, getModel } from './helpers';
import { Agent, filterLlmMessages, Memory, Tool } from '../../index';
import type { AgentDbMessage } from '../../index';
import type { ContentToolCall, Message } from '../../types/sdk/message';
const describe = describeIf('anthropic');
describe('tool-call upsert via suspend/resume (SqliteMemory)', () => {
const cleanups: Array<() => void> = [];
afterEach(() => {
for (const fn of cleanups) fn();
cleanups.length = 0;
});
function extractToolCallBlocks(messages: AgentDbMessage[]): ContentToolCall[] {
return filterLlmMessages(messages).flatMap((m) =>
m.content.filter((c): c is ContentToolCall => c.type === 'tool-call'),
);
}
function buildInterruptibleAgent(memory: ReturnType<typeof createSqliteMemory>['memory']): Agent {
const deleteTool = new Tool('delete_file')
.description('Delete a file at the given path')
.input(z.object({ path: z.string().describe('File path to delete') }))
.output(z.object({ deleted: z.boolean(), path: z.string() }))
.suspend(z.object({ message: z.string(), severity: z.string() }))
.resume(z.object({ approved: z.boolean() }))
.handler(async ({ path }, ctx) => {
if (!ctx.resumeData) {
return await ctx.suspend({ message: `Delete "${path}"?`, severity: 'destructive' });
}
if (!ctx.resumeData.approved) return { deleted: false, path };
return { deleted: true, path };
});
return new Agent('upsert-test-agent')
.model(getModel('anthropic'))
.instructions(
'You are a file manager. When asked to delete a file, use the delete_file tool. Be concise.',
)
.tool(deleteTool)
.memory(new Memory().storage(memory))
.checkpoint('memory');
}
it('after resume, thread has exactly one resolved tool-call block (no duplicate rows)', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const threadId = 'thread-upsert-resolved';
const resourceId = 'res-1';
const persistence = { threadId, resourceId };
const agent = buildInterruptibleAgent(memory);
// Turn 1: trigger the suspend — messages with pending tool-call are
// stored in the checkpoint only, NOT in SqliteMemory yet.
const suspendResult = await agent.generate('Please delete /tmp/foo.txt', {
persistence,
});
expect(suspendResult.finishReason).toBe('tool-calls');
expect(suspendResult.pendingSuspend).toBeDefined();
const { runId, toolCallId } = suspendResult.pendingSuspend![0];
// Before resume: no tool-call blocks in memory (pending stays in checkpoint)
const msgsBefore = await memory.getMessages(threadId);
const blocksBefore = extractToolCallBlocks(msgsBefore);
expect(blocksBefore).toHaveLength(0);
// Turn 2: resume with approval — on completion saveToMemory is called and
// the assistant message (now resolved) is written for the first time.
const resumeResult = await agent.resume(
'generate',
{ approved: true },
{
runId,
toolCallId,
},
);
expect(resumeResult.finishReason).toBe('stop');
// After resume: exactly one resolved tool-call block, no duplicate rows
const msgsAfter = await memory.getMessages(threadId);
const blocksAfter = extractToolCallBlocks(msgsAfter);
expect(blocksAfter).toHaveLength(1);
expect(blocksAfter[0].state).toBe('resolved');
expect(blocksAfter[0].toolCallId).toBe(toolCallId);
expect((blocksAfter[0] as ContentToolCall & { state: 'resolved' }).output).toMatchObject({
deleted: true,
});
// No duplicate assistant messages with tool-call blocks
const assistantMsgsWithToolCalls = filterLlmMessages(msgsAfter).filter(
(m) => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
);
expect(assistantMsgsWithToolCalls).toHaveLength(1);
});
it('after resume with denial, thread has exactly one resolved tool-call block', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const threadId = 'thread-upsert-denied';
const resourceId = 'res-2';
const persistence = { threadId, resourceId };
const agent = buildInterruptibleAgent(memory);
const suspendResult = await agent.generate('Please delete /tmp/bar.txt', {
persistence,
});
expect(suspendResult.finishReason).toBe('tool-calls');
const { runId, toolCallId } = suspendResult.pendingSuspend![0];
// Before resume: no messages in memory
const msgsBefore = await memory.getMessages(threadId);
expect(extractToolCallBlocks(msgsBefore)).toHaveLength(0);
const resumeResult = await agent.resume(
'generate',
{ approved: false },
{
runId,
toolCallId,
},
);
expect(resumeResult.finishReason).toBe('stop');
const msgsAfter = await memory.getMessages(threadId);
const blocksAfter = extractToolCallBlocks(msgsAfter);
// Tool ran and returned {deleted: false} — still resolved, not rejected
expect(blocksAfter).toHaveLength(1);
expect(blocksAfter[0].state).toBe('resolved');
const output = (blocksAfter[0] as ContentToolCall & { state: 'resolved' }).output;
expect(output).toMatchObject({ deleted: false });
// No duplicate rows
const assistantMsgsWithToolCalls = filterLlmMessages(msgsAfter).filter(
(m) => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
);
expect(assistantMsgsWithToolCalls).toHaveLength(1);
});
it('if same thread is resumed twice (re-suspend then resume again), still no duplicate rows', async () => {
const { memory, cleanup } = createSqliteMemory();
cleanups.push(cleanup);
const threadId = 'thread-upsert-double';
const resourceId = 'res-3';
const persistence = { threadId, resourceId };
// Use a tool that always re-suspends on first call and approves on second
let callCount = 0;
const confirmTool = new Tool('confirm')
.description('Confirm an action')
.input(z.object({ action: z.string() }))
.output(z.object({ done: z.boolean() }))
.suspend(z.object({ question: z.string() }))
.resume(z.object({ yes: z.boolean() }))
.handler(async ({ action }, ctx) => {
callCount++;
if (!ctx.resumeData) {
return await ctx.suspend({ question: `Confirm: ${action}?` });
}
return { done: ctx.resumeData.yes };
});
const agent = new Agent('double-upsert-agent')
.model(getModel('anthropic'))
.instructions('Use confirm tool for every action. Be concise.')
.tool(confirmTool)
.memory(new Memory().storage(memory))
.checkpoint('memory');
// Turn 1: suspend
const r1 = await agent.generate('confirm action: foo', { persistence });
expect(r1.finishReason).toBe('tool-calls');
const { runId, toolCallId } = r1.pendingSuspend![0];
// No messages in memory yet
expect(await memory.getMessages(threadId)).toHaveLength(0);
// Resume: completes
const r2 = await agent.resume('generate', { yes: true }, { runId, toolCallId });
expect(r2.finishReason).toBe('stop');
const finalMessages = await memory.getMessages(threadId);
const toolCallBlocks = extractToolCallBlocks(finalMessages);
// Exactly one tool-call block, no duplicates
expect(toolCallBlocks).toHaveLength(1);
expect(toolCallBlocks[0].state).toBe('resolved');
// And the assistant message with the tool-call appears exactly once
const assistantMsgsWithCalls = filterLlmMessages(finalMessages).filter(
(m): m is Message => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
);
expect(assistantMsgsWithCalls).toHaveLength(1);
});
});

View File

@@ -5,7 +5,6 @@ import {
collectStreamChunks,
chunksOfType,
collectTextDeltas,
findAllToolResults,
createAgentWithAlwaysErrorTool,
createAgentWithFlakyTool,
} from './helpers';
@@ -55,20 +54,20 @@ describe('tool error handling integration', () => {
expect(mentionsFailure).toBe(true);
});
it('error tool-result appears in the message list', async () => {
it('error tool-result appears in the stream', async () => {
const agent = createAgentWithAlwaysErrorTool('anthropic');
const { stream } = await agent.stream('Fetch the data for id "abc123".');
const chunks = await collectStreamChunks(stream);
// There should be a tool-result message in the stream
const messageChunks = chunksOfType(chunks, 'message');
const toolResults = findAllToolResults(messageChunks.map((c) => c.message));
// There should be a discrete tool-result chunk for the failed call
const toolResults = chunksOfType(chunks, 'tool-result');
// The tool should have been called and produced a result (even if it errored)
expect(toolResults.length).toBeGreaterThan(0);
const brokenResult = toolResults.find((r) => r.toolName === 'broken_tool');
expect(brokenResult).toBeDefined();
expect(brokenResult!.isError).toBe(true);
});
it('LLM can self-correct by retrying a flaky tool', async () => {

View File

@@ -8,7 +8,7 @@ import {
createAgentWithMixedTools,
createAgentWithParallelInterruptibleCalls,
} from './helpers';
import { isLlmMessage, type StreamChunk } from '../../index';
import type { StreamChunk } from '../../index';
const describe = describeIf('anthropic');
@@ -36,13 +36,8 @@ describe('tool interrupt integration', () => {
);
// No tool-result should appear (tool is suspended)
const contentChunks = chunks.filter(
(c) =>
c.type === 'message' &&
'content' in c &&
(c.content as { type: string }).type === 'tool-result',
);
expect(contentChunks).toHaveLength(0);
const toolResultChunks = chunksOfType(chunks, 'tool-result');
expect(toolResultChunks).toHaveLength(0);
});
it('resumes the stream after resume with approval', async () => {
@@ -58,19 +53,14 @@ describe('tool interrupt integration', () => {
const resumedStream = await agent.resume(
'stream',
{ approved: true },
{ runId: suspended.runId!, toolCallId: suspended.toolCallId! },
{ runId: suspended.runId, toolCallId: suspended.toolCallId },
);
const resumedChunks = await collectStreamChunks(resumedStream.stream);
const resumedTypes = resumedChunks.map((c) => c.type);
// After approval, tool-result should appear as content chunk
const toolResultChunks = resumedChunks.filter(
(c) =>
c.type === 'message' &&
isLlmMessage(c.message) &&
c.message.content.some((c) => c.type === 'tool-result'),
);
// After approval, a discrete tool-result chunk should appear
const toolResultChunks = chunksOfType(resumedChunks, 'tool-result');
expect(toolResultChunks.length).toBeGreaterThan(0);
expect(resumedTypes).toContain('text-delta');
@@ -89,7 +79,7 @@ describe('tool interrupt integration', () => {
const resumedStream = await agent.resume(
'stream',
{ approved: false },
{ runId: suspended.runId!, toolCallId: suspended.toolCallId! },
{ runId: suspended.runId, toolCallId: suspended.toolCallId },
);
const resumedChunks = await collectStreamChunks(resumedStream.stream);
@@ -119,7 +109,7 @@ describe('tool interrupt integration', () => {
const stream2 = await agent.resume(
'stream',
{ approved: true },
{ runId: suspended1.runId!, toolCallId: suspended1.toolCallId! },
{ runId: suspended1.runId, toolCallId: suspended1.toolCallId },
);
const chunks2 = await collectStreamChunks(stream2.stream);
@@ -136,7 +126,7 @@ describe('tool interrupt integration', () => {
const stream3 = await agent.resume(
'stream',
{ approved: true },
{ runId: suspended2.runId!, toolCallId: suspended2.toolCallId! },
{ runId: suspended2.runId, toolCallId: suspended2.toolCallId },
);
const chunks3 = await collectStreamChunks(stream3.stream);
@@ -162,13 +152,8 @@ describe('tool interrupt integration', () => {
const chunks = await collectStreamChunks(fullStream);
// list_files should auto-execute — its result should appear as content
const toolResultChunks = chunks.filter(
(c) =>
c.type === 'message' &&
isLlmMessage(c.message) &&
c.message.content.some((c) => c.type === 'tool-result'),
);
// list_files should auto-execute — its result should appear as a discrete tool-result chunk
const toolResultChunks = chunksOfType(chunks, 'tool-result');
expect(toolResultChunks.length).toBeGreaterThan(0);
// delete_file should be suspended

View File

@@ -69,7 +69,10 @@ describe('workspace agent integration', () => {
const readResult = toolResults.find((tr) => tr.toolName === 'workspace_read_file');
expect(readResult).toBeDefined();
expect((readResult!.result as { content: string }).content).toContain('Hello from n8n!');
expect(readResult!.state).toBe('resolved');
expect((readResult as unknown as { output: { content: string } }).output.content).toContain(
'Hello from n8n!',
);
expect(memFs.getFileContent('/greeting.txt')).toBe('Hello from n8n!');
});
@@ -103,7 +106,8 @@ describe('workspace agent integration', () => {
const toolResults = findAllToolResults(result.messages);
const execResult = toolResults.find((tr) => tr.toolName === 'workspace_execute_command');
expect(execResult).toBeDefined();
expect((execResult!.result as { success: boolean }).success).toBe(true);
expect(execResult!.state).toBe('resolved');
expect((execResult as unknown as { output: { success: boolean } }).output.success).toBe(true);
});
it('agent uses workspace_mkdir and workspace_list_files together', async () => {
@@ -130,7 +134,8 @@ describe('workspace agent integration', () => {
const toolResults = findAllToolResults(result.messages);
const listResult = toolResults.find((tr) => tr.toolName === 'workspace_list_files');
expect(listResult).toBeDefined();
const entries = (listResult!.result as unknown as { entries: FileEntry[] }).entries;
expect(listResult!.state).toBe('resolved');
const entries = (listResult as unknown as { output: { entries: FileEntry[] } }).output.entries;
const names = entries.map((e) => e.name);
expect(names).toContain('index.ts');
expect(names).toContain('README.md');
@@ -201,7 +206,8 @@ describe('workspace agent integration', () => {
const toolResults = findAllToolResults(result.messages);
const statResult = toolResults.find((tr) => tr.toolName === 'workspace_file_stat');
expect(statResult).toBeDefined();
const stat = statResult!.result as { type: string; size: number };
expect(statResult!.state).toBe('resolved');
const stat = (statResult as unknown as { output: { type: string; size: number } }).output;
expect(stat.type).toBe('file');
expect(stat.size).toBe(29);
});
@@ -233,7 +239,10 @@ describe('workspace agent integration', () => {
const readResult = toolResults.find((tr) => tr.toolName === 'workspace_read_file');
expect(readResult).toBeDefined();
expect((readResult!.result as { content: string }).content).toContain('export default {}');
expect(readResult!.state).toBe('resolved');
expect((readResult as unknown as { output: { content: string } }).output.content).toContain(
'export default {}',
);
expect(memFs.getFileContent('/app/config.ts')).toBe('export default {}');
});

View File

@@ -45,12 +45,12 @@ describe('Zod validation errors surface to LLM and allow self-correction', () =>
expect(result.finishReason).toBe('stop');
expect(result.error).toBeUndefined();
// At least two tool-result messages: one error, one success
// At least two tool-call messages: one rejected, one resolved
const allMessages = filterLlmMessages(result.messages);
const toolResultMessages = allMessages.filter((m) =>
m.content.some((c) => c.type === 'tool-result'),
const toolCallMessages = allMessages.filter((m) =>
m.content.some((c) => c.type === 'tool-call'),
);
expect(toolResultMessages.length).toBeGreaterThanOrEqual(2);
expect(toolCallMessages.length).toBeGreaterThanOrEqual(2);
// The final response should mention a user (age 25 or similar)
const text = findLastTextContent(result.messages);

View File

@@ -0,0 +1,201 @@
const mockExporterConfigs: unknown[] = [];
const mockBatchProcessorInputs: unknown[] = [];
const mockBatchProcessorInstances: Array<{
forceFlush: jest.Mock<Promise<void>, []>;
onStart: jest.Mock<void, [unknown, unknown]>;
onEnd: jest.Mock<void, [unknown]>;
shutdown: jest.Mock<Promise<void>, []>;
}> = [];
const mockProviderConfigs: unknown[] = [];
const mockAwaitPendingTraceBatches = jest.fn(async () => await Promise.resolve());
const mockTracer = { startSpan: jest.fn() };
const mockProvider = {
getTracer: jest.fn(() => mockTracer),
register: jest.fn(),
forceFlush: jest.fn(),
shutdown: jest.fn(),
};
jest.mock('langsmith/experimental/otel/exporter', () => ({
LangSmithOTLPTraceExporter: jest.fn((config: unknown) => {
mockExporterConfigs.push(config);
return { type: 'exporter' };
}),
}));
jest.mock('@opentelemetry/sdk-trace-base', () => ({
BatchSpanProcessor: jest.fn((exporter: unknown) => {
mockBatchProcessorInputs.push(exporter);
const processor = {
forceFlush: jest.fn(async () => await Promise.resolve()),
onStart: jest.fn(),
onEnd: jest.fn(),
shutdown: jest.fn(async () => await Promise.resolve()),
};
mockBatchProcessorInstances.push(processor);
return processor;
}),
}));
jest.mock('langsmith', () => ({
RunTree: {
getSharedClient: jest.fn(() => ({
awaitPendingTraceBatches: mockAwaitPendingTraceBatches,
})),
},
}));
jest.mock('@opentelemetry/sdk-trace-node', () => ({
NodeTracerProvider: jest.fn((config: unknown) => {
mockProviderConfigs.push(config);
return mockProvider;
}),
}));
import { LangSmithTelemetry } from '../integrations/langsmith';
describe('LangSmithTelemetry', () => {
const previousTracingV2 = process.env.LANGCHAIN_TRACING_V2;
beforeEach(() => {
mockExporterConfigs.length = 0;
mockBatchProcessorInputs.length = 0;
mockBatchProcessorInstances.length = 0;
mockProviderConfigs.length = 0;
mockAwaitPendingTraceBatches.mockClear();
mockProvider.getTracer.mockClear();
mockProvider.register.mockClear();
mockProvider.forceFlush.mockClear();
mockProvider.shutdown.mockClear();
delete process.env.LANGCHAIN_TRACING_V2;
});
afterAll(() => {
if (previousTracingV2 === undefined) {
delete process.env.LANGCHAIN_TRACING_V2;
} else {
process.env.LANGCHAIN_TRACING_V2 = previousTracingV2;
}
});
it('passes proxy headers and derived OTLP URL to the LangSmith exporter', async () => {
const transformExportedSpan = (span: unknown) => span;
const getHeaders = jest.fn(async () => {
await Promise.resolve();
return { Authorization: 'Bearer proxy-token' } satisfies Record<string, string>;
});
const built = await new LangSmithTelemetry({
apiKey: '-',
project: 'instance-ai',
endpoint: 'https://ai-proxy.test/langsmith',
headers: getHeaders,
transformExportedSpan,
}).build();
expect(getHeaders).toHaveBeenCalledTimes(1);
expect(mockExporterConfigs).toEqual([
{
apiKey: '-',
projectName: 'instance-ai',
headers: { Authorization: 'Bearer proxy-token' },
transformExportedSpan,
url: 'https://ai-proxy.test/langsmith/otel/v1/traces',
},
]);
expect(mockBatchProcessorInputs).toEqual([{ type: 'exporter' }]);
expect(mockProviderConfigs).toHaveLength(1);
const providerConfig = mockProviderConfigs[0] as { spanProcessors: unknown[] };
expect(providerConfig.spanProcessors).toHaveLength(1);
const spanProcessor = providerConfig.spanProcessors[0] as Record<string, unknown>;
expect(typeof spanProcessor.forceFlush).toBe('function');
expect(typeof spanProcessor.onStart).toBe('function');
expect(typeof spanProcessor.onEnd).toBe('function');
expect(typeof spanProcessor.shutdown).toBe('function');
expect(mockProvider.register).toHaveBeenCalledWith({ propagator: null });
expect(mockProvider.getTracer).toHaveBeenCalledWith('@n8n/agents');
expect(built.tracer).toBe(mockTracer);
expect(built.provider).toBe(mockProvider);
expect(process.env.LANGCHAIN_TRACING_V2).toBe('true');
});
it('does not allow endpoint overrides when using an engine-resolved key', async () => {
const telemetry = new LangSmithTelemetry({
project: 'instance-ai',
endpoint: 'https://should-not-be-used.test',
});
telemetry.resolvedApiKey = 'resolved-key';
await telemetry.build();
expect(mockExporterConfigs).toEqual([
{
apiKey: 'resolved-key',
projectName: 'instance-ai',
},
]);
});
it('filters noisy AI SDK operation wrappers while preserving provider and tool spans', async () => {
await new LangSmithTelemetry({
apiKey: 'ls-test-key',
project: 'instance-ai',
}).build();
const processor = mockProviderConfigs[0] as {
spanProcessors: Array<{
onStart(span: unknown, parentContext: unknown): void;
onEnd(span: unknown): void;
}>;
};
const filteredProcessor = processor.spanProcessors[0];
const delegate = mockBatchProcessorInstances[0];
const makeSpan = (
spanId: string,
attributes: Record<string, unknown>,
parentSpanId?: string,
) => ({
attributes,
spanContext: () => ({ traceId: 'trace-1', spanId }),
...(parentSpanId ? { parentSpanContext: { spanId: parentSpanId } } : {}),
});
const root = makeSpan('1111111111111111', { 'langsmith.traceable': 'true' });
const streamWrapper = makeSpan(
'2222222222222222',
{ 'ai.operationId': 'ai.streamText' },
'1111111111111111',
);
const providerRequest = makeSpan(
'3333333333333333',
{ 'ai.operationId': 'ai.streamText.doStream' },
'2222222222222222',
);
const toolCall = makeSpan(
'4444444444444444',
{ 'ai.operationId': 'ai.toolCall' },
'2222222222222222',
);
filteredProcessor.onStart(root, {});
filteredProcessor.onStart(streamWrapper, {});
filteredProcessor.onStart(providerRequest, {});
filteredProcessor.onStart(toolCall, {});
filteredProcessor.onEnd(toolCall);
filteredProcessor.onEnd(providerRequest);
filteredProcessor.onEnd(streamWrapper);
filteredProcessor.onEnd(root);
expect(delegate.onStart).toHaveBeenCalledTimes(3);
expect(delegate.onStart).toHaveBeenNthCalledWith(1, root, {});
expect(delegate.onStart).toHaveBeenNthCalledWith(2, providerRequest, {});
expect(delegate.onStart).toHaveBeenNthCalledWith(3, toolCall, {});
expect(providerRequest.attributes).toEqual(
expect.objectContaining({
'langsmith.span.parent_id': '00000000-0000-0000-1111-111111111111',
'langsmith.traceable_parent_otel_span_id': '1111111111111111',
}),
);
expect(delegate.onEnd).toHaveBeenCalledTimes(3);
expect(delegate.onEnd).not.toHaveBeenCalledWith(streamWrapper);
});
});

View File

@@ -0,0 +1,28 @@
import type {
BuiltMemory,
MemoryConfig,
ObservationCapableMemory,
ObservationalMemoryConfig,
} from '../types';
type AssertMemoryConfig<T extends MemoryConfig> = T;
type PlainMemoryConfig = AssertMemoryConfig<{
memory: BuiltMemory;
lastMessages: 10;
}>;
type ObservationCapableMemoryConfig = AssertMemoryConfig<{
memory: ObservationCapableMemory;
lastMessages: 10;
observationalMemory: ObservationalMemoryConfig;
}>;
// @ts-expect-error Observational memory requires a backend that also implements BuiltObservationStore.
type InvalidObservationalMemoryConfig = AssertMemoryConfig<{
memory: BuiltMemory;
lastMessages: 10;
observationalMemory: ObservationalMemoryConfig;
}>;
export type { InvalidObservationalMemoryConfig, ObservationCapableMemoryConfig, PlainMemoryConfig };

View File

@@ -1,133 +0,0 @@
import type { LanguageModel } from 'ai';
import { createModel } from '../runtime/model-factory';
type ProviderOpts = {
apiKey?: string;
baseURL?: string;
fetch?: typeof globalThis.fetch;
headers?: Record<string, string>;
};
jest.mock('@ai-sdk/anthropic', () => ({
createAnthropic: (opts?: ProviderOpts) => (model: string) => ({
provider: 'anthropic',
modelId: model,
apiKey: opts?.apiKey,
baseURL: opts?.baseURL,
fetch: opts?.fetch,
headers: opts?.headers,
specificationVersion: 'v3',
}),
}));
jest.mock('@ai-sdk/openai', () => ({
createOpenAI: (opts?: ProviderOpts) => (model: string) => ({
provider: 'openai',
modelId: model,
apiKey: opts?.apiKey,
baseURL: opts?.baseURL,
fetch: opts?.fetch,
headers: opts?.headers,
specificationVersion: 'v3',
}),
}));
const mockProxyAgent = jest.fn();
jest.mock('undici', () => ({
ProxyAgent: mockProxyAgent,
}));
describe('createModel', () => {
const originalEnv = process.env;
beforeEach(() => {
process.env = { ...originalEnv };
delete process.env.HTTPS_PROXY;
delete process.env.HTTP_PROXY;
mockProxyAgent.mockClear();
});
afterAll(() => {
process.env = originalEnv;
});
it('should accept a string config', () => {
const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
expect(model.provider).toBe('anthropic');
expect(model.modelId).toBe('claude-sonnet-4-5');
});
it('should accept an object config with url', () => {
const model = createModel({
id: 'openai/gpt-4o',
apiKey: 'sk-test',
url: 'https://custom.endpoint.com/v1',
}) as unknown as Record<string, unknown>;
expect(model.provider).toBe('openai');
expect(model.modelId).toBe('gpt-4o');
expect(model.apiKey).toBe('sk-test');
expect(model.baseURL).toBe('https://custom.endpoint.com/v1');
});
it('should pass through a prebuilt LanguageModel', () => {
const prebuilt = {
doGenerate: jest.fn(),
doStream: jest.fn(),
specificationVersion: 'v2' as const,
modelId: 'custom-model',
provider: 'custom',
defaultObjectGenerationMode: undefined,
} as unknown as LanguageModel;
const result = createModel(prebuilt);
expect(result).toBe(prebuilt);
});
it('should handle model IDs with multiple slashes', () => {
const model = createModel('openai/ft:gpt-4o:my-org:custom:abc123') as unknown as Record<
string,
unknown
>;
expect(model.provider).toBe('openai');
expect(model.modelId).toBe('ft:gpt-4o:my-org:custom:abc123');
});
it('should not pass fetch when no proxy env vars are set', () => {
const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
expect(model.fetch).toBeUndefined();
});
it('should pass proxy-aware fetch when HTTPS_PROXY is set', () => {
process.env.HTTPS_PROXY = 'http://proxy:8080';
const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
expect(model.fetch).toBeInstanceOf(Function);
expect(mockProxyAgent).toHaveBeenCalledWith('http://proxy:8080');
});
it('should pass proxy-aware fetch when HTTP_PROXY is set', () => {
process.env.HTTP_PROXY = 'http://proxy:9090';
const model = createModel('openai/gpt-4o') as unknown as Record<string, unknown>;
expect(model.fetch).toBeInstanceOf(Function);
expect(mockProxyAgent).toHaveBeenCalledWith('http://proxy:9090');
});
it('should forward custom headers to the provider factory', () => {
const model = createModel({
id: 'anthropic/claude-sonnet-4-5',
apiKey: 'sk-test',
headers: { 'x-proxy-auth': 'Bearer abc', 'anthropic-beta': 'tools-2024' },
}) as unknown as Record<string, unknown>;
expect(model.headers).toEqual({
'x-proxy-auth': 'Bearer abc',
'anthropic-beta': 'tools-2024',
});
});
it('should prefer HTTPS_PROXY over HTTP_PROXY', () => {
process.env.HTTPS_PROXY = 'http://https-proxy:8080';
process.env.HTTP_PROXY = 'http://http-proxy:9090';
createModel('anthropic/claude-sonnet-4-5');
expect(mockProxyAgent).toHaveBeenCalledWith('http://https-proxy:8080');
});
});

Some files were not shown because too many files have changed in this diff.