Mirror of https://github.com/n8n-io/n8n.git, synced 2026-05-13 00:20:27 +02:00.

Comparing 15 commits: master...n8n@2.20.5
Commits:
- 6707c67f50
- 4d9a9f8079
- ce016859cb
- 8b1103bfa6
- 3bcb6378c5
- c9b1463c58
- ae115d199f
- 2f31aca2dc
- 71d4122438
- 98004c6269
- ad6e890d85
- 513f7cd3dc
- 2c19035590
- 9dd7ce9486
- 642fc2e18e
@@ -1,150 +0,0 @@
---
description: >-
  Checks if a community pull request is ready for human review. Verifies CLA
  signature, PR title format, description completeness, test coverage, and
  cubic-dev-ai issues. Use when given a PR number or branch name to review,
  or when the user says /community-pr-review, /pr-review, or asks to check if
  a PR is ready for review.
allowed-tools: Bash(gh:*), Bash(git:*), Read, Glob, Grep
---

# Community PR Review

Given a PR number or branch name, determine whether it is ready for human review.

## Steps

### 1. Resolve the PR

If given a branch name, find the PR number first:

```bash
gh pr view <branch> --repo n8n-io/n8n --json number --jq .number
```

### 2. Fetch PR data

```bash
gh pr view <number> --repo n8n-io/n8n \
  --json number,title,body,author,headRefName,headRefOid,files,isDraft,state
```

Fetch in parallel:

```bash
# CLA commit status (primary signal) — statuses are newest-first; use the first returned entry
gh api --paginate "repos/n8n-io/n8n/commits/<headRefOid>/statuses" \
  --jq '[.[] | select(.context == "license/cla") | {state, description}] | first'

# CLAassistant issue comment (fallback when no commit status) — use the last returned entry
gh api --paginate "repos/n8n-io/n8n/issues/<number>/comments" \
  --jq '[.[] | select(.user.login == "CLAassistant") | .body] | last'

# cubic-dev-ai PR review comments (streamed so results concatenate cleanly across pages)
gh api --paginate "repos/n8n-io/n8n/pulls/<number>/comments" \
  --jq '.[] | select(.user.login == "cubic-dev-ai[bot]") | {body: .body, path: .path}'
```

### 3. Run the five checks

#### A. CLA signed

Check the `license/cla` commit status first; fall back to the CLAassistant comment if no status exists.

**Commit status** (`context == "license/cla"`):

- `state: "success"` → ✅ signed
- `state: "failure"` or `state: "error"` → ❌ not signed
- `state: "pending"` → ⏳ pending
- Not present → fall back to comment

**CLAassistant issue comment** (fallback):

- Body contains `"All committers have signed the CLA."` → ✅ signed
- Body contains `"not signed"` or a link to sign → ❌ not signed
- No comment → ❌ treat as not signed
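
The decision table above can be sketched as a small helper (a sketch only; `cla_check` is a hypothetical function, not part of the command):

```shell
# Hypothetical helper for check A: map the commit status (or the fallback
# comment) to a CLA verdict.
# $1 = commit-status state ("" if no status exists), $2 = CLAassistant comment body ("" if none)
cla_check() {
  local state="$1" comment="$2"
  case "$state" in
    success)       echo "signed" ;;
    failure|error) echo "not-signed" ;;
    pending)       echo "pending" ;;
    *)  # no commit status: fall back to the CLAassistant comment
        if [[ "$comment" == *"All committers have signed the CLA."* ]]; then
          echo "signed"
        else
          echo "not-signed"  # "not signed", a sign link, or no comment at all
        fi ;;
  esac
}

cla_check "success" ""                               # → signed
cla_check "" "All committers have signed the CLA."   # → signed
cla_check "" ""                                      # → not-signed
```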

#### B. PR title format

For all types except `revert`, the title must match:

```
^(feat|fix|perf|test|docs|refactor|build|ci|chore)(\([a-zA-Z0-9 ]+( Node)?\))?!?: [A-Z].+[^.]$
```

For `revert` titles, the summary is the original commit header (which starts with a lowercase type), so capitalization is not enforced:

```
^revert(\([a-zA-Z0-9 ]+( Node)?\))?!?: .+[^.]$
```

- Type must be one of: `feat fix perf test docs refactor build ci chore revert`
- Scope is optional, in parentheses, e.g. `(editor)` or `(Slack Node)`
- Breaking changes: `!` before the colon
- Summary: starts with a capital letter (lowercase allowed for `revert:`), no trailing period
- No Linear ticket IDs in the title (e.g. `N8N-1234`)

#### C. PR description completeness

1. **Summary** (`## Summary`) — must have non-empty content below the heading (not just the HTML comment).
2. **Related tickets** (`## Related Linear tickets, Github issues, and Community forum posts`) — acceptable content: a URL (`http`), a GitHub closing keyword (`closes #N`, `fixes #N`, `resolves #N`, etc.), or empty. Only flag if the section heading is missing entirely.
3. **Checklist** (`## Review / Merge checklist`) — all four items must be present. Unchecked checkboxes are expected for community PRs; do **not** flag them as missing.
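
A minimal grep-based sketch of this check, assuming the PR body is held in a shell variable (it only verifies the three headings exist; confirming the Summary has real content below the heading, not just the HTML comment, needs more than grep and is left out here):

```shell
# Hypothetical helper for check C: verify the three template headings are present.
check_description() {
  local body="$1" result="pass"
  grep -q '^## Summary' <<<"$body" || result="fail"
  grep -q '^## Related Linear tickets, Github issues, and Community forum posts' <<<"$body" || result="fail"
  grep -q '^## Review / Merge checklist' <<<"$body" || result="fail"
  echo "$result"
}

full=$'## Summary\nFixes a crash.\n## Related Linear tickets, Github issues, and Community forum posts\ncloses #123\n## Review / Merge checklist\n- [ ] PR title'
check_description "$full"            # → pass
check_description 'no headings here' # → fail
```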

#### D. Tests

Skip this check if the PR type (from the title) is `docs`, `ci`, `chore`, or `build`.

Otherwise:

1. Identify source files changed: non-test files under `packages/` from the `files` list.
2. If there are source file changes, check out the PR in a temporary worktree:

   ```bash
   git fetch origin pull/<number>/head:pr/<number>
   git worktree add /tmp/pr-<number>-review pr/<number>
   ```

3. Read the changed source files from the worktree to understand whether the changes introduce logic that warrants tests (new functions, bug fixes, behaviour changes, data transformations). Pure config changes, type-only changes, and trivial renames do not require tests.
4. Look for matching test files (`*.test.ts`, `*.spec.ts`, files inside `__tests__/`) among the changed files.
5. **Always clean up the worktree**, even if a previous check failed:

   ```bash
   git worktree remove /tmp/pr-<number>-review --force
   git branch -D pr/<number>
   ```

Report:

- ✅ Tests present, or change does not require tests
- ❌ Source logic changed but no test files found
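
The source-vs-test split in steps 1 and 4 can be sketched as follows (`classify_files` is a hypothetical helper; in practice the file list comes from the `files` field fetched in step 2):

```shell
# Hypothetical helper for check D: count changed source vs. test files under
# packages/, using the test-file patterns listed above.
classify_files() {
  local src=0 tests=0 f
  for f in "$@"; do
    [[ "$f" == packages/* ]] || continue   # only files under packages/ count
    case "$f" in
      *.test.ts|*.spec.ts|*__tests__/*) tests=$((tests + 1)) ;;
      *)                                src=$((src + 1)) ;;
    esac
  done
  echo "source=$src tests=$tests"
}

classify_files packages/cli/src/foo.ts packages/cli/src/foo.test.ts README.md
# → source=1 tests=1
```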

#### E. cubic-dev-ai issues

Review the PR review comments fetched in step 2. `cubic-dev-ai[bot]` leaves comments for every issue it finds.

- No comments from `cubic-dev-ai[bot]`, or every comment explicitly states no issues were found → ✅
- Any other comment → ❌ report the total count and priority breakdown (e.g. "3 issues: 1× P1, 1× P2, 1× P3")

### 4. Output

Always output valid JSON in this exact shape:

```json
{
  "readyForReview": <true if all passing checks allow merge, false otherwise>,
  "messageForUser": "<Human-readable summary of what needs to change, written as if posted directly to the PR contributor. 'N/A' if nothing is needed.>",
  "checks": {
    "CLA": <true if signed, false if not signed or pending>,
    "Title": <true if title matches convention, false otherwise>,
    "Description": <true if all three template sections are complete, false otherwise>,
    "TestsNeeded": <true if the code changes require tests, false if not applicable>,
    "TestsIncluded": <true if test files are present in the PR, false otherwise>,
    "CubicIssues": <true if cubic-dev-ai raised issues, false if no issues>
  }
}
```

`readyForReview` is `true` only when: `CLA`, `Title`, and `Description` are all `true`; `CubicIssues` is `false`; and either `TestsNeeded` is `false` or `TestsIncluded` is `true`.
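
That rule can be written down directly (a sketch; `ready_for_review` is a hypothetical helper taking the six check values as the strings `true`/`false`):

```shell
# Hypothetical helper computing readyForReview from the six check values.
# Args: CLA Title Description CubicIssues TestsNeeded TestsIncluded
ready_for_review() {
  local cla="$1" title="$2" desc="$3" cubic="$4" needed="$5" included="$6"
  if [[ "$cla" == true && "$title" == true && "$desc" == true && "$cubic" == false ]] &&
     { [[ "$needed" == false ]] || [[ "$included" == true ]]; }; then
    echo "true"
  else
    echo "false"
  fi
}

ready_for_review true true true false true true    # → true
ready_for_review true true true false true false   # → false (tests needed, none included)
ready_for_review true true true true false false   # → false (cubic issues open)
```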

`messageForUser` should be a short, friendly message directed at the contributor listing exactly what they need to address. If `readyForReview` is `true`, set it to `"N/A"`.

Output nothing other than the JSON block.

## Notes

- Draft PRs — report all findings but note the PR is a draft.
- If the PR is already merged or closed, say so and skip the checks.
- Always remove the worktree even if earlier checks failed.
@@ -1,12 +1,32 @@
{
  "version": 1,
  "generated": "2026-05-12T09:37:31.489Z",
  "totalViolations": 82,
  "generated": "2026-04-23T08:42:21.615Z",
  "totalViolations": 102,
  "violations": {
    "packages/@n8n/agents/package.json": [
      {
        "rule": "catalog-violations",
        "line": 40,
        "message": "langsmith@>=0.3.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "193bb785d0b4"
      },
      {
        "rule": "catalog-violations",
        "line": 27,
        "message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "b58f03d0d5c1"
      },
      {
        "rule": "catalog-violations",
        "line": 41,
        "message": "@opentelemetry/sdk-trace-node appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "a77ced903cdf"
      }
    ],
    "packages/@n8n/ai-workflow-builder.ee/package.json": [
      {
        "rule": "catalog-violations",
        "line": 73,
        "line": 72,
        "message": "langsmith@^0.4.6 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "6ee5e003d795"
      },
@@ -19,110 +39,154 @@
      {
        "rule": "catalog-violations",
        "line": 70,
        "message": "csv-parse appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "94f80b083b76"
      },
      {
        "rule": "catalog-violations",
        "line": 71,
        "message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "9c770d66baf2"
      },
      {
        "rule": "catalog-violations",
        "line": 77,
        "line": 76,
        "message": "turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "85c311d87491"
      },
      {
        "rule": "catalog-violations",
        "line": 83,
        "line": 82,
        "message": "@types/turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "407c8d1b3428"
      }
    ],
    "packages/@n8n/cli/package.json": [
      {
        "rule": "catalog-violations",
        "line": 79,
        "message": "@types/node@24.10.1 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "a5a872807ede"
      },
      {
        "rule": "catalog-violations",
        "line": 74,
        "message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "733c3960022e"
      }
    ],
    "packages/@n8n/eslint-config/package.json": [
      {
        "rule": "catalog-violations",
        "line": 56,
        "message": "eslint@>= 9 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "82841e89293f"
      }
    ],
    "packages/@n8n/eslint-plugin-community-nodes/package.json": [
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "eslint@>= 9 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "46d3130cf108"
      },
      {
        "rule": "catalog-violations",
        "line": 47,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "589f90baeece"
      }
    ],
    "packages/@n8n/json-schema-to-zod/package.json": [
      {
        "rule": "catalog-violations",
        "line": 63,
        "message": "zod@^3.25.76 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "0e18482e8781"
        "message": "zod@^3.0.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "436de7cbc5ea"
      }
    ],
    "packages/@n8n/node-cli/package.json": [
      {
        "rule": "catalog-violations",
        "line": 76,
        "message": "eslint@>= 9 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "1b5deae544ea"
      },
      {
        "rule": "catalog-violations",
        "line": 52,
        "message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "da74ed210d07"
      },
      {
        "rule": "catalog-violations",
        "line": 51,
        "message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "9711a9b00bf9"
      },
      {
        "rule": "catalog-violations",
        "line": 55,
        "message": "eslint-plugin-n8n-nodes-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "6a9e12780943"
      },
      {
        "rule": "catalog-violations",
        "line": 59,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "d536f5a9c3f8"
      }
    ],
    "packages/@n8n/nodes-langchain/package.json": [
      {
        "rule": "catalog-violations",
        "line": 292,
        "message": "openai@^6.34.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "3c1f53f0afe3"
        "line": 289,
        "message": "openai@^6.9.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "b9b214e61fdc"
      },
      {
        "rule": "catalog-violations",
        "line": 299,
        "message": "zod-to-json-schema@3.23.3 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "081b5d0b5ca5"
      },
      {
        "rule": "catalog-violations",
        "line": 296,
        "message": "tmp-promise appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "88d67e2ef747"
      },
      {
        "rule": "catalog-violations",
        "line": 259,
        "line": 254,
        "message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "69d6fa7e46f9"
      },
      {
        "rule": "catalog-violations",
        "line": 274,
        "line": 270,
        "message": "cheerio appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "8cd029bb871e"
      },
      {
        "rule": "catalog-violations",
        "line": 284,
        "line": 280,
        "message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "26f20ebea4b1"
      },
      {
        "rule": "catalog-violations",
        "line": 289,
        "line": 286,
        "message": "mongodb appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "46cb48884e22"
      },
      {
        "rule": "catalog-violations",
        "line": 293,
        "line": 290,
        "message": "pdf-parse appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "0c7d44a9c2e4"
      }
    ],
    "packages/@n8n/tournament/package.json": [
    "packages/testing/janitor/package.json": [
      {
        "rule": "catalog-violations",
        "line": 44,
        "message": "@types/node@^18.13.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "6368b5d3b924"
      },
      {
        "rule": "catalog-violations",
        "line": 52,
        "message": "typescript@^5.0.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "f668021a144e"
      },
      {
        "rule": "catalog-violations",
        "line": 55,
        "message": "ast-types appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "27edcbb2b4f8"
      },
      {
        "rule": "catalog-violations",
        "line": 56,
        "message": "esprima-next appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "75058f9a4d30"
      },
      {
        "rule": "catalog-violations",
        "line": 57,
        "message": "recast appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "5f2b50fef19d"
        "line": 39,
        "message": "ts-morph@>=20.0.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "4a2907301983"
      }
    ],
    "packages/frontend/@n8n/chat/package.json": [
@@ -131,6 +195,12 @@
        "line": 56,
        "message": "unplugin-icons@^0.19.0 should use \"catalog:frontend\" (exists in pnpm-workspace.yaml [frontend])",
        "hash": "a0d24d761026"
      },
      {
        "rule": "catalog-violations",
        "line": 59,
        "message": "vite-plugin-dts@^4.5.3 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "37ac4b34bc06"
      }
    ],
    "packages/frontend/@n8n/design-system/package.json": [
@@ -141,128 +211,268 @@
        "hash": "237e9d17c4ba"
      }
    ],
    "packages/frontend/@n8n/storybook/package.json": [
      {
        "rule": "catalog-violations",
        "line": 31,
        "message": "@types/node@^24.10.1 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "50fb70481f8f"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/declarative/custom/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 40,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "c55e0c75d586"
      },
      {
        "rule": "catalog-violations",
        "line": 43,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "999c932ac3ae"
      },
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "2f772d0b5a09"
      },
      {
        "rule": "catalog-violations",
        "line": 41,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "6ded3ee6fafe"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/declarative/github-issues/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 43,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "c3815ab2677d"
      },
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "11608ee90ba9"
      },
      {
        "rule": "catalog-violations",
        "line": 49,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "4514689aef5c"
      },
      {
        "rule": "catalog-violations",
        "line": 44,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "ce8e04a67c4c"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/programmatic/example/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 40,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "cd90d70b3ce4"
      },
      {
        "rule": "catalog-violations",
        "line": 43,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "d0998542352d"
      },
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "fd2577d9c87b"
      },
      {
        "rule": "catalog-violations",
        "line": 41,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "a931f101c8a0"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/programmatic/ai/memory-custom/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 41,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "298daa052478"
      },
      {
        "rule": "catalog-violations",
        "line": 44,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "9d70bb26b233"
      },
      {
        "rule": "catalog-violations",
        "line": 47,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "42aefb6c9989"
      },
      {
        "rule": "catalog-violations",
        "line": 42,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "cf4f2ca88b59"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/programmatic/ai/model-ai-custom/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 43,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "3c8b4977fd8a"
      },
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "9d31f8f7537c"
      },
      {
        "rule": "catalog-violations",
        "line": 49,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "e1734c74601d"
      },
      {
        "rule": "catalog-violations",
        "line": 44,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "2a2dea670608"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/programmatic/ai/model-ai-custom-example/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 43,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "91ea1dbe7d4e"
      },
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "72d08eab5625"
      },
      {
        "rule": "catalog-violations",
        "line": 49,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "91b58c718e73"
      },
      {
        "rule": "catalog-violations",
        "line": 44,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "83b610ec607a"
      }
    ],
    "packages/@n8n/node-cli/src/template/templates/programmatic/ai/model-openai-compatible/template/package.json": [
      {
        "rule": "catalog-violations",
        "line": 43,
        "message": "eslint@9.32.0 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "082bc9c01097"
      },
      {
        "rule": "catalog-violations",
        "line": 46,
        "message": "typescript@5.9.2 should use \"catalog:\" (exists in pnpm-workspace.yaml)",
        "hash": "1b9d2910ce91"
      },
      {
        "rule": "catalog-violations",
        "line": 49,
        "message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "6b5e714159dc"
      },
      {
        "rule": "catalog-violations",
        "line": 44,
        "message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "ba672d26d64d"
      }
    ],
    "packages/cli/package.json": [
      {
        "rule": "catalog-violations",
        "line": 98,
        "line": 97,
        "message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "1e3686e1923b"
      },
      {
        "rule": "catalog-violations",
        "line": 139,
        "message": "@opentelemetry/sdk-trace-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "1cf7f6bcf5d1"
      },
      {
        "rule": "catalog-violations",
        "line": 140,
        "line": 132,
        "message": "@opentelemetry/sdk-trace-node appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "a3dad0b8dc21"
      },
      {
        "rule": "catalog-violations",
        "line": 150,
        "line": 142,
        "message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "949e802528f7"
      },
      {
        "rule": "catalog-violations",
        "line": 202,
        "message": "prettier appears in 3 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "3cab98902302"
      },
      {
        "rule": "catalog-violations",
        "line": 209,
        "line": 193,
        "message": "semver appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "5b7e9b03fb10"
      },
      {
        "rule": "catalog-violations",
        "line": 217,
        "line": 200,
        "message": "undici appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "91c29775e961"
      },
      {
        "rule": "catalog-violations",
        "line": 220,
        "line": 203,
        "message": "ws appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "cd07242e8163"
      },
      {
        "rule": "catalog-violations",
        "line": 75,
        "message": "@types/psl appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "6e62e0076b0a"
      }
    ],
    "packages/@n8n/agents/package.json": [
      {
        "rule": "catalog-violations",
        "line": 28,
        "message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "b58f03d0d5c1"
      },
      {
        "rule": "catalog-violations",
        "line": 50,
        "message": "@opentelemetry/sdk-trace-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "c5c495ac3508"
      },
      {
        "rule": "catalog-violations",
        "line": 51,
        "message": "@opentelemetry/sdk-trace-node appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "a77ced903cdf"
      }
    ],
    "packages/@n8n/instance-ai/package.json": [
      {
        "rule": "catalog-violations",
        "line": 80,
        "line": 56,
        "message": "@ai-sdk/anthropic appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "5b2153508e47"
      },
      {
        "rule": "catalog-violations",
        "line": 86,
        "message": "@types/psl appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "56dabb51b433"
      },
      {
        "rule": "catalog-violations",
        "line": 56,
        "line": 37,
        "message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "8fa6b9a8fc91"
      },
      {
        "rule": "catalog-violations",
        "line": 64,
        "message": "csv-parse appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "8f082fc2e8b6"
      },
      {
        "rule": "catalog-violations",
        "line": 71,
        "line": 47,
        "message": "turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "9a9d97065952"
      },
      {
        "rule": "catalog-violations",
        "line": 87,
        "line": 59,
        "message": "@types/turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "12e346c47b39"
      },
      {
        "rule": "catalog-violations",
        "line": 50,
        "line": 31,
        "message": "@joplin/turndown-plugin-gfm appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "a3cf1504b5c2"
      },
      {
        "rule": "catalog-violations",
        "line": 68,
        "line": 46,
        "message": "pdf-parse appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "283fa9114c03"
      }
@@ -290,91 +500,59 @@
    "packages/nodes-base/package.json": [
      {
        "rule": "catalog-violations",
        "line": 911,
        "line": 908,
        "message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "2d1fab7a5b05"
      },
      {
        "rule": "catalog-violations",
        "line": 961,
        "line": 958,
        "message": "semver appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "2daf37aa14e4"
      },
      {
        "rule": "catalog-violations",
        "line": 966,
        "line": 963,
        "message": "tmp-promise appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "3f93c404ae9c"
      },
      {
        "rule": "catalog-violations",
        "line": 900,
        "line": 897,
        "message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "ca4ac788adc6"
      },
      {
        "rule": "catalog-violations",
        "line": 912,
        "line": 909,
        "message": "cheerio appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "1a1b5bbc50c9"
      },
      {
        "rule": "catalog-violations",
        "line": 915,
        "message": "csv-parse appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "781db4a1e068"
      },
      {
        "rule": "catalog-violations",
        "line": 917,
        "line": 914,
        "message": "eventsource appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "9795e6c6d9e9"
      },
      {
        "rule": "catalog-violations",
        "line": 930,
        "line": 927,
        "message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "02341f2b5e3e"
      },
      {
        "rule": "catalog-violations",
        "line": 941,
        "line": 938,
        "message": "mongodb appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "f688907d087a"
      },
      {
        "rule": "catalog-violations",
        "line": 892,
        "line": 889,
        "message": "eslint-plugin-n8n-nodes-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "ac254baa61f9"
      }
    ],
    "packages/@n8n/node-cli/package.json": [
      {
        "rule": "catalog-violations",
        "line": 52,
        "message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "da74ed210d07"
      },
      {
        "rule": "catalog-violations",
        "line": 59,
        "message": "prettier appears in 3 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "188baf266f61"
      },
      {
        "rule": "catalog-violations",
        "line": 51,
        "message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "9711a9b00bf9"
      },
      {
        "rule": "catalog-violations",
        "line": 55,
        "message": "eslint-plugin-n8n-nodes-base appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
        "hash": "6a9e12780943"
      }
    ],
    "packages/frontend/editor-ui/package.json": [
      {
        "rule": "catalog-violations",
@ -382,12 +560,6 @@
|
|||
"message": "change-case appears in 5 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "bd9a2eeb072b"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 90,
|
||||
"message": "prettier appears in 3 packages with 3 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "9e9c7ec09a0b"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 92,
|
||||
|
|
@ -396,15 +568,15 @@
|
|||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 77,
|
||||
"message": "esprima-next appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "62156c2613b2"
|
||||
"line": 90,
|
||||
"message": "prettier appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "8a66e00b94fa"
|
||||
}
|
||||
],
|
||||
"packages/@n8n/scan-community-package/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 20,
|
||||
"line": 15,
|
||||
"message": "semver appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "ac0e4301d694"
|
||||
}
|
||||
|
|
@ -412,57 +584,57 @@
|
|||
"packages/@n8n/ai-utilities/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 69,
|
||||
"line": 57,
|
||||
"message": "undici appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "c14cd05614e8"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 65,
|
||||
"line": 53,
|
||||
"message": "tmp-promise appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "884a45bdbcf2"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 72,
|
||||
"message": "n8n-workflow appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "ea4fbfff30ba"
|
||||
"line": 60,
|
||||
"message": "n8n-workflow appears in 9 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "717de3a58c50"
|
||||
}
|
||||
],
|
||||
"packages/@n8n/mcp-browser/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 36,
|
||||
"line": 37,
|
||||
"message": "ws appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "9650c1b55f3c"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 28,
|
||||
"line": 31,
|
||||
"message": "@mozilla/readability appears in 5 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "0c97891a24f4"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 30,
|
||||
"line": 32,
|
||||
"message": "jsdom appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "8466b03b1044"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 35,
|
||||
"line": 36,
|
||||
"message": "turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "f23a9d3d7aa2"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 42,
|
||||
"line": 44,
|
||||
"message": "@types/turndown appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "3f9e46e56803"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 26,
|
||||
"line": 29,
|
||||
"message": "@joplin/turndown-plugin-gfm appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "743e3a7dbb32"
|
||||
}
|
||||
|
|
@ -483,50 +655,14 @@
|
|||
"hash": "67f9d81d9528"
|
||||
}
|
||||
],
|
||||
"packages/@n8n/cli/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 74,
|
||||
"message": "@oclif/core appears in 4 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "733c3960022e"
|
||||
}
|
||||
],
|
||||
"packages/workflow/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 58,
|
||||
"message": "ast-types appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "1c7d7cf0b0fe"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 60,
|
||||
"message": "esprima-next appears in 3 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "627a716b5d23"
|
||||
},
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 68,
|
||||
"message": "recast appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "b660317b5f6f"
|
||||
}
|
||||
],
|
||||
"packages/@n8n/computer-use/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 47,
|
||||
"line": 44,
|
||||
"message": "eventsource appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "f50c1eee2ed6"
|
||||
}
|
||||
],
|
||||
"packages/@n8n/eslint-plugin-community-nodes/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
"line": 47,
|
||||
"message": "n8n-workflow appears in 2 packages with 2 different versions — add to pnpm-workspace.yaml catalog",
|
||||
"hash": "c5830b76ff8e"
|
||||
}
|
||||
],
|
||||
"packages/@n8n/stylelint-config/package.json": [
|
||||
{
|
||||
"rule": "catalog-violations",
|
||||
|
|
|
|||
|
|
@ -38,4 +38,3 @@
|
|||
!packages/@n8n/benchmark/**
|
||||
!packages/@n8n/typescript-config
|
||||
!packages/@n8n/typescript-config/**
|
||||
|
||||
|
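Each report entry above flags a dependency that appears in several workspace packages with diverging version specifiers. A minimal sketch of how such a check could be derived from a set of package.json dependency maps (hypothetical helper name and message format modeled on the report; not the actual janitor implementation):

```javascript
// Hypothetical sketch: find dependencies that appear in multiple packages
// with more than one distinct version specifier, i.e. candidates for the
// pnpm-workspace.yaml catalog.
function findCatalogViolations(depsByPackage) {
	// dep name -> Map<packageName, versionSpecifier>
	const versionsByDep = new Map();
	for (const [pkg, deps] of Object.entries(depsByPackage)) {
		for (const [dep, version] of Object.entries(deps)) {
			if (!versionsByDep.has(dep)) versionsByDep.set(dep, new Map());
			versionsByDep.get(dep).set(pkg, version);
		}
	}
	const violations = [];
	for (const [dep, byPkg] of versionsByDep) {
		const distinct = new Set(byPkg.values());
		// Only a problem when the dep is shared AND the versions disagree.
		if (byPkg.size > 1 && distinct.size > 1) {
			violations.push(
				`${dep} appears in ${byPkg.size} packages with ${distinct.size} different versions — add to pnpm-workspace.yaml catalog`,
			);
		}
	}
	return violations;
}
```

Moving the shared version into a catalog gives every package a single `catalog:` specifier to resolve against, which is why the report points at `pnpm-workspace.yaml`.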
9 .github/CODEOWNERS vendored

@@ -1,5 +1,6 @@
packages/@n8n/db/src/migrations/ @n8n-io/migrations-review
.github/workflows @n8n-io/qa-dx
.github/scripts @n8n-io/qa-dx
.github/actions @n8n-io/qa-dx
.github/poutine-rules @n8n-io/qa-dx
.github/workflows @n8n-io/ci-admins
.github/scripts @n8n-io/ci-admins
.github/actions @n8n-io/ci-admins
.github/poutine-rules @n8n-io/ci-admins
232 .github/OWNERS vendored

@@ -1,232 +0,0 @@
# n8n CODEOWNERS
#
# Last-match-wins: specific rules MUST come AFTER general rules.

# Default catch-all (ensures every file gets at least one reviewer)
* @n8n-io/catalysts

# Catalysts

packages/core/ @n8n-io/catalysts
packages/workflow/ @n8n-io/catalysts
packages/@n8n/config/ @n8n-io/catalysts
packages/@n8n/backend-common/ @n8n-io/catalysts
packages/@n8n/backend-test-utils/ @n8n-io/catalysts
packages/@n8n/di/ @n8n-io/catalysts
packages/@n8n/errors/ @n8n-io/catalysts
packages/@n8n/constants/ @n8n-io/catalysts
packages/@n8n/utils/ @n8n-io/catalysts
packages/@n8n/api-types/ @n8n-io/catalysts
packages/@n8n/workflow-sdk/ @n8n-io/instance-ai
packages/@n8n/task-runner/ @n8n-io/catalysts
packages/@n8n/task-runner-python/ @n8n-io/catalysts
packages/@n8n/expression-runtime/ @n8n-io/catalysts
packages/@n8n/db/ @n8n-io/catalysts
packages/@n8n/json-schema-to-zod/ @n8n-io/catalysts
packages/@n8n/crdt/ @n8n-io/catalysts
packages/@n8n/extension-sdk/ @n8n-io/catalysts
packages/@n8n/eslint-config/ @n8n-io/qa-dx
packages/@n8n/typescript-config/ @n8n-io/qa-dx

packages/@n8n/db/src/migrations/ @n8n-io/migrations-review

# Top-level paths
scripts/ @n8n-io/qa-dx
patches/ @n8n-io/qa-dx
assets/ @n8n-io/adore
security/ @n8n-io/qa-dx

# @n8n/cli
packages/@n8n/cli/ @n8n-io/adore
packages/@n8n/cli/src/commands/credential/ @n8n-io/iam
packages/@n8n/cli/src/commands/user/ @n8n-io/iam
packages/@n8n/cli/src/commands/data-table/ @n8n-io/adore
packages/@n8n/cli/src/commands/tag/ @n8n-io/adore
packages/@n8n/cli/src/commands/project/ @n8n-io/ligo
packages/@n8n/cli/src/commands/source-control/ @n8n-io/ligo
packages/@n8n/cli/src/commands/variable/ @n8n-io/ligo
packages/@n8n/cli/src/commands/skill/ @n8n-io/ai

# packages/cli
packages/cli/ @n8n-io/catalysts
packages/cli/src/scaling/ @n8n-io/catalysts
packages/cli/src/concurrency/ @n8n-io/catalysts
packages/cli/src/execution-lifecycle/ @n8n-io/catalysts
packages/cli/src/executions/ @n8n-io/catalysts
packages/cli/src/task-runners/ @n8n-io/catalysts
packages/cli/src/webhooks/ @n8n-io/catalysts
packages/cli/src/push/ @n8n-io/catalysts
packages/cli/src/commands/ @n8n-io/catalysts
packages/cli/src/config/ @n8n-io/catalysts
packages/cli/src/eventbus/ @n8n-io/catalysts
packages/cli/src/events/ @n8n-io/catalysts
packages/cli/src/security-audit/ @n8n-io/catalysts
packages/cli/src/modules/workflow-index/ @n8n-io/catalysts
packages/cli/src/modules/breaking-changes/ @n8n-io/catalysts
packages/cli/src/modules/otel/ @n8n-io/ligo

packages/cli/src/auth/ @n8n-io/iam
packages/cli/src/credentials/ @n8n-io/iam
packages/cli/src/mfa/ @n8n-io/iam
packages/cli/src/oauth/ @n8n-io/iam
packages/cli/src/permissions.ee/ @n8n-io/iam
packages/cli/src/sso.ee/ @n8n-io/iam
packages/cli/src/user-management/ @n8n-io/iam
packages/cli/src/license/ @n8n-io/iam
packages/cli/src/modules/ldap.ee/ @n8n-io/iam
packages/cli/src/modules/log-streaming.ee/ @n8n-io/iam
packages/cli/src/modules/sso-oidc/ @n8n-io/iam
packages/cli/src/modules/sso-saml/ @n8n-io/iam
packages/cli/src/modules/provisioning.ee/ @n8n-io/iam
packages/cli/src/modules/dynamic-credentials.ee/ @n8n-io/iam
packages/cli/src/modules/redaction/ @n8n-io/iam
packages/cli/src/modules/instance-registry/ @n8n-io/iam
packages/cli/src/modules/token-exchange/ @n8n-io/iam

packages/cli/src/environments.ee/ @n8n-io/ligo
packages/cli/src/public-api/ @n8n-io/ligo
packages/cli/src/modules/source-control.ee/ @n8n-io/ligo
packages/cli/src/modules/external-secrets.ee/ @n8n-io/ligo
packages/cli/src/modules/insights/ @n8n-io/ligo

packages/cli/src/collaboration/ @n8n-io/catalysts
packages/cli/src/binary-data/ @n8n-io/catalysts
packages/cli/src/posthog/ @n8n-io/adore
packages/cli/src/modules/data-table/ @n8n-io/adore

packages/cli/src/evaluation.ee/ @n8n-io/ai
packages/cli/src/chat/ @n8n-io/ai
packages/cli/src/tool-generation/ @n8n-io/ai
packages/cli/src/modules/workflow-builder/ @n8n-io/ai
packages/cli/src/modules/mcp/ @n8n-io/ai
packages/cli/src/modules/quick-connect/ @n8n-io/ai
packages/cli/src/modules/chat-hub/ @n8n-io/ai
packages/cli/src/modules/instance-ai/ @n8n-io/instance-ai

packages/cli/src/modules/community-packages/ @n8n-io/nodes

# CLI controllers
packages/cli/src/controllers/auth.controller.ts @n8n-io/iam
packages/cli/src/controllers/invitation.controller.ts @n8n-io/iam
packages/cli/src/controllers/me.controller.ts @n8n-io/iam
packages/cli/src/controllers/mfa.controller.ts @n8n-io/iam
packages/cli/src/controllers/owner.controller.ts @n8n-io/iam
packages/cli/src/controllers/password-reset.controller.ts @n8n-io/iam
packages/cli/src/controllers/role.controller.ts @n8n-io/iam
packages/cli/src/controllers/users.controller.ts @n8n-io/iam
packages/cli/src/controllers/user-settings.controller.ts @n8n-io/iam
packages/cli/src/controllers/api-keys.controller.ts @n8n-io/iam
packages/cli/src/controllers/security-settings.controller.ts @n8n-io/iam
packages/cli/src/controllers/oauth/ @n8n-io/iam
packages/cli/src/controllers/ai.controller.ts @n8n-io/ai
packages/cli/src/controllers/annotation-tags.controller.ee.ts @n8n-io/ai
packages/cli/src/controllers/cta.controller.ts @n8n-io/adore
packages/cli/src/controllers/folder.controller.ts @n8n-io/adore
packages/cli/src/controllers/tags.controller.ts @n8n-io/adore
packages/cli/src/controllers/binary-data.controller.ts @n8n-io/adore
packages/cli/src/controllers/dynamic-templates.controller.ts @n8n-io/adore
packages/cli/src/controllers/posthog.controller.ts @n8n-io/adore
packages/cli/src/controllers/translation.controller.ts @n8n-io/adore
packages/cli/src/controllers/project.controller.ts @n8n-io/ligo
packages/cli/src/controllers/workflow-statistics.controller.ts @n8n-io/ligo
packages/cli/src/controllers/node-types.controller.ts @n8n-io/nodes
packages/cli/src/controllers/dynamic-node-parameters.controller.ts @n8n-io/nodes
packages/cli/src/controllers/e2e.controller.ts @n8n-io/qa-dx

# CLI services
packages/cli/src/services/jwt.service.ts @n8n-io/iam
packages/cli/src/services/user.service.ts @n8n-io/iam
packages/cli/src/services/role.service.ts @n8n-io/iam
packages/cli/src/services/role-cache.service.ts @n8n-io/iam
packages/cli/src/services/password.utility.ts @n8n-io/iam
packages/cli/src/services/public-api-key.service.ts @n8n-io/iam
packages/cli/src/services/security-settings.service.ts @n8n-io/iam
packages/cli/src/services/ssrf/ @n8n-io/catalysts
packages/cli/src/services/static-auth-service.ts @n8n-io/iam
packages/cli/src/services/access.service.ts @n8n-io/iam
packages/cli/src/services/ai.service.ts @n8n-io/ai
packages/cli/src/services/ai-usage.service.ts @n8n-io/ai
packages/cli/src/services/ai-workflow-builder.service.ts @n8n-io/ai
packages/cli/src/services/annotation-tag.service.ee.ts @n8n-io/ai
packages/cli/src/services/folder.service.ts @n8n-io/adore
packages/cli/src/services/tag.service.ts @n8n-io/adore
packages/cli/src/services/cta.service.ts @n8n-io/adore
packages/cli/src/services/dynamic-templates.service.ts @n8n-io/adore
packages/cli/src/services/frontend.service.ts @n8n-io/adore
packages/cli/src/services/banner.service.ts @n8n-io/adore
packages/cli/src/services/project.service.ee.ts @n8n-io/ligo
packages/cli/src/services/workflow-statistics.service.ts @n8n-io/ligo
packages/cli/src/services/export.service.ts @n8n-io/ligo
packages/cli/src/services/import.service.ts @n8n-io/ligo
packages/cli/src/services/ownership.service.ts @n8n-io/ligo
packages/cli/src/services/dynamic-node-parameters.service.ts @n8n-io/nodes

# Adore

packages/frontend/editor-ui/ @n8n-io/frontend
packages/frontend/editor-ui/src/features/ai/ @n8n-io/ai
packages/frontend/editor-ui/src/features/credentials/ @n8n-io/iam
packages/frontend/editor-ui/src/features/execution/ @n8n-io/ligo
packages/frontend/editor-ui/src/features/project-roles/ @n8n-io/iam
packages/frontend/editor-ui/src/features/integrations/ @n8n-io/nodes

packages/frontend/@n8n/design-system/ @n8n-io/design
packages/frontend/@n8n/stores/ @n8n-io/frontend
packages/frontend/@n8n/composables/ @n8n-io/frontend
packages/frontend/@n8n/rest-api-client/ @n8n-io/frontend
packages/frontend/@n8n/storybook/ @n8n-io/design
packages/frontend/@n8n/i18n/ @n8n-io/frontend
packages/@n8n/stylelint-config/ @n8n-io/qa-dx

# AI

packages/@n8n/instance-ai/ @n8n-io/instance-ai
packages/@n8n/nodes-langchain/ @n8n-io/ai
packages/@n8n/ai-utilities/ @n8n-io/ai
packages/@n8n/ai-node-sdk/ @n8n-io/ai
packages/@n8n/ai-workflow-builder.ee/ @n8n-io/ai
packages/@n8n/agents/ @n8n-io/ai
packages/frontend/@n8n/chat/ @n8n-io/ai

# Chat

packages/@n8n/chat-hub/ @n8n-io/ai

# Nodes

packages/@n8n/codemirror-lang/ @n8n-io/nodes
packages/@n8n/codemirror-lang-html/ @n8n-io/nodes
packages/@n8n/codemirror-lang-sql/ @n8n-io/nodes
packages/nodes-base/ @n8n-io/nodes
packages/@n8n/decorators/ @n8n-io/catalysts
packages/node-dev/ @n8n-io/nodes
packages/@n8n/create-node/ @n8n-io/nodes
packages/@n8n/node-cli/ @n8n-io/nodes
packages/@n8n/imap/ @n8n-io/iam
packages/@n8n/syslog-client/ @n8n-io/iam
packages/@n8n/scan-community-package/ @n8n-io/nodes
packages/@n8n/eslint-plugin-community-nodes/ @n8n-io/nodes
packages/@n8n/computer-use/ @n8n-io/nodes
packages/@n8n/local-gateway/ @n8n-io/nodes
packages/@n8n/mcp-browser/ @n8n-io/nodes
packages/@n8n/mcp-browser-extension/ @n8n-io/nodes

# IAM

packages/@n8n/permissions/ @n8n-io/iam
packages/@n8n/client-oauth2/ @n8n-io/iam

# LiGo

packages/extensions/insights/ @n8n-io/ligo

# CI/CD

.github/ @n8n-io/qa-dx
docker/ @n8n-io/qa-dx

# QA

packages/testing/ @n8n-io/qa-dx
packages/@n8n/benchmark/ @n8n-io/qa-dx
packages/@n8n/vitest-config/ @n8n-io/qa-dx
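The deleted OWNERS file documents its core invariant in its header comment: last-match-wins, so specific rules must come after general catch-alls. A small sketch of that resolution behavior (simplified prefix matching for illustration, not GitHub's full gitignore-style CODEOWNERS glob semantics):

```javascript
// Simplified CODEOWNERS-style resolution: the LAST matching rule wins,
// which is why specific rules must come AFTER general ones. Rules here
// mirror three entries from the file above; matching is plain prefix
// matching, a simplification of the real glob rules.
const rules = [
	['*', '@n8n-io/catalysts'], // default catch-all
	['packages/cli/', '@n8n-io/catalysts'],
	['packages/cli/src/auth/', '@n8n-io/iam'],
];

function resolveOwner(path) {
	let owner;
	for (const [pattern, team] of rules) {
		// Later matches overwrite earlier ones: last match wins.
		if (pattern === '*' || path.startsWith(pattern)) owner = team;
	}
	return owner;
}
```

If the `packages/cli/src/auth/` rule were listed before the `packages/cli/` rule, the general rule would silently shadow it, which is exactly the ordering mistake the header comment warns about.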
4 .github/WORKFLOWS.md vendored

@@ -487,7 +487,7 @@ Team ownership mappings in `CODEOWNERS`:
| `ubuntu-latest` | 2 | Simple jobs, fork PR E2E |
| `blacksmith-2vcpu-ubuntu-2204` | 2 | Standard builds, E2E shards |
| `blacksmith-4vcpu-ubuntu-2204` | 4 | Unit tests, typecheck, lint |
| `blacksmith-8vcpu-ubuntu-2204` | 8 | Heavy parallel workloads |
| `blacksmith-8vcpu-ubuntu-2204` | 8 | E2E coverage (weekly) |
| `blacksmith-4vcpu-ubuntu-2204-arm` | 4 | ARM64 Docker builds |

### Selection Guidelines

@@ -500,7 +500,7 @@ Team ownership mappings in `CODEOWNERS`:

**`blacksmith-4vcpu-ubuntu-2204`** - Unit tests (parallelized), linting (parallel file processing), typechecking (CPU-intensive), E2E test shards

**`blacksmith-8vcpu-ubuntu-2204`** - Heavy parallel workloads
**`blacksmith-8vcpu-ubuntu-2204`** - Heavy parallel workloads, full E2E coverage runs

### Runner Provider Toggle
@@ -1,10 +1,6 @@
import { describe, it, before, after } from 'node:test';
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { execFileSync } from 'node:child_process';
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { matchGlob, parseFilters, evaluateFilter, runValidate, getChangedFiles, getMergeBase } from '../ci-filter.mjs';
import { matchGlob, parseFilters, evaluateFilter, runValidate } from '../ci-filter.mjs';

// --- matchGlob ---

@@ -176,70 +172,6 @@ describe('evaluateFilter', () => {
	});
});

// --- getChangedFiles + getMergeBase (integration, exercises real git) ---

describe('getChangedFiles', () => {
	const repoDir = mkdtempSync(join(tmpdir(), 'ci-filter-'));
	const remoteDir = mkdtempSync(join(tmpdir(), 'ci-filter-remote-'));
	const originalCwd = process.cwd();
	const git = (args: string[], cwd: string = repoDir) =>
		execFileSync('git', args, { cwd, stdio: 'pipe' }).toString().trim();

	before(() => {
		// Bare remote so the action's `git fetch origin <ref>` works
		execFileSync('git', ['init', '--bare', '-b', 'main', remoteDir], { stdio: 'pipe' });
		git(['init', '-b', 'main'], repoDir);
		git(['config', 'user.email', 'test@test.local']);
		git(['config', 'user.name', 'test']);
		git(['remote', 'add', 'origin', remoteDir]);

		// Common ancestor commit
		writeFileSync(join(repoDir, 'shared.ts'), 'shared\n');
		git(['add', '.']);
		git(['commit', '-m', 'root']);
		git(['push', 'origin', 'main']);

		// PR branches off main, adds a file
		git(['checkout', '-b', 'pr-branch']);
		writeFileSync(join(repoDir, 'pr-only.ts'), 'pr\n');
		git(['add', '.']);
		git(['commit', '-m', 'PR change']);

		// Master drifts forward, modifying shared.ts (the pre-fix bug surface)
		git(['checkout', 'main']);
		writeFileSync(join(repoDir, 'shared.ts'), 'shared\ndrift-from-master\n');
		git(['commit', '-am', 'master moves']);
		git(['push', 'origin', 'main']);

		// Sit on the PR branch as if running CI
		git(['checkout', 'pr-branch']);
		process.chdir(repoDir);
	});

	after(() => {
		process.chdir(originalCwd);
		rmSync(repoDir, { recursive: true, force: true });
		rmSync(remoteDir, { recursive: true, force: true });
	});

	it('returns only PR-introduced files (master drift does not pollute)', () => {
		const changed = getChangedFiles('main');
		assert.deepEqual(changed, ['pr-only.ts']);
	});

	it('getMergeBase returns the common ancestor commit', () => {
		const mergeBase = getMergeBase();
		assert.match(mergeBase, /^[a-f0-9]{40}$/);
		const expected = git(['merge-base', 'FETCH_HEAD', 'HEAD']);
		assert.equal(mergeBase, expected);
	});

	it('rejects unsafe base refs', () => {
		assert.throws(() => getChangedFiles('main; rm -rf /'), /Unsafe/);
		assert.throws(() => getChangedFiles('main$evil'), /Unsafe/);
	});
});

// --- runValidate ---

describe('runValidate', () => {
3 .github/actions/ci-filter/action.yml vendored

@@ -30,9 +30,6 @@ outputs:
  base-ref:
    description: 'Resolved base ref used for the diff (filter mode only)'
    value: ${{ steps.run.outputs.base-ref }}
  merge-base:
    description: 'Merge-base SHA between FETCH_HEAD and HEAD (filter mode only)'
    value: ${{ steps.run.outputs.merge-base }}

runs:
  using: 'composite'
23 .github/actions/ci-filter/ci-filter.mjs vendored

@@ -98,30 +98,14 @@ export function getChangedFiles(baseRef) {
	if (!SAFE_REF.test(baseRef)) {
		throw new Error(`Unsafe base ref: "${baseRef}"`);
	}
	// Deepen the fetch so the merge base is reachable from this shallow clone.
	// A 2-dot diff (FETCH_HEAD HEAD) reports anything that differs in either
	// direction, so files added to base-branch after the PR diverged show up as
	// "changed" — spuriously triggering path-filtered jobs. The merge base
	// scopes the diff to PR-only changes.
	execSync(`git fetch --no-tags --prune --deepen=200 origin ${baseRef}`, { stdio: 'pipe' });
	const output = execSync('git diff --name-only --merge-base FETCH_HEAD HEAD', {
		encoding: 'utf-8',
	});
	execSync(`git fetch --depth=1 origin ${baseRef}`, { stdio: 'pipe' });
	const output = execSync('git diff --name-only FETCH_HEAD HEAD', { encoding: 'utf-8' });
	return output
		.split('\n')
		.map((f) => f.trim())
		.filter(Boolean);
}

/**
 * Resolve the merge-base SHA between FETCH_HEAD and HEAD.
 * Used to give downstream tools (e.g. janitor's AST diff) a stable, PR-only
 * comparison point that doesn't drift when the base branch moves forward.
 */
export function getMergeBase() {
	return execSync('git merge-base FETCH_HEAD HEAD', { encoding: 'utf-8' }).trim();
}

// --- Filter evaluation ---

/**

@@ -171,9 +155,7 @@ export function runFilter() {

	const filters = parseFilters(filtersInput);
	const changedFiles = getChangedFiles(baseRef);
	const mergeBase = getMergeBase();

	console.log(`Merge base: ${mergeBase}`);
	console.log(`Changed files (${changedFiles.length}):`);
	for (const f of changedFiles) {
		console.log(`  ${f}`);

@@ -190,7 +172,6 @@ export function runFilter() {
	setOutput('results', JSON.stringify(results));
	setOutput('changed-files', changedFiles.join('\n'));
	setOutput('base-ref', baseRef);
	setOutput('merge-base', mergeBase);
}

// --- Mode: validate ---
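`getChangedFiles` validates the base ref before interpolating it into a `git fetch` command, which is what the "rejects unsafe base refs" tests exercise. The actual `SAFE_REF` pattern is not shown in this diff; the regex below is an assumed stand-in that illustrates the validation step:

```javascript
// Assumed stand-in for the action's SAFE_REF constant (the real pattern is
// not visible in the diff): allow typical branch-name characters only, so
// shell metacharacters can never reach the interpolated git command.
const SAFE_REF = /^[A-Za-z0-9._\/-]+$/;

function assertSafeRef(baseRef) {
	if (!SAFE_REF.test(baseRef)) {
		throw new Error(`Unsafe base ref: "${baseRef}"`);
	}
	return baseRef;
}
```

Validating the ref up front means the later `execSync(\`git fetch ... ${baseRef}\`)` call never sees `;`, `$`, or other characters a shell could interpret.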
11 .github/actions/setup-nodejs/action.yml vendored

@@ -45,13 +45,6 @@ runs:
        mkdir -p "$PNPM_STORE_PATH"
      fi

  - name: Configure SafeChain
    shell: bash
    run: |
      # SafeChain only reads configs from this directory https://github.com/AikidoSec/safe-chain#configuration-options-1
      mkdir -p "$HOME/.safe-chain"
      cp "${{ github.action_path }}/safe-chain.config.json" "$HOME/.safe-chain/config.json"

  - name: Install Aikido SafeChain
    run: |
      VERSION="1.5.1"

@@ -61,6 +54,10 @@ runs:
      echo "${EXPECTED_SHA256} install-safe-chain.sh" | sha256sum -c -
      sh install-safe-chain.sh --ci
      rm install-safe-chain.sh
      # Exclude first-party @n8n/* packages from SafeChain's minimum-package-age
      # filter so freshly-published versions stay visible to every subsequent
      # step in the job (install, build, and publish).
      echo "SAFE_CHAIN_MINIMUM_PACKAGE_AGE_EXCLUSIONS=@n8n/*,n8n,n8n-containers,n8n-core,n8n-editor-ui,n8n-node-dev,n8n-nodes-base,n8n-playwright,n8n-workflow" >> "$GITHUB_ENV"
    shell: bash

  - name: Install Dependencies
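The exclusion list written to `SAFE_CHAIN_MINIMUM_PACKAGE_AGE_EXCLUSIONS` mixes exact package names with a `@n8n/*` scope wildcard. A sketch of how such a comma-separated exclusion list could be applied (illustrative only; this is not SafeChain's actual matcher):

```javascript
// Illustrative matcher for the exclusion list above (NOT SafeChain's real
// implementation): entries are either exact package names or a trailing-*
// pattern such as "@n8n/*" that matches everything in the scope.
const exclusions =
	'@n8n/*,n8n,n8n-containers,n8n-core,n8n-editor-ui,n8n-node-dev,n8n-nodes-base,n8n-playwright,n8n-workflow'.split(',');

function isExcluded(pkgName) {
	return exclusions.some((pattern) =>
		pattern.endsWith('*')
			? pkgName.startsWith(pattern.slice(0, -1)) // wildcard: prefix match
			: pkgName === pattern, // otherwise: exact match
	);
}
```

Excluded packages skip the minimum-package-age check, so a version of `@n8n/config` published minutes earlier is still installable by later steps in the same job.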
@@ -1,16 +0,0 @@
{
  "npm": {
    "minimumPackageAgeExclusions": [
      "@n8n/*",
      "@n8n_io/*",
      "n8n",
      "n8n-containers",
      "n8n-core",
      "n8n-editor-ui",
      "n8n-node-dev",
      "n8n-nodes-base",
      "n8n-playwright",
      "n8n-workflow"
    ]
  }
}
367 .github/scripts/bump-versions.mjs vendored

@@ -11,7 +11,7 @@ const exec = promisify(child_process.exec);
/**
 * @param {string | semver.SemVer} currentVersion
 */
export function generateExperimentalVersion(currentVersion) {
function generateExperimentalVersion(currentVersion) {
	const parsed = semver.parse(currentVersion);
	if (!parsed) throw new Error(`Invalid version: ${currentVersion}`);

@@ -28,31 +28,84 @@ export function generateExperimentalVersion(currentVersion) {
	return `${parsed.major}.${parsed.minor}.${parsed.patch}-exp.0`;
}
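The visible branch of `generateExperimentalVersion` falls back to appending `-exp.0` to the version core. A dependency-free sketch of just that fallback (the real script uses the `semver` package, and the elided branches that handle an existing `-exp.N` prerelease are not reproduced here):

```javascript
// Dependency-free sketch of the visible fallback branch only: parse the
// major.minor.patch core and append the first experimental prerelease tag.
// The real script uses semver.parse and also handles versions that already
// carry an "-exp.N" prerelease (that branch is elided in the diff above).
function generateExperimentalFallback(currentVersion) {
	const match = /^(\d+)\.(\d+)\.(\d+)/.exec(currentVersion);
	if (!match) throw new Error(`Invalid version: ${currentVersion}`);
	const [, major, minor, patch] = match;
	return `${major}.${minor}.${patch}-exp.0`;
}
```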

/**
 * @param {{ pnpm?: { overrides?: Record<string, string> }, overrides?: Record<string, string> }} pkg
 * @returns {Record<string, string>}
 */
export function getOverrides(pkg) {
	return { ...pkg.pnpm?.overrides, ...pkg.overrides };
const rootDir = process.cwd();

const releaseType = /** @type { import('semver').ReleaseType | "experimental" } */ (
	process.env.RELEASE_TYPE
);
assert.match(releaseType, /^(patch|minor|major|experimental|premajor)$/, 'Invalid RELEASE_TYPE');

// TODO: if releaseType is `auto` determine release type based on the changelog

const lastTag = (await exec('git describe --tags --match "n8n@*" --abbrev=0')).stdout.trim();
const packages = JSON.parse(
	(
		await exec(
			`pnpm ls -r --only-projects --json | jq -r '[.[] | { name: .name, version: .version, path: .path, private: .private}]'`,
		)
	).stdout,
);

const packageMap = {};
for (let { name, path, version, private: isPrivate } of packages) {
	if (isPrivate && path !== rootDir) {
		continue;
	}
	if (path === rootDir) {
		name = 'monorepo-root';
	}

	const isDirty = await exec(`git diff --quiet HEAD ${lastTag} -- ${path}`)
		.then(() => false)
		.catch((error) => true);

	packageMap[name] = { path, isDirty, version };
}

/**
 * @param {string} content
 * @returns {Record<string, unknown>}
 */
export function parseWorkspaceYaml(content) {
assert.ok(
	Object.values(packageMap).some(({ isDirty }) => isDirty),
	'No changes found since the last release',
);

// Propagate isDirty transitively: if a package's dependency will be bumped,
// that package also needs a bump (e.g. design-system → editor-ui → cli).

// Detect root-level changes that affect resolved dep versions without touching individual
// package.json files: pnpm.overrides (applies to all specifiers)
// and pnpm-workspace.yaml catalog entries (applies only to deps using a "catalog:…" specifier).

const rootPkgJson = JSON.parse(await readFile(resolve(rootDir, 'package.json'), 'utf-8'));
const rootPkgJsonAtTag = await exec(`git show ${lastTag}:package.json`)
	.then(({ stdout }) => JSON.parse(stdout))
	.catch(() => ({}));

const getOverrides = (pkg) => ({ ...pkg.pnpm?.overrides, ...pkg.overrides });

const currentOverrides = getOverrides(rootPkgJson);
const previousOverrides = getOverrides(rootPkgJsonAtTag);

const changedOverrides = new Set(
	Object.keys({ ...currentOverrides, ...previousOverrides }).filter(
		(k) => currentOverrides[k] !== previousOverrides[k],
	),
);

const parseWorkspaceYaml = (content) => {
	try {
		return /** @type {Record<string, unknown>} */ (parse(content) ?? {});
	} catch {
		return {};
	}
}

/**
 * @param {Record<string, unknown>} ws
 * @returns {Map<string, Record<string, string>>}
 */
export function getCatalogs(ws) {
};
const workspaceYaml = parseWorkspaceYaml(
	await readFile(resolve(rootDir, 'pnpm-workspace.yaml'), 'utf-8').catch(() => ''),
);
const workspaceYamlAtTag = parseWorkspaceYaml(
	await exec(`git show ${lastTag}:pnpm-workspace.yaml`)
		.then(({ stdout }) => stdout)
		.catch(() => ''),
);
const getCatalogs = (ws) => {
	const result = new Map();
	if (ws.catalog) {
		result.set('default', /** @type {Record<string,string>} */ (ws.catalog));
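The override-diffing in this hunk reduces to a symmetric key comparison between the current root package.json and the one at the last release tag. The same logic, extracted into a self-contained sketch for illustration:

```javascript
// Same shape as the script's override diffing: merge pnpm.overrides with
// top-level overrides, then collect keys whose value differs between the
// current root package.json and the one at the last release tag. Keys that
// were added or removed also count as changed (one side is undefined).
const getOverrides = (pkg) => ({ ...pkg.pnpm?.overrides, ...pkg.overrides });

function computeChangedOverrides(currentPkg, previousPkg) {
	const current = getOverrides(currentPkg);
	const previous = getOverrides(previousPkg);
	return new Set(
		Object.keys({ ...current, ...previous }).filter((k) => current[k] !== previous[k]),
	);
}
```

This is what later lets the script mark a package dirty when one of its dependencies was pinned differently at the root, even though the package's own package.json never changed.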
@@ -63,232 +116,98 @@ export function getCatalogs(ws) {
	}

	return result;
};
// changedCatalogEntries: Map<catalogName, Set<depName>>
const currentCatalogs = getCatalogs(workspaceYaml);
const previousCatalogs = getCatalogs(workspaceYamlAtTag);
const changedCatalogEntries = new Map();
for (const catalogName of new Set([...currentCatalogs.keys(), ...previousCatalogs.keys()])) {
	const current = currentCatalogs.get(catalogName) ?? {};
	const previous = previousCatalogs.get(catalogName) ?? {};
	const changedDeps = new Set(
		Object.keys({ ...current, ...previous }).filter((dep) => current[dep] !== previous[dep]),
	);
	if (changedDeps.size > 0) {
		changedCatalogEntries.set(catalogName, changedDeps);
	}
}

/**
 * @param {Record<string, string>} currentOverrides
 * @param {Record<string, string>} previousOverrides
 * @returns {Set<string>}
 */
export function computeChangedOverrides(currentOverrides, previousOverrides) {
	return new Set(
		Object.keys({ ...currentOverrides, ...previousOverrides }).filter(
			(k) => currentOverrides[k] !== previousOverrides[k],
		),
// Store full dep objects (with specifiers) so we can inspect "catalog:…" values below.
const depsByPackage = {};
for (const packageName in packageMap) {
	const packageFile = resolve(packageMap[packageName].path, 'package.json');
	const packageJson = JSON.parse(await readFile(packageFile, 'utf-8'));
	depsByPackage[packageName] = /** @type {Record<string,string>} */ (
		packageJson.dependencies ?? {}
	);
}

/**
 * @param {Map<string, Record<string, string>>} currentCatalogs
 * @param {Map<string, Record<string, string>>} previousCatalogs
 * @returns {Map<string, Set<string>>}
 */
export function computeChangedCatalogEntries(currentCatalogs, previousCatalogs) {
	const changedCatalogEntries = new Map();
	for (const catalogName of new Set([...currentCatalogs.keys(), ...previousCatalogs.keys()])) {
		const current = currentCatalogs.get(catalogName) ?? {};
		const previous = previousCatalogs.get(catalogName) ?? {};
		const changedDeps = new Set(
			Object.keys({ ...current, ...previous }).filter((dep) => current[dep] !== previous[dep]),
		);
		if (changedDeps.size > 0) {
			changedCatalogEntries.set(catalogName, changedDeps);
// Mark packages dirty if any dep had a root-level override or catalog version change.
for (const [packageName, deps] of Object.entries(depsByPackage)) {
	if (packageMap[packageName].isDirty) continue;
	for (const [dep, specifier] of Object.entries(deps)) {
		if (changedOverrides.has(dep)) {
			packageMap[packageName].isDirty = true;
			break;
		}
	}
	return changedCatalogEntries;
}

/**
 * Mark packages as dirty if any dep had a root-level override or catalog version change.
 * Mutates packageMap in place.
 *
 * @param {Record<string, { isDirty: boolean }>} packageMap
 * @param {Record<string, Record<string, string>>} depsByPackage
|
||||
* @param {Set<string>} changedOverrides
|
||||
* @param {Map<string, Set<string>>} changedCatalogEntries
|
||||
*/
|
||||
export function markDirtyByRootChanges(
|
||||
packageMap,
|
||||
depsByPackage,
|
||||
changedOverrides,
|
||||
changedCatalogEntries,
|
||||
) {
|
||||
for (const [packageName, deps] of Object.entries(depsByPackage)) {
|
||||
if (packageMap[packageName].isDirty) continue;
|
||||
for (const [dep, specifier] of Object.entries(deps)) {
|
||||
if (changedOverrides.has(dep)) {
|
||||
if (typeof specifier === 'string' && specifier.startsWith('catalog:')) {
|
||||
const catalogName = specifier === 'catalog:' ? 'default' : specifier.slice(8);
|
||||
if (changedCatalogEntries.get(catalogName)?.has(dep)) {
|
||||
packageMap[packageName].isDirty = true;
|
||||
break;
|
||||
}
|
||||
if (typeof specifier === 'string' && specifier.startsWith('catalog:')) {
|
||||
const catalogName = specifier === 'catalog:' ? 'default' : specifier.slice(8);
|
||||
if (changedCatalogEntries.get(catalogName)?.has(dep)) {
|
||||
packageMap[packageName].isDirty = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
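The `catalog:` specifier resolution used in `markDirtyByRootChanges` above can be sketched in isolation. `catalogNameOf` is a hypothetical helper introduced only for illustration; it is not an export of `bump-versions.mjs`:

```javascript
// Illustrative sketch (not part of the script): a bare "catalog:" specifier
// refers to pnpm's default catalog, while "catalog:<name>" refers to a named
// catalog from pnpm-workspace.yaml; anything else is not a catalog specifier.
function catalogNameOf(specifier) {
	if (typeof specifier !== 'string' || !specifier.startsWith('catalog:')) return null;
	return specifier === 'catalog:' ? 'default' : specifier.slice('catalog:'.length);
}

console.log(catalogNameOf('catalog:'));        // → 'default'
console.log(catalogNameOf('catalog:react18')); // → 'react18'
console.log(catalogNameOf('^4.17.21'));        // → null
```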
/**
 * Propagate isDirty transitively: if a package's dependency will be bumped,
 * that package also needs a bump. Mutates packageMap in place.
 *
 * @param {Record<string, { isDirty: boolean }>} packageMap
 * @param {Record<string, Record<string, string>>} depsByPackage
 */
export function propagateDirtyTransitively(packageMap, depsByPackage) {
	let changed = true;
	while (changed) {
		changed = false;
		for (const packageName in packageMap) {
			if (packageMap[packageName].isDirty) continue;
			if (Object.keys(depsByPackage[packageName]).some((dep) => packageMap[dep]?.isDirty)) {
				packageMap[packageName].isDirty = true;
				changed = true;
			}
		}
	}
}

/**
 * @param {string} version
 * @param {import('semver').ReleaseType | 'experimental'} releaseType
 * @returns {string}
 */
export function computeNewVersion(version, releaseType) {
	switch (releaseType) {
		case 'experimental':
			return generateExperimentalVersion(version);
		case 'premajor':
			return /** @type {string} */ (
				semver.inc(
					version,
					version.includes('-rc.') ? 'prerelease' : 'premajor',
					undefined,
					'rc',
				)
			);
		default:
			return /** @type {string} */ (semver.inc(version, releaseType));
	}
}

async function bumpVersions() {
	const rootDir = process.cwd();

	const releaseType = /** @type { import('semver').ReleaseType | "experimental" } */ (
		process.env.RELEASE_TYPE
	);
	assert.match(releaseType, /^(patch|minor|major|experimental|premajor)$/, 'Invalid RELEASE_TYPE');

	// TODO: if releaseType is `auto` determine release type based on the changelog

	const lastTag = (await exec('git describe --tags --match "n8n@*" --abbrev=0')).stdout.trim();
	const packages = JSON.parse(
		(
			await exec(
				`pnpm ls -r --only-projects --json | jq -r '[.[] | { name: .name, version: .version, path: .path, private: .private}]'`,
			)
		).stdout,
	);

	/** @type {Record<string, { path: string, isDirty: boolean, version: string, nextVersion?: string }>} */
	const packageMap = {};
	for (let { name, path, version, private: isPrivate } of packages) {
		if (isPrivate && path !== rootDir) {
			continue;
		}
		if (path === rootDir) {
			name = 'monorepo-root';
		}

		const isDirty = await exec(`git diff --quiet HEAD ${lastTag} -- ${path}`)
			.then(() => false)
			.catch(() => true);

		packageMap[name] = { path, isDirty, version };
	}

	assert.ok(
		Object.values(packageMap).some(({ isDirty }) => isDirty),
		'No changes found since the last release',
	);

	// Detect root-level changes that affect resolved dep versions without touching individual
	// package.json files: pnpm.overrides (applies to all specifiers)
	// and pnpm-workspace.yaml catalog entries (applies only to deps using a "catalog:…" specifier).

	const rootPkgJson = JSON.parse(await readFile(resolve(rootDir, 'package.json'), 'utf-8'));
	const rootPkgJsonAtTag = await exec(`git show ${lastTag}:package.json`)
		.then(({ stdout }) => JSON.parse(stdout))
		.catch(() => ({}));

	const changedOverrides = computeChangedOverrides(
		getOverrides(rootPkgJson),
		getOverrides(rootPkgJsonAtTag),
	);

	const workspaceYaml = parseWorkspaceYaml(
		await readFile(resolve(rootDir, 'pnpm-workspace.yaml'), 'utf-8').catch(() => ''),
	);
	const workspaceYamlAtTag = parseWorkspaceYaml(
		await exec(`git show ${lastTag}:pnpm-workspace.yaml`)
			.then(({ stdout }) => stdout)
			.catch(() => ''),
	);
	const changedCatalogEntries = computeChangedCatalogEntries(
		getCatalogs(workspaceYaml),
		getCatalogs(workspaceYamlAtTag),
	);

	// Store full dep objects (with specifiers) so we can inspect "catalog:…" values below.
	/** @type {Record<string, Record<string, string>>} */
	const depsByPackage = {};
	for (const packageName in packageMap) {
		const packageFile = resolve(packageMap[packageName].path, 'package.json');
		const packageJson = JSON.parse(await readFile(packageFile, 'utf-8'));
		depsByPackage[packageName] = /** @type {Record<string,string>} */ (
			packageJson.dependencies ?? {}
		);
	}

	// Mark packages dirty if any dep had a root-level override or catalog version change.
	markDirtyByRootChanges(packageMap, depsByPackage, changedOverrides, changedCatalogEntries);

	propagateDirtyTransitively(packageMap, depsByPackage);

	// Keep the monorepo version up to date with the released version
	packageMap['monorepo-root'].version = packageMap['n8n'].version;

	for (const packageName in packageMap) {
		const { path, version, isDirty } = packageMap[packageName];
		const packageFile = resolve(path, 'package.json');
		const packageJson = JSON.parse(await readFile(packageFile, 'utf-8'));

		const dependencyIsDirty = Object.keys(packageJson.dependencies || {}).some(
			(dependencyName) => packageMap[dependencyName]?.isDirty,
		);

		let newVersion = version;

		if (isDirty || dependencyIsDirty) {
			newVersion = computeNewVersion(version, releaseType);
		}

		packageJson.version = packageMap[packageName].nextVersion = newVersion;

		await writeFile(packageFile, JSON.stringify(packageJson, null, 2) + '\n');
	}

	console.log(packageMap['n8n'].nextVersion);
}

// only run when executed directly, not when imported by tests
if (import.meta.url === `file://${process.argv[1]}`) {
	bumpVersions();
}
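As a quick illustration of the override-diff step in the script above, the logic can be run standalone (the function body is restated here so the snippet is self-contained; the sample dependency names are made up):

```javascript
// Restates computeChangedOverrides from bump-versions.mjs above: a dep counts
// as "changed" if it was added, removed, or its specifier differs between the
// current and previous override maps.
function computeChangedOverrides(currentOverrides, previousOverrides) {
	return new Set(
		Object.keys({ ...currentOverrides, ...previousOverrides }).filter(
			(k) => currentOverrides[k] !== previousOverrides[k],
		),
	);
}

const changed = computeChangedOverrides(
	{ lodash: '^4.0.0', axios: '^1.6.0' },
	{ lodash: '^3.0.0', axios: '^1.6.0' },
);
console.log([...changed]); // → [ 'lodash' ]
```

Because the comparison is plain string inequality, even a cosmetic specifier change (e.g. `^4.0.0` vs `~4.0.0`) marks the dep as changed, which errs on the side of bumping.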
.github/scripts/bump-versions.test.mjs (380 lines, vendored)

@@ -1,380 +0,0 @@
/**
 * Run these tests with:
 *
 * node --test ./.github/scripts/bump-versions.test.mjs
 */

import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import {
	generateExperimentalVersion,
	getOverrides,
	parseWorkspaceYaml,
	getCatalogs,
	computeChangedOverrides,
	computeChangedCatalogEntries,
	markDirtyByRootChanges,
	propagateDirtyTransitively,
	computeNewVersion,
} from './bump-versions.mjs';

describe('generateExperimentalVersion', () => {
	it('creates -exp.0 from a stable version', () => {
		assert.equal(generateExperimentalVersion('1.2.3'), '1.2.3-exp.0');
	});

	it('increments the exp counter when already at exp.0', () => {
		assert.equal(generateExperimentalVersion('1.2.3-exp.0'), '1.2.3-exp.1');
	});

	it('increments the exp counter when already at exp.5', () => {
		assert.equal(generateExperimentalVersion('1.2.3-exp.5'), '1.2.3-exp.6');
	});

	it('creates -exp.0 from a version with a different pre-release tag', () => {
		assert.equal(generateExperimentalVersion('1.2.3-beta.1'), '1.2.3-exp.0');
	});

	it('handles multi-digit version numbers', () => {
		assert.equal(generateExperimentalVersion('10.20.30'), '10.20.30-exp.0');
	});

	it('throws on an invalid version string', () => {
		assert.throws(() => generateExperimentalVersion('not-a-version'), /Invalid version/);
	});
});

describe('getOverrides', () => {
	it('returns empty object when no overrides exist', () => {
		assert.deepEqual(getOverrides({}), {});
	});

	it('returns pnpm.overrides when only pnpm.overrides is set', () => {
		assert.deepEqual(getOverrides({ pnpm: { overrides: { lodash: '^4.0.0' } } }), {
			lodash: '^4.0.0',
		});
	});

	it('returns overrides when only top-level overrides is set', () => {
		assert.deepEqual(getOverrides({ overrides: { lodash: '^4.0.0' } }), { lodash: '^4.0.0' });
	});

	it('merges both fields with top-level overrides taking precedence for the same key', () => {
		assert.deepEqual(
			getOverrides({
				pnpm: { overrides: { lodash: '^3.0.0', underscore: '^1.0.0' } },
				overrides: { lodash: '^4.0.0' },
			}),
			{ lodash: '^4.0.0', underscore: '^1.0.0' },
		);
	});
});

describe('parseWorkspaceYaml', () => {
	it('parses valid YAML into an object', () => {
		assert.deepEqual(parseWorkspaceYaml('catalog:\n lodash: "^4.0.0"'), {
			catalog: { lodash: '^4.0.0' },
		});
	});

	it('returns empty object for an empty string', () => {
		assert.deepEqual(parseWorkspaceYaml(''), {});
	});

	it('returns empty object for invalid YAML', () => {
		assert.deepEqual(parseWorkspaceYaml(': - invalid: [yaml}'), {});
	});
});

describe('getCatalogs', () => {
	it('returns empty map when no catalog or catalogs field exists', () => {
		assert.equal(getCatalogs({}).size, 0);
	});

	it('returns a "default" entry for the top-level catalog field', () => {
		const result = getCatalogs({ catalog: { lodash: '^4.0.0' } });
		assert.equal(result.size, 1);
		assert.deepEqual(result.get('default'), { lodash: '^4.0.0' });
	});

	it('returns named entries from the catalogs field', () => {
		const result = getCatalogs({ catalogs: { react18: { react: '^18.0.0' } } });
		assert.equal(result.size, 1);
		assert.deepEqual(result.get('react18'), { react: '^18.0.0' });
	});

	it('returns both default and named catalog entries when both fields are present', () => {
		const result = getCatalogs({
			catalog: { lodash: '^4.0.0' },
			catalogs: { react18: { react: '^18.0.0' } },
		});
		assert.equal(result.size, 2);
		assert.deepEqual(result.get('default'), { lodash: '^4.0.0' });
		assert.deepEqual(result.get('react18'), { react: '^18.0.0' });
	});
});

describe('computeChangedOverrides', () => {
	it('returns empty set when nothing changed', () => {
		assert.equal(computeChangedOverrides({ lodash: '^4' }, { lodash: '^4' }).size, 0);
	});

	it('detects an added override', () => {
		const result = computeChangedOverrides({ lodash: '^4' }, {});
		assert.ok(result.has('lodash'));
	});

	it('detects a removed override', () => {
		const result = computeChangedOverrides({}, { lodash: '^4' });
		assert.ok(result.has('lodash'));
	});

	it('detects a changed override value', () => {
		const result = computeChangedOverrides({ lodash: '^4' }, { lodash: '^3' });
		assert.ok(result.has('lodash'));
	});

	it('does not include unchanged overrides', () => {
		const result = computeChangedOverrides(
			{ lodash: '^4', underscore: '^1' },
			{ lodash: '^4', underscore: '^1' },
		);
		assert.equal(result.size, 0);
	});

	it('handles mixed changed and unchanged overrides', () => {
		const result = computeChangedOverrides(
			{ lodash: '^4', underscore: '^2' },
			{ lodash: '^4', underscore: '^1' },
		);
		assert.equal(result.size, 1);
		assert.ok(result.has('underscore'));
		assert.ok(!result.has('lodash'));
	});
});

describe('computeChangedCatalogEntries', () => {
	it('returns empty map when nothing changed', () => {
		const current = new Map([['default', { lodash: '^4' }]]);
		const previous = new Map([['default', { lodash: '^4' }]]);
		assert.equal(computeChangedCatalogEntries(current, previous).size, 0);
	});

	it('detects an added dep in a catalog', () => {
		const current = new Map([['default', { lodash: '^4' }]]);
		const previous = new Map([['default', {}]]);
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.get('default')?.has('lodash'));
	});

	it('detects a removed dep from a catalog', () => {
		const current = new Map([['default', {}]]);
		const previous = new Map([['default', { lodash: '^4' }]]);
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.get('default')?.has('lodash'));
	});

	it('detects a changed dep version in a catalog', () => {
		const current = new Map([['default', { lodash: '^4' }]]);
		const previous = new Map([['default', { lodash: '^3' }]]);
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.get('default')?.has('lodash'));
	});

	it('detects changes in a named catalog', () => {
		const current = new Map([['react18', { react: '^18' }]]);
		const previous = new Map([['react18', { react: '^17' }]]);
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.get('react18')?.has('react'));
	});

	it('detects a newly added catalog', () => {
		const current = new Map([['newCatalog', { lodash: '^4' }]]);
		const previous = new Map();
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.get('newCatalog')?.has('lodash'));
	});

	it('detects a removed catalog', () => {
		const current = new Map();
		const previous = new Map([['oldCatalog', { lodash: '^4' }]]);
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.get('oldCatalog')?.has('lodash'));
	});

	it('does not include a catalog that has no changed entries', () => {
		const current = new Map([
			['default', { lodash: '^4' }],
			['react18', { react: '^18' }],
		]);
		const previous = new Map([
			['default', { lodash: '^3' }],
			['react18', { react: '^18' }],
		]);
		const result = computeChangedCatalogEntries(current, previous);
		assert.ok(result.has('default'));
		assert.ok(!result.has('react18'));
	});
});

describe('markDirtyByRootChanges', () => {
	it('marks a package dirty when its dep appears in changedOverrides', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { lodash: '^4' } };
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(['lodash']), new Map());
		assert.ok(packageMap['pkg-a'].isDirty);
	});

	it('skips already-dirty packages', () => {
		const packageMap = { 'pkg-a': { isDirty: true } };
		// No deps, but package is already dirty — should not throw or change state
		const depsByPackage = { 'pkg-a': {} };
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(['lodash']), new Map());
		assert.ok(packageMap['pkg-a'].isDirty);
	});

	it('marks a package dirty when its dep uses "catalog:" (default catalog) and that entry changed', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { lodash: 'catalog:' } };
		const changedCatalogEntries = new Map([['default', new Set(['lodash'])]]);
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
		assert.ok(packageMap['pkg-a'].isDirty);
	});

	it('marks a package dirty when its dep uses "catalog:<name>" and that named catalog entry changed', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { react: 'catalog:react18' } };
		const changedCatalogEntries = new Map([['react18', new Set(['react'])]]);
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
		assert.ok(packageMap['pkg-a'].isDirty);
	});

	it('does not mark a package dirty when none of its deps changed', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { lodash: '^4' } };
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(['underscore']), new Map());
		assert.ok(!packageMap['pkg-a'].isDirty);
	});

	it('does not mark a package dirty when a catalog: dep is in a catalog with no changes', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { lodash: 'catalog:' } };
		const changedCatalogEntries = new Map([['default', new Set(['underscore'])]]);
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
		assert.ok(!packageMap['pkg-a'].isDirty);
	});

	it('does not mark a package dirty when a catalog: dep is in a different catalog than the one that changed', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { react: 'catalog:react18' } };
		const changedCatalogEntries = new Map([['default', new Set(['react'])]]);
		markDirtyByRootChanges(packageMap, depsByPackage, new Set(), changedCatalogEntries);
		assert.ok(!packageMap['pkg-a'].isDirty);
	});
});

describe('propagateDirtyTransitively', () => {
	it('does nothing when no packages are dirty', () => {
		const packageMap = {
			'pkg-a': { isDirty: false },
			'pkg-b': { isDirty: false },
		};
		const depsByPackage = {
			'pkg-a': { 'pkg-b': 'workspace:*' },
			'pkg-b': {},
		};
		propagateDirtyTransitively(packageMap, depsByPackage);
		assert.ok(!packageMap['pkg-a'].isDirty);
		assert.ok(!packageMap['pkg-b'].isDirty);
	});

	it('propagates dirty state one level up the dependency chain', () => {
		const packageMap = {
			'pkg-a': { isDirty: false },
			'pkg-b': { isDirty: true },
		};
		const depsByPackage = {
			'pkg-a': { 'pkg-b': 'workspace:*' },
			'pkg-b': {},
		};
		propagateDirtyTransitively(packageMap, depsByPackage);
		assert.ok(packageMap['pkg-a'].isDirty);
	});

	it('propagates dirty state through multiple levels', () => {
		const packageMap = {
			'pkg-a': { isDirty: false },
			'pkg-b': { isDirty: false },
			'pkg-c': { isDirty: true },
		};
		const depsByPackage = {
			'pkg-a': { 'pkg-b': 'workspace:*' },
			'pkg-b': { 'pkg-c': 'workspace:*' },
			'pkg-c': {},
		};
		propagateDirtyTransitively(packageMap, depsByPackage);
		assert.ok(packageMap['pkg-b'].isDirty, 'pkg-b should be dirty (depends on dirty pkg-c)');
		assert.ok(packageMap['pkg-a'].isDirty, 'pkg-a should be dirty (depends on dirty pkg-b)');
	});

	it('does not mark packages dirty when their deps are external (not in packageMap)', () => {
		const packageMap = { 'pkg-a': { isDirty: false } };
		const depsByPackage = { 'pkg-a': { lodash: '^4' } };
		propagateDirtyTransitively(packageMap, depsByPackage);
		assert.ok(!packageMap['pkg-a'].isDirty);
	});

	it('handles diamond dependency graphs without infinite loops', () => {
		// pkg-a depends on pkg-b and pkg-c; both depend on pkg-d (dirty)
		const packageMap = {
			'pkg-a': { isDirty: false },
			'pkg-b': { isDirty: false },
			'pkg-c': { isDirty: false },
			'pkg-d': { isDirty: true },
		};
		const depsByPackage = {
			'pkg-a': { 'pkg-b': 'workspace:*', 'pkg-c': 'workspace:*' },
			'pkg-b': { 'pkg-d': 'workspace:*' },
			'pkg-c': { 'pkg-d': 'workspace:*' },
			'pkg-d': {},
		};
		propagateDirtyTransitively(packageMap, depsByPackage);
		assert.ok(packageMap['pkg-b'].isDirty);
		assert.ok(packageMap['pkg-c'].isDirty);
		assert.ok(packageMap['pkg-a'].isDirty);
	});
});

describe('computeNewVersion', () => {
	it('increments patch version', () => {
		assert.equal(computeNewVersion('1.2.3', 'patch'), '1.2.4');
	});

	it('increments minor version (resets patch)', () => {
		assert.equal(computeNewVersion('1.2.3', 'minor'), '1.3.0');
	});

	it('increments major version (resets minor and patch)', () => {
		assert.equal(computeNewVersion('1.2.3', 'major'), '2.0.0');
	});

	it('creates -exp.0 from a stable version for experimental', () => {
		assert.equal(computeNewVersion('1.2.3', 'experimental'), '1.2.3-exp.0');
	});

	it('increments the exp counter for experimental when already an exp version', () => {
		assert.equal(computeNewVersion('1.2.3-exp.0', 'experimental'), '1.2.3-exp.1');
	});

	it('creates a premajor rc version from a stable version', () => {
		assert.equal(computeNewVersion('1.2.3', 'premajor'), '2.0.0-rc.0');
	});

	it('increments the rc prerelease number for premajor when already an rc version', () => {
		assert.equal(computeNewVersion('2.0.0-rc.0', 'premajor'), '2.0.0-rc.1');
	});

	it('increments rc correctly across multiple premajor calls', () => {
		assert.equal(computeNewVersion('2.0.0-rc.4', 'premajor'), '2.0.0-rc.5');
	});
});
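The transitive propagation these tests exercise can also be reproduced as a standalone snippet (the function body is restated from `propagateDirtyTransitively` above; the package names mirror the design-system → editor-ui → cli example):

```javascript
// Restated from propagateDirtyTransitively above: loop until no package flips,
// so dirtiness flows up dependency chains of arbitrary length.
function propagateDirtyTransitively(packageMap, depsByPackage) {
	let changed = true;
	while (changed) {
		changed = false;
		for (const name in packageMap) {
			if (packageMap[name].isDirty) continue;
			if (Object.keys(depsByPackage[name]).some((dep) => packageMap[dep]?.isDirty)) {
				packageMap[name].isDirty = true;
				changed = true;
			}
		}
	}
}

// cli → editor-ui → design-system; only the leaf changed on disk.
const packageMap = {
	cli: { isDirty: false },
	'editor-ui': { isDirty: false },
	'design-system': { isDirty: true },
};
const depsByPackage = {
	cli: { 'editor-ui': 'workspace:*' },
	'editor-ui': { 'design-system': 'workspace:*' },
	'design-system': {},
};
propagateDirtyTransitively(packageMap, depsByPackage);
console.log(packageMap.cli.isDirty); // → true
```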
.github/scripts/cla/check-signatures.mjs (114 lines, vendored)

@@ -1,114 +0,0 @@
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Collects unique commit authors for the PR (or for the commits a merge
// queue is about to land) and asks the n8n CLA service whether each one
// has signed. Surfaces three buckets to subsequent steps:
//  - signed   : verified contributors
//  - unsigned : verified non-contributors (block the merge)
//  - errored  : CLA lookup failed (block the merge — fail-closed so we
//               never green-light an unverified contribution)
//
// Commits whose author email is not linked to a GitHub account can't be
// looked up by login; they're surfaced separately as `unlinked`.

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */

/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function checkSignatures({ github, context, core }) {
	const { owner, repo } = context.repo;
	const prNumber = process.env.PR_NUMBER;
	const headSha = process.env.HEAD_SHA;
	const baseSha = process.env.BASE_SHA;
	const isMergeGroup = process.env.IS_MERGE_GROUP === 'true';

	/** @type {Set<string>} */
	const authors = new Set();
	/** @type {Array<{sha: string, name: string, email: string}>} */
	const unlinkedCommits = [];

	/**
	 * @param {Array<any>} commits
	 */
	const collect = (commits) => {
		for (const c of commits) {
			// Bot-authored commits don't need a CLA; skip before the linked/unlinked split
			// so they don't fall through to `unlinkedCommits` and fail `all_signed`.
			if (c.author && c.author.type === 'Bot') continue;

			if (c.author && c.author.login) {
				authors.add(c.author.login);
			} else if (c.commit && c.commit.author) {
				unlinkedCommits.push({
					sha: c.sha,
					name: c.commit.author.name,
					email: c.commit.author.email,
				});
			}
		}
	};

	if (isMergeGroup) {
		const { data: comparison } = await github.rest.repos.compareCommitsWithBasehead({
			owner,
			repo,
			basehead: `${baseSha}...${headSha}`,
		});
		collect(comparison.commits || []);
	} else if (prNumber) {
		const commits = await github.paginate(github.rest.pulls.listCommits, {
			owner,
			repo,
			pull_number: Number(prNumber),
			per_page: 100,
		});
		collect(commits);
	}

	const loginList = [...authors];
	core.info(`Contributors to check: ${loginList.join(', ') || '(none)'}`);
	if (unlinkedCommits.length > 0) {
		core.warning(
			`${unlinkedCommits.length} commit(s) have an author email not linked to a GitHub account ` +
				'and cannot be verified against the CLA service.',
		);
	}

	/** @type {string[]} */
	const signed = [];
	/** @type {string[]} */
	const unsigned = [];
	/** @type {string[]} */
	const errored = [];

	for (const login of loginList) {
		const url = `${process.env.CLA_API}?checkContributor=${encodeURIComponent(login)}`;
		try {
			const res = await fetch(url);
			if (!res.ok) throw new Error(`HTTP ${res.status}`);
			const data = await res.json();
			if (data && data.isContributor === true) {
				signed.push(login);
			} else {
				unsigned.push(login);
			}
		} catch (e) {
			core.warning(`CLA lookup failed for @${login}: ${e instanceof Error ? e.message : String(e)}`);
			errored.push(login);
		}
	}

	const blocking = [...unsigned, ...errored];
	const allSigned = blocking.length === 0 && unlinkedCommits.length === 0;

	core.setOutput('signed', signed.join(','));
	core.setOutput('unsigned', unsigned.join(','));
	core.setOutput('errored', errored.join(','));
	core.setOutput('unlinked', JSON.stringify(unlinkedCommits));
	core.setOutput('all_signed', String(allSigned));
}
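The fail-closed verdict above reduces to a small predicate, shown here as a standalone sketch (restated from the `all_signed` computation; the contributor names are made up):

```javascript
// Restated from check-signatures.mjs above: unsigned contributors, failed CLA
// lookups, and commits with unlinked author emails all block the merge; only
// a fully verified contributor set passes.
function allSigned({ unsigned, errored, unlinkedCommits }) {
	const blocking = [...unsigned, ...errored];
	return blocking.length === 0 && unlinkedCommits.length === 0;
}

console.log(allSigned({ unsigned: [], errored: [], unlinkedCommits: [] }));        // → true
console.log(allSigned({ unsigned: ['alice'], errored: [], unlinkedCommits: [] })); // → false
console.log(allSigned({ unsigned: [], errored: ['bob'], unlinkedCommits: [] }));   // → false
```

Treating lookup errors the same as unsigned contributors is what makes the check fail-closed: a CLA-service outage can never green-light an unverified contribution.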
.github/scripts/cla/manage-label.mjs (83 lines, vendored)

@@ -1,83 +0,0 @@
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Adds the `cla-signed` label when every contributor has signed, and
// removes it otherwise. Idempotent: re-runs safely without duplicating
// the label or erroring if it's already in the desired state. Creates
// the label on first use so the workflow is self-contained.

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */

const LABEL_NAME = 'cla-signed';
const LABEL_COLOR = '0e8a16'; // GitHub's standard green
const LABEL_DESCRIPTION = 'All contributors on this PR have signed the CLA';

/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function manageClaLabel({ github, context, core }) {
  const { owner, repo } = context.repo;
  const issue_number = Number(process.env.PR_NUMBER);
  const allSigned = process.env.ALL_SIGNED === 'true';

  if (allSigned) {
    // Make sure the label exists before trying to apply it — addLabels
    // errors if the label is missing from the repo.
    try {
      await github.rest.issues.getLabel({ owner, repo, name: LABEL_NAME });
    } catch (e) {
      if (errorStatus(e) === 404) {
        try {
          await github.rest.issues.createLabel({
            owner,
            repo,
            name: LABEL_NAME,
            color: LABEL_COLOR,
            description: LABEL_DESCRIPTION,
          });
        } catch (createErr) {
          // 422 = race with a parallel run that just created it. Fine.
          if (errorStatus(createErr) !== 422) throw createErr;
        }
      } else {
        throw e;
      }
    }

    await github.rest.issues.addLabels({
      owner,
      repo,
      issue_number,
      labels: [LABEL_NAME],
    });
    core.info(`Applied "${LABEL_NAME}" label to PR #${issue_number}`);
  } else {
    // 404 just means the label wasn't on the PR — nothing to undo.
    try {
      await github.rest.issues.removeLabel({
        owner,
        repo,
        issue_number,
        name: LABEL_NAME,
      });
      core.info(`Removed "${LABEL_NAME}" label from PR #${issue_number}`);
    } catch (e) {
      if (errorStatus(e) !== 404) throw e;
    }
  }
}

/**
 * Octokit's request errors carry an HTTP `status` field, but TypeScript
 * sees catch parameters as `unknown`. This guard narrows safely.
 * @param {unknown} e
 * @returns {number | undefined}
 */
function errorStatus(e) {
  return typeof e === 'object' && e !== null && 'status' in e && typeof e.status === 'number'
    ? e.status
    : undefined;
}
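The `errorStatus` guard in manage-label.mjs narrows an unknown catch value to its numeric HTTP status. It can be exercised standalone (the guard is copied from the script; the sample inputs are hypothetical):

```javascript
// Copy of the errorStatus guard from manage-label.mjs, exercised on
// hypothetical inputs: Octokit request errors carry a numeric `status`
// field; anything else yields undefined.
function errorStatus(e) {
  return typeof e === 'object' && e !== null && 'status' in e && typeof e.status === 'number'
    ? e.status
    : undefined;
}

console.log(errorStatus({ status: 404 })); // 404
console.log(errorStatus(new Error('network down'))); // undefined
console.log(errorStatus(null)); // undefined
```

This is why the label logic can distinguish "label missing" (404) and "creation race" (422) from genuinely unexpected errors, which it rethrows.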
66  .github/scripts/cla/post-final-status.mjs  (vendored)
@@ -1,66 +0,0 @@
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Translates the buckets emitted by check-signatures.mjs into a single
// commit status on the head SHA. The status `context` name is what a
// repository ruleset gates on; description and target_url are best-effort
// human signals.
//
// State mapping:
// - success: every contributor is signed and every commit author is linked
// - error  : only failures were API lookup errors (transient)
// - failure: at least one contributor is verified unsigned, or commits
//            have author emails not linked to a GitHub account

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */

/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function postFinalClaStatus({ github, context }) {
  const allSigned = process.env.ALL_SIGNED === 'true';
  const unsigned = (process.env.UNSIGNED ?? '').split(',').filter(Boolean);
  const errored = (process.env.ERRORED ?? '').split(',').filter(Boolean);
  const unlinked = JSON.parse(process.env.UNLINKED || '[]');

  /** @type {'success' | 'failure' | 'error' | 'pending'} */
  let state;
  let description;
  if (allSigned) {
    state = 'success';
    description = 'All contributors have signed the CLA';
  } else if (errored.length > 0 && unsigned.length === 0 && unlinked.length === 0) {
    state = 'error';
    description = `Could not verify: ${errored.join(', ')}`;
  } else {
    state = 'failure';
    const parts = [];
    if (unsigned.length > 0) parts.push(`unsigned: ${unsigned.join(', ')}`);
    if (errored.length > 0) parts.push(`errored: ${errored.join(', ')}`);
    if (unlinked.length > 0) parts.push(`${unlinked.length} unlinked commit(s)`);
    description = parts.join(' | ');
  }

  // GitHub commit status description is capped at 140 chars.
  if (description.length > 140) {
    description = description.slice(0, 137) + '…';
  }

  const prNumber = process.env.PR_NUMBER;
  const target_url = prNumber
    ? `${context.payload.repository?.html_url}/pull/${prNumber}`
    : process.env.CLA_SIGN_URL;

  await github.rest.repos.createCommitStatus({
    owner: context.repo.owner,
    repo: context.repo.repo,
    sha: /** @type {string} */ (process.env.HEAD_SHA),
    state,
    context: /** @type {string} */ (process.env.STATUS_CONTEXT),
    description,
    target_url,
  });
}
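The state mapping in post-final-status.mjs can be condensed into a pure function for illustration (a sketch extracted from the script's branch logic; the sample inputs are hypothetical):

```javascript
// Sketch of the state mapping documented in post-final-status.mjs,
// extracted as a pure function over the four buckets.
function claState({ allSigned, unsigned, errored, unlinked }) {
  if (allSigned) return 'success'; // everyone signed, every commit linked
  if (errored.length > 0 && unsigned.length === 0 && unlinked.length === 0) {
    return 'error'; // only transient API lookup failures
  }
  return 'failure'; // verified unsigned contributor or unlinked commit author
}

console.log(claState({ allSigned: true, unsigned: [], errored: [], unlinked: [] })); // success
console.log(claState({ allSigned: false, unsigned: [], errored: ['bob'], unlinked: [] })); // error
console.log(claState({ allSigned: false, unsigned: ['bob'], errored: [], unlinked: [] })); // failure
```

The distinction matters for rulesets: `error` signals "retry may fix it", while `failure` means a human has to act.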
76  .github/scripts/cla/resolve-context.mjs  (vendored)
@@ -1,76 +0,0 @@
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Reads the triggering event (pull_request_target, issue_comment, or
// merge_group) and emits the head/base SHA and PR number that the rest of
// the workflow needs. For /cla-check comments, also leaves an "eyes"
// reaction so the commenter sees we picked it up.

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */

/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function resolveClaContext({ github, context, core }) {
  const { owner, repo } = context.repo;
  const event = context.eventName;

  let prNumber = '';
  let headSha = '';
  let baseSha = '';
  let isMergeGroup = false;

  if (event === 'pull_request_target' && context.payload.pull_request) {
    const pr = context.payload.pull_request;
    prNumber = String(pr.number);
    headSha = pr.head.sha;
    baseSha = pr.base.sha;
  } else if (event === 'issue_comment' && context.payload.issue) {
    prNumber = String(context.payload.issue.number);
    const { data: pr } = await github.rest.pulls.get({
      owner,
      repo,
      pull_number: Number(prNumber),
    });
    headSha = pr.head.sha;
    baseSha = pr.base.sha;

    // Acknowledge the command so the commenter sees we received it.
    try {
      await github.rest.reactions.createForIssueComment({
        owner,
        repo,
        comment_id: context.payload.comment?.id || -1,
        content: 'eyes',
      });
    } catch (e) {
      core.info(`Could not react to comment: ${e instanceof Error ? e.message : String(e)}`);
    }
  } else if (event === 'merge_group') {
    isMergeGroup = true;
    headSha = context.payload.merge_group.head_sha;
    baseSha = context.payload.merge_group.base_sha;
  } else if (event === 'workflow_dispatch') {
    const input = context.payload.inputs?.pr_number;
    if (!input) {
      core.setFailed('workflow_dispatch requires the pr_number input');
      return;
    }
    prNumber = String(input);
    const { data: pr } = await github.rest.pulls.get({
      owner,
      repo,
      pull_number: Number(prNumber),
    });
    headSha = pr.head.sha;
    baseSha = pr.base.sha;
  }

  core.setOutput('pr_number', prNumber);
  core.setOutput('head_sha', headSha);
  core.setOutput('base_sha', baseSha);
  core.setOutput('is_merge_group', String(isMergeGroup));
}
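resolve-context.mjs normalizes four trigger shapes into the same three outputs. A condensed sketch for the pull_request_target case, where no extra API call is needed (field names follow the GitHub webhook payload; the sample payload below is a hypothetical fragment):

```javascript
// Condensed sketch of resolve-context.mjs for the pull_request_target
// event; the payload is a hypothetical webhook fragment.
function resolveFromPullRequestTarget(payload) {
  const pr = payload.pull_request;
  return {
    prNumber: String(pr.number),
    headSha: pr.head.sha,
    baseSha: pr.base.sha,
  };
}

const ctx = resolveFromPullRequestTarget({
  pull_request: { number: 123, head: { sha: 'abc123' }, base: { sha: 'def456' } },
});
console.log(ctx); // { prNumber: '123', headSha: 'abc123', baseSha: 'def456' }
```

For issue_comment and workflow_dispatch the payload carries only a PR number, so the script has to fetch the PR via `pulls.get` to recover the SHAs.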
104  .github/scripts/cla/update-pr-comment.mjs  (vendored)
@@ -1,104 +0,0 @@
// Invoked from .github/workflows/ci-cla-check.yml via actions/github-script.
//
// Maintains a single CLA comment per PR, keyed by an HTML marker so the
// same comment is edited in place across re-runs instead of spammed.
// A clean PR that has never been flagged gets no comment at all — only
// PRs that needed a nudge get the eventual "thanks" follow-up.

/**
 * @typedef { InstanceType<typeof import("@actions/github/lib/utils").GitHub> } GitHubInstance
 * @typedef { import("@actions/github/lib/context").Context } Context
 * @typedef { typeof import("@actions/core") } Core
 */

/**
 * @param {{ github: GitHubInstance, context: Context, core: Core }} params
 */
export default async function updatePRComment({ github, context }) {
  const { owner, repo } = context.repo;
  const issue_number = Number(process.env.PR_NUMBER);
  const allSigned = process.env.ALL_SIGNED === 'true';
  const unsigned = (process.env.UNSIGNED ?? '').split(',').filter(Boolean);
  const errored = (process.env.ERRORED ?? '').split(',').filter(Boolean);
  const unlinked = JSON.parse(process.env.UNLINKED || '[]');
  const MARKER = /** @type {string} */ (process.env.COMMENT_MARKER);

  const comments = await github.paginate(github.rest.issues.listComments, {
    owner,
    repo,
    issue_number,
    per_page: 100,
  });
  // Only adopt the comment as ours if it's bot-authored — otherwise a user
  // who copies our marker into their own comment would either hijack the
  // thread or make updateComment 403 with insufficient permissions.
  const existing = comments.find(
    (c) => c.body && c.body.includes(MARKER) && c.user && c.user.type === 'Bot',
  );

  let body;
  if (allSigned) {
    // Only leave a "thanks" trail if we already nudged once. Avoids
    // pinging every clean PR with a CLA comment.
    if (!existing) {
      return;
    }

    body = [
      MARKER,
      '✅ **CLA Check passed.** All contributors on this PR have signed the n8n CLA — thank you!',
    ].join('\n');
  } else {
    const lines = [MARKER, '## CLA signatures required', ''];
    lines.push(`Thank you for your submission! We really appreciate it.
Like many open source projects, we ask that you sign our [Contributor License Agreement](${process.env.CLA_SIGN_URL}) before we can accept your contribution.`);
    lines.push('');

    if (unsigned.length > 0) {
      lines.push('**Contributors who still need to sign:**');
      for (const u of unsigned) {
        lines.push(`- @${u}`);
      }
      lines.push('');
    }
    if (errored.length > 0) {
      lines.push('**Could not verify (will retry on next push):**');
      for (const u of errored) {
        lines.push(`- @${u}`);
      }
      lines.push('');
    }
    if (unlinked.length > 0) {
      lines.push('**Commits authored by an email not linked to a GitHub account:**');
      for (const c of unlinked) {
        lines.push(`- \`${c.sha.slice(0, 7)}\` — ${c.name} <${c.email}>`);
      }
      lines.push('');
      lines.push(
        'Add the email to your GitHub account ' +
          '([instructions](https://docs.github.com/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-email-preferences/adding-an-email-address-to-your-github-account)) ' +
          'or amend the commits to use a linked email, then push again.',
      );
      lines.push('');
    }

    lines.push('Once signed, comment `/cla-check` on this PR to re-run verification.');
    body = lines.join('\n');
  }

  if (existing) {
    await github.rest.issues.updateComment({
      owner,
      repo,
      comment_id: existing.id,
      body,
    });
  } else {
    await github.rest.issues.createComment({
      owner,
      repo,
      issue_number,
      body,
    });
  }
}
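The bot-author check when adopting the marker comment can be exercised standalone (the `find` predicate is copied from update-pr-comment.mjs; the comment objects below are hypothetical stand-ins for the GitHub API response):

```javascript
// Standalone version of the marker lookup in update-pr-comment.mjs;
// the comment objects are hypothetical API-response stand-ins.
const MARKER = '<!-- n8n-cla-check -->';
const comments = [
  { id: 1, body: 'First!', user: { type: 'User' } },
  { id: 2, body: `${MARKER}\n## CLA signatures required`, user: { type: 'Bot' } },
  // A user who pastes the marker into their own comment must NOT be
  // adopted as "our" comment, so the Bot check filters it out.
  { id: 3, body: `quoting: ${MARKER}`, user: { type: 'User' } },
];

const existing = comments.find(
  (c) => c.body && c.body.includes(MARKER) && c.user && c.user.type === 'Bot',
);
console.log(existing.id); // 2
```

Editing comment 2 in place on every re-run is what keeps the PR thread to a single CLA comment instead of one per push.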
17  .github/test-metrics/quarantine.json  (vendored)
@@ -1,24 +1,19 @@
{
  "updatedAt": "2026-05-11T14:16:56.139Z",
  "updatedAt": "2026-04-23T14:38:52.015Z",
  "source": "currents",
  "projectId": "LRxcNt",
  "quarantined": [
    "Canvas Actions > Node hover actions > should execute node",
    "Chat user role @capability:proxy > use chat as chat user @auth:chat",
    "Code node > Code editor > should execute the placeholder successfully in both modes",
    "Data Mapping > maps expressions to updated fields correctly @fixme",
    "Data pinning > Advanced pinning scenarios > should be able to reference paired items in node before pinned data",
    "Debug mode > should enter debug mode for failed executions",
    "Executions Filter > should reset filter and remove badge",
    "HITL for Tools @capability:proxy > should add a HITL tool node and run it",
    "Inject previous execution > can map keys from previous execution",
    "Instance AI remediation guard @capability:proxy > should preserve a submitted workflow when mocked credential verification needs setup",
    "Instance AI sidebar @capability:proxy > should delete thread via action menu",
    "Instance AI workflow setup actions @capability:proxy > should apply parameter and credential edits and persist them to the workflow",
    "Instance AI workflow setup actions @capability:proxy > should partially apply completed cards when Later is clicked on the last step",
    "Langchain Integration @capability:proxy > Advanced Workflow Features > should render runItems for sub-nodes and allow switching between them",
    "Loads template setup modal correctly",
    "NDV Data Display > Schema View > should not display pagination for schema",
    "Settings @capability:proxy > set global credentials for a provider",
    "Resource Locator > should retrieve list options when other params throw errors",
    "Tools usage @capability:proxy > use web search tool in conversation",
    "Workflow Executions > when new workflow is not saved > should open executions tab",
    "Workflow agent @capability:proxy > sharing workflow agent with project chat user",
    "can configure, connect, and sync secrets from LocalStack",
    "can create a connection pointing to LocalStack",
    "manage workflow agents @auth:admin",
184  .github/workflows/ci-cla-check.yml  (vendored)
@@ -1,184 +0,0 @@
name: 'CI: CLA Check'

# In-house replacement for the GitHub App "CLA Bot".
#
# Triggers
# - pull_request_target (opened/synchronize/reopened): re-checks signatures
#   whenever a PR is opened or new commits are pushed.
# - issue_comment (`/cla-check` on a PR): manual re-check after a contributor
#   signs the CLA, without needing a push.
# - merge_group: re-checks at merge-queue time so a ruleset can hard-block
#   unsigned merges even if the PR check went stale.
#
# Output
# - A commit status named "CLA Check" on the head SHA. Add this name to a
#   ruleset's required-checks list to gate merges on it.
# - A single, edited-in-place PR comment listing unsigned contributors.
#
# Implementation
# The heavy lifting lives in .github/scripts/cla/*.mjs. Each step below
# loads its corresponding module and invokes its default export.

on:
  pull_request_target:
    types: [opened, synchronize, reopened]
  issue_comment:
    types: [created]
  merge_group:
  workflow_dispatch:
    inputs:
      pr_number:
        description: 'Pull request number to re-verify'
        required: true
        type: string

permissions:
  contents: read
  pull-requests: write
  issues: write
  statuses: write

concurrency:
  group: cla-check-${{ github.event.pull_request.number || github.event.issue.number || github.event.merge_group.head_sha || github.event.inputs.pr_number || github.ref }}
  cancel-in-progress: true

env:
  STATUS_CONTEXT: 'CLA Check'
  CLA_API: 'https://cla-bot-prod.users.n8n.cloud/webhook/cla/check'
  CLA_SIGN_URL: 'https://cla-bot-prod.users.n8n.cloud/webhook/cla'
  COMMENT_MARKER: '<!-- n8n-cla-check -->'

jobs:
  cla-check:
    name: Verify CLA signatures
    # Skip issue_comment unless it's on a PR and the body starts with /cla-check.
    if: >-
      github.event_name != 'issue_comment' ||
      (github.event.issue.pull_request != null &&
      startsWith(github.event.comment.body, '/cla-check'))
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Generate GitHub App Token
        id: generate-token
        uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
        with:
          app-id: ${{ secrets.N8N_ASSISTANT_APP_ID }}
          private-key: ${{ secrets.N8N_ASSISTANT_PRIVATE_KEY }}

      - name: Checkout CLA scripts
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          sparse-checkout: .github/scripts/cla
          sparse-checkout-cone-mode: false

      - name: Resolve PR context
        id: context
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/resolve-context.mjs');
            await mod.default({ github, context, core });

      - name: Post pending commit status
        if: steps.context.outputs.head_sha != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          HEAD_SHA: ${{ steps.context.outputs.head_sha }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            await github.rest.repos.createCommitStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: process.env.HEAD_SHA,
              state: 'pending',
              context: process.env.STATUS_CONTEXT,
              description: 'Verifying CLA signatures…',
            });

      - name: Check CLA signatures
        id: check
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          HEAD_SHA: ${{ steps.context.outputs.head_sha }}
          BASE_SHA: ${{ steps.context.outputs.base_sha }}
          IS_MERGE_GROUP: ${{ steps.context.outputs.is_merge_group }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/check-signatures.mjs');
            await mod.default({ github, context, core });

      - name: Post final commit status
        if: always() && steps.context.outputs.head_sha != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          HEAD_SHA: ${{ steps.context.outputs.head_sha }}
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
          UNSIGNED: ${{ steps.check.outputs.unsigned }}
          ERRORED: ${{ steps.check.outputs.errored }}
          UNLINKED: ${{ steps.check.outputs.unlinked }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/post-final-status.mjs');
            await mod.default({ github, context, core });

      - name: Update PR comment
        # Don't comment from merge_group (no PR context) or when the check
        # failed to produce a result.
        if: >-
          always() &&
          steps.context.outputs.pr_number != '' &&
          steps.check.outputs.all_signed != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
          UNSIGNED: ${{ steps.check.outputs.unsigned }}
          ERRORED: ${{ steps.check.outputs.errored }}
          UNLINKED: ${{ steps.check.outputs.unlinked }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/update-pr-comment.mjs');
            await mod.default({ github, context, core });

      - name: Manage cla-signed label
        # Skip on merge_group (no PR) and when the check produced no result.
        if: >-
          always() &&
          steps.context.outputs.pr_number != '' &&
          steps.check.outputs.all_signed != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          PR_NUMBER: ${{ steps.context.outputs.pr_number }}
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            const mod = await import('${{ github.workspace }}/.github/scripts/cla/manage-label.mjs');
            await mod.default({ github, context, core });

      - name: React to /cla-check comment
        if: always() && github.event_name == 'issue_comment' && steps.check.outputs.all_signed != ''
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          ALL_SIGNED: ${{ steps.check.outputs.all_signed }}
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          script: |
            try {
              await github.rest.reactions.createForIssueComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: context.payload.comment.id,
                content: process.env.ALL_SIGNED === 'true' ? '+1' : '-1',
              });
            } catch (e) {
              core.info(`Could not react to comment: ${e.message}`);
            }
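The job-level `if:` expression in ci-cla-check.yml can be mirrored in plain JavaScript to make the gating rule concrete (a hypothetical sketch, not part of the workflow; `payload` models only the fields the expression reads):

```javascript
// Hypothetical JS mirror of the cla-check job's `if:` gate:
// non-comment events always run; issue_comment runs only when the
// comment is on a PR and its body starts with /cla-check.
function shouldRun(eventName, payload) {
  if (eventName !== 'issue_comment') return true;
  return (
    payload.issue?.pull_request != null &&
    payload.comment.body.startsWith('/cla-check')
  );
}

console.log(shouldRun('pull_request_target', {})); // true
console.log(shouldRun('issue_comment', { issue: { pull_request: {} }, comment: { body: '/cla-check' } })); // true
console.log(shouldRun('issue_comment', { issue: {}, comment: { body: 'thanks!' } })); // false
```

The `issue.pull_request` check is what distinguishes a PR conversation from a plain issue, since GitHub delivers both through the same issue_comment event.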
23  .github/workflows/ci-codeowners-validation.yml  (vendored)
@@ -1,23 +0,0 @@
# .github/workflows/ci-codeowners-validation.yml
name: "CI: Validate CODEOWNERS"

# Only run when CODEOWNERS or packages change
on:
  pull_request:
    paths:
      - ".github/CODEOWNERS"
      - "packages/**"

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - uses: mszostok/codeowners-validator@7f3f5e28c6d7b8dfae5731e54ce2272ca384592f # v0.7.4
        with:
          # Start with safe checks only. Add "owners" and
          # experimental_checks: "notowned" once the file has settled
          # and skip patterns are configured.
          checks: "files,duppatterns,syntax"
          github_access_token: "${{ secrets.GITHUB_TOKEN }}"
86  .github/workflows/ci-pr-quality.yml  (vendored)
@@ -1,7 +1,6 @@
name: 'CI: PR Quality Checks'

on:
  merge_group:
  pull_request:
    types:
      - opened

@@ -47,14 +46,11 @@ jobs:
    name: Ownership Acknowledgement
    # Checks that the author has acknowledged the ownership of their code
    # by checking the checkbox in the PR summary.
    # Skipped for bot-authored PRs (Dependabot, Renovate, github-actions, Aikido, etc.).
    # The required aggregator `required-pr-quality-checks` treats skipped as success.
    if: |
      github.event_name == 'pull_request' &&
      github.event.pull_request.head.repo.full_name == github.repository &&
      !contains(github.event.pull_request.labels.*.name, 'automation:backport') &&
      !contains(github.event.pull_request.title, '(backport to') &&
      github.event.pull_request.user.type != 'Bot'
      !contains(github.event.pull_request.title, '(backport to')
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:

@@ -78,15 +74,12 @@ jobs:
  check-pr-size:
    name: PR Size Limit
    # Checks that the PR size doesn't exceed the limit (currently 1000 lines)
    # Allows for override via '/size-limit-override' comment.
    # Skipped for bot-authored PRs — dep bumps from Dependabot/Renovate/Aikido
    # routinely exceed the size limit and shouldn't be gated on it.
    # Allows for override via '/size-limit-override' comment
    if: |
      github.event_name == 'pull_request' &&
      github.event.pull_request.head.repo.full_name == github.repository &&
      !contains(github.event.pull_request.labels.*.name, 'automation:backport') &&
      !contains(github.event.pull_request.title, '(backport to') &&
      github.event.pull_request.user.type != 'Bot'
      !contains(github.event.pull_request.title, '(backport to')
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:

@@ -106,76 +99,3 @@ jobs:
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: node .github/scripts/quality/check-pr-size.mjs

  changes:
    name: Detect Changes
    if: github.event_name == 'pull_request' || github.event_name == 'merge_group'
    runs-on: ubuntu-latest
    timeout-minutes: 5
    permissions:
      contents: read
    outputs:
      janitor: ${{ fromJSON(steps.filter.outputs.results).janitor == true }}
      code-health: ${{ fromJSON(steps.filter.outputs.results)['code-health'] == true }}
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Detect changed paths
        id: filter
        uses: ./.github/actions/ci-filter
        with:
          mode: filter
          filters: |
            janitor:
              packages/testing/playwright/**
              packages/testing/janitor/**
            code-health:
              **/package.json
              pnpm-workspace.yaml
              .code-health-baseline.json
              packages/testing/code-health/**

  check-static-analysis:
    name: Static Analysis
    needs: changes
    if: |
      github.event_name == 'merge_group' ||
      needs.changes.outputs.code-health == 'true' ||
      needs.changes.outputs.janitor == 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Setup Node.js
        uses: ./.github/actions/setup-nodejs
        with:
          build-command: pnpm turbo run build --filter=@n8n/code-health --filter=@n8n/playwright-janitor

      - name: Run code-health
        if: github.event_name == 'merge_group' || needs.changes.outputs.code-health == 'true'
        run: pnpm --filter=@n8n/code-health check

      - name: Run janitor
        if: ${{ !cancelled() && (github.event_name == 'merge_group' || needs.changes.outputs.janitor == 'true') }}
        run: pnpm --filter=n8n-playwright janitor

  required-pr-quality-checks:
    name: Required PR Quality Checks
    needs: [check-ownership-checkbox, check-pr-size, check-static-analysis]
    if: always()
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          sparse-checkout: .github/actions/ci-filter
          sparse-checkout-cone-mode: false
      - name: Validate required checks
        uses: ./.github/actions/ci-filter
        with:
          mode: validate
          job-results: ${{ toJSON(needs) }}
9  .github/workflows/ci-pull-request-review.yml  (vendored)
@@ -41,12 +41,7 @@ jobs:
  chromatic:
    name: Chromatic
    needs: filter
    # Skip on fork PRs — they don't have access to the Chromatic secret.
    # This job is intentionally not in `required-review-checks` needs, so it
    # is non-blocking and won't gate merging.
    if: >-
      needs.filter.outputs.design_system == 'true' &&
      github.event.pull_request.head.repo.full_name == github.repository
    if: needs.filter.outputs.design_system == 'true'
    uses: ./.github/workflows/test-visual-chromatic.yml
    with:
      ref: ${{ needs.filter.outputs.commit_sha }}

@@ -56,7 +51,7 @@ jobs:
  # PRs cannot be merged unless this job passes.
  required-review-checks:
    name: Required Review Checks
    needs: [filter]
    needs: [filter, chromatic]
    if: always()
    runs-on: ubuntu-slim
    steps:
49  .github/workflows/ci-pull-requests.yml  (vendored)
@@ -22,7 +22,6 @@ jobs:
ci: ${{ fromJSON(steps.ci-filter.outputs.results).ci == true }}
|
||||
unit: ${{ fromJSON(steps.ci-filter.outputs.results).unit == true }}
|
||||
e2e: ${{ fromJSON(steps.ci-filter.outputs.results).e2e == true }}
|
||||
dev_server_smoke: ${{ fromJSON(steps.ci-filter.outputs.results)['dev-server-smoke'] == true }}
|
||||
workflows: ${{ fromJSON(steps.ci-filter.outputs.results).workflows == true }}
|
||||
workflow_scripts: ${{ fromJSON(steps.ci-filter.outputs.results)['workflow-scripts'] == true }}
|
||||
db: ${{ fromJSON(steps.ci-filter.outputs.results).db == true }}
|
||||
|
|
@ -30,7 +29,6 @@ jobs:
|
|||
e2e_performance: ${{ fromJSON(steps.ci-filter.outputs.results)['e2e-performance'] == true }}
|
||||
instance_ai_workflow_eval: ${{ fromJSON(steps.ci-filter.outputs.results)['instance-ai-workflow-eval'] == true }}
|
||||
commit_sha: ${{ steps.commit-sha.outputs.sha }}
|
||||
merge_base: ${{ steps.ci-filter.outputs.merge-base }}
|
||||
matrix: ${{ steps.generate-matrix.outputs.matrix }}
|
||||
skip_tests: ${{ steps.generate-matrix.outputs.skip-tests }}
|
||||
steps:
|
||||
|
|
@ -65,15 +63,6 @@ jobs:
|
|||
.github/actions/load-n8n-docker/**
|
||||
packages/testing/playwright/**
|
||||
packages/testing/containers/**
|
||||
dev-server-smoke:
|
||||
packages/frontend/editor-ui/vite.config.mts
|
||||
pnpm-workspace.yaml
|
||||
packages/@n8n/*/package.json
|
||||
packages/testing/playwright/tests/dev-server-smoke/**
|
||||
packages/testing/playwright/playwright.config.ts
|
||||
packages/testing/playwright/playwright-projects.ts
|
||||
packages/testing/playwright/package.json
|
||||
.github/workflows/test-dev-server-smoke-reusable.yml
|
||||
workflows: .github/**
|
||||
workflow-scripts: .github/scripts/**
|
||||
performance:
|
||||
|
|
@ -92,7 +81,6 @@ jobs:
|
|||
packages/cli/src/modules/instance-ai/**
|
||||
packages/core/src/execution-engine/eval-mock-helpers.ts
|
||||
.github/workflows/test-evals-instance-ai*.yml
|
||||
.github/workflows/test-evals-discovery.yml
|
||||
db:
|
||||
packages/cli/src/databases/**
|
||||
packages/cli/src/modules/*/database/**
|
||||
|
|
@@ -121,10 +109,9 @@ jobs:
        if: fromJSON(steps.ci-filter.outputs.results).ci || fromJSON(steps.ci-filter.outputs.results).e2e
        env:
          CHANGED_FILES: ${{ steps.ci-filter.outputs.changed-files }}
          MERGE_BASE: ${{ steps.ci-filter.outputs.merge-base }}
        run: |
          FILES_CSV=$(echo "$CHANGED_FILES" | tr '\n' ',' | sed 's/,$//')
-         MATRIX=$(node packages/testing/playwright/scripts/distribute-tests.mjs --matrix 16 --orchestrate --impact "--files=$FILES_CSV" "--base=$MERGE_BASE")
+         MATRIX=$(node packages/testing/playwright/scripts/distribute-tests.mjs --matrix 16 --orchestrate --impact "--files=$FILES_CSV" --base=FETCH_HEAD)
          echo "matrix=$MATRIX" >> "$GITHUB_OUTPUT"
          echo "skip-tests=$(node -e "process.stdout.write(JSON.parse(process.argv[1])[0]?.skip === true ? 'true' : 'false')" "$MATRIX")" >> "$GITHUB_OUTPUT"
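The matrix step above first flattens the newline-separated `changed-files` output into a CSV string before handing it to `distribute-tests.mjs`. A minimal sketch of that conversion, with made-up file paths:

```shell
# Newline-separated list, as the ci-filter step would emit it (example paths).
CHANGED_FILES='packages/cli/src/a.ts
packages/core/src/b.ts'
# Join with commas, then strip the trailing comma left by echo's final newline.
FILES_CSV=$(echo "$CHANGED_FILES" | tr '\n' ',' | sed 's/,$//')
echo "$FILES_CSV"
```

The `sed 's/,$//'` is needed because `echo` appends a newline that `tr` turns into a trailing comma.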
@@ -212,7 +199,6 @@ jobs:
      test-mode: docker-artifact
      test-command: pnpm --filter=n8n-playwright test:container:sqlite:e2e tests/e2e/building-blocks/workflow-entry-points.spec.ts
      workers: '1'
      artifact-prefix: sanity
    secrets: inherit

  # Full e2e run. Internal PRs run multi-main (postgres + redis + caddy + 2 mains + 1 worker).
@@ -232,20 +218,7 @@ jobs:
      test-command: ${{ github.event.pull_request.head.repo.fork == true && 'pnpm --filter=n8n-playwright test:container:sqlite:e2e --grep-invert=@licensed' || 'pnpm --filter=n8n-playwright test:container:multi-main:e2e' }}
      workers: '1'
      pre-generated-matrix: ${{ needs.install-and-build.outputs.matrix }}
      artifact-prefix: e2e
    secrets: inherit

  # Boots the editor-ui against the Vite dev server and fails on any console
  # or page error during load. Catches regressions in dev-mode module
  # resolution (missing Vite alias, broken workspace package interop) that
  # the production-bundle e2e job bundles around.
  dev-server-smoke:
    name: Dev-server boot smoke
    needs: install-and-build
    if: needs.install-and-build.outputs.dev_server_smoke == 'true' && github.event_name != 'merge_group'
    uses: ./.github/workflows/test-dev-server-smoke-reusable.yml
    with:
      ref: ${{ needs.install-and-build.outputs.commit_sha }}
      upload-failure-artifacts: ${{ github.event.pull_request.head.repo.fork == true }}
    secrets: inherit

  db-tests:
@@ -310,23 +283,6 @@ jobs:
      branch: ${{ needs.install-and-build.outputs.commit_sha }}
    secrets: inherit

  # In-process discovery eval — asserts the orchestrator reaches for browser/computer-use
  # tools at OAuth/screenshot moments. Lightweight (no Docker), runs in parallel with the
  # heavy workflow eval. Non-blocking initially; promote to required after stability.
  instance-ai-discovery-evals:
    name: Instance AI Discovery Evals
    needs: install-and-build
    if: >-
      !cancelled() &&
      needs.install-and-build.result == 'success' &&
      needs.install-and-build.outputs.instance_ai_workflow_eval == 'true' &&
      github.repository == 'n8n-io/n8n' &&
      (github.event_name != 'pull_request' || !github.event.pull_request.head.repo.fork)
    uses: ./.github/workflows/test-evals-discovery.yml
    with:
      branch: ${{ needs.install-and-build.outputs.commit_sha }}
    secrets: inherit

  # This job is required by GitHub branch protection rules.
  # PRs cannot be merged unless this job passes.
  required-checks:
@@ -340,7 +296,6 @@ jobs:
        check-packaging,
        sqlite-sanity,
        e2e,
        dev-server-smoke,
        db-tests,
        performance,
        security-checks,
14  .github/workflows/release-publish.yml  vendored
@@ -76,9 +76,11 @@ jobs:
          cp README.md packages/cli/README.md
          sed -i "s/default: 'dev'/default: '${{ needs.determine-version-info.outputs.release_type }}'/g" packages/cli/dist/config/schema.js

      # Publishing via `pnpm publish -r` is idempotent, as it checks if the package exists
      # and only publishes if it doesn't. This is why we do the sub-packages before the main n8n package.
      # So if anything goes wrong, we can easily re-try the run instead of abandoning the release.
      - name: Publish n8n to NPM with rc tag
        env:
          PUBLISH_BRANCH: ${{ github.event.pull_request.base.ref }}
        run: pnpm --filter n8n publish --publish-branch "$PUBLISH_BRANCH" --access public --tag rc --no-git-checks

      - name: Publish other packages to NPM
        env:
          PUBLISH_BRANCH: ${{ github.event.pull_request.base.ref }}
@@ -90,12 +92,6 @@ jobs:
          fi
          pnpm publish -r --filter '!n8n' --publish-branch "$PUBLISH_BRANCH" --access public --tag "$PUBLISH_TAG" --no-git-checks

      # If we don't use the --tag rc, all releases will default to "latest".
      - name: Publish n8n to NPM with rc tag
        env:
          PUBLISH_BRANCH: ${{ github.event.pull_request.base.ref }}
        run: pnpm --filter n8n publish --publish-branch "$PUBLISH_BRANCH" --access public --tag rc --no-git-checks

      - name: Cleanup rc tag
        run: npm dist-tag rm n8n rc
        continue-on-error: true
@@ -56,7 +56,7 @@ jobs:
          output-file: sbom-source.cdx.json

      - name: Attest SBOM for source release
-       uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
+       uses: actions/attest-sbom@07e74fc4e78d1aad915e867f9a094073a9f71527 # v4.0.0
        with:
          subject-path: './package.json'
          sbom-path: 'sbom-source.cdx.json'
@@ -1,49 +0,0 @@
name: 'Test: Dev-server boot smoke'

on:
  workflow_call:
    inputs:
      ref:
        description: 'Git ref to test'
        required: true
        type: string

env:
  NODE_OPTIONS: '--max-old-space-size=6144'
  PLAYWRIGHT_BROWSERS_PATH: packages/testing/playwright/.playwright-browsers

jobs:
  smoke:
    name: Dev-server smoke
    runs-on: ${{ vars.RUNNER_PROVIDER == 'github' && 'ubuntu-latest' || 'blacksmith-4vcpu-ubuntu-2204' }}
    timeout-minutes: 10
    permissions:
      contents: read

    steps:
      - name: Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          ref: ${{ inputs.ref }}

      - name: Setup and Build
        uses: ./.github/actions/setup-nodejs

      - name: Install Browsers
        run: pnpm turbo run install-browsers --filter=n8n-playwright

      - name: Run dev-server smoke spec
        # Run from repo root so PLAYWRIGHT_BROWSERS_PATH (relative) resolves
        # correctly. cd-ing into the playwright package double-nests it.
        run: pnpm --filter=n8n-playwright test:dev-server-smoke --reporter=list

      - name: Upload Failure Artifacts
        if: ${{ failure() }}
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: dev-server-smoke-report
          path: |
            packages/testing/playwright/test-results/
            packages/testing/playwright/playwright-report/
          retention-days: 7
72  .github/workflows/test-e2e-coverage-weekly.yml  vendored
@@ -5,58 +5,48 @@ on:
    - cron: '0 2 * * 1' # Every Monday at 2 AM
  workflow_dispatch: # Allow manual triggering

env:
  NODE_OPTIONS: --max-old-space-size=16384
  PLAYWRIGHT_WORKERS: 4
  PLAYWRIGHT_BROWSERS_PATH: packages/testing/playwright/.playwright-browsers

jobs:
  prepare-docker:
    name: Prepare Docker (coverage)
    uses: ./.github/workflows/prepare-docker-reusable.yml
    with:
      build-variant: coverage
      runner: blacksmith-8vcpu-ubuntu-2204
    secrets: inherit
  coverage:
    runs-on: blacksmith-8vcpu-ubuntu-2204
    name: Coverage Tests

  e2e:
    name: E2E (coverage)
    needs: prepare-docker
    uses: ./.github/workflows/test-e2e-reusable.yml
    with:
      test-mode: docker-artifact
      test-command: pnpm --filter=n8n-playwright test:container:coverage
      workers: '1'
      runner: blacksmith-4vcpu-ubuntu-2204
      timeout-minutes: 45
      pre-generated-matrix: '[{"shard":1,"images":""},{"shard":2,"images":""},{"shard":3,"images":""},{"shard":4,"images":""}]'
      artifact-prefix: coverage
    secrets: inherit

  aggregate:
    name: Aggregate Coverage
    needs: e2e
    if: always() && needs.e2e.result != 'skipped' && needs.e2e.result != 'cancelled'
    runs-on: blacksmith-4vcpu-ubuntu-2204
    steps:
      - name: Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      - name: Setup Environment
        uses: ./.github/actions/setup-nodejs
        env:
          INCLUDE_TEST_CONTROLLER: 'true'

      - name: Download shard artifacts
        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
        with:
          pattern: coverage-shard-*
          path: /tmp/shards/
      - name: Build Docker Image with Coverage
        run: pnpm build:docker:coverage
        env:
          INCLUDE_TEST_CONTROLLER: 'true'

      - name: Collect coverage JSON
        shell: bash
      - name: Install Browsers
        run: pnpm turbo run install-browsers --filter=n8n-playwright

      - name: Run Container Coverage Tests
        id: coverage-tests
        run: |
          mkdir -p packages/testing/playwright/.nyc_output/coverage
          found=$(find /tmp/shards -path '*/.nyc_output/coverage/*.json' 2>/dev/null | wc -l)
          echo "Found $found coverage JSON files across shards"
          find /tmp/shards -path '*/.nyc_output/coverage/*.json' \
            -exec cp {} packages/testing/playwright/.nyc_output/coverage/ \;
          ls -la packages/testing/playwright/.nyc_output/coverage/ || true
          pnpm --filter n8n-playwright test:container:sqlite \
            --workers=${{ env.PLAYWRIGHT_WORKERS }}
        env:
          BUILD_WITH_COVERAGE: 'true'
          CURRENTS_RECORD_KEY: ${{ secrets.CURRENTS_RECORD_KEY }}
          CURRENTS_PROJECT_ID: 'LRxcNt'
          QA_METRICS_WEBHOOK_URL: ${{ secrets.QA_METRICS_WEBHOOK_URL }}
          QA_METRICS_WEBHOOK_USER: ${{ secrets.QA_METRICS_WEBHOOK_USER }}
          QA_METRICS_WEBHOOK_PASSWORD: ${{ secrets.QA_METRICS_WEBHOOK_PASSWORD }}

      - name: Generate Coverage Report
        if: always() && steps.coverage-tests.outcome != 'skipped'
        run: pnpm --filter n8n-playwright coverage:report

      - name: Upload Coverage Report Artifact
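The shard-collection step above funnels every per-shard coverage JSON into one directory with a single `find … -exec cp`. The same pattern can be exercised locally against a miniature layout (all paths here are illustrative):

```shell
# Recreate a tiny version of the downloaded shard-artifact layout.
rm -rf /tmp/cov-demo
mkdir -p /tmp/cov-demo/shards/shard-1/.nyc_output/coverage /tmp/cov-demo/merged
echo '{}' > /tmp/cov-demo/shards/shard-1/.nyc_output/coverage/out.json
# Count matching files, then copy them into one merged directory,
# mirroring the workflow's find/-exec invocation.
found=$(find /tmp/cov-demo/shards -path '*/.nyc_output/coverage/*.json' | wc -l | tr -d ' ')
find /tmp/cov-demo/shards -path '*/.nyc_output/coverage/*.json' \
  -exec cp {} /tmp/cov-demo/merged/ \;
echo "found=$found"
```

The `-path` glob keys on the `.nyc_output/coverage` segment, so unrelated JSON elsewhere in a shard is ignored.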
@@ -78,7 +68,7 @@ jobs:
          fail_ci_if_error: false

      - name: Analyse Coverage Gaps
-       if: always()
+       if: always() && steps.coverage-tests.outcome != 'skipped'
        env:
          CODECOV_API_TOKEN: ${{ secrets.CODECOV_API_TOKEN }}
        run: |
@@ -86,7 +76,7 @@ jobs:
            --md --top=15 --out-json=coverage-gaps.json >> "$GITHUB_STEP_SUMMARY"

      - name: Upload Coverage Gap Report
-       if: always()
+       if: always() && steps.coverage-tests.outcome != 'skipped'
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: coverage-gap-report
@@ -23,20 +23,21 @@ jobs:

  benchmark:
    needs: [prepare-docker]
-   name: benchmarking
+   name: ${{ matrix.profile }}
    strategy:
      fail-fast: false
      matrix:
        include:
-         - runner: blacksmith-8vcpu-ubuntu-2204
+         - profile: benchmark-direct
+           runner: blacksmith-4vcpu-ubuntu-2204
+         - profile: benchmark-queue
+           runner: blacksmith-8vcpu-ubuntu-2204
+         - profile: benchmark-queue-tuned
+           runner: blacksmith-8vcpu-ubuntu-2204
    uses: ./.github/workflows/test-e2e-reusable.yml
    with:
      test-mode: docker-artifact
      # Runs the full benchmark suite. Each spec brings its own container via
      # `test.use({ capability })`, so workers must be 1 (one container at a time).
-     test-command: 'pnpm --filter=n8n-playwright test:benchmark'
-     workers: '1'
+     test-command: pnpm --filter=n8n-playwright test:all --project=${{ matrix.profile }}:infrastructure --workers=1
      runner: ${{ matrix.runner }}
-     timeout-minutes: 120
      artifact-prefix: benchmark
+     timeout-minutes: 60
    secrets: inherit
@@ -19,5 +19,4 @@ jobs:
      test-mode: docker-artifact
      test-command: pnpm --filter=n8n-playwright test:performance
      currents-project-id: 'O9BJaN'
      artifact-prefix: performance
    secrets: inherit
20  .github/workflows/test-e2e-reusable.yml  vendored
@@ -32,6 +32,11 @@ on:
        required: false
        default: 30
        type: number
      upload-failure-artifacts:
        description: 'Upload test failure artifacts (screenshots, traces, videos). Enable for community PRs without Currents access.'
        required: false
        default: false
        type: boolean
      currents-project-id:
        description: 'Currents project ID for reporting'
        required: false
@@ -47,11 +52,6 @@ on:
        required: false
        default: ''
        type: string
      artifact-prefix:
        description: 'Prefix for uploaded shard artifacts'
        required: false
        default: 'e2e'
        type: string

env:
  NODE_OPTIONS: ${{ contains(inputs.runner, '2vcpu') && '--max-old-space-size=6144' || '' }}
@@ -121,17 +121,15 @@ jobs:
          N8N_ENCRYPTION_KEY: ${{ secrets.N8N_ENCRYPTION_KEY }}
          N8N_TEST_ENV: ${{ inputs.n8n-env }}

-     - name: Upload Shard Artifacts
-       if: always()
+     - name: Upload Failure Artifacts
+       if: ${{ failure() && inputs.upload-failure-artifacts }}
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
-         name: ${{ inputs.artifact-prefix }}-shard-${{ matrix.shard }}
+         name: playwright-report-shard-${{ matrix.shard }}
          path: |
            packages/testing/playwright/test-results/
            packages/testing/playwright/playwright-report/
            packages/testing/playwright/.nyc_output/
-         retention-days: 1
-         if-no-files-found: ignore
+         retention-days: 7

      - name: Cancel Currents run if workflow is cancelled
        if: ${{ cancelled() }}
@@ -29,7 +29,6 @@ jobs:
      workers: '1'
      pre-generated-matrix: '[{"shard":1},{"shard":2},{"shard":3},{"shard":4},{"shard":5},{"shard":6},{"shard":7},{"shard":8},{"shard":9},{"shard":10},{"shard":11},{"shard":12},{"shard":13},{"shard":14},{"shard":15},{"shard":16}]'
      n8n-env: '{"N8N_EXPRESSION_ENGINE":"vm"}'
      artifact-prefix: vm-expressions
    secrets: inherit

  notify-on-failure:
123  .github/workflows/test-evals-discovery.yml  vendored
@@ -1,123 +0,0 @@
name: 'Test: Instance AI Discovery Evals'

on:
  workflow_call:
    inputs:
      branch:
        description: 'GitHub branch to test'
        required: false
        type: string
        default: 'master'
      filter:
        description: 'Filter scenarios by id (e.g. "slack-oauth")'
        required: false
        type: string
        default: ''
      trials:
        description: 'Trials per scenario'
        required: false
        type: number
        default: 3
  workflow_dispatch:
    inputs:
      branch:
        description: 'GitHub branch to test'
        required: false
        default: 'master'
      filter:
        description: 'Filter scenarios by id (e.g. "slack-oauth")'
        required: false
        default: ''
      trials:
        description: 'Trials per scenario'
        required: false
        default: '3'

jobs:
  run-discovery-evals:
    name: 'Run Discovery Evals'
    runs-on: blacksmith-2vcpu-ubuntu-2204
    timeout-minutes: 15
    permissions:
      contents: read
      pull-requests: write

    steps:
      - name: Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          ref: ${{ inputs.branch || github.ref }}
          fetch-depth: 1

      - name: Setup Environment
        uses: ./.github/actions/setup-nodejs
        with:
          build-command: 'pnpm build'

      - name: Export Node Types
        run: |
          ./packages/cli/bin/n8n export:nodes --output ./packages/@n8n/ai-workflow-builder.ee/evaluations/nodes.json

      - name: Run Discovery Evals
        id: eval
        working-directory: packages/@n8n/instance-ai
        env:
          ANTHROPIC_API_KEY: ${{ secrets.EVALS_ANTHROPIC_KEY }}
          LANGSMITH_TRACING: 'true'
          LANGSMITH_ENDPOINT: ${{ secrets.EVALS_LANGSMITH_ENDPOINT }}
          LANGSMITH_API_KEY: ${{ secrets.EVALS_LANGSMITH_API_KEY }}
          LANGSMITH_REVISION_ID: ${{ github.sha }}
          LANGSMITH_BRANCH: ${{ github.head_ref || github.ref_name }}
          FILTER: ${{ inputs.filter }}
          TRIALS: ${{ inputs.trials || 3 }}
        run: |
          set -o pipefail
          if [ -n "$FILTER" ]; then
            pnpm eval:discovery --filter "$FILTER" --trials "$TRIALS" 2>&1 | tee discovery-eval-output.txt
          else
            pnpm eval:discovery --trials "$TRIALS" 2>&1 | tee discovery-eval-output.txt
          fi

      - name: Post eval results to PR
        if: ${{ always() && github.event_name == 'pull_request' && hashFiles('packages/@n8n/instance-ai/discovery-eval-output.txt') != '' }}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          EVAL_OUTCOME: ${{ steps.eval.outcome }}
          HEAD_REF: ${{ github.head_ref || github.ref_name }}
          COMMIT_SHA: ${{ github.sha }}
        run: |
          if [ "$EVAL_OUTCOME" = "success" ]; then
            STATUS_ICON="✅"
          else
            STATUS_ICON="❌"
          fi
          {
            echo "### Instance AI Discovery Eval ${STATUS_ICON}"
            echo ""
            echo "Branch: \`${HEAD_REF}\` · Commit: \`${COMMIT_SHA}\`"
            echo ""
            echo "<details><summary>Eval output</summary>"
            echo ""
            echo '```'
            cat packages/@n8n/instance-ai/discovery-eval-output.txt
            echo '```'
            echo ""
            echo "</details>"
          } > /tmp/discovery-comment.md

          COMMENT_ID=$(gh api "repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments" \
            --jq '.[] | select(.body | startswith("### Instance AI Discovery Eval")) | .id' | tail -1)

          if [ -n "$COMMENT_ID" ]; then
            gh api "repos/${{ github.repository }}/issues/comments/${COMMENT_ID}" -X PATCH -F body=@/tmp/discovery-comment.md
          else
            gh pr comment "${{ github.event.pull_request.number }}" --body-file /tmp/discovery-comment.md
          fi

      - name: Upload Results
        if: ${{ always() }}
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: instance-ai-discovery-eval-results
          path: packages/@n8n/instance-ai/discovery-eval-output.txt
          retention-days: 14
113  .github/workflows/test-evals-instance-ai.yml  vendored
@@ -69,7 +69,6 @@ jobs:
          N8N_LICENSE_ACTIVATION_KEY: ${{ secrets.N8N_LICENSE_ACTIVATION_KEY }}
          N8N_LICENSE_CERT: ${{ secrets.N8N_LICENSE_CERT }}
          N8N_ENCRYPTION_KEY: ${{ secrets.N8N_ENCRYPTION_KEY }}
          DAYTONA_API_KEY: ${{ secrets.DAYTONA_API_KEY }}
        run: |
          IFS=',' read -ra PORTS <<< "$LANE_PORTS"
          for i in "${!PORTS[@]}"; do
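Both the container-boot loop above and the later per-lane assertion split the comma-separated `$LANE_PORTS` value with `IFS=',' read -ra`. A standalone sketch of that lane/port iteration (the port values are hypothetical; the real list arrives via env):

```shell
# Hypothetical lane ports, shaped like the $LANE_PORTS the job receives.
LANE_PORTS="5680,5681,5682"
# Split on commas into a bash array without touching the global IFS.
IFS=',' read -ra PORTS <<< "$LANE_PORTS"
for i in "${!PORTS[@]}"; do
  port="${PORTS[$i]}"
  lane="$((i+1))"          # lanes are 1-based, array indices 0-based
  echo "lane $lane -> port $port"
done
```

Scoping `IFS` to the `read` call is what lets the rest of the script keep normal word splitting.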
@@ -80,10 +79,6 @@ jobs:
            -e N8N_AI_ENABLED=true \
            -e N8N_INSTANCE_AI_MODEL_API_KEY="$EVALS_ANTHROPIC_KEY" \
            -e N8N_AI_ASSISTANT_BASE_URL="" \
            -e N8N_INSTANCE_AI_SANDBOX_ENABLED=true \
            -e N8N_INSTANCE_AI_SANDBOX_PROVIDER=daytona \
            -e DAYTONA_API_URL=https://app.daytona.io/api \
            -e DAYTONA_API_KEY="$DAYTONA_API_KEY" \
            -e N8N_LICENSE_ACTIVATION_KEY="$N8N_LICENSE_ACTIVATION_KEY" \
            -e N8N_LICENSE_CERT="$N8N_LICENSE_CERT" \
            -e N8N_ENCRYPTION_KEY="$N8N_ENCRYPTION_KEY" \
@@ -127,36 +122,6 @@ jobs:
          }'
          done

      # Belt-and-suspenders: env vars set sandbox config but persisted admin
      # settings can override. Per-lane assertion catches env-injection hiccups
      # or unexpected DB-side state. A single misconfigured lane would
      # silently route some builds through tool mode and pollute results.
      - name: Assert sandbox is enabled on every lane
        run: |
          IFS=',' read -ra PORTS <<< "$LANE_PORTS"
          bad=0
          for i in "${!PORTS[@]}"; do
            port="${PORTS[$i]}"
            lane="$((i+1))"
            curl -sf -X POST "http://localhost:$port/rest/login" \
              -H "Content-Type: application/json" \
              -d '{"emailOrLdapLoginId":"nathan@n8n.io","password":"PlaywrightTest123"}' \
              -c "/tmp/cookies-$port.txt" -o /dev/null
            cfg=$(curl -sf -b "/tmp/cookies-$port.txt" \
              "http://localhost:$port/rest/instance-ai/settings" \
              | jq -r '.data | "\(.sandboxEnabled) \(.sandboxProvider)"')
            if [ "$cfg" != "true daytona" ]; then
              echo "::error::lane $lane (port $port): expected 'true daytona', got '$cfg'"
              bad=$((bad+1))
            else
              echo "  lane $lane: sandboxEnabled=true sandboxProvider=daytona ok"
            fi
          done
          if [ "$bad" -gt 0 ]; then
            echo "::error::$bad lane(s) misconfigured - eval would mix sandbox + tool-mode builds"
            exit 1
          fi

      - name: Run Instance AI Evals
        continue-on-error: true
        working-directory: packages/@n8n/instance-ai
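The per-lane check collapses the settings response into a single comparable string with jq, so one `[ "$cfg" != "true daytona" ]` test covers both fields. Against a canned payload (the response shape is assumed from the check itself, not from API docs):

```shell
# Simulated /rest/instance-ai/settings body; the real job fetches it with curl.
resp='{"data":{"sandboxEnabled":true,"sandboxProvider":"daytona"}}'
# Interpolate both fields into one space-separated string, as the workflow does.
cfg=$(printf '%s' "$resp" | jq -r '.data | "\(.sandboxEnabled) \(.sandboxProvider)"')
echo "$cfg"
```

Requires `jq` on PATH; a lane whose settings diverge in either field produces a string that fails the single comparison.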
@@ -178,63 +143,9 @@ jobs:
            --base-url "$BASE_URLS" \
            --concurrency 32 \
            --verbose \
-           --iterations 5 \
+           --iterations 3 \
            ${{ inputs.filter && format('--filter "{0}"', inputs.filter) || '' }}

      # Captures sandbox/builder/Daytona signals that surface during the eval
      # (after migrations finish). Two layers of secret-leak defense:
      #
      # 1. Filter to specific diagnostic patterns — never tail raw output.
      #    The grep allowlist scopes the log surface to lines we care
      #    about for debugging (sandbox lifecycle, builder, errors).
      #
      # 2. Re-register secrets via ::add-mask:: so any line that does
      #    match the allowlist has the secret values replaced with ***
      #    before reaching the GH Actions log. GitHub auto-masks
      #    ${{ secrets.X }} references, but the masking is fragile
      #    against transformed or split values; explicit registration
      #    reinforces it.
      #
      # Runs even on eval failure so we have the post-mortem regardless.
      - name: Capture n8n container logs (debug)
        if: ${{ always() }}
        env:
          EVALS_ANTHROPIC_KEY: ${{ secrets.EVALS_ANTHROPIC_KEY }}
          DAYTONA_API_KEY: ${{ secrets.DAYTONA_API_KEY }}
          N8N_LICENSE_ACTIVATION_KEY: ${{ secrets.N8N_LICENSE_ACTIVATION_KEY }}
          N8N_LICENSE_CERT: ${{ secrets.N8N_LICENSE_CERT }}
          N8N_ENCRYPTION_KEY: ${{ secrets.N8N_ENCRYPTION_KEY }}
        run: |
          # Layer 2 — defense in depth: explicitly mask each secret's value.
          # ::add-mask:: is a single-line workflow command. Multi-line secrets
          # (e.g. N8N_LICENSE_CERT is PEM-encoded) must be masked one line at
          # a time, otherwise only the first line is registered.
          for v in "$EVALS_ANTHROPIC_KEY" "$DAYTONA_API_KEY" \
                   "$N8N_LICENSE_ACTIVATION_KEY" "$N8N_LICENSE_CERT" \
                   "$N8N_ENCRYPTION_KEY"; do
            [ -z "$v" ] && continue
            while IFS= read -r line; do
              [ -n "$line" ] && echo "::add-mask::$line"
            done <<< "$v"
          done

          # Layer 1 — accuracy filter: only surface diagnostic signals.
          # `tail -100` after the filter so we get the LATEST matching lines
          # (post-eval failure signal), not the earliest startup-time ones.
          SIGNALS='sandbox|builder|daytona|instance.?ai|error|warn|reject|exception|fail'
          for c in $(docker ps -aq --filter "name=n8n-eval-"); do
            name=$(docker inspect --format '{{.Name}}' "$c" | sed 's|^/||')
            echo ""
            echo "============================================================"
            echo "=== $name (filtered diagnostic signals, last 100 lines) ==="
            echo "============================================================"
            docker logs "$c" 2>&1 \
              | grep -ivE 'migration' \
              | grep -iE "$SIGNALS" \
              | tail -100 \
              || true
          done

      - name: Stop n8n containers
        if: ${{ always() }}
        run: |
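The masking loop's line-by-line handling can be seen in isolation with a fake multi-line value standing in for a real secret (no secret involved; the PEM-ish text below is invented):

```shell
# Fake multi-line value shaped like a PEM cert, purely for illustration.
SECRET='-----BEGIN CERT-----
abc123
-----END CERT-----'
masked=0
while IFS= read -r line; do
  # One ::add-mask:: per line; masking the whole value in one command
  # would only register its first line with the Actions runner.
  [ -n "$line" ] && echo "::add-mask::$line" && masked=$((masked+1))
done <<< "$SECRET"
echo "masked $masked lines"
```

Each emitted `::add-mask::` line registers one value, so all three lines of the fake cert end up redacted independently.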
@@ -249,16 +160,22 @@ jobs:
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # The eval CLI writes the full PR comment as eval-pr-comment.md
          # (see comparison/format.ts:formatComparisonMarkdown). It includes
          # the alert, aggregate, comparison sections, per-test-case results
          # collapsed, and failure details collapsed. CI just relays it.
          COMMENT_FILE="packages/@n8n/instance-ai/eval-pr-comment.md"
          if [ ! -f "$COMMENT_FILE" ]; then
            echo "No PR comment file found (eval likely cancelled before writing results)"
          RESULTS_FILE="packages/@n8n/instance-ai/eval-results.json"
          if [ ! -f "$RESULTS_FILE" ]; then
            echo "No eval results file found"
            exit 0
          fi
          cp "$COMMENT_FILE" /tmp/eval-comment.md

          # Build the full comment body with jq
          jq -r '
            "### Instance AI Workflow Eval Results\n\n" +
            "**\(.summary.built)/\(.summary.testCases) built | \(.totalRuns) run(s) | pass@\(.totalRuns): \(.summary.passAtK * 100 | floor)% | pass^\(.totalRuns): \(.summary.passHatK * 100 | floor)% | iterations: \(.summary.passRatePerIter)**\n\n" +
            "| Workflow | Build | pass@\(.totalRuns) | pass^\(.totalRuns) |\n|---|---|---|---|\n" +
            ([.testCases[] as $tc | "| \($tc.name) | \($tc.buildSuccessCount)/\($tc.totalRuns) | \(([$tc.scenarios[] | .passAtK] | add) / ($tc.scenarios | length) * 100 | floor)% | \(([$tc.scenarios[] | .passHatK] | add) / ($tc.scenarios | length) * 100 | floor)% |"] | join("\n")) +
            "\n\n<details><summary>Failure details</summary>\n\n" +
            ([.testCases[] as $tc | $tc.scenarios[] | select(.passHatK < 1) | "**\($tc.name) / \(.name)** — \(.passCount)/\(.totalRuns) passed" + "\n" + ([.runs[] | select(.passed == false) | "> Run\(if .failureCategory then " [\(.failureCategory)]" else "" end): \(.reasoning | .[0:200])"] | join("\n"))] | join("\n\n")) +
            "\n</details>"
          ' "$RESULTS_FILE" > /tmp/eval-comment.md

          # Find and update existing eval comment, or create new one
          COMMENT_ID=$(gh api "repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments" \
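The jq template above turns fractional pass rates into floored percentages via `* 100 | floor`. A tiny reproduction against a hand-written results fragment (field names taken from the template; the values are invented):

```shell
# Minimal stand-in for eval-results.json, only the fields this line uses.
json='{"summary":{"built":4,"testCases":5,"passAtK":0.666,"passHatK":0.5},"totalRuns":3}'
# Scale to a percentage, then floor, mirroring the comment-builder's jq.
line=$(printf '%s' "$json" | jq -r \
  '"\(.summary.built)/\(.summary.testCases) built | pass@\(.totalRuns): \(.summary.passAtK * 100 | floor)% | pass^\(.totalRuns): \(.summary.passHatK * 100 | floor)%"')
echo "$line"
```

Requires `jq`; `floor` keeps the report from printing long floating-point tails like `66.60000000000001%`.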
2  .github/workflows/test-visual-chromatic.yml  vendored
@@ -34,4 +34,4 @@ jobs:
      skip: 'release/**'
      onlyChanged: true
      projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
-     exitZeroOnChanges: true
+     exitZeroOnChanges: false
@@ -31,6 +31,4 @@ jobs:
          install-command: pnpm install --frozen-lockfile --dir ./.github/scripts --ignore-workspace

      - name: Ensure release-candidate branches
        env:
          GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
        run: node ./.github/scripts/ensure-release-candidate-branches.mjs
3  .gitignore  vendored
@@ -36,8 +36,6 @@ packages/testing/playwright/playwright-report
packages/testing/playwright/test-results
packages/testing/playwright/eval-results.json
packages/@n8n/instance-ai/eval-results.json
packages/@n8n/instance-ai/.eval-output/
packages/@n8n/instance-ai/eval-pr-comment.md
packages/testing/playwright/.playwright-browsers
packages/testing/playwright/.playwright-cli
test-results/
@@ -63,7 +61,6 @@ packages/cli/src/commands/export/outputs
.claude/settings.local.json
.claude/plans/
.claude/worktrees/
.claude/specs/
.cursor/plans/
.superset
.conductor
136  CHANGELOG.md
@@ -1,135 +1,21 @@
-# [2.21.0](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.21.0) (2026-05-12)
+## [2.20.5](https://github.com/n8n-io/n8n/compare/n8n@2.20.4...n8n@2.20.5) (2026-05-07)


### Bug Fixes

* Add warning to Computer Use install modal ([#30094](https://github.com/n8n-io/n8n/issues/30094)) ([ecf96ad](https://github.com/n8n-io/n8n/commit/ecf96ad30c8d29641db07cd78885ea28aff26199))
* **ai-builder:** Allow restoring archived workflows from Instance AI ([#29813](https://github.com/n8n-io/n8n/issues/29813)) ([a33a89a](https://github.com/n8n-io/n8n/commit/a33a89a215d6cef39895858bf36c00c15abfdd9d))
* **ai-builder:** Preserve collected planning context ([#29916](https://github.com/n8n-io/n8n/issues/29916)) ([5e3aa1a](https://github.com/n8n-io/n8n/commit/5e3aa1a726e903387344d3a4ed51e97811e4ff02))
* **ai-builder:** Resolve HitlTool variants to base node in get_node_types ([#29731](https://github.com/n8n-io/n8n/issues/29731)) ([ed9471a](https://github.com/n8n-io/n8n/commit/ed9471a5321747bbca003bee7d6a37d54bb79cb2))
* **Airtable Node:** Fix typecast option dropping attachment field updates ([#29556](https://github.com/n8n-io/n8n/issues/29556)) ([0cafc71](https://github.com/n8n-io/n8n/commit/0cafc717a274053f698e988d6f44a27a8b936e83))
* Align undici override across major versions ([#30028](https://github.com/n8n-io/n8n/issues/30028)) ([6b893b4](https://github.com/n8n-io/n8n/commit/6b893b45a0d05dfb08ea7b732f775c28b6ccf801))
* **Calendly Trigger Node:** Use API v2 for webhook subscriptions ([#29771](https://github.com/n8n-io/n8n/issues/29771)) ([0edcdcf](https://github.com/n8n-io/n8n/commit/0edcdcfe8529b6296f1a1f0d8b8af3841a14a466))
* **core:** Activate agent chat integrations on every main ([#30029](https://github.com/n8n-io/n8n/issues/30029)) ([6f4f0a0](https://github.com/n8n-io/n8n/commit/6f4f0a0303e1f0f0cd57a5b0dab08347010b7241))
* **core:** Add configurable retries and error details to S3 ([#28309](https://github.com/n8n-io/n8n/issues/28309)) ([e2576ca](https://github.com/n8n-io/n8n/commit/e2576ca25bc973b315bdcbff1a1b2d3309bc647d))
* **core:** Add ESLint rule to prevent error instances in toThrow assertions ([#29889](https://github.com/n8n-io/n8n/issues/29889)) ([75ed71c](https://github.com/n8n-io/n8n/commit/75ed71c00142e8bbdfb851691d5fc3de3cfada36))
* **core:** Add liveness timeouts for Instance AI ([#30145](https://github.com/n8n-io/n8n/issues/30145)) ([52a4bcb](https://github.com/n8n-io/n8n/commit/52a4bcb23a9398b1327acd0ec39df7a9e00b48b6))
* **core:** Add support for context establishment hooks in webhook mode ([#29893](https://github.com/n8n-io/n8n/issues/29893)) ([04e9b25](https://github.com/n8n-io/n8n/commit/04e9b258a887c07b62774f09e3921932038a3984))
* **core:** Add workflow structure validation ([#29699](https://github.com/n8n-io/n8n/issues/29699)) ([bec74ae](https://github.com/n8n-io/n8n/commit/bec74aeb4fda198853b3ea82ed135a1db3ba4988))
* **core:** Advance Postgres IDENTITY sequences after entity import ([#29762](https://github.com/n8n-io/n8n/issues/29762)) ([ca33060](https://github.com/n8n-io/n8n/commit/ca33060e0bd30c6d077f8dd18ca8492d50c06a92))
* **core:** Agent sessions correctly quoting columns in queries for Postgres ([#29999](https://github.com/n8n-io/n8n/issues/29999)) ([9f92005](https://github.com/n8n-io/n8n/commit/9f92005938a1b481b89558b4e82a198da6ec4e8c))
* **core:** Agents called from workflows use the workflows owner/user ID for calling further workflows through the agent ([#30242](https://github.com/n8n-io/n8n/issues/30242)) ([9072ee3](https://github.com/n8n-io/n8n/commit/9072ee3beb1789f34008cb0f85f361dcac8cae26))
* **core:** Allow GIT_SSH_COMMAND in simple-git after 3.36.0 upgrade ([#29894](https://github.com/n8n-io/n8n/issues/29894)) ([f42be90](https://github.com/n8n-io/n8n/commit/f42be9030e7f549da5ed6dc3902d058c2ebbadcb))
* **core:** Allow profile edits when SSO is no longer active ([#29765](https://github.com/n8n-io/n8n/issues/29765)) ([2714f00](https://github.com/n8n-io/n8n/commit/2714f001218d1323233c1920c94ed02a5ce8dcf1))
* **core:** Allow same-domain redirects in instance-ai web research (TRUST-73) ([#30107](https://github.com/n8n-io/n8n/issues/30107)) ([3123f25](https://github.com/n8n-io/n8n/commit/3123f2551be75fb282628b9106b060975fb983fc))
* **core:** Always create instance-ai sandbox workspace dirs (TRUST-79) ([#30106](https://github.com/n8n-io/n8n/issues/30106)) ([5e88748](https://github.com/n8n-io/n8n/commit/5e887483344daad5e11bee97d3315a9b2b38d0c9))
* **core:** Avoid MCP get_execution hang on circular references ([#30051](https://github.com/n8n-io/n8n/issues/30051)) ([60e23e1](https://github.com/n8n-io/n8n/commit/60e23e10e01f20f73fb1c61d74b5ca44a4c677f6))
* **core:** Check npm provenance in community package scanner ([#29667](https://github.com/n8n-io/n8n/issues/29667)) ([804f51c](https://github.com/n8n-io/n8n/commit/804f51cf0d8411b4d4df6f593fdea787b97fad51))
* **core:** Clarify 0-based indexing in workflow SDK prompts and JSDoc ([#29734](https://github.com/n8n-io/n8n/issues/29734)) ([fba873c](https://github.com/n8n-io/n8n/commit/fba873c37e76f01d28443c5276b2d92bd333602a))
* **core:** Clarify agent builder prompt guidance ([#30127](https://github.com/n8n-io/n8n/issues/30127)) ([75646c4](https://github.com/n8n-io/n8n/commit/75646c45271831bf8d03653baf024d201d5fae6d))
* **core:** Defer credential setup during workflow builds ([#30181](https://github.com/n8n-io/n8n/issues/30181)) ([bb73952](https://github.com/n8n-io/n8n/commit/bb73952fcc9aff4eed0af6bb99fb10f65d48df3d))
* **core:** Emit missing auth audit events for OIDC and SSO-restricted login ([#29856](https://github.com/n8n-io/n8n/issues/29856)) ([dd812c5](https://github.com/n8n-io/n8n/commit/dd812c5010ca28ca38c238bfa8c57fe39ac816d5))
* **core:** Export boolean CSV values as true/false for Data Tables ([#30007](https://github.com/n8n-io/n8n/issues/30007)) ([94d91e1](https://github.com/n8n-io/n8n/commit/94d91e13bfcaf360099a0a3816b0025502b145f4))
* **core:** Filter WaitTracker to only poll waiting executions ([#29898](https://github.com/n8n-io/n8n/issues/29898)) ([5c7921f](https://github.com/n8n-io/n8n/commit/5c7921f71c95d97f6730e6b28b06947b1cfbaa23))
|
||||
* **core:** Fix duplicate task request on runner defer ([#28315](https://github.com/n8n-io/n8n/issues/28315)) ([80c8a6c](https://github.com/n8n-io/n8n/commit/80c8a6c2fdc97624c9b4b3e97b8ff20aca641552))
|
||||
* **core:** Harden axios error handling against non-string error stack ([#29100](https://github.com/n8n-io/n8n/issues/29100)) ([2dbf02e](https://github.com/n8n-io/n8n/commit/2dbf02e63e5ddee8d9e4a94f2ad3cd1f5321f2a7))
|
||||
* **core:** Improve AI chat file upload handling and error states ([#29701](https://github.com/n8n-io/n8n/issues/29701)) ([afe119b](https://github.com/n8n-io/n8n/commit/afe119be1409ac2cb198f7a41dc12ed25f5cf106))
|
||||
* **core:** Improve documentation usage in mcp tools ([#30210](https://github.com/n8n-io/n8n/issues/30210)) ([e8827cd](https://github.com/n8n-io/n8n/commit/e8827cd6e8ff3eb03ceab6965574bacf10c719d0))
|
||||
* **core:** Initialise encryption key proxy on worker and webhook instances ([#29912](https://github.com/n8n-io/n8n/issues/29912)) ([ae57e60](https://github.com/n8n-io/n8n/commit/ae57e606b4f5cf691bceb01489e5991cf31911ef))
|
||||
* **core:** Inline AI_NODE_SDK_VERSION to save memory by not loading @n8n/ai-utilities on boot ([#30113](https://github.com/n8n-io/n8n/issues/30113)) ([f709e53](https://github.com/n8n-io/n8n/commit/f709e5382448926e15e36571aa9fd32db238e36d))
|
||||
* **core:** Persist agent chat draft across modes and hide unfinished tool-approval toggle ([#30123](https://github.com/n8n-io/n8n/issues/30123)) ([7094b48](https://github.com/n8n-io/n8n/commit/7094b48c9444024af6c14b72b49b47b555db52ef))
|
||||
* **core:** Preserve node positions on AI workflow updates ([#29850](https://github.com/n8n-io/n8n/issues/29850)) ([f2764f0](https://github.com/n8n-io/n8n/commit/f2764f04c0e663268fe40737c55c8c1a0f33173b))
|
||||
* **core:** Prevent proxy layer accumulation in ObservableObject ([#30129](https://github.com/n8n-io/n8n/issues/30129)) ([0a76135](https://github.com/n8n-io/n8n/commit/0a761355c4836433c379ee8933c0198621879ae0))
|
||||
* **core:** Propagate waitTill from worker to main in scaling mode ([#30099](https://github.com/n8n-io/n8n/issues/30099)) ([3702ff8](https://github.com/n8n-io/n8n/commit/3702ff8eb31547d51e3b56b484bf6a731296f9cf))
|
||||
* **core:** Scope credential resolution ([#30156](https://github.com/n8n-io/n8n/issues/30156)) ([174f0f8](https://github.com/n8n-io/n8n/commit/174f0f805e0d5715d2d80e5c0282a94b79e9a390))
|
||||
* **core:** Simple-git update broke https connection ([#29998](https://github.com/n8n-io/n8n/issues/29998)) ([01300e9](https://github.com/n8n-io/n8n/commit/01300e9b9b7e0f80f1852c5e1e4b3df9a42404c4))
|
||||
* **core:** Simplify Slack redirect URL verification process for agents ([#30033](https://github.com/n8n-io/n8n/issues/30033)) ([8201281](https://github.com/n8n-io/n8n/commit/820128196cf550ab8cf371fbebb3457b9fd35d22))
|
||||
* **core:** Skip disabled tool nodes when mapping AI Agent tool sources ([#29460](https://github.com/n8n-io/n8n/issues/29460)) ([bd7eeb7](https://github.com/n8n-io/n8n/commit/bd7eeb7bc89032b9a0db467cb53f37bfef71647e))
|
||||
* **core:** Skip unknown fixedCollection keys instead of throwing ([#29689](https://github.com/n8n-io/n8n/issues/29689)) ([a30772c](https://github.com/n8n-io/n8n/commit/a30772c933544d06b560a3c66ec69cd4f7b8574f))
|
||||
* **core:** Stop applying node-defined sensitive output fields to runtime data ([#30198](https://github.com/n8n-io/n8n/issues/30198)) ([f4e8088](https://github.com/n8n-io/n8n/commit/f4e8088cb8df24443eec0482e2c58346c1e30016))
|
||||
* **core:** Stop logging password reset token values ([#29405](https://github.com/n8n-io/n8n/issues/29405)) ([bc8d196](https://github.com/n8n-io/n8n/commit/bc8d196931b35118ca6078a5845e8549bbba7e6b))
|
||||
* **core:** Support type filters on global credential lookups ([#30002](https://github.com/n8n-io/n8n/issues/30002)) ([8e0f37d](https://github.com/n8n-io/n8n/commit/8e0f37d100b45d4105ca168bb8f62ec2c1328cf2))
|
||||
* **core:** Throw on bare OutputSelector passed to .add()/.to() ([#29736](https://github.com/n8n-io/n8n/issues/29736)) ([60a5122](https://github.com/n8n-io/n8n/commit/60a51229e0db92a00788eb12586ea6376276645d))
|
||||
* **core:** Validate AI builder credential IDs before save ([#30070](https://github.com/n8n-io/n8n/issues/30070)) ([ceaebc6](https://github.com/n8n-io/n8n/commit/ceaebc6cbe7cde2269aee4be6966d021f136f9c6))
|
||||
* Correct connect.html path in browser extension ([#29714](https://github.com/n8n-io/n8n/issues/29714)) ([9b3b29b](https://github.com/n8n-io/n8n/commit/9b3b29b5058da42ec736c14cc8af5726b2a64e4b))
|
||||
* **EditImage Node:** Fix composite operation failing with stream empty buffer ([#30088](https://github.com/n8n-io/n8n/issues/30088)) ([0cc163b](https://github.com/n8n-io/n8n/commit/0cc163b7dcccbfa68c065faa466b2b50f21c4a97))
|
||||
* **editor:** Add expand/collapse to chat panel in Agents ([#30069](https://github.com/n8n-io/n8n/issues/30069)) ([f87094c](https://github.com/n8n-io/n8n/commit/f87094cf6e5efe7c89ef16c4253525091479b356))
|
||||
* **editor:** Disable chat during interactive agent choices ([#30111](https://github.com/n8n-io/n8n/issues/30111)) ([8171cf0](https://github.com/n8n-io/n8n/commit/8171cf0b32ee5aa74dd240bb8f99a3250e428217))
|
||||
* **editor:** Fix Agents styling issues from merge regression ([#30032](https://github.com/n8n-io/n8n/issues/30032)) ([478d499](https://github.com/n8n-io/n8n/commit/478d4998a8055a3d5f81b93120d67282546f125a))
|
||||
* **editor:** Fix collapse/expand for Chat sidebar ([#29378](https://github.com/n8n-io/n8n/issues/29378)) ([ee847d1](https://github.com/n8n-io/n8n/commit/ee847d1624636914323b8b06f145ae811101528f))
|
||||
* **editor:** Improve sidebar new resource menu UX ([#29597](https://github.com/n8n-io/n8n/issues/29597)) ([d5af542](https://github.com/n8n-io/n8n/commit/d5af542f254ba4846f3f393404e24bc5ec998283))
|
||||
* **editor:** Make sure trimmed placeholder never reaches backend ([#29842](https://github.com/n8n-io/n8n/issues/29842)) ([f7c7acc](https://github.com/n8n-io/n8n/commit/f7c7acc2441481235d81a38ea14ed637546d3b40))
|
||||
* **editor:** Match input height with mode selector in resource locator ([#30075](https://github.com/n8n-io/n8n/issues/30075)) ([277431b](https://github.com/n8n-io/n8n/commit/277431b88b195d92a32e35a7df7f8df907d9cb44))
|
||||
* **editor:** Polish encryption keys settings page ([#30008](https://github.com/n8n-io/n8n/issues/30008)) ([5cbd2dd](https://github.com/n8n-io/n8n/commit/5cbd2dd1e9a66cb1d00d89191395f2b417c7a08b))
|
||||
* **editor:** Preserve decimal suffix when duplicating a node ([#29541](https://github.com/n8n-io/n8n/issues/29541)) ([08a36d7](https://github.com/n8n-io/n8n/commit/08a36d7515eda29acd6c5e03f7968d4896465b3d))
|
||||
* **editor:** Refresh node icon when diff sidebar selection changes ([#29816](https://github.com/n8n-io/n8n/issues/29816)) ([ff41613](https://github.com/n8n-io/n8n/commit/ff41613533980f8f2a0ff7baef5fd2a63d981636))
|
||||
* **editor:** Rename canvas header dropdown action to Description ([#29719](https://github.com/n8n-io/n8n/issues/29719)) ([49e7b05](https://github.com/n8n-io/n8n/commit/49e7b056b4a21b6341ce1811a597476d37dfa42f))
|
||||
* **editor:** Rename encryption keys "Type" column to "Status" ([#29966](https://github.com/n8n-io/n8n/issues/29966)) ([e71afed](https://github.com/n8n-io/n8n/commit/e71afedfab84b3b7b88fe9c4e2a36cd31ac6206b))
|
||||
* **editor:** Render tooltips above popovers ([#29997](https://github.com/n8n-io/n8n/issues/29997)) ([ba5b3d1](https://github.com/n8n-io/n8n/commit/ba5b3d13b116d8e055fe3a4dce1b5349545ff540))
|
||||
* **editor:** Resolve expressions in 'Go to Sub-workflow' navigation ([#29843](https://github.com/n8n-io/n8n/issues/29843)) ([d6bae35](https://github.com/n8n-io/n8n/commit/d6bae35e8f8f0399cd722606d911ae2c67b60431))
|
||||
* Fix 15 security issues in fast-xml-builder, basic-ftp, fast-uri and 5 more ([#30169](https://github.com/n8n-io/n8n/issues/30169)) ([267fe49](https://github.com/n8n-io/n8n/commit/267fe49d51b7b8bcc80489b0f9f1a585986bc525))
|
||||
* **Git Node:** Restore Clone and other operations on simple-git 3.36+ ([#30223](https://github.com/n8n-io/n8n/issues/30223)) ([a8aa955](https://github.com/n8n-io/n8n/commit/a8aa95551e5950fd1920c2cce21cd2739b464266))
|
||||
* **Google Chat Node:** Clarify message resource name field ([#29964](https://github.com/n8n-io/n8n/issues/29964)) ([55df7cb](https://github.com/n8n-io/n8n/commit/55df7cbd0619e483e7e02207bc5084c715dcb53a))
|
||||
* **Google Sheets Node:** Reduce duplicate API calls in append operation to avoid quota limits ([#29444](https://github.com/n8n-io/n8n/issues/29444)) ([d63e1ae](https://github.com/n8n-io/n8n/commit/d63e1ae84e767df33c1fc394f646e8ca093aa4a3))
|
||||
* Handle IMAP fetch errors to prevent instance crash and stuck workflows ([#29469](https://github.com/n8n-io/n8n/issues/29469)) ([46d52ff](https://github.com/n8n-io/n8n/commit/46d52ffc7e719f17db56c433ee97a0b48861ba36))
|
||||
* **HTTP Request Node:** Validate URL type in older node versions ([#29886](https://github.com/n8n-io/n8n/issues/29886)) ([29a864c](https://github.com/n8n-io/n8n/commit/29a864ca9bcd88e82cf5f998c9ea36d2f81a5dee))
|
||||
* **MongoDB Node:** Resolve collection parameter per item in write operations ([#29956](https://github.com/n8n-io/n8n/issues/29956)) ([582b6ae](https://github.com/n8n-io/n8n/commit/582b6ae9eaaef6a616233e9bd4eda7230c36eb0a))
|
||||
* **Notion Node:** Paginate Get Many operations beyond 100-item API cap ([#29690](https://github.com/n8n-io/n8n/issues/29690)) ([d318bc1](https://github.com/n8n-io/n8n/commit/d318bc1e330eeb92d84bc35a2ad9cf6931eccfdf))
|
||||
* **Notion Node:** Serialize staticData as ISO string in NotionTrigger ([#29688](https://github.com/n8n-io/n8n/issues/29688)) ([d2e1eb3](https://github.com/n8n-io/n8n/commit/d2e1eb30f15c1e2380b815f4d1f62b2b98b23e9a))
|
||||
* **Notion Node:** Update UI URLs from notion.so to notion.com ahead of domain migration ([#29861](https://github.com/n8n-io/n8n/issues/29861)) ([3593131](https://github.com/n8n-io/n8n/commit/35931319b5b987b7cdd7104accea407fd5390582))
|
||||
* **Oracle DB Node:** Handle the test failures ([#28341](https://github.com/n8n-io/n8n/issues/28341)) ([0697562](https://github.com/n8n-io/n8n/commit/0697562ac9f1507ca0230d02f462889259a5bdcf))
|
||||
* Restore broken stdlib calls in Python Code node ([#29776](https://github.com/n8n-io/n8n/issues/29776)) ([a786476](https://github.com/n8n-io/n8n/commit/a7864762ca656c8e636df1ea33750dff604b60ab))
|
||||
* **RSS Feed Read Node:** Respect proxy settings ([#30059](https://github.com/n8n-io/n8n/issues/30059)) ([2e046d5](https://github.com/n8n-io/n8n/commit/2e046d5b7f2ec4a6fbf00107ee088239f87ce8c5))
|
||||
* **Salesforce Node:** Fix trigger not firing on repeated record updates ([#29107](https://github.com/n8n-io/n8n/issues/29107)) ([f871d44](https://github.com/n8n-io/n8n/commit/f871d44cabc95fb102af8ba1a9e5d2e314205297))
|
||||
* **Schedule Node:** Fix hourly intervals that don't divide evenly into 24h ([#29778](https://github.com/n8n-io/n8n/issues/29778)) ([1a22c76](https://github.com/n8n-io/n8n/commit/1a22c762703bed75a18de868a7bfb7c60eacc516))
|
||||
* **Snowflake Node:** Fix issue with Insert and Update operations not working ([#29339](https://github.com/n8n-io/n8n/issues/29339)) ([4c369e8](https://github.com/n8n-io/n8n/commit/4c369e83f26450395a5a28b6c39a04b2c7650f1f))
|
||||
* **Supabase Node:** Don't display RPCs in an RLC for the table ([#28146](https://github.com/n8n-io/n8n/issues/28146)) ([78aa0e7](https://github.com/n8n-io/n8n/commit/78aa0e70f21df2533a494c02a3e35ca3ab6ca7b0))
|
||||
* **Wait Node:** Resolve expressions inside Custom HTML form fields ([#30060](https://github.com/n8n-io/n8n/issues/30060)) ([7c1a771](https://github.com/n8n-io/n8n/commit/7c1a77154ccf1a5f2a11da3cdf0949b2883c85fb))
|
||||
* **YouTube Node:** Fix misspelled "unlisted" privacy status value in Video Update operation ([#30203](https://github.com/n8n-io/n8n/issues/30203)) ([96b018d](https://github.com/n8n-io/n8n/commit/96b018d3569623e1696a28981b24120a3ceb46d0))
|
||||
* **core:** Simple-git update broke https connection ([#30003](https://github.com/n8n-io/n8n/issues/30003)) ([ce01685](https://github.com/n8n-io/n8n/commit/ce016859cb055b7d193208412e5e656b0048a1fa))
|
||||
|
||||
|
||||
### Features

## [2.20.4](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.20.4) (2026-05-07)

* **Acuity Scheduling Trigger Node:** Add webhook request verification ([#29261](https://github.com/n8n-io/n8n/issues/29261)) ([da41470](https://github.com/n8n-io/n8n/commit/da41470311a03a15beb5d7361c0385b7dd9acc12))
* Add fully dynamic disclaimer to Quick Connect offer ([#29852](https://github.com/n8n-io/n8n/issues/29852)) ([b6127d8](https://github.com/n8n-io/n8n/commit/b6127d8722ff1bddd9eb5786a6cbd90ce2f98ac1))
* **ai-builder:** Add per-PR eval regression detection vs LangSmith baseline ([#29456](https://github.com/n8n-io/n8n/issues/29456)) ([bbe3e2d](https://github.com/n8n-io/n8n/commit/bbe3e2d1487e06df1e58057ec8c47edb5ad19aa7))
* **ai-builder:** Guarantee user-visible output on terminal states ([#29636](https://github.com/n8n-io/n8n/issues/29636)) ([4d9e624](https://github.com/n8n-io/n8n/commit/4d9e624b4113d06a4cc7a632aed357806349abcb))
* **Asana Trigger Node:** Add webhook request verification ([#29258](https://github.com/n8n-io/n8n/issues/29258)) ([94e4033](https://github.com/n8n-io/n8n/commit/94e403300b44d2f25f4d88dd3d9d1300adfea3bc))
* **Cal Trigger Node:** Add webhook request verification ([#29484](https://github.com/n8n-io/n8n/issues/29484)) ([3276edc](https://github.com/n8n-io/n8n/commit/3276edce10dfc7e59aa12e43fd7fc566f91723c4))
* **Calendly Trigger Node:** Add webhook request verification ([#29482](https://github.com/n8n-io/n8n/issues/29482)) ([e929f9f](https://github.com/n8n-io/n8n/commit/e929f9fbe751742da7f27658ded1ff0101af19d2))
* **core:** Accept merge.input(n) inside ifElse/switch branch targets in workflow-sdk ([#29716](https://github.com/n8n-io/n8n/issues/29716)) ([34f2107](https://github.com/n8n-io/n8n/commit/34f2107071478591a1c98b65576262c40408a157))
* **core:** Add flag to import workflow cli to activate workflow on import ([#29770](https://github.com/n8n-io/n8n/issues/29770)) ([283071e](https://github.com/n8n-io/n8n/commit/283071e6114fd8e8b5063e1ba38daf158bd762d2))
* **core:** Add IP rate limiting to dynamic credential authentication endpoints ([#30199](https://github.com/n8n-io/n8n/issues/30199)) ([515ae7c](https://github.com/n8n-io/n8n/commit/515ae7ced4b109880306788cb16977c15de92279))
* **core:** Add MCP tool to list credentials ([#29438](https://github.com/n8n-io/n8n/issues/29438)) ([d6cc3be](https://github.com/n8n-io/n8n/commit/d6cc3bedd1c4e7a2849eb5cf2acf538fb3a8f3da))
* **core:** Add multi-config evaluations backend ([#29784](https://github.com/n8n-io/n8n/issues/29784)) ([8116e0a](https://github.com/n8n-io/n8n/commit/8116e0a4858044712e45c078e06e0a36103d141c))
* **core:** Add n8n-object-validation ESLint rule for community nodes ([#29698](https://github.com/n8n-io/n8n/issues/29698)) ([701f9a4](https://github.com/n8n-io/n8n/commit/701f9a462773c204a6dc8bd15c533f9c07cd6e08))
* **core:** Add no-template-placeholders ESLint rule for community nodes ([#29796](https://github.com/n8n-io/n8n/issues/29796)) ([c4056b2](https://github.com/n8n-io/n8n/commit/c4056b255edd4420fde6cb5e1028b61f10b2bcf7))
* **core:** Add observational memory storage foundation ([#29814](https://github.com/n8n-io/n8n/issues/29814)) ([be4ef22](https://github.com/n8n-io/n8n/commit/be4ef225336166937a8847c2f2615bfd29e40765))
* **core:** Define community packages with environment variables ([#29961](https://github.com/n8n-io/n8n/issues/29961)) ([730c3e1](https://github.com/n8n-io/n8n/commit/730c3e12a55a38cdbe9090eabef508cd56d67a9e))
* **core:** Generate service-specific OAuth2 credentials for dedicated MCP tools ([#29884](https://github.com/n8n-io/n8n/issues/29884)) ([8617067](https://github.com/n8n-io/n8n/commit/86170674b72acc16d781eafd08cd762c55a7672f))
* **core:** Server-side pagination, sorting, and filtering for encryption keys ([#29708](https://github.com/n8n-io/n8n/issues/29708)) ([9afbe13](https://github.com/n8n-io/n8n/commit/9afbe13b81f00f0ea7730541b4909e31b1080249))
* **core:** Transform MCP server configs into dedicated MCP tools ([#29493](https://github.com/n8n-io/n8n/issues/29493)) ([4dce41f](https://github.com/n8n-io/n8n/commit/4dce41f79573f864fde16df622c028134d743f03))
* **core:** Use McpManagerClient and enforce whether MCP server connections are allowed ([#29694](https://github.com/n8n-io/n8n/issues/29694)) ([8235474](https://github.com/n8n-io/n8n/commit/82354742d348850d8cb6efc6ffe490c53ff0a8a0))
* **Customer.io Trigger Node:** Add webhook request verification ([#29480](https://github.com/n8n-io/n8n/issues/29480)) ([a772016](https://github.com/n8n-io/n8n/commit/a772016e36a87d1fbbacbee59ebcd80dbe3b9150))
* **editor:** Add envFeatureFlag and copyButton property options ([#29733](https://github.com/n8n-io/n8n/issues/29733)) ([75053fe](https://github.com/n8n-io/n8n/commit/75053fec9373076abfba3db01a967f54f8274e83))
* **editor:** Cap eval concurrency slider at admin-set limit ([#29807](https://github.com/n8n-io/n8n/issues/29807)) ([6232de4](https://github.com/n8n-io/n8n/commit/6232de4d477ffa56e0082d87a5b63d1c9ef00d4c))
* **editor:** Eval run detail loading + error states (TRUST-70 follow-up) ([#29817](https://github.com/n8n-io/n8n/issues/29817)) ([6f9b99a](https://github.com/n8n-io/n8n/commit/6f9b99a3cf1207ece10a6bd6239a5005c6a10540))
* **editor:** Redesign evaluation run detail page ([#29592](https://github.com/n8n-io/n8n/issues/29592)) ([9014bae](https://github.com/n8n-io/n8n/commit/9014baea7ea952aaf782c53bce03d3a8f0ae5ddf))
* **editor:** Show locked state and permission notice on data redaction workflow settings ([#30022](https://github.com/n8n-io/n8n/issues/30022)) ([7635131](https://github.com/n8n-io/n8n/commit/7635131bd396252f51d29e7407099eafa92a304f))
* **Figma Trigger Node:** Add OAuth2 authentication support ([#30079](https://github.com/n8n-io/n8n/issues/30079)) ([e3e70d6](https://github.com/n8n-io/n8n/commit/e3e70d6068a3d543b29b1bd24682101ecb2e641f))
* **Figma Trigger Node:** Add webhook request verification ([#29262](https://github.com/n8n-io/n8n/issues/29262)) ([910822f](https://github.com/n8n-io/n8n/commit/910822fb0951f6ead55fc000e7743a8ee13e82e9))
* **Formstack Trigger Node:** Add webhook request verification ([#29495](https://github.com/n8n-io/n8n/issues/29495)) ([4e28652](https://github.com/n8n-io/n8n/commit/4e2865206c72833d9fe585ed941ecc83c1bec699))
* **GitLab Trigger Node:** Add webhook request verification ([#29260](https://github.com/n8n-io/n8n/issues/29260)) ([fbf89bd](https://github.com/n8n-io/n8n/commit/fbf89bde1164a19365fe4418405ddec7108543d9))
* **Jira Node:** Add OAuth2 (3LO) support ([#29414](https://github.com/n8n-io/n8n/issues/29414)) ([4d5bafc](https://github.com/n8n-io/n8n/commit/4d5bafc146125fa22d05cf924c5e68bc51263722))
* **MailerLite Trigger Node:** Add webhook request verification ([#29491](https://github.com/n8n-io/n8n/issues/29491)) ([12b7cc6](https://github.com/n8n-io/n8n/commit/12b7cc67395bf1991235ae0f00739d9f2803cb9c))
* **Mautic Trigger Node:** Add webhook request verification ([#29658](https://github.com/n8n-io/n8n/issues/29658)) ([eaadf19](https://github.com/n8n-io/n8n/commit/eaadf190b89f21f74bc3a25b16803576f91e9618))
* **Microsoft Outlook Node:** Add location and attendees fields to calendar events ([#29844](https://github.com/n8n-io/n8n/issues/29844)) ([2e21c5f](https://github.com/n8n-io/n8n/commit/2e21c5fcf83a2fc86659c7464b2bc6672230389f))
* **Microsoft Outlook Node:** Add support for recurring event instances ([#29802](https://github.com/n8n-io/n8n/issues/29802)) ([dab3653](https://github.com/n8n-io/n8n/commit/dab3653f8016b7f9187559658ea6ef58220df2d1))
* **Onfleet Trigger Node:** Add webhook request verification ([#29485](https://github.com/n8n-io/n8n/issues/29485)) ([133a5aa](https://github.com/n8n-io/n8n/commit/133a5aa0adae69f86f1603bd9ad85c852c0ccdf5))
* **Strava Node:** Allow custom OAuth2 scopes ([#29972](https://github.com/n8n-io/n8n/issues/29972)) ([5abcae6](https://github.com/n8n-io/n8n/commit/5abcae686cf1b64e06bbbd6f62b6871bc4feec56))
* **Taiga Trigger Node:** Add webhook request verification ([#29487](https://github.com/n8n-io/n8n/issues/29487)) ([3c97c49](https://github.com/n8n-io/n8n/commit/3c97c49d63c824c2a3b4284beecf8957c44c1c16))
* **Trello Trigger Node:** Add webhook request verification ([#29252](https://github.com/n8n-io/n8n/issues/29252)) ([8f1f42d](https://github.com/n8n-io/n8n/commit/8f1f42d18056ba51e450ba90ba3be65cbf9745aa))
* **Twilio Trigger Node:** Add webhook request verification ([#29259](https://github.com/n8n-io/n8n/issues/29259)) ([acc9643](https://github.com/n8n-io/n8n/commit/acc964381189aaacbeb584a16c0155ba6f96ffa1))

## [2.20.3](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.20.3) (2026-05-07)

### Bug Fixes

* **core:** Add support for context establishment hooks in webhook mode ([#29900](https://github.com/n8n-io/n8n/issues/29900)) ([71d4122](https://github.com/n8n-io/n8n/commit/71d41224385e64098000569bf9ac4838a61c669c))
* **core:** Allow GIT_SSH_COMMAND in simple-git after 3.36.0 upgrade ([#29946](https://github.com/n8n-io/n8n/issues/29946)) ([2f31aca](https://github.com/n8n-io/n8n/commit/2f31aca2dc4b5258492a678a44464146a2a29d01))
* **Snowflake Node:** Fix issue with Insert and Update operations not working ([#29809](https://github.com/n8n-io/n8n/issues/29809)) ([98004c6](https://github.com/n8n-io/n8n/commit/98004c6269456c3bfe600da951856c81b3861034))

# [2.20.0](https://github.com/n8n-io/n8n/compare/n8n@2.19.0...n8n@2.20.0) (2026-05-05)

@@ -1,20 +0,0 @@
ARG NODE_VERSION=24.14.1

FROM node:${NODE_VERSION}-alpine3.22

ENV NODE_ENV=production

RUN apk add --no-cache tini

WORKDIR /app

# `compiled/` is produced by `pnpm build:docker`. It's a `pnpm deploy --prod`
# output containing package.json, dist/, and a node_modules with only
# production dependencies — no devDeps, no workspace bloat.
COPY --chown=node:node ./compiled /app

USER node
EXPOSE 3000

ENTRYPOINT ["tini", "--"]
CMD ["node", "dist/serve.js"]
package.json
@@ -1,6 +1,6 @@
{
"name": "n8n-monorepo",
"version": "2.21.0",
"version": "2.20.5",
"private": true,
"engines": {
"node": ">=22.16",

@@ -136,7 +136,6 @@
"@smithy/config-resolver": ">=4.4.0",
"@rudderstack/rudder-sdk-node@<=3.0.0": "3.0.0",
"diff": "8.0.3",
"undici@5": "^6.24.0",
"undici@6": "^6.24.0",
"undici@7": "^7.24.0",
"tar": "^7.5.11",

@@ -166,12 +165,9 @@
"@xmldom/xmldom": "0.8.13",
"langsmith": "0.5.19",
"yaml@<=2.8.3": "2.8.3",
"hono": "4.12.16",
"axios": "1.16.0",
"fast-xml-parser": "5.7.2",
"hono": "4.12.18",
"@anthropic-ai/sdk@<=0.91.1": "0.91.1",
"uuid@<=13.0.1": "13.0.1",
"fast-uri": "3.1.2"
"fast-xml-parser": "5.7.2"
},
"patchedDependencies": {
"bull@4.16.4": "patches/bull@4.16.4.patch",
@@ -70,7 +70,8 @@ docs/
```

The **`index.ts`** surface also exports `Workspace` / sandbox / filesystem types,
`InMemoryMemory`, `LangSmithTelemetry`, and `evals` alongside the core SDK builders.
`SqliteMemory` / `PostgresMemory`, `LangSmithTelemetry`, and `evals` alongside the
core SDK builders.

Optional **peer dependencies** (telemetry): `langsmith`, `@opentelemetry/sdk-trace-node`,
`@opentelemetry/sdk-trace-base`, `@opentelemetry/exporter-trace-otlp-http` — all
@@ -367,7 +367,7 @@ At end of turn, `saveToMemory()` uses `list.turnDelta()` and
`saveMessagesToThread`. If **semantic recall** is configured with an embedder
and `memory.saveEmbeddings`, new messages are embedded and stored.

**Working memory:** when configured, the runtime injects an `update_working_memory`
**Working memory:** when configured, the runtime injects an `updateWorkingMemory`
tool into the agent's tool set. The current state is included in the system prompt
so the model can read it; when new information should be persisted the model calls
the tool, which validates the input and asynchronously persists via
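The working-memory flow this hunk describes can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the actual `@n8n/agents` implementation: the names `WorkingMemoryStore`, `buildUpdateWorkingMemoryTool`, and `buildSystemPrompt` are invented for the example, and the validation here stands in for whatever schema the real tool builder uses.

```typescript
// Minimal store; the real runtime persists asynchronously through its memory backend.
class WorkingMemoryStore {
  private state = '';
  get(): string {
    return this.state;
  }
  async set(next: string): Promise<void> {
    this.state = next;
  }
}

interface ToolCallResult {
  ok: boolean;
  error?: string;
}

// The runtime injects a tool shaped like this into the agent's tool set.
function buildUpdateWorkingMemoryTool(store: WorkingMemoryStore) {
  return {
    name: 'updateWorkingMemory',
    // Validate the input before persisting, as the doc describes.
    async execute(input: unknown): Promise<ToolCallResult> {
      if (typeof input !== 'object' || input === null) {
        return { ok: false, error: 'input must be an object' };
      }
      const memory = (input as Record<string, unknown>).memory;
      if (typeof memory !== 'string' || memory.length === 0) {
        return { ok: false, error: 'memory must be a non-empty string' };
      }
      await store.set(memory); // asynchronous persistence
      return { ok: true };
    },
  };
}

// The current state is included in the system prompt so the model can read it.
function buildSystemPrompt(base: string, store: WorkingMemoryStore): string {
  const state = store.get();
  return state ? `${base}\n\n<working_memory>\n${state}\n</working_memory>` : base;
}
```

The key property is the round trip: the model reads state from the prompt and writes new state only through the validated tool call.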
@@ -415,7 +415,7 @@ src/
tool-adapter.ts — buildToolMap, executeTool, toAiSdkTools, suspend / agent-result guards
stream.ts — convertChunk, toTokenUsage
runtime-helpers.ts — normalizeInput, usage merge, stream error helpers, …
working-memory.ts — instruction text, update_working_memory tool builder
working-memory.ts — instruction text, updateWorkingMemory tool builder
strip-orphaned-tool-messages.ts
title-generation.ts
logger.ts
@@ -1,6 +1,6 @@
{
"name": "@n8n/agents",
"version": "0.7.0",
"version": "0.6.0",
"description": "AI agent SDK for n8n's code-first execution engine",
"main": "dist/index.js",
"module": "dist/index.js",
@@ -24,32 +24,23 @@
"test:integration": "vitest run --config vitest.integration.config.mjs"
},
"dependencies": {
"@ai-sdk/amazon-bedrock": "catalog:",
"@ai-sdk/anthropic": "^3.0.58",
"@ai-sdk/azure": "catalog:",
"@ai-sdk/cohere": "catalog:",
"@ai-sdk/deepseek": "catalog:",
"@ai-sdk/gateway": "catalog:",
"@ai-sdk/google": "^3.0.43",
"@ai-sdk/groq": "catalog:",
"@ai-sdk/mistral": "catalog:",
"@ai-sdk/openai": "^3.0.41",
"@ai-sdk/provider-utils": "^4.0.21",
"@ai-sdk/xai": "^3.0.67",
"@libsql/client": "^0.17.0",
"@ai-sdk/provider-utils": "^4.0.21",
"@modelcontextprotocol/sdk": "1.26.0",
"@n8n/ai-utilities": "workspace:*",
"@openrouter/ai-sdk-provider": "catalog:",
"ai": "^6.0.116",
"ajv": "^8.18.0",
"@libsql/client": "^0.17.0",
"ai": "^6.0.116",
"pg": "catalog:",
"zod": "catalog:"
},
"peerDependencies": {
"@opentelemetry/exporter-trace-otlp-http": ">=0.50.0",
"@opentelemetry/sdk-trace-base": ">=1.0.0",
"langsmith": ">=0.3.0",
"@opentelemetry/sdk-trace-node": ">=1.0.0",
"langsmith": "catalog:"
"@opentelemetry/sdk-trace-base": ">=1.0.0",
"@opentelemetry/exporter-trace-otlp-http": ">=0.50.0"
},
"peerDependenciesMeta": {
"langsmith": {
@@ -1,16 +1,14 @@
import { z } from 'zod';

import { isLlmMessage } from '../../sdk/message';
import { Tool, Tool as ToolBuilder } from '../../sdk/tool';
import { AgentEvent } from '../../types/runtime/event';
import type { StreamChunk } from '../../types/sdk/agent';
import type { BuiltMemory } from '../../types/sdk/memory';
import type { ContentToolCall, Message } from '../../types/sdk/message';
import type { BuiltTool, InterruptibleToolContext } from '../../types/sdk/tool';
import type { BuiltTelemetry } from '../../types/telemetry';
import { AgentRuntime } from '../agent-runtime';
import { AgentEventBus } from '../event-bus';
import { InMemoryMemory } from '../memory-store';
import { AgentRuntime } from '../runtime/agent-runtime';
import { AgentEventBus } from '../runtime/event-bus';
import { isLlmMessage } from '../sdk/message';
import { Tool, Tool as ToolBuilder } from '../sdk/tool';
import { AgentEvent } from '../types/runtime/event';
import type { StreamChunk } from '../types/sdk/agent';
import type { ContentToolResult, Message } from '../types/sdk/message';
import type { BuiltTool, InterruptibleToolContext } from '../types/sdk/tool';
import type { BuiltTelemetry } from '../types/telemetry';

// ---------------------------------------------------------------------------
// Module mocks
@@ -238,9 +236,9 @@ describe('AgentRuntime.generate() — graceful error contract', () => {
generateText.mockRejectedValue(new Error('API failure'));

const { runtime } = createRuntime();
await runtime.generate('hello');
const result = await runtime.generate('hello');

expect(runtime.getState().status).toBe('failed');
expect(result.getState().status).toBe('failed');
});

it('emits AgentEvent.Error (not AgentEnd) when the LLM call throws', async () => {
@@ -268,10 +266,10 @@ describe('AgentRuntime.generate() — graceful error contract', () => {
// Abort during AgentStart so the loop's first abort-check fires before generateText is called
bus.on(AgentEvent.AgentStart, () => bus.abort());

await runtime.generate('hello');
const result = await runtime.generate('hello');

expect(errorEvents.length).toBe(0);
expect(runtime.getState().status).toBe('cancelled');
expect(result.getState().status).toBe('cancelled');
});

it('returns finishReason "error" and sets cancelled status on abort', async () => {
@@ -284,7 +282,7 @@ describe('AgentRuntime.generate() — graceful error contract', () => {
const result = await runtime.generate('hello');

expect(result.finishReason).toBe('error');
expect(runtime.getState().status).toBe('cancelled');
expect(result.getState().status).toBe('cancelled');
});

it('is reusable after an error — subsequent call with a good LLM response succeeds', async () => {
@@ -402,10 +400,10 @@ describe('AgentRuntime.stream() — graceful error contract', () => {
});

const { runtime } = createRuntime();
const { stream: readableStream } = await runtime.stream('hello');
const { stream: readableStream, getState } = await runtime.stream('hello');
await collectChunks(readableStream);

expect(runtime.getState().status).toBe('failed');
expect(getState().status).toBe('failed');
});

it('yields error chunk and finishes cleanly on abort', async () => {
@@ -414,13 +412,13 @@ describe('AgentRuntime.stream() — graceful error contract', () => {
const { runtime, bus } = createRuntime();
bus.on(AgentEvent.TurnStart, () => bus.abort());

const { stream: readableStream } = await runtime.stream('hello');
const { stream: readableStream, getState } = await runtime.stream('hello');
const chunks = await collectChunks(readableStream);

const errorChunks = chunks.filter((c) => c.type === 'error');
expect(errorChunks.length).toBeGreaterThan(0);

expect(runtime.getState().status).toBe('cancelled');
expect(getState().status).toBe('cancelled');
});

it('stream is reusable after an error', async () => {
@@ -468,71 +466,6 @@ describe('AgentRuntime.stream() — graceful error contract', () => {
});
});

// ---------------------------------------------------------------------------
// stream() — working memory
// ---------------------------------------------------------------------------

describe('AgentRuntime.stream() — working memory', () => {
beforeEach(() => {
jest.clearAllMocks();
});

function makeMemory(savedWorkingMemory: string[]): BuiltMemory {
return {
getThread: jest.fn().mockResolvedValue(null),
saveThread: jest.fn(async (thread) => {
await Promise.resolve();
return {
...thread,
createdAt: new Date(),
updatedAt: new Date(),
};
}),
deleteThread: jest.fn(),
getMessages: jest.fn().mockResolvedValue([]),
saveMessages: jest.fn(),
deleteMessages: jest.fn(),
getWorkingMemory: jest.fn().mockResolvedValue(null),
saveWorkingMemory: jest.fn(async (_params, content: string) => {
await Promise.resolve();
savedWorkingMemory.push(content);
}),
describe: jest
.fn()
.mockReturnValue({ name: 'test', constructorName: 'TestMemory', connectionParams: {} }),
};
}

it('does not expose a working-memory write tool to the main agent', async () => {
const savedWorkingMemory: string[] = [];
const memory = makeMemory(savedWorkingMemory);
const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'You are a test assistant.',
memory,
lastMessages: 5,
workingMemory: {
template: '# Thread memory\n- User facts:',
structured: false,
scope: 'thread',
},
});

streamText.mockReturnValueOnce(makeStreamSuccess('Done'));

const { stream } = await runtime.stream('remember this', {
persistence: { threadId: 'thread-1', resourceId: 'user-1' },
});
await collectChunks(stream);

const calls = streamText.mock.calls as Array<[Record<string, unknown>]>;
const callArgs = calls[0]?.[0] ?? {};
expect(callArgs.tools ?? {}).not.toHaveProperty('update_working_memory');
expect(savedWorkingMemory).toEqual([]);
});
});

// ---------------------------------------------------------------------------
// resume() — graceful error contract
// ---------------------------------------------------------------------------
@@ -564,35 +497,37 @@ describe('AgentRuntime — state transitions on error', () => {
jest.clearAllMocks();
});

it('starts idle, then reflects running→failed after a generate error', async () => {
it('starts idle before first run', () => {
const { runtime } = createRuntime();

expect(runtime.getState().status).toBe('idle');

generateText.mockRejectedValue(new Error('oops'));
const runDone = runtime.generate('hi');

await runDone;
expect(runtime.getState().status).toBe('failed');
});

it('starts idle, then reflects running→cancelled on abort', async () => {
it('result.getState() reflects failed after a generate error', async () => {
generateText.mockRejectedValue(new Error('oops'));

const { runtime } = createRuntime();
const result = await runtime.generate('hi');

expect(result.getState().status).toBe('failed');
});

it('result.getState() reflects cancelled on abort', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());

const { runtime, bus } = createRuntime();
bus.on(AgentEvent.AgentStart, () => bus.abort());

await runtime.generate('hi');
expect(runtime.getState().status).toBe('cancelled');
const result = await runtime.generate('hi');
expect(result.getState().status).toBe('cancelled');
});

it('transitions to success on a clean run', async () => {
it('result.getState() transitions to success on a clean run', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());

const { runtime } = createRuntime();
await runtime.generate('hi');
const result = await runtime.generate('hi');

expect(runtime.getState().status).toBe('success');
expect(result.getState().status).toBe('success');
});
});

@@ -740,7 +675,7 @@ describe('AgentRuntime — concurrent tool execution', () => {
expect(result.pendingSuspend![0].toolCallId).toBe('tc-1');

// Verify tc-3 is in the persisted state as a pending tool call (without suspendPayload)
const state = runtime.getState();
const state = result.getState();
expect(state.pendingToolCalls['tc-3']).toBeDefined();
expect(state.pendingToolCalls['tc-3'].suspended).toBe(false);
});
@@ -970,7 +905,7 @@ describe('AgentRuntime — concurrent tool execution', () => {
it('tool error produces an error tool-result in the message list and loop continues', async () => {
type ToolOutputContent = {
type: string;
output?: { type: string; value?: unknown };
output?: { type: string; value?: { error?: string } };
};
type ToolMessage = { role: string; content: ToolOutputContent[] };
const receivedMessages: unknown[] = [];
@@ -997,15 +932,13 @@ describe('AgentRuntime — concurrent tool execution', () => {
expect(result.finishReason).toBe('stop');
// LLM was called a second time — it saw the error tool result and continued
expect(generateText).toHaveBeenCalledTimes(2);
// The second LLM call received a tool message whose output carries the error description.
// The second LLM call received a tool message whose output carries the error description
const toolMsg = receivedMessages.find(
(m): m is ToolMessage =>
typeof m === 'object' && m !== null && (m as ToolMessage).role === 'tool',
);
expect(toolMsg).toBeDefined();
const hasErrorOutput = toolMsg!.content.some(
(c) => c.output?.type === 'error-text' || c.output?.type === 'error-json',
);
const hasErrorOutput = toolMsg!.content.some((c) => !!c.output?.value?.error);
expect(hasErrorOutput).toBe(true);
});

@@ -1049,9 +982,9 @@ describe('AgentRuntime — concurrent tool execution', () => {
]),
);

await runtime.generate('run tools');
const result = await runtime.generate('run tools');

const state = runtime.getState();
const state = result.getState();
expect(state.pendingToolCalls['tc-1']).toBeDefined();
expect(state.pendingToolCalls['tc-1'].toolName).toBe('suspend_tool');
});
@@ -1074,9 +1007,9 @@ describe('AgentRuntime — concurrent tool execution', () => {
]),
);

await runtime.generate('run tools');
const result = await runtime.generate('run tools');

const state = runtime.getState();
const state = result.getState();
expect(state.pendingToolCalls['tc-2']).toBeDefined();
expect(state.pendingToolCalls['tc-2'].toolName).toBe('normal_tool');
expect(state.pendingToolCalls['tc-2'].suspended).toBe(false);
@@ -1471,7 +1404,7 @@ describe('providerOptions — tool adapter', () => {
// eslint-disable-next-line @typescript-eslint/no-require-imports
const ai = require('ai') as { tool: jest.Mock };
// eslint-disable-next-line @typescript-eslint/no-require-imports
const adapter = require('../tool-adapter') as {
const adapter = require('../runtime/tool-adapter') as {
toAiSdkTools: (tools: BuiltTool[]) => Record<string, unknown>;
};

@@ -1499,7 +1432,7 @@ describe('providerOptions — tool adapter', () => {
// eslint-disable-next-line @typescript-eslint/no-require-imports
const ai = require('ai') as { tool: jest.Mock };
// eslint-disable-next-line @typescript-eslint/no-require-imports
const adapter = require('../tool-adapter') as {
const adapter = require('../runtime/tool-adapter') as {
toAiSdkTools: (tools: BuiltTool[]) => Record<string, unknown>;
};

@@ -1524,7 +1457,7 @@ describe('providerOptions — tool adapter', () => {
// eslint-disable-next-line @typescript-eslint/no-require-imports
const ai = require('ai') as { tool: jest.Mock };
// eslint-disable-next-line @typescript-eslint/no-require-imports
const adapter = require('../tool-adapter') as {
const adapter = require('../runtime/tool-adapter') as {
toAiSdkTools: (tools: BuiltTool[]) => Record<string, unknown>;
};

@@ -1621,14 +1554,17 @@ describe('AgentRuntime — runtime input schema validation', () => {
// the LLM responds with 'done' on the next turn.
expect(result.finishReason).toBe('stop');

const assistantMsg = result.messages.find(
(m) =>
isLlmMessage(m) && m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
const toolErrorMessage = result.messages.find(
(m) => isLlmMessage(m) && m.role === 'tool' && m.content[0].type === 'tool-result',
) as Message;
expect(assistantMsg).toBeDefined();
const call = assistantMsg.content.find((c) => c.type === 'tool-call') as ContentToolCall;
expect(call.state).toBe('rejected');
expect(call.state === 'rejected' && call.error).toContain('Expected string, received number');
expect(toolErrorMessage).toBeDefined();
const content = toolErrorMessage.content[0] as ContentToolResult;
expect(content.result).toEqual(
expect.objectContaining({
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
error: expect.stringContaining('Expected string, received number'),
}),
);
});
});

@@ -1667,14 +1603,13 @@ describe('AgentRuntime — runtime JSON Schema input validation', () => {
const result = await runtime.generate('go');
expect(result.finishReason).toBe('stop');

// No error — the tool ran successfully
const assistantMsg = result.messages.find(
(m) =>
isLlmMessage(m) && m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
// No tool-result error — the tool ran successfully
const toolResultMsg = result.messages.find(
(m) => isLlmMessage(m) && m.role === 'tool',
) as Message;
expect(assistantMsg).toBeDefined();
const call = assistantMsg.content.find((c) => c.type === 'tool-call') as ContentToolCall;
expect(call.state).toBe('resolved');
expect(toolResultMsg).toBeDefined();
const content = toolResultMsg.content[0] as ContentToolResult;
expect(content.isError).toBeFalsy();
});

it('surfaces a validation error as a tool error outcome when LLM provides the wrong type', async () => {
@@ -1704,14 +1639,14 @@ describe('AgentRuntime — runtime JSON Schema input validation', () => {
const result = await runtime.generate('go');
expect(result.finishReason).toBe('stop');

const assistantMsg = result.messages.find(
(m) =>
isLlmMessage(m) && m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
const toolResultMsg = result.messages.find(
(m) => isLlmMessage(m) && m.role === 'tool',
) as Message;
expect(assistantMsg).toBeDefined();
const call = assistantMsg.content.find((c) => c.type === 'tool-call') as ContentToolCall;
expect(call.state).toBe('rejected');
expect(call.state === 'rejected' && call.error).toContain('Invalid tool input');
expect(toolResultMsg).toBeDefined();
console.log('ToolResultMsg', toolResultMsg);
const content = toolResultMsg.content[0] as ContentToolResult;
expect(content.isError).toBe(true);
expect(JSON.stringify(content.result)).toContain('Invalid tool input');
});

it('surfaces a validation error when a required property is missing', async () => {
@@ -1742,15 +1677,15 @@ describe('AgentRuntime — runtime JSON Schema input validation', () => {
});

const result = await runtime.generate('go');
console.log('Result', result.error);
expect(result.finishReason).toBe('stop');

const assistantMsg = result.messages.find(
(m) =>
isLlmMessage(m) && m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
const toolResultMsg = result.messages.find(
(m) => isLlmMessage(m) && m.role === 'tool',
) as Message;
const call = assistantMsg.content.find((c) => c.type === 'tool-call') as ContentToolCall;
expect(call.state).toBe('rejected');
expect(call.state === 'rejected' && call.error).toContain('Invalid tool input');
const content = toolResultMsg.content[0] as ContentToolResult;
expect(content.isError).toBe(true);
expect(JSON.stringify(content.result)).toContain('Invalid tool input');
});

it('does not invoke the handler when JSON Schema validation fails', async () => {
@@ -1783,142 +1718,6 @@ describe('AgentRuntime — runtime JSON Schema input validation', () => {
});
});

// ---------------------------------------------------------------------------
// Tool builder — JSON Schema input integration
//
// Mirrors the resolveNodeTool() code path in node-tool-factory.ts where the
// input schema is a raw JSON Schema object (converted from Zod by ToolFromNode).
// ---------------------------------------------------------------------------

describe('AgentRuntime — Tool builder with JSON Schema input', () => {
beforeEach(() => {
jest.clearAllMocks();
});

it('passes valid input to the handler when built via Tool builder', async () => {
const handlerFn = jest.fn().mockResolvedValue({ found: true });

const tool = new Tool('lookup')
.description('Look up a record by id')
.input({
type: 'object',
properties: { id: { type: 'string' } },
required: ['id'],
})
.handler(handlerFn)
.build();

generateText
.mockResolvedValueOnce(makeGenerateWithToolCall('tc-1', 'lookup', { id: 'abc-123' }))
.mockResolvedValueOnce(makeGenerateSuccess('done'));

const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
tools: [tool],
});

const result = await runtime.generate('go');

expect(result.finishReason).toBe('stop');
expect(handlerFn).toHaveBeenCalledWith(
expect.objectContaining({ id: 'abc-123' }),
expect.anything(),
);

const assistantMsg = result.messages.find(
(m) =>
isLlmMessage(m) && m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
) as Message;
const call = assistantMsg.content.find((c) => c.type === 'tool-call') as ContentToolCall;
expect(call.state).toBe('resolved');
});

it('produces a tool error when the LLM sends input that fails JSON Schema validation', async () => {
const handlerFn = jest.fn().mockResolvedValue({ found: true });

const tool = new Tool('lookup')
.description('Look up a record by id')
.input({
type: 'object',
properties: {
id: { type: 'string' },
count: { type: 'integer', minimum: 1 },
},
required: ['id', 'count'],
})
.handler(handlerFn)
.build();

generateText
// LLM sends count: 0 (violates minimum: 1) and id as a number (wrong type)
.mockResolvedValueOnce(makeGenerateWithToolCall('tc-1', 'lookup', { id: 42, count: 0 }))
.mockResolvedValueOnce(makeGenerateSuccess('corrected'));

const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
tools: [tool],
});

const result = await runtime.generate('go');

expect(result.finishReason).toBe('stop');
// Handler must not be called — validation should block execution
expect(handlerFn).not.toHaveBeenCalled();

const assistantMsg = result.messages.find(
(m) =>
isLlmMessage(m) && m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
) as Message;
const call = assistantMsg.content.find((c) => c.type === 'tool-call') as ContentToolCall;
expect(call.state).toBe('rejected');
expect(call.state === 'rejected' && call.error).toContain('Invalid tool input');
});

it('validates enum and pattern constraints defined in JSON Schema', async () => {
const handlerFn = jest.fn().mockResolvedValue({ ok: true });

const tool = new Tool('set_status')
.description('Set the status of a record')
.input({
type: 'object',
properties: {
status: { type: 'string', enum: ['active', 'inactive', 'pending'] },
},
required: ['status'],
})
.handler(handlerFn)
.build();

// First call: invalid enum value
generateText
.mockResolvedValueOnce(makeGenerateWithToolCall('tc-1', 'set_status', { status: 'deleted' }))
// Second call: valid enum value after self-correction
.mockResolvedValueOnce(makeGenerateWithToolCall('tc-2', 'set_status', { status: 'inactive' }))
.mockResolvedValueOnce(makeGenerateSuccess('done'));

const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
tools: [tool],
});

const result = await runtime.generate('go');

expect(result.finishReason).toBe('stop');
// Handler called exactly once — only for the valid input
expect(handlerFn).toHaveBeenCalledTimes(1);
expect(handlerFn).toHaveBeenCalledWith(
expect.objectContaining({ status: 'inactive' }),
expect.anything(),
);
});
});

// ---------------------------------------------------------------------------
// Runtime validation — resume data schema
// ---------------------------------------------------------------------------
@@ -2154,114 +1953,6 @@ describe('provider options merging', () => {
// Instruction providerOptions
// ---------------------------------------------------------------------------

describe('tool systemInstruction merging', () => {
beforeEach(() => {
jest.clearAllMocks();
});

function getSystemMessageText(): string {
// eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
const callArgs = generateText.mock.calls[0][0] as Record<string, unknown>;
const messages = callArgs.messages as Array<Record<string, unknown>>;
const systemMsg = messages[0];
expect(systemMsg.role).toBe('system');
return String(systemMsg.content);
}

it("wraps a tool's systemInstruction in a built_in_rules block above user instructions", async () => {
generateText.mockResolvedValue(makeGenerateSuccess());

const toolWithDirective: BuiltTool = {
name: 'show_card',
description: 'show a card',
systemInstruction: 'Prefer this tool over plain text when posting images.',
inputSchema: z.object({ value: z.string().optional() }),
handler: async () => await Promise.resolve('ok'),
};

const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'You are a helpful assistant.',
tools: [toolWithDirective],
});

await runtime.generate('hello');

const text = getSystemMessageText();
expect(text).toContain('<built_in_rules>');
expect(text).toContain('- Prefer this tool over plain text when posting images.');
expect(text).toContain('</built_in_rules>');
expect(text).toContain('You are a helpful assistant.');
expect(text.indexOf('<built_in_rules>')).toBeLessThan(text.indexOf('You are a helpful'));
});

it('joins multiple tools systemInstructions into a single block', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());

const toolA: BuiltTool = {
name: 'a',
description: 'a',
systemInstruction: 'Rule A.',
inputSchema: z.object({}),
handler: async () => await Promise.resolve('ok'),
};
const toolB: BuiltTool = {
name: 'b',
description: 'b',
systemInstruction: 'Rule B.',
inputSchema: z.object({}),
handler: async () => await Promise.resolve('ok'),
};
const toolC: BuiltTool = {
name: 'c',
description: 'c',
inputSchema: z.object({}),
handler: async () => await Promise.resolve('ok'),
};

const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'base',
tools: [toolA, toolB, toolC],
});

await runtime.generate('hello');

const text = getSystemMessageText();
const block = text.match(/<built_in_rules>([\s\S]*?)<\/built_in_rules>/);
expect(block).not.toBeNull();
expect(block![1]).toContain('- Rule A.');
expect(block![1]).toContain('- Rule B.');
expect(block![1]).not.toContain('Rule C');
});

it('does not add a built_in_rules block when no tool sets a systemInstruction', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());

const plainTool: BuiltTool = {
name: 'plain',
description: 'plain',
inputSchema: z.object({}),
handler: async () => await Promise.resolve('ok'),
};

const runtime = new AgentRuntime({
name: 'test',
model: 'openai/gpt-4o-mini',
instructions: 'You are a helpful assistant.',
tools: [plainTool],
});

await runtime.generate('hello');

const text = getSystemMessageText();
expect(text).not.toContain('<built_in_rules>');
expect(text).toContain('You are a helpful assistant.');
});
});

describe('instruction providerOptions', () => {
beforeEach(() => {
jest.clearAllMocks();
@@ -2337,144 +2028,6 @@ describe('AgentRuntime — telemetry propagation', () => {
expect(expTelemetry.recordOutputs).toBe(false);
});

it('wraps generate calls in a telemetry root span when the tracer supports active spans', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());
const span = {
end: jest.fn(),
recordException: jest.fn(),
setStatus: jest.fn(),
};
const tracer = {
startActiveSpan: jest.fn(async (_name: string, _options: unknown, fn: unknown) => {
if (typeof fn !== 'function') {
throw new Error('Expected span callback');
}
const spanFn = fn as (spanValue: typeof span) => Promise<unknown>;
return await spanFn(span);
}),
};
const telemetry: BuiltTelemetry = { ...baseTelemetry, tracer };

const runtime = new AgentRuntime({
name: 'telemetry-root-test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
eventBus: new AgentEventBus(),
telemetry,
});

await runtime.generate('hello');

expect(tracer.startActiveSpan).toHaveBeenCalledWith(
'test-agent.generate',
{
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
attributes: expect.objectContaining<Record<string, string>>({
'langsmith.traceable': 'true',
'langsmith.trace.name': 'test-agent.generate',
'langsmith.span.kind': 'chain',
'langsmith.metadata.agent_name': 'telemetry-root-test',
'langsmith.metadata.env': 'test',
}),
},
expect.any(Function),
);
expect(span.end).toHaveBeenCalledTimes(1);
});

it('can suppress the generic runtime root span while keeping native telemetry enabled', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());
const tracer = {
startActiveSpan: jest.fn(),
};
const telemetry: BuiltTelemetry = {
...baseTelemetry,
runtimeRootSpanEnabled: false,
tracer,
};

const runtime = new AgentRuntime({
name: 'telemetry-root-test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
eventBus: new AgentEventBus(),
telemetry,
});

await runtime.generate('hello');

expect(tracer.startActiveSpan).not.toHaveBeenCalled();
// eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
const callArgs = generateText.mock.calls[0][0] as Record<string, unknown>;
expect(callArgs.experimental_telemetry).toEqual(
expect.objectContaining({
isEnabled: true,
functionId: 'test-agent',
tracer,
}),
);
});

it('adds a LangSmith tool catalog to telemetry root spans', async () => {
generateText.mockResolvedValue(makeGenerateSuccess());
const span = {
end: jest.fn(),
recordException: jest.fn(),
setStatus: jest.fn(),
};
const tracer = {
startActiveSpan: jest.fn(async (_name: string, _options: unknown, fn: unknown) => {
if (typeof fn !== 'function') {
throw new Error('Expected span callback');
}
const spanFn = fn as (spanValue: typeof span) => Promise<unknown>;
return await spanFn(span);
}),
};
const telemetry: BuiltTelemetry = {
...baseTelemetry,
metadata: {
...baseTelemetry.metadata,
langsmith_trace_id: 'trace-1',
langsmith_actor_run_id: 'actor-run-1',
},
tracer,
};
const tool = new ToolBuilder('lookup')
.description('Lookup records')
.input(z.object({ query: z.string() }))
.handler(async () => await Promise.resolve({ ok: true }))
.build();

const runtime = new AgentRuntime({
name: 'telemetry-root-test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
eventBus: new AgentEventBus(),
tools: [tool],
telemetry,
});

await runtime.generate('hello');

const rootSpanOptions = tracer.startActiveSpan.mock.calls[0][1] as {
attributes: Record<string, unknown>;
};
const { attributes } = rootSpanOptions;
expect(attributes).toEqual(
expect.objectContaining({
'langsmith.metadata.available_tools': ['lookup'],
}),
);
expect(attributes).not.toHaveProperty('langsmith.trace.id');
expect(attributes).not.toHaveProperty('langsmith.span.parent_id');
expect(attributes['gen_ai.prompt']).toEqual(expect.stringContaining('"name":"lookup"'));
expect(attributes['gen_ai.prompt']).toEqual(
expect.stringContaining('"description":"Lookup records"'),
);
expect(attributes['gen_ai.prompt']).toEqual(expect.stringContaining('"input_schema"'));
});

it('passes telemetry config into streamText as experimental_telemetry', async () => {
streamText.mockReturnValue(makeStreamSuccess());

@@ -2526,7 +2079,6 @@ describe('AgentRuntime — telemetry propagation', () => {

it('passes resolved telemetry to tool handlers via parentTelemetry', async () => {
let capturedTelemetry: BuiltTelemetry | undefined;
let capturedToolCallId: string | undefined;

const spyTool: BuiltTool = new ToolBuilder('spy')
.description('captures telemetry from context')
@@ -2534,7 +2086,6 @@ describe('AgentRuntime — telemetry propagation', () => {
.output(z.object({ ok: z.boolean() }))
.handler(async (_input, ctx) => {
capturedTelemetry = ctx.parentTelemetry;
capturedToolCallId = ctx.toolCallId;
return await Promise.resolve({ ok: true });
})
.build();
@@ -2555,82 +2106,6 @@ describe('AgentRuntime — telemetry propagation', () => {
await runtime.generate('test');

expect(capturedTelemetry).toBe(baseTelemetry);
expect(capturedToolCallId).toBe('tc1');
});

it('emits AI SDK-compatible telemetry spans for local tool execution', async () => {
const spans: Array<{
name: string;
span: {
end: jest.Mock;
recordException: jest.Mock;
setAttributes: jest.Mock;
setStatus: jest.Mock;
};
}> = [];
const tracer = {
startActiveSpan: jest.fn(async (name: string, _options: unknown, fn: unknown) => {
if (typeof fn !== 'function') {
throw new Error('Expected span callback');
}
const span = {
end: jest.fn(),
recordException: jest.fn(),
setAttributes: jest.fn(),
setStatus: jest.fn(),
};
spans.push({ name, span });
const spanFn = fn as (spanValue: typeof span) => Promise<unknown>;
return await spanFn(span);
}),
};
const telemetry: BuiltTelemetry = {
...baseTelemetry,
recordOutputs: true,
tracer,
};
const spyTool: BuiltTool = new ToolBuilder('spy')
.description('captures telemetry from context')
.input(z.object({ x: z.string() }))
.output(z.object({ ok: z.boolean() }))
.handler(async () => await Promise.resolve({ ok: true }))
.build();

generateText
.mockResolvedValueOnce(makeGenerateWithToolCall('tc1', 'spy', { x: 'test' }))
.mockResolvedValueOnce(makeGenerateSuccess('done'));

const runtime = new AgentRuntime({
name: 'tool-telemetry-test',
model: 'openai/gpt-4o-mini',
instructions: 'test',
eventBus: new AgentEventBus(),
tools: [spyTool],
telemetry,
});

await runtime.generate('test');

const toolCallSpan = tracer.startActiveSpan.mock.calls.find(([name]) => name === 'ai.toolCall');
expect(toolCallSpan).toBeDefined();
expect(toolCallSpan?.[1]).toEqual({
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
attributes: expect.objectContaining<Record<string, string>>({
'operation.name': 'ai.toolCall test-agent',
'resource.name': 'test-agent',
'ai.operationId': 'ai.toolCall',
'ai.telemetry.functionId': 'test-agent',
'ai.telemetry.metadata.env': 'test',
'ai.toolCall.name': 'spy',
'ai.toolCall.id': 'tc1',
'ai.toolCall.args': '{"x":"test"}',
}),
});
const toolSpan = spans.find((span) => span.name === 'ai.toolCall')?.span;
expect(toolSpan?.setAttributes).toHaveBeenCalledWith({
'ai.toolCall.result': '{"ok":true}',
});
expect(toolSpan?.end).toHaveBeenCalledTimes(1);
});

it('passes inherited telemetry to tool handlers for sub-agent scenarios', async () => {
@ -2688,75 +2163,3 @@ describe('AgentRuntime — telemetry propagation', () => {
|
|||
expect(callArgs.experimental_telemetry).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Observational memory — post-turn writer
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('AgentRuntime — observational memory writer', () => {
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
generateText.mockResolvedValue(makeGenerateSuccess());
|
||||
});
|
||||
|
||||
it('runs the observer after saving the turn and compacts into thread working memory', async () => {
|
||||
const store = new InMemoryMemory();
|
||||
const observe = jest.fn().mockResolvedValue([
|
||||
{
|
||||
scopeKind: 'thread',
|
||||
scopeId: 't-obs',
|
||||
kind: 'observation',
|
||||
payload: { text: 'User prefers concise answers.' },
|
||||
durationMs: null,
|
||||
schemaVersion: 1,
|
||||
createdAt: new Date(),
|
||||
},
|
||||
]);
|
||||
const compact = jest.fn().mockResolvedValue({
|
||||
content: '# Thread memory\n- User preferences: concise answers',
|
||||
});
|
||||
|
||||
const runtime = new AgentRuntime({
|
||||
name: 'obs-writer',
|
||||
model: 'openai/gpt-4o-mini',
|
||||
instructions: 'base instructions',
|
||||
memory: store,
|
||||
workingMemory: {
|
||||
template: '# Thread memory\n- User preferences:',
|
||||
structured: false,
|
||||
scope: 'thread',
|
||||
},
|
||||
observationalMemory: { observe, compact, compactionThreshold: 1, sync: true },
|
||||
});
|
||||
|
||||
await runtime.generate('remember that I like concise answers', {
|
||||
persistence: { threadId: 't-obs', resourceId: 'u-1' },
|
||||
});
|
||||
|
||||
expect(observe).toHaveBeenCalledTimes(1);
|
||||
expect(compact).toHaveBeenCalledTimes(1);
|
||||
expect(
|
||||
await store.getWorkingMemory({ threadId: 't-obs', resourceId: 'u-1', scope: 'thread' }),
|
||||
).toBe('# Thread memory\n- User preferences: concise answers');
|
||||
expect(await store.getObservations({ scopeKind: 'thread', scopeId: 't-obs' })).toEqual([]);
|
||||
});
|
||||
|
||||
it('does not run when observational memory is not configured', async () => {
|
||||
const store = new InMemoryMemory();
|
||||
const runtime = new AgentRuntime({
|
||||
name: 'obs-disabled',
|
||||
model: 'openai/gpt-4o-mini',
|
||||
instructions: 'base instructions',
|
||||
memory: store,
|
||||
workingMemory: {
|
||||
template: '# Thread memory',
|
||||
structured: false,
|
||||
scope: 'thread',
|
||||
},
|
||||
});
|
||||
|
||||
await runtime.generate('hi', { persistence: { threadId: 't-none', resourceId: 'u-1' } });
|
||||
|
||||
expect(await store.getCursor('thread', 't-none')).toBeNull();
|
||||
});
|
||||
});
|
||||
packages/@n8n/agents/src/__tests__/agent.test.ts (+445, new file)
@@ -0,0 +1,445 @@
/**
 * Tests for the Agent builder focusing on per-run isolation guarantees introduced
 * by the "shared config, per-run runtime" refactor.
 */

import { Agent } from '../sdk/agent';
import { AgentEvent } from '../types/runtime/event';

// ---------------------------------------------------------------------------
// Module mocks (same as agent-runtime.test.ts)
// ---------------------------------------------------------------------------

jest.mock('@ai-sdk/openai', () => ({
	createOpenAI: () => () => ({ provider: 'openai', modelId: 'mock', specificationVersion: 'v3' }),
}));

jest.mock('@ai-sdk/anthropic', () => ({
	createAnthropic: () => () => ({
		provider: 'anthropic',
		modelId: 'mock',
		specificationVersion: 'v3',
	}),
}));

// eslint-disable-next-line @typescript-eslint/consistent-type-imports
type AiImport = typeof import('ai');

jest.mock('ai', () => {
	const actual = jest.requireActual<AiImport>('ai');
	return {
		...actual,
		generateText: jest.fn(),
		streamText: jest.fn(),
		tool: jest.fn((config: unknown) => config),
		Output: {
			object: jest.fn(({ schema }: { schema: unknown }) => ({ _type: 'object', schema })),
		},
	};
});

// Prevent real catalog HTTP calls
jest.mock('../sdk/catalog', () => ({
	getModelCost: jest.fn().mockResolvedValue(undefined),
	computeCost: jest.fn(),
}));

// eslint-disable-next-line @typescript-eslint/no-require-imports
const { generateText, streamText } = require('ai') as {
	generateText: jest.Mock;
	streamText: jest.Mock;
};

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function makeGenerateSuccess(text = 'OK') {
	return {
		finishReason: 'stop',
		usage: { inputTokens: 10, outputTokens: 5, totalTokens: 15 },
		response: {
			messages: [{ role: 'assistant', content: [{ type: 'text', text }] }],
		},
		toolCalls: [],
	};
}

function* makeChunkStream(chunks: Array<Record<string, unknown>>) {
	for (const c of chunks) yield c;
}

function makeStreamSuccess(text = 'Hello') {
	return {
		fullStream: makeChunkStream([{ type: 'text-delta', textDelta: text }]),
		finishReason: Promise.resolve('stop'),
		usage: Promise.resolve({ inputTokens: 10, outputTokens: 5, totalTokens: 15 }),
		response: Promise.resolve({
			messages: [{ role: 'assistant', content: [{ type: 'text', text }] }],
		}),
		toolCalls: Promise.resolve([]),
	};
}

async function drainStream(stream: ReadableStream<unknown>): Promise<void> {
	const reader = stream.getReader();

	while (true) {
		const { done } = await reader.read();
		if (done) break;
	}
}

function buildAgent() {
	return new Agent('test').model('openai/gpt-4o-mini').instructions('You are a test assistant.');
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

describe('Agent — per-run isolation', () => {
	beforeEach(() => {
		jest.clearAllMocks();
	});

	describe('concurrent generate() calls', () => {
		it('returns independent results for each call', async () => {
			generateText
				.mockResolvedValueOnce(makeGenerateSuccess('Result A'))
				.mockResolvedValueOnce(makeGenerateSuccess('Result B'));

			const agent = buildAgent();

			const [resultA, resultB] = await Promise.all([
				agent.generate('Prompt A'),
				agent.generate('Prompt B'),
			]);

			const textA = resultA.messages
				.flatMap((m) => ('content' in m ? m.content : []))
				.filter((c) => c.type === 'text')
				.map((c) => ('text' in c ? c.text : ''))
				.join('');

			const textB = resultB.messages
				.flatMap((m) => ('content' in m ? m.content : []))
				.filter((c) => c.type === 'text')
				.map((c) => ('text' in c ? c.text : ''))
				.join('');

			expect(textA).toBe('Result A');
			expect(textB).toBe('Result B');
			expect(resultA.runId).not.toBe(resultB.runId);
		});

		it('aborting one generate() does not cancel the other', async () => {
			const abortControllerA = new AbortController();

			// Run A resolves only after a delay; we'll abort it via its signal.
			// Run B resolves immediately.
			let resolveA!: (v: unknown) => void;
			const pendingA = new Promise((res) => {
				resolveA = res;
			});

			generateText.mockImplementation(async ({ abortSignal }: { abortSignal?: AbortSignal }) => {
				if (abortSignal === abortControllerA.signal || abortSignal?.aborted) {
					// Simulate the AI SDK throwing on abort
					await new Promise((_, rej) =>
						abortSignal.addEventListener('abort', () => rej(new Error('aborted')), {
							once: true,
						}),
					);
				}
				// Run B path — return immediately
				await pendingA;
				return makeGenerateSuccess('Result B');
			});

			const agent = buildAgent();

			// Start both runs; abort run A immediately
			const runAPromise = agent.generate('Prompt A', { abortSignal: abortControllerA.signal });
			abortControllerA.abort();
			resolveA(undefined);

			const runA = await runAPromise;
			expect(runA.finishReason).toBe('error');

			// Run B separately (no abort)
			generateText.mockResolvedValueOnce(makeGenerateSuccess('Result B'));
			const runB = await agent.generate('Prompt B');
			const textB = runB.messages
				.flatMap((m) => ('content' in m ? m.content : []))
				.filter((c) => c.type === 'text')
				.map((c) => ('text' in c ? c.text : ''))
				.join('');
			expect(textB).toBe('Result B');
		});
	});

	describe('concurrent stream() calls', () => {
		it('returns independent streams for each call', async () => {
			streamText
				.mockReturnValueOnce(makeStreamSuccess('Stream A'))
				.mockReturnValueOnce(makeStreamSuccess('Stream B'));

			const agent = buildAgent();

			const [resultA, resultB] = await Promise.all([
				agent.stream('Prompt A'),
				agent.stream('Prompt B'),
			]);

			// Both streams should be distinct ReadableStream objects
			expect(resultA.stream).not.toBe(resultB.stream);
			expect(resultA.runId).not.toBe(resultB.runId);

			// Drain both streams to completion
			await Promise.all([drainStream(resultA.stream), drainStream(resultB.stream)]);
		});

		it('aborting one stream does not cancel the other', async () => {
			const abortControllerA = new AbortController();

			streamText.mockImplementation(({ abortSignal }: { abortSignal?: AbortSignal }) => {
				if (abortSignal === abortControllerA.signal) {
					return {
						fullStream: (async function* () {
							// Wait until aborted then throw
							await new Promise<void>((_, rej) => {
								abortSignal.addEventListener('abort', () => rej(new Error('aborted')), {
									once: true,
								});
							});
							yield 'something';
						})(),
						finishReason: Promise.resolve('error'),
						usage: Promise.resolve({ inputTokens: 0, outputTokens: 0, totalTokens: 0 }),
						response: Promise.resolve({ messages: [] }),
						toolCalls: Promise.resolve([]),
					};
				}
				return makeStreamSuccess('Stream B');
			});

			const agent = buildAgent();

			const [resultA, resultB] = await Promise.all([
				agent.stream('Prompt A', { abortSignal: abortControllerA.signal }),
				agent.stream('Prompt B'),
			]);

			// Abort run A
			abortControllerA.abort();

			// Drain stream B — it should complete successfully regardless of A being aborted
			await drainStream(resultB.stream);

			// Drain stream A — it will error but shouldn't affect B
			await drainStream(resultA.stream).catch(() => {});
		});
	});

	describe('event handlers (on())', () => {
		it('fires registered handlers for every concurrent run', async () => {
			generateText
				.mockResolvedValueOnce(makeGenerateSuccess('A'))
				.mockResolvedValueOnce(makeGenerateSuccess('B'));

			const agent = buildAgent();
			const agentStartEvents: string[] = [];

			agent.on(AgentEvent.AgentStart, () => {
				agentStartEvents.push('start');
			});

			await Promise.all([agent.generate('Prompt A'), agent.generate('Prompt B')]);

			// Handler should have fired once per run
			expect(agentStartEvents).toHaveLength(2);
		});

		it('handlers registered before first run still fire on every subsequent run', async () => {
			generateText
				.mockResolvedValueOnce(makeGenerateSuccess('First'))
				.mockResolvedValueOnce(makeGenerateSuccess('Second'));

			const agent = buildAgent();
			const events: string[] = [];

			agent.on(AgentEvent.AgentEnd, () => {
				events.push('end');
			});

			await agent.generate('First');
			await agent.generate('Second');

			expect(events).toHaveLength(2);
		});
	});

	describe('abort() broadcast', () => {
		it('aborts all active runs when agent.abort() is called', async () => {
			let resolveA!: (v: unknown) => void;

			generateText.mockImplementation(async ({ abortSignal }: { abortSignal?: AbortSignal }) => {
				// Each call waits until its resolver is called or the signal fires
				await new Promise((res, rej) => {
					abortSignal?.addEventListener('abort', () => rej(new Error('aborted')), {
						once: true,
					});
					resolveA ??= res;
				});
				return makeGenerateSuccess();
			});

			const agent = buildAgent();

			const runAPromise = agent.generate('A');
			const runBPromise = agent.generate('B');

			// Give both calls time to reach the mock and register abort listeners
			await new Promise((res) => setTimeout(res, 10));

			// Broadcast abort — both runs should be cancelled
			agent.abort();

			const [runA, runB] = await Promise.all([runAPromise, runBPromise]);
			expect(runA.finishReason).toBe('error');
			expect(runB.finishReason).toBe('error');
		});
	});

	describe('off() — event handler removal', () => {
		it('removes a specific handler so it no longer fires', async () => {
			generateText
				.mockResolvedValueOnce(makeGenerateSuccess('A'))
				.mockResolvedValueOnce(makeGenerateSuccess('B'));

			const agent = buildAgent();
			const events: string[] = [];

			const handler = () => events.push('end');
			agent.on(AgentEvent.AgentEnd, handler);
			await agent.generate('First');

			agent.off(AgentEvent.AgentEnd, handler);
			await agent.generate('Second');

			// Handler should have fired only for the first run
			expect(events).toHaveLength(1);
		});

		it('removing one handler does not affect other handlers for the same event', async () => {
			generateText.mockResolvedValueOnce(makeGenerateSuccess('A'));

			const agent = buildAgent();
			const firedA: string[] = [];
			const firedB: string[] = [];

			const handlerA = () => firedA.push('a');
			const handlerB = () => firedB.push('b');

			agent.on(AgentEvent.AgentEnd, handlerA);
			agent.on(AgentEvent.AgentEnd, handlerB);

			agent.off(AgentEvent.AgentEnd, handlerA);

			await agent.generate('Hello');

			expect(firedA).toHaveLength(0);
			expect(firedB).toHaveLength(1);
		});

		it('off() on a handler that was never registered is a no-op', () => {
			const agent = buildAgent();
			expect(() => agent.off(AgentEvent.AgentEnd, () => {})).not.toThrow();
		});
	});

	describe('trackStreamBus — cleanup on stream cancel', () => {
		it('removes the bus from active runs when the consumer cancels the stream', async () => {
			streamText.mockReturnValueOnce(makeStreamSuccess('Hello'));

			const agent = buildAgent();

			// Access the private set via casting so we can assert its size
			const getActiveBuses = () =>
				(agent as unknown as { activeEventBuses: Set<unknown> }).activeEventBuses;

			const { stream } = await agent.stream('Hello');

			// Bus is registered while the stream is live
			expect(getActiveBuses().size).toBe(1);

			// Consumer cancels instead of draining
			await stream.cancel();

			// Bus must be removed immediately after cancel
			expect(getActiveBuses().size).toBe(0);
		});

		it('removes the bus from active runs when the consumer drains the stream normally', async () => {
			streamText.mockReturnValueOnce(makeStreamSuccess('Hello'));

			const agent = buildAgent();
			const getActiveBuses = () =>
				(agent as unknown as { activeEventBuses: Set<unknown> }).activeEventBuses;

			const { stream } = await agent.stream('Hello');
			expect(getActiveBuses().size).toBe(1);

			await drainStream(stream);

			expect(getActiveBuses().size).toBe(0);
		});

		it('abort() after stream cancel does not throw on a disposed bus', async () => {
			streamText.mockReturnValueOnce(makeStreamSuccess('Hello'));

			const agent = buildAgent();
			const { stream } = await agent.stream('Hello');

			await stream.cancel();

			// agent.abort() should be harmless — no active buses remain
			expect(() => agent.abort()).not.toThrow();
		});
	});

	describe('result.getState()', () => {
		it('generate() result.getState() reports success after a clean run', async () => {
			generateText.mockResolvedValueOnce(makeGenerateSuccess());

			const agent = buildAgent();
			const result = await agent.generate('Hello');

			expect(result.getState().status).toBe('success');
		});

		it('generate() result.getState() reports failed after an error', async () => {
			generateText.mockRejectedValueOnce(new Error('boom'));

			const agent = buildAgent();
			const result = await agent.generate('Hello');

			expect(result.getState().status).toBe('failed');
		});

		it('stream() result.getState() reports success after the stream is consumed', async () => {
			streamText.mockReturnValueOnce(makeStreamSuccess());

			const agent = buildAgent();
			const { stream, getState } = await agent.stream('Hello');

			// State is running while stream is open
			expect(getState().status).toBe('running');

			await drainStream(stream);

			expect(getState().status).toBe('success');
		});
	});
});
packages/@n8n/agents/src/__tests__/describe.test.ts (+405, new file)
@@ -0,0 +1,405 @@
import { z } from 'zod';

import { Agent } from '../sdk/agent';
import { McpClient } from '../sdk/mcp-client';
import { Telemetry } from '../sdk/telemetry';
import { Tool } from '../sdk/tool';
import type { BuiltEval, BuiltGuardrail, BuiltMemory, BuiltProviderTool } from '../types';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function makeMockMemory(): BuiltMemory {
	return {
		getThread: jest.fn(),
		saveThread: jest.fn(),
		deleteThread: jest.fn(),
		getMessages: jest.fn(),
		saveMessages: jest.fn(),
		deleteMessages: jest.fn(),
	} as unknown as BuiltMemory;
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

describe('Agent.describe()', () => {
	it('returns null/empty fields for an unconfigured agent', () => {
		const agent = new Agent('test-agent');
		const schema = agent.describe();

		expect(schema.model).toEqual({ provider: null, name: null });
		expect(schema.credential).toBeNull();
		expect(schema.instructions).toBeNull();
		expect(schema.description).toBeNull();
		expect(schema.tools).toEqual([]);
		expect(schema.providerTools).toEqual([]);
		expect(schema.memory).toBeNull();
		expect(schema.evaluations).toEqual([]);
		expect(schema.guardrails).toEqual([]);
		expect(schema.mcp).toBeNull();
		expect(schema.telemetry).toBeNull();
		expect(schema.checkpoint).toBeNull();
		expect(schema.config.structuredOutput).toEqual({ enabled: false, schemaSource: null });
		expect(schema.config.thinking).toBeNull();
		expect(schema.config.toolCallConcurrency).toBeNull();
		expect(schema.config.requireToolApproval).toBe(false);
	});

	// --- Model parsing ---

	it('parses two-arg model (provider, name)', () => {
		const agent = new Agent('test-agent').model('anthropic', 'claude-sonnet-4-5');
		const schema = agent.describe();

		expect(schema.model).toEqual({ provider: 'anthropic', name: 'claude-sonnet-4-5' });
	});

	it('parses single-arg model with slash', () => {
		const agent = new Agent('test-agent').model('anthropic/claude-sonnet-4-5');
		const schema = agent.describe();

		expect(schema.model).toEqual({ provider: 'anthropic', name: 'claude-sonnet-4-5' });
	});

	it('parses model without slash', () => {
		const agent = new Agent('test-agent').model('gpt-4o');
		const schema = agent.describe();

		expect(schema.model).toEqual({ provider: null, name: 'gpt-4o' });
	});

	it('handles object model config', () => {
		const agent = new Agent('test-agent').model({
			id: 'anthropic/claude-sonnet-4-5',
			apiKey: 'sk-test',
		});
		const schema = agent.describe();

		expect(schema.model).toEqual({ provider: null, name: null, raw: 'object' });
	});

	// --- Credential ---

	it('returns credential name', () => {
		const agent = new Agent('test-agent').credential('my-anthropic-key');
		const schema = agent.describe();

		expect(schema.credential).toBe('my-anthropic-key');
	});

	// --- Instructions ---

	it('returns instructions text', () => {
		const agent = new Agent('test-agent').instructions('You are helpful.');
		const schema = agent.describe();

		expect(schema.instructions).toBe('You are helpful.');
	});

	// --- Custom tool ---

	it('describes a custom tool with handler, input schema, and suspend/resume', () => {
		const suspendSchema = z.object({ reason: z.string() });
		const resumeSchema = z.object({ approved: z.boolean() });

		const tool = new Tool('danger')
			.description('A dangerous action')
			.input(z.object({ target: z.string() }))
			.output(z.object({ result: z.string() }))
			.suspend(suspendSchema)
			.resume(resumeSchema)
			.handler(async ({ target }) => await Promise.resolve({ result: target }))
			.build();

		const agent = new Agent('test-agent').tool(tool);
		const schema = agent.describe();

		expect(schema.tools).toHaveLength(1);
		const ts = schema.tools[0];
		expect(ts.name).toBe('danger');
		expect(ts.editable).toBe(true);
		expect(ts.hasSuspend).toBe(true);
		expect(ts.hasResume).toBe(true);
		expect(ts.hasToMessage).toBe(false);
		expect(ts.inputSchema).toBeTruthy();
		expect(ts.outputSchema).toBeTruthy();
		// handlerSource is a fallback (compiled JS), CLI overrides with real TypeScript
		expect(ts.handlerSource).toContain('target');
		// Source string fields are null — CLI patches with original TypeScript
		expect(ts.inputSchemaSource).toBeNull();
		expect(ts.outputSchemaSource).toBeNull();
		expect(ts.suspendSchemaSource).toBeNull();
		expect(ts.resumeSchemaSource).toBeNull();
		expect(ts.toMessageSource).toBeNull();
		expect(ts.requireApproval).toBe(false);
		expect(ts.needsApprovalFnSource).toBeNull();
		expect(ts.providerOptions).toBeNull();
	});

	// --- Provider tool ---

	it('describes a provider tool in providerTools array', () => {
		const providerTool: BuiltProviderTool = {
			name: 'anthropic.web_search_20250305',
			args: { maxResults: 5 },
		};

		const agent = new Agent('test-agent').providerTool(providerTool);
		const schema = agent.describe();

		// Provider tools are now in a separate array
		expect(schema.tools).toHaveLength(0);
		expect(schema.providerTools).toHaveLength(1);
		expect(schema.providerTools[0].name).toBe('anthropic.web_search_20250305');
		expect(schema.providerTools[0].source).toBe('');
	});

	// --- MCP servers ---

	it('describes MCP servers in mcp field', () => {
		const client = new McpClient([
			{ name: 'browser', url: 'http://localhost:9222/mcp', transport: 'streamableHttp' },
			{ name: 'fs', command: 'echo', args: ['test'] },
		]);

		const agent = new Agent('test-agent').mcp(client);
		const schema = agent.describe();

		// MCP servers are now in a separate mcp field
		expect(schema.tools).toHaveLength(0);
		expect(schema.mcp).toHaveLength(2);
		expect(schema.mcp![0].name).toBe('browser');
		expect(schema.mcp![0].configSource).toBe('');
		expect(schema.mcp![1].name).toBe('fs');
		expect(schema.mcp![1].configSource).toBe('');
	});

	it('returns null mcp when no clients are configured', () => {
		const agent = new Agent('test-agent');
		const schema = agent.describe();

		expect(schema.mcp).toBeNull();
	});

	// --- Guardrails ---

	it('describes input and output guardrails', () => {
		const inputGuard: BuiltGuardrail = {
			name: 'pii-filter',
			guardType: 'pii',
			strategy: 'redact',
			_config: { types: ['email', 'phone'] },
		};
		const outputGuard: BuiltGuardrail = {
			name: 'moderation-check',
			guardType: 'moderation',
			strategy: 'block',
			_config: {},
		};

		const agent = new Agent('test-agent').inputGuardrail(inputGuard).outputGuardrail(outputGuard);
		const schema = agent.describe();

		expect(schema.guardrails).toHaveLength(2);
		expect(schema.guardrails[0]).toEqual({
			name: 'pii-filter',
			guardType: 'pii',
			strategy: 'redact',
			position: 'input',
			config: { types: ['email', 'phone'] },
			source: '',
		});
		expect(schema.guardrails[1]).toEqual({
			name: 'moderation-check',
			guardType: 'moderation',
			strategy: 'block',
			position: 'output',
			config: {},
			source: '',
		});
	});

	// --- Telemetry ---

	it('returns telemetry schema when telemetry builder is set', () => {
		const agent = new Agent('test-agent').telemetry(new Telemetry());
		const schema = agent.describe();

		expect(schema.telemetry).toEqual({ source: '' });
	});

	it('returns null telemetry when not configured', () => {
		const agent = new Agent('test-agent');
		const schema = agent.describe();

		expect(schema.telemetry).toBeNull();
	});

	// --- Checkpoint ---

	it('returns memory checkpoint when checkpoint is memory', () => {
		const agent = new Agent('test-agent').checkpoint('memory');
		const schema = agent.describe();

		expect(schema.checkpoint).toBe('memory');
	});

	it('returns null checkpoint when not configured', () => {
		const agent = new Agent('test-agent');
		const schema = agent.describe();

		expect(schema.checkpoint).toBeNull();
	});

	// --- Memory ---

	it('describes memory configuration', () => {
		const agent = new Agent('test-agent').memory({
			memory: makeMockMemory(),
			lastMessages: 20,
			semanticRecall: {
				topK: 5,
				messageRange: { before: 2, after: 2 },
				embedder: 'openai/text-embedding-3-small',
			},
			workingMemory: {
				template: 'Current state: {{state}}',
				structured: false,
				scope: 'resource' as const,
			},
		});
		const schema = agent.describe();

		expect(schema.memory).toBeTruthy();
		expect(schema.memory!.source).toBeNull();
		expect(schema.memory!.lastMessages).toBe(20);
		expect(schema.memory!.semanticRecall).toEqual({
			topK: 5,
			messageRange: { before: 2, after: 2 },
			embedder: 'openai/text-embedding-3-small',
		});
		expect(schema.memory!.workingMemory).toEqual({
			type: 'freeform',
			template: 'Current state: {{state}}',
		});
	});

	it('describes structured working memory', () => {
		const agent = new Agent('test-agent').memory({
			memory: makeMockMemory(),
			lastMessages: 10,
			workingMemory: {
				template: '',
				structured: true,
				schema: z.object({ notes: z.string() }),
				scope: 'resource' as const,
			},
		});
		const schema = agent.describe();

		expect(schema.memory!.workingMemory!.type).toBe('structured');
		expect(schema.memory!.workingMemory!.schema).toBeTruthy();
	});

	// --- Evaluations ---

	it('describes evaluations with evalType, modelId, and handlerSource', () => {
		const checkEval: BuiltEval = {
			name: 'has-greeting',
			description: 'Checks for greeting',
			evalType: 'check',
			modelId: null,
			credentialName: null,
			_run: jest.fn(),
		};
		const judgeEval: BuiltEval = {
			name: 'quality-judge',
			description: undefined,
			evalType: 'judge',
			modelId: 'anthropic/claude-haiku-4-5',
			credentialName: 'anthropic-key',
			_run: jest.fn(),
		};

		const agent = new Agent('test-agent').eval(checkEval).eval(judgeEval);
		const schema = agent.describe();

		expect(schema.evaluations).toHaveLength(2);
		expect(schema.evaluations[0]).toEqual({
			name: 'has-greeting',
			description: 'Checks for greeting',
			type: 'check',
			modelId: null,
			hasCredential: false,
			credentialName: null,
			handlerSource: null,
		});
		expect(schema.evaluations[1]).toEqual({
			name: 'quality-judge',
			description: null,
			type: 'judge',
			modelId: 'anthropic/claude-haiku-4-5',
			hasCredential: true,
			credentialName: 'anthropic-key',
			handlerSource: null,
		});
	});

	// --- Thinking config ---

	it('describes anthropic thinking config', () => {
		const agent = new Agent('test-agent')
			.model('anthropic', 'claude-sonnet-4-5')
			.thinking('anthropic', { budgetTokens: 10000 });
		const schema = agent.describe();

		expect(schema.config.thinking).toEqual({
			provider: 'anthropic',
			budgetTokens: 10000,
		});
	});

	it('describes openai thinking config', () => {
		const agent = new Agent('test-agent')
			.model('openai', 'o3-mini')
			.thinking('openai', { reasoningEffort: 'high' });
		const schema = agent.describe();

		expect(schema.config.thinking).toEqual({
			provider: 'openai',
			reasoningEffort: 'high',
		});
	});

	// --- requireToolApproval ---

	it('reflects requireToolApproval flag', () => {
		const agent = new Agent('test-agent').requireToolApproval();
		const schema = agent.describe();

		expect(schema.config.requireToolApproval).toBe(true);
	});

	// --- toolCallConcurrency ---

	it('reflects toolCallConcurrency', () => {
		const agent = new Agent('test-agent').toolCallConcurrency(5);
		const schema = agent.describe();

		expect(schema.config.toolCallConcurrency).toBe(5);
|
||||
});
|
||||
|
||||
// --- Structured output ---
|
||||
|
||||
it('describes structured output with schemaSource null', () => {
|
||||
const outputSchema = z.object({ code: z.string(), explanation: z.string() });
|
||||
const agent = new Agent('test-agent').structuredOutput(outputSchema);
|
||||
const schema = agent.describe();
|
||||
|
||||
expect(schema.config.structuredOutput.enabled).toBe(true);
|
||||
expect(schema.config.structuredOutput.schemaSource).toBeNull();
|
||||
});
|
||||
});
|
||||
|
|
@@ -1,4 +1,4 @@
-import { AgentEventBus } from '../event-bus';
+import { AgentEventBus } from '../runtime/event-bus';
 
 describe('AgentEventBus', () => {
 	describe('resetAbort', () => {

packages/@n8n/agents/src/__tests__/from-schema.test.ts (new file, 606 lines)
@@ -0,0 +1,606 @@
import { z } from 'zod';

import { Agent } from '../sdk/agent';
import { isSuspendResult } from '../sdk/from-schema';
import type { HandlerExecutor } from '../types/sdk/handler-executor';
import type { AgentSchema, ToolSchema } from '../types/sdk/schema';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function mockExecutor(): HandlerExecutor {
	return {
		executeTool: jest.fn().mockResolvedValue({ result: 'mocked' }),
		executeToMessage: jest.fn().mockResolvedValue(undefined),
		executeEval: jest.fn().mockResolvedValue({ score: 1 }),
		evaluateSchema: jest.fn().mockResolvedValue(undefined),
		evaluateExpression: jest.fn().mockResolvedValue(undefined),
	};
}

function minimalSchema(overrides: Partial<AgentSchema> = {}): AgentSchema {
	return {
		model: { provider: 'anthropic', name: 'claude-sonnet-4-5' },
		credential: 'my-credential',
		instructions: 'You are helpful.',
		description: null,
		tools: [],
		providerTools: [],
		memory: null,
		evaluations: [],
		guardrails: [],
		mcp: null,
		telemetry: null,
		checkpoint: null,
		config: {
			structuredOutput: { enabled: false, schemaSource: null },
			thinking: null,
			toolCallConcurrency: null,
			requireToolApproval: false,
		},
		...overrides,
	};
}

function makeToolSchema(overrides: Partial<ToolSchema> = {}): ToolSchema {
	return {
		name: 'test-tool',
		description: 'A test tool',
		type: 'custom',
		editable: true,
		inputSchemaSource: null,
		outputSchemaSource: null,
		handlerSource: null,
		suspendSchemaSource: null,
		resumeSchemaSource: null,
		toMessageSource: null,
		requireApproval: false,
		needsApprovalFnSource: null,
		providerOptions: null,
		inputSchema: { type: 'object', properties: { query: { type: 'string' } } },
		outputSchema: null,
		hasSuspend: false,
		hasResume: false,
		hasToMessage: false,
		...overrides,
	};
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

describe('Agent.fromSchema()', () => {
	it('reconstructs basic agent config', async () => {
		const schema = minimalSchema();
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.model).toEqual({ provider: 'anthropic', name: 'claude-sonnet-4-5' });
		expect(described.credential).toBe('my-credential');
		expect(described.instructions).toBe('You are helpful.');
	});

	it('reconstructs model with only name (no provider)', async () => {
		const schema = minimalSchema({
			model: { provider: null, name: 'gpt-4o' },
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.model).toEqual({ provider: null, name: 'gpt-4o' });
	});

	it('reconstructs thinking config with correct provider arg', async () => {
		const schema = minimalSchema({
			config: {
				structuredOutput: { enabled: false, schemaSource: null },
				thinking: { provider: 'anthropic', budgetTokens: 10000 },
				toolCallConcurrency: null,
				requireToolApproval: false,
			},
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.config.thinking).toEqual({
			provider: 'anthropic',
			budgetTokens: 10000,
		});
	});

	it('reconstructs openai thinking config', async () => {
		const schema = minimalSchema({
			model: { provider: 'openai', name: 'o3-mini' },
			config: {
				structuredOutput: { enabled: false, schemaSource: null },
				thinking: { provider: 'openai', reasoningEffort: 'high' },
				toolCallConcurrency: null,
				requireToolApproval: false,
			},
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.config.thinking).toEqual({
			provider: 'openai',
			reasoningEffort: 'high',
		});
	});

	it('creates proxy handlers for custom tools', async () => {
		const toolSchema = makeToolSchema({
			name: 'search',
			description: 'Search the web',
		});
		const schema = minimalSchema({ tools: [toolSchema] });
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.tools).toHaveLength(1);
		expect(described.tools[0].name).toBe('search');
		expect(described.tools[0].description).toBe('Search the web');
		expect(described.tools[0].editable).toBe(true);
	});

	it('adds WorkflowTool markers for non-editable tools', async () => {
		const toolSchema = makeToolSchema({ name: 'Send Email', type: 'workflow', editable: false });
		const schema = minimalSchema({ tools: [toolSchema] });
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		// Non-editable tools become WorkflowTool markers in declaredTools
		const markers = agent.declaredTools.filter(
			(t) => '__workflowTool' in t && (t as Record<string, unknown>).__workflowTool === true,
		);
		expect(markers).toHaveLength(1);
		expect(markers[0].name).toBe('Send Email');
	});

	it('reconstructs memory from schema fields', async () => {
		const schema = minimalSchema({
			memory: {
				source: null,
				storage: 'memory',
				lastMessages: 20,
				semanticRecall: null,
				workingMemory: null,
			},
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.memory).toBeTruthy();
		expect(described.memory!.lastMessages).toBe(20);
		expect(described.memory!.storage).toBe('memory');
	});

	it('sets toolCallConcurrency when specified', async () => {
		const schema = minimalSchema({
			config: {
				structuredOutput: { enabled: false, schemaSource: null },
				thinking: null,
				toolCallConcurrency: 5,
				requireToolApproval: false,
			},
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.config.toolCallConcurrency).toBe(5);
	});

	it('sets requireToolApproval when true', async () => {
		const schema = minimalSchema({
			config: {
				structuredOutput: { enabled: false, schemaSource: null },
				thinking: null,
				toolCallConcurrency: null,
				requireToolApproval: true,
			},
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.config.requireToolApproval).toBe(true);
	});

	it('sets checkpoint when specified', async () => {
		const schema = minimalSchema({ checkpoint: 'memory' });
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		const described = agent.describe();

		expect(described.checkpoint).toBe('memory');
	});

	it('delegates tool execution to handlerExecutor', async () => {
		const executor = mockExecutor();
		const toolSchema = makeToolSchema({ name: 'my-tool' });
		const schema = minimalSchema({ tools: [toolSchema] });
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		// Access the built tool's handler via declaredTools
		const tools = agent.declaredTools;
		expect(tools).toHaveLength(1);

		const result = await tools[0].handler!({ query: 'test' }, { parentTelemetry: undefined });
		expect(executor.executeTool).toHaveBeenCalledWith(
			'my-tool',
			{ query: 'test' },
			{ parentTelemetry: undefined },
		);
		expect(result).toEqual({ result: 'mocked' });
	});

	it('reconstructs guardrails with correct position', async () => {
		const schema = minimalSchema({
			guardrails: [
				{
					name: 'pii-guard',
					guardType: 'pii',
					strategy: 'redact',
					position: 'input',
					config: { detectionTypes: ['email', 'phone'] },
					source: '',
				},
				{
					name: 'mod-guard',
					guardType: 'moderation',
					strategy: 'block',
					position: 'output',
					config: {},
					source: '',
				},
			],
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});
		const described = agent.describe();

		expect(described.guardrails).toHaveLength(2);
		expect(described.guardrails[0].name).toBe('pii-guard');
		expect(described.guardrails[0].position).toBe('input');
		expect(described.guardrails[0].guardType).toBe('pii');
		expect(described.guardrails[1].name).toBe('mod-guard');
		expect(described.guardrails[1].position).toBe('output');
	});

	it('reconstructs evals with proxy _run', async () => {
		const executor = mockExecutor();
		const schema = minimalSchema({
			evaluations: [
				{
					name: 'accuracy',
					description: 'Check accuracy',
					type: 'check',
					modelId: null,
					credentialName: null,
					hasCredential: false,
					handlerSource: null,
				},
				{
					name: 'quality',
					description: 'Judge quality',
					type: 'judge',
					modelId: 'anthropic/claude-sonnet-4-5',
					credentialName: 'anthropic',
					hasCredential: true,
					handlerSource: null,
				},
			],
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});
		const described = agent.describe();

		expect(described.evaluations).toHaveLength(2);
		expect(described.evaluations[0].name).toBe('accuracy');
		expect(described.evaluations[0].type).toBe('check');
		expect(described.evaluations[1].name).toBe('quality');
		expect(described.evaluations[1].type).toBe('judge');
	});

	it('reconstructs provider tools', async () => {
		const schema = minimalSchema({
			providerTools: [{ name: 'anthropic.web_search_20250305', source: '' }],
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});
		const described = agent.describe();

		expect(described.providerTools).toHaveLength(1);
		expect(described.providerTools[0].name).toBe('anthropic.web_search_20250305');
	});

	it('evaluates provider tool source via evaluateExpression', async () => {
		const executor = mockExecutor();
		(executor.evaluateExpression as jest.Mock).mockResolvedValue({
			name: 'anthropic.web_search_20250305',
			args: { maxUses: 5 },
		});
		const schema = minimalSchema({
			providerTools: [
				{
					name: 'anthropic.web_search_20250305',
					source: 'providerTools.anthropicWebSearch({ maxUses: 5 })',
				},
			],
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});
		const described = agent.describe();

		expect(executor.evaluateExpression).toHaveBeenCalledWith(
			'providerTools.anthropicWebSearch({ maxUses: 5 })',
		);
		expect(described.providerTools).toHaveLength(1);
		expect(described.providerTools[0].name).toBe('anthropic.web_search_20250305');
	});

	it('evaluates structuredOutput schema via evaluateSchema', async () => {
		const zodSchema = z.object({ answer: z.string() });
		const executor = mockExecutor();
		(executor.evaluateSchema as jest.Mock).mockResolvedValue(zodSchema);
		const schema = minimalSchema({
			config: {
				structuredOutput: { enabled: true, schemaSource: 'z.object({ answer: z.string() })' },
				thinking: null,
				toolCallConcurrency: null,
				requireToolApproval: false,
			},
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		const described = agent.describe();

		expect(executor.evaluateSchema).toHaveBeenCalledWith('z.object({ answer: z.string() })');
		expect(described.config.structuredOutput.enabled).toBe(true);
	});

	it('handles suspend result detection via isSuspendResult', () => {
		const suspendMarker = Symbol.for('n8n.agent.suspend');
		const suspendResult = { [suspendMarker]: true, payload: { message: 'approve?' } };
		const nonSuspend = { result: 42 };

		expect(isSuspendResult(suspendResult)).toBe(true);
		expect(isSuspendResult(nonSuspend)).toBe(false);
		expect(isSuspendResult(null)).toBe(false);
		expect(isSuspendResult(undefined)).toBe(false);
	});

	it('delegates interruptible tool execution with suspend detection', async () => {
		const suspendMarker = Symbol.for('n8n.agent.suspend');
		const executor = {
			...mockExecutor(),
			executeTool: jest.fn().mockResolvedValue({
				[suspendMarker]: true,
				payload: { message: 'Please approve' },
			}),
		};

		const toolSchema = makeToolSchema({
			name: 'suspend-tool',
			hasSuspend: true,
		});
		const schema = minimalSchema({ tools: [toolSchema] });
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		const tools = agent.declaredTools;
		expect(tools).toHaveLength(1);

		// Call with an interruptible context
		let suspendedPayload: unknown;
		const ctx = {
			parentTelemetry: undefined,
			resumeData: undefined,
			// eslint-disable-next-line @typescript-eslint/require-await
			suspend: jest.fn().mockImplementation(async (payload: unknown) => {
				suspendedPayload = payload;
				return { suspended: true };
			}),
		};

		await tools[0].handler!({ query: 'test' }, ctx);

		expect(ctx.suspend).toHaveBeenCalledWith({ message: 'Please approve' });
		expect(suspendedPayload).toEqual({ message: 'Please approve' });
	});

	it('reconstructs requireApproval on individual tools', async () => {
		const toolSchema = makeToolSchema({
			name: 'danger-tool',
			requireApproval: true,
		});
		const schema = minimalSchema({
			tools: [toolSchema],
			checkpoint: 'memory',
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: mockExecutor(),
		});

		// The tool should be wrapped for approval, which adds suspendSchema
		const tools = agent.declaredTools;
		expect(tools).toHaveLength(1);
		expect(tools[0].suspendSchema).toBeDefined();
	});

	it('reconstructs MCP servers by evaluating configSource', async () => {
		const executor = mockExecutor();
		(executor.evaluateExpression as jest.Mock).mockResolvedValue({
			name: 'browser',
			url: 'http://localhost:9222/mcp',
			transport: 'streamableHttp',
		});

		const schema = minimalSchema({
			mcp: [
				{
					name: 'browser',
					configSource:
						'({ name: "browser", url: "http://localhost:9222/mcp", transport: "streamableHttp" })',
				},
			],
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		expect(executor.evaluateExpression).toHaveBeenCalledWith(
			'({ name: "browser", url: "http://localhost:9222/mcp", transport: "streamableHttp" })',
		);

		const described = agent.describe();
		expect(described.mcp).toHaveLength(1);
		expect(described.mcp![0].name).toBe('browser');
	});

	it('reconstructs multiple MCP servers', async () => {
		const executor = mockExecutor();
		(executor.evaluateExpression as jest.Mock)
			.mockResolvedValueOnce({
				name: 'browser',
				url: 'http://localhost:9222/mcp',
				transport: 'streamableHttp',
			})
			.mockResolvedValueOnce({
				name: 'fs',
				command: 'npx',
				args: ['@anthropic/mcp-fs', '/tmp'],
			});

		const schema = minimalSchema({
			mcp: [
				{ name: 'browser', configSource: 'browserConfig' },
				{ name: 'fs', configSource: 'fsConfig' },
			],
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		const described = agent.describe();
		expect(described.mcp).toHaveLength(2);
		expect(described.mcp![0].name).toBe('browser');
		expect(described.mcp![1].name).toBe('fs');
	});

	it('skips MCP servers with empty configSource', async () => {
		const schema = minimalSchema({
			mcp: [{ name: 'browser', configSource: '' }],
		});
		const executor = mockExecutor();
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		expect(executor.evaluateExpression).not.toHaveBeenCalled();
		// No MCP configs evaluated means no client is added
		const described = agent.describe();
		expect(described.mcp).toBeNull();
	});

	it('reconstructs telemetry by evaluating source', async () => {
		const executor = mockExecutor();
		(executor.evaluateExpression as jest.Mock).mockResolvedValue({
			enabled: true,
			functionId: 'my-agent',
			recordInputs: true,
			recordOutputs: true,
			integrations: [],
		});

		const schema = minimalSchema({
			telemetry: { source: 'new Telemetry().functionId("my-agent").build()' },
		});
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		expect(executor.evaluateExpression).toHaveBeenCalledWith(
			'new Telemetry().functionId("my-agent").build()',
		);

		const described = agent.describe();
		expect(described.telemetry).not.toBeNull();
	});

	it('does not set telemetry when schema has no telemetry', async () => {
		const schema = minimalSchema({ telemetry: null });
		const executor = mockExecutor();
		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		const described = agent.describe();
		expect(described.telemetry).toBeNull();
		expect(executor.evaluateExpression).not.toHaveBeenCalled();
	});

	it('evaluates suspend/resume schemas via evaluateSchema', async () => {
		const suspendSchema = z.object({ reason: z.string() });
		const resumeSchema = z.object({ approved: z.boolean() });

		const executor = mockExecutor();
		(executor.evaluateSchema as jest.Mock)
			.mockResolvedValueOnce(suspendSchema)
			.mockResolvedValueOnce(resumeSchema);

		const toolSchema = makeToolSchema({
			name: 'interruptible-tool',
			hasSuspend: true,
			hasResume: true,
			suspendSchemaSource: 'z.object({ reason: z.string() })',
			resumeSchemaSource: 'z.object({ approved: z.boolean() })',
		});
		const schema = minimalSchema({ tools: [toolSchema] });

		const agent = await Agent.fromSchema(schema, 'test-agent', {
			handlerExecutor: executor,
		});

		const tools = agent.declaredTools;
		expect(tools).toHaveLength(1);
		expect(tools[0].suspendSchema).toBe(suspendSchema);
		expect(tools[0].resumeSchema).toBe(resumeSchema);
	});
});
@@ -0,0 +1,119 @@
import { InMemoryMemory } from '../runtime/memory-store';
import type { AgentDbMessage } from '../types/sdk/message';

describe('InMemoryMemory working memory', () => {
	it('returns null for unknown key', async () => {
		const mem = new InMemoryMemory();
		expect(await mem.getWorkingMemory({ threadId: 'thread-x', resourceId: 'unknown' })).toBeNull();
	});

	it('saves and retrieves working memory keyed by resourceId', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory(
			{ threadId: 'thread-1', resourceId: 'user-1' },
			'# Context\n- Name: Alice',
		);
		expect(await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' })).toBe(
			'# Context\n- Name: Alice',
		);
	});

	it('overwrites on subsequent save', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' }, 'v1');
		await mem.saveWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' }, 'v2');
		expect(await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1' })).toBe('v2');
	});

	it('isolates by resourceId (resource scope)', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'thread-a', resourceId: 'user-1' }, 'Alice data');
		await mem.saveWorkingMemory({ threadId: 'thread-b', resourceId: 'user-2' }, 'Bob data');
		expect(await mem.getWorkingMemory({ threadId: 'thread-a', resourceId: 'user-1' })).toBe(
			'Alice data',
		);
		expect(await mem.getWorkingMemory({ threadId: 'thread-b', resourceId: 'user-2' })).toBe(
			'Bob data',
		);
	});

	it('returns null for unknown threadId (thread scope)', async () => {
		const mem = new InMemoryMemory();
		expect(await mem.getWorkingMemory({ threadId: 'unknown' })).toBeNull();
	});

	it('saves and retrieves working memory keyed by threadId', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'thread-1' }, '# Thread Notes');
		expect(await mem.getWorkingMemory({ threadId: 'thread-1' })).toBe('# Thread Notes');
	});

	it('isolates by threadId (thread scope)', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'thread-1' }, 'data for thread 1');
		await mem.saveWorkingMemory({ threadId: 'thread-2' }, 'data for thread 2');
		expect(await mem.getWorkingMemory({ threadId: 'thread-1' })).toBe('data for thread 1');
		expect(await mem.getWorkingMemory({ threadId: 'thread-2' })).toBe('data for thread 2');
	});
});

// ---------------------------------------------------------------------------
// Message persistence — createdAt correctness
// ---------------------------------------------------------------------------

function makeDbMsg(id: string, createdAt: Date, text: string): AgentDbMessage {
	return { id, createdAt, role: 'user', content: [{ type: 'text', text }] };
}

describe('InMemoryMemory — message createdAt', () => {
	it('before filter uses each message createdAt, not a shared batch timestamp', async () => {
		const mem = new InMemoryMemory();

		// Use dates clearly in the past so the batch wall-clock time (≈ now)
		// never accidentally falls inside the range we're filtering.
		const t1 = new Date('2020-01-01T00:00:01.000Z');
		const t2 = new Date('2020-01-01T00:00:02.000Z');
		const t3 = new Date('2020-01-01T00:00:03.000Z');

		await mem.saveMessages({
			threadId: 't1',
			messages: [
				makeDbMsg('m1', t1, 'first'),
				makeDbMsg('m2', t2, 'second'),
				makeDbMsg('m3', t3, 'third'),
			],
		});

		// before: t3 should return only the two earlier messages
		const result = await mem.getMessages('t1', { before: t3 });

		// Pre-fix: saveMessages stores StoredMessage.createdAt = new Date() (wall clock,
		// much later than t3), so the before filter excludes all messages → length 0.
		// Post-fix: each StoredMessage.createdAt = dbMsg.createdAt, so t1 and t2 pass.
		expect(result).toHaveLength(2);
		expect(result[0].id).toBe('m1');
		expect(result[1].id).toBe('m2');
	});

	it('getMessages returns createdAt from the stored record (consistent with before filter)', async () => {
		const mem = new InMemoryMemory();

		const t1 = new Date('2020-06-01T10:00:00.000Z');
		const t2 = new Date('2020-06-01T10:00:01.000Z');

		await mem.saveMessages({
			threadId: 't1',
			messages: [makeDbMsg('a', t1, 'alpha'), makeDbMsg('b', t2, 'beta')],
		});

		const loaded = await mem.getMessages('t1');

		// Pre-fix: getMessages returns s.message whose createdAt is from toDbMessage
		// (correct), but StoredMessage.createdAt is 'now' — the two are inconsistent.
		// Post-fix: both use the same authoritative value, so this is always consistent.
		expect(loaded[0].createdAt).toBeInstanceOf(Date);
		expect(loaded[0].createdAt.getTime()).toBe(t1.getTime());
		expect(loaded[1].createdAt).toBeInstanceOf(Date);
		expect(loaded[1].createdAt.getTime()).toBe(t2.getTime());
	});
});
@ -1,327 +0,0 @@
|
|||
/**
|
||||
* Round-trip conversion tests: toAiMessages ↔ fromAiMessages
|
||||
*
|
||||
* These tests exercise the message split/merge logic without making real LLM
|
||||
* calls. They lock down the structural invariants that the agent runtime relies
|
||||
* on, including the key interim-message ordering guarantee described in the
|
||||
* plan:
|
||||
*
|
||||
* input: [assistant{tool-call resolved}, user{x}, assistant{y}]
|
||||
* output: [assistant{tool-call}, tool{tool-result}, user{x}, assistant{y}]
|
||||
*
|
||||
* The tool-result is inserted right after its tool-call, regardless of what
|
||||
* messages follow it in the n8n list.
|
||||
*/
|
||||
import { describe, it, expect } from 'vitest';
|
||||
|
||||
import { toAiMessages, fromAiMessages } from '../../runtime/messages';
|
||||
import type { Message } from '../../types/sdk/message';
|
||||
|
||||
describe('toAiMessages + fromAiMessages — round-trip', () => {
|
||||
it('splits a resolved tool-call into assistant + tool ModelMessages', () => {
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'add',
|
||||
input: { a: 1, b: 2 },
|
||||
state: 'resolved',
|
||||
output: { result: 3 },
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
|
||||
expect(aiMessages).toHaveLength(2);
|
||||
expect(aiMessages[0].role).toBe('assistant');
|
||||
expect(aiMessages[1].role).toBe('tool');
|
||||
|
||||
const toolCallPart = (
|
||||
aiMessages[0] as { role: string; content: Array<{ type: string; toolCallId: string }> }
|
||||
).content[0];
|
||||
expect(toolCallPart.type).toBe('tool-call');
|
||||
expect(toolCallPart.toolCallId).toBe('tc-1');
|
||||
|
||||
const toolResultPart = (
|
||||
aiMessages[1] as {
|
||||
role: string;
|
||||
content: Array<{
|
||||
type: string;
|
||||
toolCallId: string;
|
||||
output: { type: string; value: unknown };
|
||||
}>;
|
||||
}
|
||||
).content[0];
|
||||
expect(toolResultPart.type).toBe('tool-result');
|
||||
expect(toolResultPart.toolCallId).toBe('tc-1');
|
||||
expect(toolResultPart.output.type).toBe('json');
|
||||
expect(toolResultPart.output.value).toEqual({ result: 3 });
|
||||
});
|
||||
|
||||
it('encodes rejected tool-call as error-text in the tool ModelMessage', () => {
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'do_it',
|
||||
input: {},
|
||||
state: 'rejected',
|
||||
error: 'Error: something went wrong',
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
expect(aiMessages).toHaveLength(2);
|
||||
|
||||
const toolResultPart = (
|
||||
aiMessages[1] as { role: string; content: Array<{ output: { type: string; value: string } }> }
|
||||
).content[0];
|
||||
expect(toolResultPart.output.type).toBe('error-text');
|
||||
expect(toolResultPart.output.value).toBe('Error: something went wrong');
|
||||
});
|
||||
|
||||
it('drops pending tool-call blocks from both assistant and tool ModelMessages', () => {
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{ type: 'text', text: 'Thinking...' },
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'do_it',
|
||||
input: {},
|
||||
state: 'pending',
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
|
||||
// Only the assistant text part remains; no tool-result emitted for pending
|
||||
expect(aiMessages).toHaveLength(1);
|
||||
expect(aiMessages[0].role).toBe('assistant');
|
||||
const content = (aiMessages[0] as { role: string; content: Array<{ type: string }> }).content;
|
||||
expect(content).toHaveLength(1);
|
||||
expect(content[0].type).toBe('text');
|
||||
});
|
||||
|
||||
it('emits nothing for an assistant message whose only blocks are all pending', () => {
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'do_it',
|
||||
input: {},
|
||||
state: 'pending',
|
||||
},
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-2',
|
||||
toolName: 'do_more',
|
||||
input: {},
|
||||
state: 'pending',
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
|
||||
// No empty-content assistant message — the whole message is suppressed
|
||||
expect(aiMessages).toHaveLength(0);
|
||||
});
|
||||
|
||||
it('skips legacy tool-call blocks that have no state field and emits nothing when they are the only content', () => {
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
// Simulate a DB row written before the state field was introduced
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-legacy',
|
||||
toolName: 'old_tool',
|
||||
input: {},
|
||||
} as unknown as Message['content'][number],
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
|
||||
// No empty-content assistant message and no spurious error-json tool message
|
||||
expect(aiMessages).toHaveLength(0);
|
||||
});
|
||||
|
||||
it('emits one tool ModelMessage per settled block in the same assistant turn', () => {
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'add',
|
||||
input: { a: 1, b: 2 },
|
||||
state: 'resolved',
|
||||
output: { result: 3 },
|
||||
},
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-2',
|
||||
toolName: 'mul',
|
||||
input: { a: 4, b: 5 },
|
||||
state: 'resolved',
|
||||
output: { result: 20 },
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
|
||||
// assistant{tc-1, tc-2} + tool{tc-1} + tool{tc-2}
|
||||
expect(aiMessages).toHaveLength(3);
|
||||
expect(aiMessages[0].role).toBe('assistant');
|
||||
const assistantContent = (
|
||||
aiMessages[0] as { content: Array<{ type: string; toolCallId: string }> }
|
||||
).content;
|
||||
expect(assistantContent).toHaveLength(2);
|
||||
expect(assistantContent[0].toolCallId).toBe('tc-1');
|
||||
expect(assistantContent[1].toolCallId).toBe('tc-2');
|
||||
|
||||
expect(aiMessages[1].role).toBe('tool');
|
||||
expect(aiMessages[2].role).toBe('tool');
|
||||
});
|
||||
|
||||
it('merges role:tool ModelMessages into the preceding assistant tool-call block', () => {
|
||||
// Simulate AI SDK output: [assistant{tool-call}, tool{tool-result}]
|
||||
const aiMessages = [
|
||||
{
|
||||
role: 'assistant' as const,
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call' as const,
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'add',
|
||||
input: { a: 1, b: 2 },
|
||||
providerExecuted: undefined,
|
||||
},
|
||||
],
|
||||
},
|
||||
{
|
||||
role: 'tool' as const,
|
||||
content: [
|
||||
{
|
||||
type: 'tool-result' as const,
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'add',
|
||||
output: { type: 'json' as const, value: { result: 3 } },
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const n8nMessages = fromAiMessages(aiMessages);
|
||||
|
||||
// Should produce a single assistant message with the resolved block
|
||||
expect(n8nMessages).toHaveLength(1);
|
||||
expect((n8nMessages[0] as Message).role).toBe('assistant');
|
||||
const block = (n8nMessages[0] as Message).content[0];
|
||||
expect(block.type).toBe('tool-call');
|
||||
expect((block as { state: string }).state).toBe('resolved');
|
||||
expect((block as { output: unknown }).output).toEqual({ result: 3 });
|
||||
});
|
||||
|
||||
it('round-trip is structurally equivalent for a resolved tool-call', () => {
|
||||
const original: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'echo',
|
||||
input: { text: 'hello' },
|
||||
state: 'resolved',
|
||||
output: { echoed: 'hello' },
|
||||
},
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(original);
|
||||
const roundTripped = fromAiMessages(aiMessages);
|
||||
|
||||
expect(roundTripped).toHaveLength(1);
|
||||
expect((roundTripped[0] as Message).role).toBe('assistant');
|
||||
const block = (roundTripped[0] as Message).content[0];
|
||||
expect(block.type).toBe('tool-call');
|
||||
expect((block as { state: string }).state).toBe('resolved');
|
||||
expect((block as { output: unknown }).output).toEqual({ echoed: 'hello' });
|
||||
expect((block as { toolCallId: string }).toolCallId).toBe('tc-1');
|
||||
});
|
||||
|
||||
it('interim-message ordering: tool-result is inserted right after its tool-call', () => {
|
||||
// This is the key regression test for the interim-message scenario.
|
||||
// Input n8n list: [assistant{tool-call resolved}, user{x}, assistant{y}]
|
||||
// Expected AI SDK output: [assistant{tc}, tool{tr}, user{x}, assistant{y}]
|
||||
const input: Message[] = [
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'tool-call',
|
||||
toolCallId: 'tc-1',
|
||||
toolName: 'delete_file',
|
||||
input: { path: 'foo.txt' },
|
||||
state: 'resolved',
|
||||
output: { deleted: true },
|
||||
},
|
||||
],
|
||||
},
|
||||
{
|
||||
role: 'user',
|
||||
content: [{ type: 'text', text: 'Actually, what is 2+2?' }],
|
||||
},
|
||||
{
|
||||
role: 'assistant',
|
||||
content: [{ type: 'text', text: 'It is 4.' }],
|
||||
},
|
||||
];
|
||||
|
||||
const aiMessages = toAiMessages(input);
|
||||
|
||||
// 4 messages: assistant{tool-call}, tool{tool-result}, user, assistant
|
||||
expect(aiMessages).toHaveLength(4);
|
||||
expect(aiMessages[0].role).toBe('assistant');
|
||||
expect(aiMessages[1].role).toBe('tool');
|
||||
expect(aiMessages[2].role).toBe('user');
|
||||
expect(aiMessages[3].role).toBe('assistant');
|
||||
|
||||
// tool-result is immediately after the assistant tool-call message
|
||||
const toolResultContent = (aiMessages[1] as { content: Array<{ toolCallId: string }> })
|
||||
.content[0];
|
||||
expect(toolResultContent.toolCallId).toBe('tc-1');
|
||||
|
||||
// user interim message is after the tool-result
|
||||
const userContent = (aiMessages[2] as { content: Array<{ type: string; text: string }> })
|
||||
.content[0];
|
||||
expect(userContent.text).toBe('Actually, what is 2+2?');
|
||||
});
|
||||
});
|
||||
|
|
@@ -106,7 +106,7 @@ describe('batched tool execution integration', () => {
      const resumedStream = await agent.resume(
        'stream',
        { approved: true },
-        { runId: next.runId, toolCallId: next.toolCallId },
+        { runId: next.runId!, toolCallId: next.toolCallId! },
      );

      const resumedChunks = await collectStreamChunks(resumedStream.stream);
@@ -8,7 +8,7 @@ import {
  createAgentWithConcurrentMixedTools,
  collectTextDeltas,
} from './helpers';
-import type { StreamChunk } from '../../index';
+import { isLlmMessage, type StreamChunk } from '../../index';

const describe = describeIf('anthropic');
@@ -120,7 +120,7 @@ describe('concurrent tool execution integration', () => {
      const resumedStream = await agent.resume(
        'stream',
        { approved: true },
-        { runId: next.runId, toolCallId: next.toolCallId },
+        { runId: next.runId!, toolCallId: next.toolCallId! },
      );

      const resumedChunks = await collectStreamChunks(resumedStream.stream);
@@ -147,8 +147,13 @@ describe('concurrent tool execution integration', () => {

    const chunks = await collectStreamChunks(fullStream);

-    // list_files should auto-execute — its result should appear as a discrete tool-result chunk
-    const toolResultChunks = chunksOfType(chunks, 'tool-result');
+    // list_files should auto-execute — its result should appear as a message chunk
+    const toolResultChunks = chunks.filter(
+      (c) =>
+        c.type === 'message' &&
+        isLlmMessage(c.message) &&
+        c.message.content.some((p) => p.type === 'tool-result'),
+    );

    // delete_file should be suspended
    const suspendedChunks = chunksOfType(chunks, 'tool-call-suspended');
@@ -165,7 +170,12 @@ describe('concurrent tool execution integration', () => {
      );

      // list_files result should be present even though delete_file suspended
-      const listResult = toolResultChunks.find((c) => c.toolName === 'list_files');
+      const listResult = toolResultChunks.find(
+        (c) =>
+          c.type === 'message' &&
+          isLlmMessage(c.message) &&
+          c.message.content.some((p) => p.type === 'tool-result' && p.toolName === 'list_files'),
+      );
      expect(listResult).toBeDefined();
    }
  });
@@ -194,7 +204,7 @@ describe('concurrent tool execution integration', () => {
        'content' in m
          ? m.content
              .filter((c) => c.type === 'text')
-              .map((c) => ({ type: 'text-delta' as const, id: '', delta: c.text }))
+              .map((c) => ({ type: 'text-delta' as const, delta: c.text }))
          : [],
      ),
    );
@@ -175,53 +175,42 @@ describe('event system — stream', () => {
  });

  // ---------------------------------------------------------------------------
-  // getState()
+  // result.getState()
  // ---------------------------------------------------------------------------

-  describe('getState()', () => {
-    it('returns idle before first run', () => {
+  describe('result.getState()', () => {
+    it('generate() result reports success after a successful run', async () => {
      const agent = createSimpleAgent();
-      const state = agent.getState();
-      expect(state.status).toBe('idle');
-      expect(state.messageList.messages).toHaveLength(0);
+      const result = await agent.generate('Say hello');
+      expect(result.getState().status).toBe('success');
    });

-    it('returns success after a successful generate()', async () => {
+    it('stream() result reports success after the stream is fully consumed', async () => {
      const agent = createSimpleAgent();
-      await agent.generate('Say hello');
-      const state = agent.getState();
-      expect(state.status).toBe('success');
-    });
-
-    it('returns success after a completed stream()', async () => {
-      const agent = createSimpleAgent();
-      const { stream } = await agent.stream('Say hello');
+      const { stream, getState } = await agent.stream('Say hello');
      await collectStreamChunks(stream);
-      const state = agent.getState();
-      expect(state.status).toBe('success');
+      expect(getState().status).toBe('success');
    });

-    it('state is running during the generate loop (observed via event)', async () => {
+    it('stream() getState() is running while the stream is being consumed', async () => {
      const agent = createSimpleAgent();
+      const { stream, getState } = await agent.stream('Say hello');

-      let stateWhileRunning: string | undefined;
-      agent.on(AgentEvent.TurnStart, () => {
-        stateWhileRunning = agent.getState().status;
-      });
+      // State is running before the stream is consumed
+      expect(getState().status).toBe('running');

-      await agent.generate('Say hello');
+      await collectStreamChunks(stream);

-      expect(stateWhileRunning).toBe('running');
+      expect(getState().status).toBe('success');
    });

-    it('reflects resourceId and threadId from RunOptions', async () => {
+    it('generate() result reflects resourceId and threadId from RunOptions', async () => {
      const agent = createSimpleAgent();
-      await agent.generate('Say hello', {
+      const result = await agent.generate('Say hello', {
        persistence: { resourceId: 'user-123', threadId: 'thread-abc' },
      });
-      const state = agent.getState();
-      expect(state.persistence?.resourceId).toBe('user-123');
-      expect(state.persistence?.threadId).toBe('thread-abc');
+      expect(result.getState().persistence?.resourceId).toBe('user-123');
+      expect(result.getState().persistence?.threadId).toBe('thread-abc');
    });
  });
@@ -1,15 +1,19 @@
+import * as fs from 'fs';
+import * as os from 'os';
+import * as path from 'path';
import { describe as _describe } from 'vitest';
import { z } from 'zod';

import {
  Agent,
  type ContentToolCall,
+  type ContentToolResult,
  filterLlmMessages,
  Tool,
  type StreamChunk,
  type AgentMessage,
} from '../../index';
-import { InMemoryMemory } from '../../runtime/memory-store';
+import { SqliteMemory } from '../../storage/sqlite-memory';

export type { StreamChunk };
@@ -400,10 +404,10 @@ export const findAllToolCalls = (messages: AgentMessage[]): ContentToolCall[] =>
    .map((m) => m.content.filter((c) => c.type === 'tool-call'))
    .flat();
};
-export const findAllToolResults = (messages: AgentMessage[]): ContentToolCall[] => {
-  return filterLlmMessages(messages).flatMap((m) =>
-    m.content.filter((c): c is ContentToolCall => c.type === 'tool-call' && c.state !== 'pending'),
-  );
+export const findAllToolResults = (messages: AgentMessage[]): ContentToolResult[] => {
+  return filterLlmMessages(messages)
+    .filter((m) => m.content.find((c) => c.type === 'tool-result'))
+    .map((m) => m.content.find((c) => c.type === 'tool-result') as ContentToolResult);
};
export const collectTextDeltas = (chunks: StreamChunk[]): string => {
  return chunks
@@ -413,18 +417,25 @@ export const collectTextDeltas = (chunks: StreamChunk[]): string => {
};

export function createSqliteMemory(): {
-  memory: InMemoryMemory;
+  memory: SqliteMemory;
  cleanup: () => void;
  url: string;
} {
-  // In-memory backend; the `url` field is kept on the return type so existing
-  // integration tests that reference it (e.g. for "restart" scenarios) keep
-  // compiling, but it's not load-bearing — InMemoryMemory has no persistence.
+  const dbPath = path.join(
+    os.tmpdir(),
+    `test-${Date.now()}-${Math.random().toString(36).slice(2)}.db`,
+  );
+  const url = `file:${dbPath}`;
+  const memory = new SqliteMemory({ url });
  return {
-    memory: new InMemoryMemory(),
-    url: '',
+    memory,
+    url,
    cleanup: () => {
-      // no-op for in-memory backend
+      try {
+        fs.unlinkSync(dbPath);
+      } catch {
+        // File may already be removed — ignore
+      }
    },
  };
}
@@ -1,214 +0,0 @@
/**
 * Regression test: interim user message while a tool-call is suspended.
 *
 * Old architecture bug: if a user sent a new message between a tool-call
 * suspension and its eventual resume, the message list would contain:
 *
 *   assistant{tool-call} → user{interim} → tool{tool-result}
 *
 * This order is invalid for AI SDK providers (tool-result must immediately
 * follow its tool-call). The new architecture stores the result ON the
 * tool-call block, so toAiMessages always emits:
 *
 *   assistant{tool-call} → tool{tool-result} → user{interim} → assistant{reply}
 *
 * The tool-result is always adjacent to its tool-call regardless of what n8n
 * messages come after it in the list.
 *
 * This test drives the full scenario end-to-end and asserts that:
 *   1. The final result has finishReason 'stop' (no provider error).
 *   2. The tool-call block on the originating assistant message transitions to
 *      state 'resolved' with the expected output.
 *   3. The interim user/assistant messages are still present in memory.
 */
import { afterEach, expect, it } from 'vitest';
import { z } from 'zod';

import { describeIf, createSqliteMemory, getModel } from './helpers';
import { Agent, filterLlmMessages, Memory, Tool } from '../../index';
import type { AgentDbMessage } from '../../index';
import type { ContentToolCall, Message } from '../../types/sdk/message';

const describe = describeIf('anthropic');

describe('interim user message during tool suspension', () => {
  const cleanups: Array<() => void> = [];

  afterEach(() => {
    for (const fn of cleanups) fn();
    cleanups.length = 0;
  });

  function buildInterruptibleAgent(mem: Memory): Agent {
    const deleteTool = new Tool('delete_file')
      .description('Delete a file at the given path')
      .input(z.object({ path: z.string().describe('File path to delete') }))
      .output(z.object({ deleted: z.boolean(), path: z.string() }))
      .suspend(z.object({ message: z.string(), severity: z.string() }))
      .resume(z.object({ approved: z.boolean() }))
      .handler(async ({ path }, ctx) => {
        if (!ctx.resumeData) {
          return await ctx.suspend({ message: `Delete "${path}"?`, severity: 'destructive' });
        }
        if (!ctx.resumeData.approved) return { deleted: false, path };
        return { deleted: true, path };
      });

    return new Agent('interim-test-agent')
      .model(getModel('anthropic'))
      .instructions(
        'You are a file manager. When asked to delete a file, use the delete_file tool. Be concise.',
      )
      .tool(deleteTool)
      .memory(mem)
      .checkpoint('memory');
  }

  for (const method of ['generate', 'stream'] as const) {
    it(`[${method}] interim message does not break provider message ordering`, async () => {
      const { memory, cleanup } = createSqliteMemory();
      cleanups.push(cleanup);

      const threadId = `thread-interim-${method}`;
      const resourceId = 'res-interim';
      const persistence = { threadId, resourceId };
      const mem = new Memory().storage(memory);

      const agent = buildInterruptibleAgent(mem);

      // ----------------------------------------------------------------
      // Turn 1: trigger the tool suspension
      // ----------------------------------------------------------------
      const suspendResult = await agent.generate('Please delete /tmp/interim-test.txt', {
        persistence,
      });

      expect(suspendResult.finishReason).toBe('tool-calls');
      expect(suspendResult.pendingSuspend).toBeDefined();
      const { runId, toolCallId } = suspendResult.pendingSuspend![0];

      // ----------------------------------------------------------------
      // Interim turn: send a new message while the tool is suspended.
      // Build a fresh agent instance to simulate a separate request.
      // ----------------------------------------------------------------
      const interimAgent = new Agent('interim-agent')
        .model(getModel('anthropic'))
        .instructions('You are helpful. Answer questions concisely.')
        .memory(mem);

      const interimResult = await interimAgent.generate('What is 1 + 1?', { persistence });
      expect(interimResult.finishReason).toBe('stop');

      // ----------------------------------------------------------------
      // Resume turn: approve the suspended tool call
      // ----------------------------------------------------------------
      let resumeFinishReason: string;
      if (method === 'generate') {
        const result = await agent.resume(
          'generate',
          { approved: true },
          {
            runId,
            toolCallId,
          },
        );
        resumeFinishReason = result.finishReason ?? 'stop';
      } else {
        const { stream } = await agent.resume(
          'stream',
          { approved: true },
          {
            runId,
            toolCallId,
          },
        );
        // Drain the stream
        const reader = stream.getReader();
        let finishReason = 'stop';
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          if ((value as { type: string }).type === 'finish') {
            finishReason = (value as { finishReason?: string }).finishReason ?? 'stop';
          }
        }
        resumeFinishReason = finishReason;
      }

      // ----------------------------------------------------------------
      // Assertions
      // ----------------------------------------------------------------
      // 1. No provider error — the ordering was valid
      expect(resumeFinishReason).toBe('stop');

      // 2. The originating assistant message's tool-call block is resolved
      const allMessages = await memory.getMessages(threadId);
      const llmMessages = filterLlmMessages(allMessages);

      const ourBlock = llmMessages
        .flatMap((m) => m.content.filter((c): c is ContentToolCall => c.type === 'tool-call'))
        .find((b) => b.toolCallId === toolCallId);

      expect(ourBlock).toBeDefined();
      expect(ourBlock!.state).toBe('resolved');

      // 3. The interim user/assistant exchange is present in memory
      const userMessages = allMessages.filter(
        (m): m is AgentDbMessage & Message => 'role' in m && m.role === 'user',
      );
      // Turn-1 user + interim user (at minimum)
      expect(userMessages.length).toBeGreaterThanOrEqual(2);
    });
  }

  it('preserves chronological ordering of messages in memory after resume', async () => {
    const { memory, cleanup } = createSqliteMemory();
    cleanups.push(cleanup);

    const threadId = 'thread-interim-ordering';
    const resourceId = 'res-ordering';
    const persistence = { threadId, resourceId };
    const mem = new Memory().storage(memory);

    const agent = buildInterruptibleAgent(mem);

    // Turn 1: suspend
    const suspendResult = await agent.generate('Delete /tmp/order-test.txt', { persistence });
    expect(suspendResult.finishReason).toBe('tool-calls');
    const { runId, toolCallId } = suspendResult.pendingSuspend![0];

    // Interim turn
    const interimAgent = new Agent('interim-ordering')
      .model(getModel('anthropic'))
      .instructions('Answer concisely.')
      .memory(mem);
    await interimAgent.generate('Say hi', { persistence });

    // Resume
    const resumeResult = await agent.resume(
      'generate',
      { approved: true },
      {
        runId,
        toolCallId,
      },
    );
    expect(resumeResult.finishReason).toBe('stop');

    // The tool-call is resolved
    const allMessages = await memory.getMessages(threadId);
    const llmMessages = filterLlmMessages(allMessages);
    const ourBlock = llmMessages
      .flatMap((m) => m.content.filter((c): c is ContentToolCall => c.type === 'tool-call'))
      .find((b) => b.toolCallId === toolCallId);

    expect(ourBlock).toBeDefined();
    expect(ourBlock!.state).toBe('resolved');

    // Messages are in chronological order (createdAt ascending)
    const timestamps = allMessages.map((m) => m.createdAt.getTime());
    for (let i = 1; i < timestamps.length; i++) {
      expect(timestamps[i]).toBeGreaterThanOrEqual(timestamps[i - 1]);
    }
  });
});
@@ -72,12 +72,12 @@ describe('JSON Schema validation — non-MCP tools with raw JSON Schema', () =>
    // The handler should have been called with valid data
    expect(handler).toHaveBeenCalledWith(expect.objectContaining({ age: 25 }), expect.anything());

-    // No tool-call block should have state 'rejected'
+    // No tool-result should carry an error flag
    const allMessages = filterLlmMessages(result.messages);
-    const toolCallBlocks = allMessages.flatMap((m) =>
-      m.content.filter((c) => c.type === 'tool-call'),
+    const toolResults = allMessages.flatMap((m) =>
+      m.content.filter((c) => c.type === 'tool-result'),
    );
-    expect(toolCallBlocks.every((c) => (c as { state: string }).state !== 'rejected')).toBe(true);
+    expect(toolResults.every((r) => !r.isError)).toBe(true);
  });

  it('allows the LLM to self-correct after receiving a JSON Schema validation error', async () => {
@@ -105,12 +105,12 @@ describe('JSON Schema validation — non-MCP tools with raw JSON Schema', () =>
    expect(result.finishReason).toBe('stop');
    expect(result.error).toBeUndefined();

-    // There should be at least two tool-call messages: one rejected, one resolved
+    // There should be at least two tool-result messages: one error, one success
    const allMessages = filterLlmMessages(result.messages);
-    const toolCallMessages = allMessages.filter((m) =>
-      m.content.some((c) => c.type === 'tool-call'),
+    const toolResultMessages = allMessages.filter((m) =>
+      m.content.some((c) => c.type === 'tool-result'),
    );
-    expect(toolCallMessages.length).toBeGreaterThanOrEqual(2);
+    expect(toolResultMessages.length).toBeGreaterThanOrEqual(2);

    // The successful handler call should have received a valid age
    expect(callCount).toBeGreaterThanOrEqual(1);
@@ -17,7 +17,7 @@ import {
  chunksOfType,
} from './helpers';
import { startSseServer, type TestServer } from './mcp-server-helpers';
-import { Agent, McpClient, Tool } from '../../index';
+import { Agent, McpClient, Tool, isLlmMessage } from '../../index';

// ---------------------------------------------------------------------------
// McpClient constructor validation — no MCP server required
@@ -234,10 +234,13 @@ describe_llm('agent stream() with MCP tool', () => {
    const { stream } = await agent.stream('Echo "stream works" using tools_echo.');

    const chunks = await collectStreamChunks(stream);
-    // Tool calls now ride their own discrete `tool-call` chunks rather than
-    // being wrapped in `message` envelopes.
-    const toolCallChunks = chunksOfType(chunks, 'tool-call');
-    expect(toolCallChunks.length).toBeGreaterThan(0);
+    const messageChunks = chunksOfType(chunks, 'message');
+    const messages = messageChunks.map((c) => c.message);
+
+    const hasToolCall = messages.some(
+      (m) => isLlmMessage(m) && m.content.some((c) => c.type === 'tool-call'),
+    );
+    expect(hasToolCall).toBe(true);

    await client.close();
  });
@@ -8,7 +8,7 @@
import { expect, it, beforeEach } from 'vitest';

import { Agent, Memory, type AgentDbMessage } from '../../../index';
-import type { BuiltMemory, MemoryDescriptor, Thread } from '../../../types/sdk/memory';
+import type { BuiltMemory, Thread } from '../../../types/sdk/memory';
import { describeIf, findLastTextContent, getModel } from '../helpers';

const describe = describeIf('anthropic');
@@ -17,9 +17,6 @@ const describe = describeIf('anthropic');
// Custom in-memory BuiltMemory implementation (simulates Redis, DynamoDB, etc.)
// ---------------------------------------------------------------------------
class CustomMapMemory implements BuiltMemory {
-  describe(): MemoryDescriptor {
-    throw new Error('Method not implemented.');
-  }
  readonly threads = new Map<string, Thread>();
  readonly messages = new Map<string, AgentDbMessage[]>();
  readonly workingMemory = new Map<string, string>();
@@ -0,0 +1,106 @@
import { expect, it, afterEach } from 'vitest';

import { Agent, Memory } from '../../../index';
import { SqliteMemory } from '../../../storage/sqlite-memory';
import { describeIf, findLastTextContent, getModel, createSqliteMemory } from '../helpers';

const describe = describeIf('anthropic');

const cleanups: Array<() => void> = [];
afterEach(() => {
  cleanups.forEach((fn) => fn());
  cleanups.length = 0;
});

describe('freeform working memory', () => {
  const template = '# User Context\n- **Name**:\n- **City**:\n- **Pet**:';

  it('agent recalls info via working memory across turns', async () => {
    const memory = new Memory().storage('memory').lastMessages(10).freeform(template);

    const agent = new Agent('freeform-test')
      .model(getModel('anthropic'))
      .instructions('You are a helpful assistant. Be concise.')
      .memory(memory);

    const threadId = `freeform-${Date.now()}`;
    const options = { persistence: { threadId, resourceId: 'test-user' } };

    await agent.generate('My name is Alice and I live in Berlin.', options);
    const result = await agent.generate('What city do I live in?', options);

    expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('berlin');
  });

  it('working memory is updated when new information is provided', async () => {
    const memory = new Memory().storage('memory').lastMessages(10).freeform(template);

    const agent = new Agent('wm-update-test')
      .model(getModel('anthropic'))
      .instructions('You are a helpful assistant. Be concise.')
      .memory(memory);

    const threadId = `wm-update-${Date.now()}`;
    const options = { persistence: { threadId, resourceId: 'test-user' } };

    const result = await agent.generate('My name is Bob.', options);

    const toolCalls = result.messages.flatMap((m) =>
      'content' in m ? m.content.filter((c) => c.type === 'tool-call') : [],
    ) as Array<{ type: 'tool-call'; toolName: string }>;
    const wmToolCall = toolCalls.find((c) => c.toolName === 'updateWorkingMemory');
    expect(wmToolCall).toBeDefined();
  });

  it('working memory persists across threads with same resourceId', async () => {
    const { memory, cleanup } = createSqliteMemory();
    cleanups.push(cleanup);

    const mem = new Memory().storage(memory).lastMessages(10).freeform(template);
    const agent = new Agent('cross-thread-test')
      .model(getModel('anthropic'))
      .instructions('You are a helpful assistant. Be concise.')
      .memory(mem);

    const resourceId = `user-${Date.now()}`;

    await agent.generate('My name is Charlie and I have a dog named Rex.', {
      persistence: { threadId: `thread-1-${Date.now()}`, resourceId },
    });

    const result = await agent.generate("What's my dog's name?", {
      persistence: { threadId: `thread-2-${Date.now()}`, resourceId },
    });

    expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('rex');
  });

  it('working memory survives SqliteMemory restart', async () => {
    const { memory, cleanup, url } = createSqliteMemory();
    cleanups.push(cleanup);

    const mem = new Memory().storage(memory).lastMessages(10).freeform(template);
    const agent1 = new Agent('restart-wm-1')
      .model(getModel('anthropic'))
      .instructions('You are a helpful assistant. Be concise.')
      .memory(mem);

    const resourceId = `user-${Date.now()}`;
    const threadId = `restart-wm-${Date.now()}`;

    await agent1.generate('My name is Diana.', { persistence: { threadId, resourceId } });

    const memory2 = new SqliteMemory({ url });
    const mem2 = new Memory().storage(memory2).lastMessages(10).freeform(template);
    const agent2 = new Agent('restart-wm-2')
      .model(getModel('anthropic'))
      .instructions('You are a helpful assistant. Be concise.')
      .memory(mem2);

    const result = await agent2.generate('What is my name?', {
      persistence: { threadId: `new-thread-${Date.now()}`, resourceId },
|
||||
});
|
||||
|
||||
expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('diana');
|
||||
});
|
||||
});
|
||||
|
|
@@ -61,18 +61,6 @@ afterAll(async () => {
 	}
 }, 30_000);

-/**
- * Create a PostgresMemory instance backed by the test container connection string.
- * Uses a simple inline CredentialProvider that returns the raw URL.
- */
-function makePostgresMemory(namespace: string): PostgresMemory {
-	return new PostgresMemory({
-		type: 'connection',
-		connection: { connectionType: 'url', connection: { url: connectionString } },
-		options: { namespace },
-	});
-}
-
 /** describe that requires Docker — tests are no-ops without it. */
 function describeWithDocker(name: string, fn: () => void) {
 	describe(name, () => {
@@ -86,7 +74,7 @@ function describeWithDocker(name: string, fn: () => void) {

 describeWithDocker('PostgresMemory saveThread upsert', () => {
 	it('preserves existing title and metadata when not provided', async () => {
-		const mem = makePostgresMemory('upsert_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'upsert_test' });

 		await mem.saveThread({
 			id: 'upsert-t1',
@@ -107,7 +95,7 @@ describeWithDocker('PostgresMemory saveThread upsert', () => {
 	});

 	it('overwrites title and metadata when explicitly provided', async () => {
-		const mem = makePostgresMemory('upsert_ow');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'upsert_ow' });

 		await mem.saveThread({
 			id: 'upsert-t2',
@@ -133,7 +121,7 @@ describeWithDocker('PostgresMemory saveThread upsert', () => {

 describeWithDocker('PostgresMemory unit tests', () => {
 	it('creates tables on first use and round-trips a thread', async () => {
-		const mem = makePostgresMemory('default');
+		const mem = new PostgresMemory({ connection: connectionString });

 		const thread = await mem.saveThread({
 			id: 'thread-1',
@@ -153,7 +141,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('saves and retrieves messages with limit', async () => {
-		const mem = makePostgresMemory('msg_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'msg_test' });

 		await mem.saveThread({ id: 't1', resourceId: 'u1' });

@@ -192,7 +180,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('saves and retrieves working memory keyed by resourceId', async () => {
-		const mem = makePostgresMemory('wm_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_test' });

 		expect(
 			await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' }),
@@ -219,7 +207,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('saves and retrieves working memory keyed by threadId (no resourceId)', async () => {
-		const mem = makePostgresMemory('wm_thread_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_thread_test' });

 		expect(
 			await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1', scope: 'thread' }),
@@ -237,7 +225,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('isolates working memory by resourceId', async () => {
-		const mem = makePostgresMemory('wm_iso_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_iso_test' });

 		await mem.saveWorkingMemory(
 			{ threadId: 'thread-a', resourceId: 'user-a', scope: 'resource' },
@@ -259,7 +247,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('stores scope=resource when resourceId is provided', async () => {
-		const mem = makePostgresMemory('wm_scope_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'wm_scope_test' });

 		await mem.saveWorkingMemory(
 			{ threadId: 'thread-1', resourceId: 'res-1', scope: 'resource' },
@@ -278,7 +266,10 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('stores scope=thread when only threadId is provided', async () => {
-		const mem = makePostgresMemory('wm_scope_thread_test');
+		const mem = new PostgresMemory({
+			connection: connectionString,
+			namespace: 'wm_scope_thread_test',
+		});

 		await mem.saveWorkingMemory(
 			{ threadId: 'thread-1', resourceId: 'user-1', scope: 'thread' },
@@ -297,7 +288,10 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('does not mix resource-scoped and thread-scoped entries with the same key value', async () => {
-		const mem = makePostgresMemory('wm_scope_iso_test');
+		const mem = new PostgresMemory({
+			connection: connectionString,
+			namespace: 'wm_scope_iso_test',
+		});
 		const sharedKey = 'same-id';

 		await mem.saveWorkingMemory(
@@ -324,7 +318,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('deletes thread and cascades to messages', async () => {
-		const mem = makePostgresMemory('del_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'del_test' });

 		await mem.saveThread({ id: 'del-t1', resourceId: 'u1' });
 		await mem.saveMessages({
@@ -348,7 +342,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('stores and queries embeddings with pgvector', async () => {
-		const mem = makePostgresMemory('vec_test');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_test' });

 		await mem.saveThread({ id: 'vec-t1', resourceId: 'u1' });

@@ -381,7 +375,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('filters embeddings by resourceId with scope=resource (default)', async () => {
-		const mem = makePostgresMemory('vec_res');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_res' });

 		await mem.saveEmbeddings({
 			threadId: 't1',
@@ -416,7 +410,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('filters embeddings by threadId with scope=thread', async () => {
-		const mem = makePostgresMemory('vec_thr');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_thr' });

 		await mem.saveEmbeddings({
 			threadId: 't1',
@@ -449,7 +443,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('resource scope excludes embeddings from other resources', async () => {
-		const mem = makePostgresMemory('vec_iso');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_iso' });

 		await mem.saveEmbeddings({
 			threadId: 't1',
@@ -476,7 +470,7 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('stores resourceId in the embeddings table', async () => {
-		const mem = makePostgresMemory('vec_col');
+		const mem = new PostgresMemory({ connection: connectionString, namespace: 'vec_col' });

 		await mem.saveEmbeddings({
 			threadId: 't1',
@@ -498,8 +492,8 @@ describeWithDocker('PostgresMemory unit tests', () => {
 	});

 	it('isolates namespaces', async () => {
-		const mem1 = makePostgresMemory('ns_a');
-		const mem2 = makePostgresMemory('ns_b');
+		const mem1 = new PostgresMemory({ connection: connectionString, namespace: 'ns_a' });
+		const mem2 = new PostgresMemory({ connection: connectionString, namespace: 'ns_b' });

 		await mem1.saveThread({ id: 'shared-id', resourceId: 'u1', title: 'From A' });
 		await mem2.saveThread({ id: 'shared-id', resourceId: 'u1', title: 'From B' });
@@ -526,7 +520,7 @@ function describeWithDockerAndApi(name: string, fn: () => void) {

 describeWithDockerAndApi('PostgresMemory agent integration', () => {
 	it('recalls previous messages across turns', async () => {
-		const store = makePostgresMemory('agent_recall');
+		const store = new PostgresMemory({ connection: connectionString, namespace: 'agent_recall' });
 		const memory = new Memory().storage(store).lastMessages(10);

 		const agent = new Agent('pg-recall-test')
@@ -546,7 +540,7 @@ describeWithDockerAndApi('PostgresMemory agent integration', () => {
 	});

 	it('persists resource-scoped working memory via Postgres backend', async () => {
-		const store = makePostgresMemory('agent_wm');
+		const store = new PostgresMemory({ connection: connectionString, namespace: 'agent_wm' });
 		const memory = new Memory()
 			.storage(store)
 			.lastMessages(10)
@@ -580,7 +574,10 @@ describeWithDockerAndApi('PostgresMemory agent integration', () => {
 	});

 	it('persists thread-scoped working memory via Postgres backend', async () => {
-		const store = makePostgresMemory('agent_thread_wm');
+		const store = new PostgresMemory({
+			connection: connectionString,
+			namespace: 'agent_thread_wm',
+		});
 		const memory = new Memory()
 			.storage(store)
 			.lastMessages(10)
@@ -620,7 +617,7 @@ describeWithDockerAndApi('PostgresMemory agent integration', () => {
 	});

 	it('works with stream() path', async () => {
-		const store = makePostgresMemory('agent_stream');
+		const store = new PostgresMemory({ connection: connectionString, namespace: 'agent_stream' });
 		const memory = new Memory().storage(store).lastMessages(10);

 		const agent = new Agent('pg-stream-test')
@@ -0,0 +1,105 @@
import { describe as _describe, expect, it, afterEach } from 'vitest';

import { Agent, Memory } from '../../../index';
import { SqliteMemory } from '../../../storage/sqlite-memory';
import { describeIf, findLastTextContent, getModel, createSqliteMemory } from '../helpers';

const describe = describeIf('anthropic');

const cleanups: Array<() => void> = [];
afterEach(() => {
	cleanups.forEach((fn) => fn());
	cleanups.length = 0;
});

_describe('SqliteMemory saveThread upsert', () => {
	it('preserves existing title and metadata when not provided', async () => {
		const { memory, cleanup } = createSqliteMemory();
		cleanups.push(cleanup);

		await memory.saveThread({
			id: 'upsert-t1',
			resourceId: 'user-1',
			title: 'Original Title',
			metadata: { key: 'value' },
		});

		// Upsert without title or metadata (simulates saveMessagesToThread)
		await memory.saveThread({ id: 'upsert-t1', resourceId: 'user-1' });

		const thread = await memory.getThread('upsert-t1');
		expect(thread).not.toBeNull();
		expect(thread!.title).toBe('Original Title');
		expect(thread!.metadata).toEqual({ key: 'value' });
	});

	it('overwrites title and metadata when explicitly provided', async () => {
		const { memory, cleanup } = createSqliteMemory();
		cleanups.push(cleanup);

		await memory.saveThread({
			id: 'upsert-t2',
			resourceId: 'user-1',
			title: 'Old Title',
			metadata: { old: true },
		});

		await memory.saveThread({
			id: 'upsert-t2',
			resourceId: 'user-1',
			title: 'New Title',
			metadata: { new: true },
		});

		const thread = await memory.getThread('upsert-t2');
		expect(thread!.title).toBe('New Title');
		expect(thread!.metadata).toEqual({ new: true });
	});
});

describe('SQLite memory integration', () => {
	it('agent recalls info from previous turn with SqliteMemory', async () => {
		const { memory, cleanup } = createSqliteMemory();
		cleanups.push(cleanup);

		const mem = new Memory().storage(memory).lastMessages(10);
		const agent = new Agent('sqlite-test')
			.model(getModel('anthropic'))
			.instructions('You are a helpful assistant. Be concise.')
			.memory(mem);

		const threadId = `sqlite-${Date.now()}`;
		const options = { persistence: { threadId, resourceId: 'test-user' } };

		await agent.generate('My favorite number is 42. Just acknowledge.', options);
		const result = await agent.generate('What is my favorite number?', options);

		expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('42');
	});

	it('data survives a fresh SqliteMemory instance', async () => {
		const { memory, cleanup, url } = createSqliteMemory();
		cleanups.push(cleanup);

		const mem1 = new Memory().storage(memory).lastMessages(10);
		const agent1 = new Agent('persist-test-1')
			.model(getModel('anthropic'))
			.instructions('You are a helpful assistant. Be concise.')
			.memory(mem1);

		const threadId = `persist-${Date.now()}`;
		const options = { persistence: { threadId, resourceId: 'test-user' } };
		await agent1.generate('My favorite animal is a dolphin. Just acknowledge.', options);

		// New SqliteMemory instance, same file
		const memory2 = new SqliteMemory({ url });
		const mem2 = new Memory().storage(memory2).lastMessages(10);
		const agent2 = new Agent('persist-test-2')
			.model(getModel('anthropic'))
			.instructions('You are a helpful assistant. Be concise.')
			.memory(mem2);

		const result = await agent2.generate('What is my favorite animal?', options);
		expect(findLastTextContent(result.messages)?.toLowerCase()).toContain('dolphin');
	});
});
@@ -1,403 +0,0 @@
import { generateText } from 'ai';
import { expect, it } from 'vitest';

import {
	Agent,
	type AgentDbMessage,
	type BuiltObservationStore,
	type CompactFn,
	createModel,
	Memory,
	type Observation,
	type ObservationCursor,
	OBSERVATION_SCHEMA_VERSION,
	type ObserveFn,
} from '../../../index';
import { InMemoryMemory } from '../../../runtime/memory-store';
import { describeIf, findLastTextContent, getModel } from '../helpers';

const describe = describeIf('anthropic');

const WORKING_MEMORY_TEMPLATE = [
	'# User Memory',
	'- **Location**:',
	'- **Project codename**:',
].join('\n');

type ObservationCycleStore = BuiltObservationStore &
	Pick<InMemoryMemory, 'getWorkingMemory' | 'saveWorkingMemory'>;

function uniqueId(prefix: string): string {
	return `${prefix}-${crypto.randomUUID()}`;
}

function messageText(message: AgentDbMessage): string {
	if (!('content' in message) || !Array.isArray(message.content)) {
		return `${message.type}: ${JSON.stringify(message)}`;
	}

	const text = message.content
		.map((part) => {
			if (part.type === 'text' || part.type === 'reasoning') return part.text;
			if (part.type === 'tool-call') return `[tool:${part.toolName}] ${JSON.stringify(part.input)}`;
			if (part.type === 'invalid-tool-call') return `[invalid-tool:${part.name ?? 'unknown'}]`;
			if (part.type === 'file') return `[file:${part.mediaType ?? 'unknown'}]`;
			if (part.type === 'citation') return `[citation:${part.title ?? part.url ?? 'unknown'}]`;
			if (part.type === 'provider') return JSON.stringify(part.value);
			return '';
		})
		.filter(Boolean)
		.join(' ');

	return `${message.role}: ${text}`;
}

function observationText(observation: Observation): string {
	const payload = observation.payload;
	if (payload !== null && typeof payload === 'object' && !Array.isArray(payload)) {
		const text = (payload as Record<string, unknown>).text;
		if (typeof text === 'string') return text;
	}
	return JSON.stringify(payload);
}

function observeWithModel(model: string): ObserveFn {
	return async ({ deltaMessages, threadId, now }) => {
		const transcript = deltaMessages.map(messageText).join('\n');
		const { text } = await generateText({
			model: createModel(model),
			temperature: 0,
			system: [
				'Extract durable user facts from the transcript.',
				'Return one concise observation sentence.',
				'Preserve exact names, places, and codes.',
				'If there are no durable facts, return NONE.',
			].join(' '),
			prompt: transcript,
		});

		const content = text.trim();
		if (content.toUpperCase() === 'NONE') return [];

		return [
			{
				scopeKind: 'thread',
				scopeId: threadId,
				kind: 'user-fact',
				payload: { text: content },
				durationMs: null,
				schemaVersion: OBSERVATION_SCHEMA_VERSION,
				createdAt: now,
			},
		];
	};
}

function compactWithModel(model: string): CompactFn {
	return async ({ observations, currentWorkingMemory, workingMemoryTemplate }) => {
		const observationList = observations.map((observation) => `- ${observationText(observation)}`);
		const { text } = await generateText({
			model: createModel(model),
			temperature: 0,
			system: [
				'You maintain a concise working-memory document.',
				'Return the complete updated document only.',
				'Preserve exact names, places, and codes.',
			].join(' '),
			prompt: [
				'Template:',
				workingMemoryTemplate,
				'',
				'Current working memory:',
				currentWorkingMemory ?? workingMemoryTemplate,
				'',
				'New observations:',
				observationList.join('\n'),
			].join('\n'),
		});

		return { content: text.trim() };
	};
}

async function runObservationCycleForTest({
	store,
	threadId,
	resourceId,
	model,
}: {
	store: ObservationCycleStore;
	threadId: string;
	resourceId: string;
	model: string;
}): Promise<{
	deltaMessages: AgentDbMessage[];
	cursorAfter: ObservationCursor | null;
}> {
	const handle = await store.acquireObservationLock('thread', threadId, {
		holderId: 'observational-memory-integration-test',
		ttlMs: 30_000,
	});
	expect(handle).not.toBeNull();
	if (!handle) throw new Error('Failed to acquire observation lock');

	try {
		const cursor = await store.getCursor('thread', threadId);
		const deltaMessages = await store.getMessagesForScope('thread', threadId, {
			...(cursor && {
				since: {
					sinceCreatedAt: cursor.lastObservedAt,
					sinceMessageId: cursor.lastObservedMessageId,
				},
			}),
		});
		expect(deltaMessages.length).toBeGreaterThan(0);

		const currentWorkingMemory = await store.getWorkingMemory({
			threadId,
			resourceId,
			scope: 'resource',
		});
		const now = new Date();
		const observedRows = await observeWithModel(model)({
			deltaMessages,
			currentWorkingMemory,
			cursor,
			threadId,
			resourceId,
			now,
			trigger: { type: 'per-turn' },
			gap: null,
			telemetry: undefined,
		});
		const persistedRows = await store.appendObservations(observedRows);
		expect(persistedRows.length).toBeGreaterThan(0);

		const lastMessage = deltaMessages[deltaMessages.length - 1];
		await store.setCursor({
			scopeKind: 'thread',
			scopeId: threadId,
			lastObservedMessageId: lastMessage.id,
			lastObservedAt: lastMessage.createdAt,
			updatedAt: now,
		});

		const queuedRows = await store.getObservations({
			scopeKind: 'thread',
			scopeId: threadId,
			schemaVersionAtMost: OBSERVATION_SCHEMA_VERSION,
		});
		expect(queuedRows.length).toBeGreaterThan(0);

		const compacted = await compactWithModel(model)({
			observations: queuedRows,
			currentWorkingMemory,
			workingMemoryTemplate: WORKING_MEMORY_TEMPLATE,
			structured: false,
			threadId,
			resourceId,
			model,
			compactorPrompt: 'Compact thread-scoped observations into resource-scoped working memory.',
			telemetry: undefined,
		});
		await store.saveWorkingMemory({ threadId, resourceId, scope: 'resource' }, compacted.content);
		await store.deleteObservations(queuedRows.map((row) => row.id));

		const remainingRows = await store.getObservations({
			scopeKind: 'thread',
			scopeId: threadId,
		});
		expect(remainingRows).toHaveLength(0);

		return {
			deltaMessages,
			cursorAfter: await store.getCursor('thread', threadId),
		};
	} finally {
		await store.releaseObservationLock(handle);
	}
}

function createWriterAgent(model: string, store: InMemoryMemory): Agent {
	return new Agent('observational-memory-writer')
		.model(model)
		.instructions('You are a helpful assistant. Acknowledge briefly, and do not repeat user facts.')
		.memory(new Memory().storage(store).lastMessages(10));
}

function createReaderAgent(model: string, store: InMemoryMemory): Agent {
	return new Agent('observational-memory-reader')
		.model(model)
		.instructions('Answer only from working memory. Be concise.')
		.memory(
			new Memory()
				.storage(store)
				.lastMessages(1)
				.scope('resource')
				.freeform(WORKING_MEMORY_TEMPLATE),
		);
}

async function rememberFact(
	agent: Agent,
	fact: string,
	options: { persistence: { threadId: string; resourceId: string } },
) {
	const result = await agent.generate(`${fact} Reply with "noted".`, options);
	expect(result.finishReason).toBe('stop');
	expect(findLastTextContent(result.messages)).toBeTruthy();
}

async function addNeutralTurn(
	agent: Agent,
	options: { persistence: { threadId: string; resourceId: string } },
	forbiddenTerms: string[],
) {
	const result = await agent.generate('Reply only with "ok".', options);
	expect(result.finishReason).toBe('stop');
	const text = findLastTextContent(result.messages)?.toLowerCase() ?? '';
	expect(text).toContain('ok');
	for (const term of forbiddenTerms) {
		expect(text).not.toContain(term);
	}
}

function expectTextToContain(text: string | null | undefined, expectedTerms: string[]) {
	const normalized = text?.toLowerCase() ?? '';
	for (const term of expectedTerms) {
		expect(normalized).toContain(term);
	}
}

describe('observational memory integration', () => {
	it('compacts observed thread facts into resource working memory for another thread', async () => {
		const store = new InMemoryMemory();
		const model = getModel('anthropic');
		const resourceId = uniqueId('obs-resource');
		const sourceThreadId = uniqueId('obs-source');
		const readerThreadId = uniqueId('obs-reader');

		const writer = createWriterAgent(model, store);

		await rememberFact(writer, 'Please remember this for later: I live in Reykjavik.', {
			persistence: { threadId: sourceThreadId, resourceId },
		});

		await runObservationCycleForTest({
			store,
			threadId: sourceThreadId,
			resourceId,
			model,
		});

		const reader = createReaderAgent(model, store);
		const result = await reader.generate('From memory only, where do I live?', {
			persistence: {
				threadId: readerThreadId,
				resourceId,
			},
		});

		expectTextToContain(findLastTextContent(result.messages), ['reykjavik']);
	});

	it('uses compacted working memory inside the observed thread after the fact leaves chat history', async () => {
		const store = new InMemoryMemory();
		const model = getModel('anthropic');
		const resourceId = uniqueId('obs-resource');
		const sourceThreadId = uniqueId('obs-source');
		const options = {
			persistence: { threadId: sourceThreadId, resourceId },
		};

		const writer = createWriterAgent(model, store);

		await rememberFact(
			writer,
			'Please remember this for later: I live in Reykjavik, and my project codename is Aurora-17.',
			options,
		);
		await addNeutralTurn(writer, options, ['reykjavik', 'aurora-17']);

		await runObservationCycleForTest({
			store,
			threadId: sourceThreadId,
			resourceId,
			model,
		});

		const workingMemory = await store.getWorkingMemory({
			threadId: sourceThreadId,
			resourceId,
			scope: 'resource',
		});
		expectTextToContain(workingMemory, ['reykjavik', 'aurora-17']);

		const reader = createReaderAgent(model, store);
		const result = await reader.generate(
			'From memory only, where do I live and what is my project codename?',
			options,
		);

		expectTextToContain(findLastTextContent(result.messages), ['reykjavik', 'aurora-17']);
	});

	it('folds later turns from the same thread into existing working memory', async () => {
		const store = new InMemoryMemory();
		const model = getModel('anthropic');
		const resourceId = uniqueId('obs-resource');
		const sourceThreadId = uniqueId('obs-source');
		const options = {
			persistence: { threadId: sourceThreadId, resourceId },
		};

		const writer = createWriterAgent(model, store);

		await rememberFact(
			writer,
			'Please remember this for later: I live in Reykjavik, and my project codename is Aurora-17.',
			options,
		);
		await addNeutralTurn(writer, options, ['reykjavik', 'aurora-17']);
		const firstCycle = await runObservationCycleForTest({
			store,
			threadId: sourceThreadId,
			resourceId,
			model,
		});

		await rememberFact(writer, 'Also remember that my editor theme is Solarized Dawn.', options);
		await addNeutralTurn(writer, options, ['solarized', 'dawn']);
		const secondCycle = await runObservationCycleForTest({
			store,
			threadId: sourceThreadId,
			resourceId,
			model,
		});

		expect(firstCycle.cursorAfter).not.toBeNull();
		expect(secondCycle.cursorAfter?.lastObservedAt.getTime()).toBeGreaterThan(
			firstCycle.cursorAfter!.lastObservedAt.getTime(),
		);

		const workingMemory = await store.getWorkingMemory({
			threadId: sourceThreadId,
			resourceId,
			scope: 'resource',
		});
		expectTextToContain(workingMemory, ['reykjavik', 'aurora-17', 'solarized dawn']);

		const reader = createReaderAgent(model, store);
		const result = await reader.generate(
			'From memory only, where do I live, what is my project codename, and what is my editor theme?',
			options,
		);

		expectTextToContain(findLastTextContent(result.messages), [
			'reykjavik',
			'aurora-17',
			'solarized',
			'dawn',
		]);
	});
});
@ -6,6 +6,7 @@ import {
|
|||
collectStreamChunks,
|
||||
getModel,
|
||||
chunksOfType,
|
||||
findAllToolResults,
|
||||
collectTextDeltas,
|
||||
} from './helpers';
|
||||
import { Agent, Tool } from '../../index';
|
||||
|
|
@ -42,14 +43,15 @@ describe('multi-tool-calls integration', () => {
|
|||
);
|
||||
|
||||
const chunks = await collectStreamChunks(fullStream);
|
||||
const toolCallResults = chunksOfType(chunks, 'tool-result');
|
||||
const messageChunks = chunksOfType(chunks, 'message');
|
||||
const toolCallResults = findAllToolResults(messageChunks.map((c) => c.message));
|
||||
|
||||
// Should have called the tool multiple times
|
||||
const priceCalls = toolCallResults.filter((tc) => tc.toolName === 'lookup_price');
|
||||
expect(priceCalls.length).toBeGreaterThanOrEqual(2);
|
||||
|
||||
// Each call should have its own correct output (not all pointing to the first result)
|
||||
const outputs = priceCalls.map((tc) => tc.output as { product: string; price: number });
|
||||
const outputs = priceCalls.map((tc) => tc.result as { product: string; price: number });
|
||||
|
||||
// Verify that different products got different prices (index-based merging works)
|
||||
const uniquePrices = new Set(outputs.map((o) => o.price));
|
||||
|
|
@@ -88,7 +90,8 @@ describe('multi-tool-calls integration', () => {
 		const { stream: fullStream } = await agent.stream('What is 3 + 4 and also what is 5 * 6?');

 		const chunks = await collectStreamChunks(fullStream);
-		const toolCallResults = chunksOfType(chunks, 'tool-result');
+		const messageChunks = chunksOfType(chunks, 'message');
+		const toolCallResults = findAllToolResults(messageChunks.map((c) => c.message));

 		const toolCalls = toolCallResults.filter(
 			(tc) => tc.toolName === 'add' || tc.toolName === 'multiply',

@@ -101,8 +104,8 @@ describe('multi-tool-calls integration', () => {
 		expect(addCall).toBeDefined();
 		expect(multiplyCall).toBeDefined();

-		expect((addCall!.output as { result: number }).result).toBe(7);
-		expect((multiplyCall!.output as { result: number }).result).toBe(30);
+		expect((addCall!.result as { result: number }).result).toBe(7);
+		expect((multiplyCall!.result as { result: number }).result).toBe(30);
 	});

 	it('correctly merges results via the run() path', async () => {

@@ -123,14 +126,15 @@ describe('multi-tool-calls integration', () => {
 			'What are the lengths of "hello" and "world"? Look up each one separately.',
 		);
 		const chunks = await collectStreamChunks(fullStream);
-		const toolCallResults = chunksOfType(chunks, 'tool-result');
+		const messageChunks = chunksOfType(chunks, 'message');
+		const toolCallResults = findAllToolResults(messageChunks.map((c) => c.message));

 		const lengthCalls = toolCallResults.filter((tc) => tc.toolName === 'get_length');
 		expect(lengthCalls.length).toBeGreaterThanOrEqual(2);

 		// Each should have correct output
 		for (const call of lengthCalls) {
-			const output = call.output as { text: string; length: number };
+			const output = call.result as { text: string; length: number };
 			expect(output.length).toBe(output.text.length);
 		}
 	});

@@ -28,92 +28,95 @@ describe('orphaned tool messages in memory', () => {
 	}

 	/**
-	 * Seed memory with a conversation that has settled tool-call blocks
-	 * (state: 'resolved') surrounded by plain user/assistant exchanges.
+	 * Seed memory with a conversation that has tool-call / tool-result pairs
+	 * surrounded by plain user/assistant exchanges.
 	 *
-	 * Message layout (indices 0–5):
-	 * 0: user "How many widgets?"
-	 * 1: assistant text + tool-call(call_1, state:'resolved', output:{count:10})
-	 * 2: assistant "There are 10 widgets"
-	 * 3: user "What about gadgets?"
-	 * 4: assistant text + tool-call(call_2, state:'resolved', output:{count:5})
-	 * 5: assistant "There are 5 gadgets"
+	 * Message layout (indices 0–7):
+	 * 0: user "How many widgets?"
+	 * 1: assistant text + tool-call(call_1)
+	 * 2: tool tool-result(call_1)
+	 * 3: assistant "There are 10 widgets"
+	 * 4: user "What about gadgets?"
+	 * 5: assistant text + tool-call(call_2)
+	 * 6: tool tool-result(call_2)
+	 * 7: assistant "There are 5 gadgets"
 	 */
 	function buildSeedMessages(): AgentDbMessage[] {
-		const now = Date.now();
 		return [
 			{
 				id: 'm1',
-				createdAt: new Date(now),
+				createdAt: new Date(),
 				role: 'user',
 				content: [{ type: 'text', text: 'How many widgets do we have?' }],
 			},
 			{
 				id: 'm2',
-				createdAt: new Date(now + 1),
+				createdAt: new Date(),
 				role: 'assistant',
 				content: [
 					{ type: 'text', text: 'Let me look that up.' },
-					{
-						type: 'tool-call',
-						toolCallId: 'call_1',
-						toolName: 'lookup',
-						input: { id: 'widgets' },
-						state: 'resolved',
-						output: { count: 10 },
-					},
+					{ type: 'tool-call', toolCallId: 'call_1', toolName: 'lookup', input: { id: 'widgets' } },
 				],
 			},
 			{
 				id: 'm3',
-				createdAt: new Date(now + 2),
+				createdAt: new Date(),
+				role: 'tool',
+				content: [
+					{ type: 'tool-result', toolCallId: 'call_1', toolName: 'lookup', result: { count: 10 } },
+				],
+			},
+			{
+				id: 'm4',
+				createdAt: new Date(),
 				role: 'assistant',
 				content: [{ type: 'text', text: 'There are 10 widgets in stock.' }],
 			},
 			{
-				id: 'm4',
-				createdAt: new Date(now + 3),
+				id: 'm5',
+				createdAt: new Date(),
 				role: 'user',
 				content: [{ type: 'text', text: 'What about gadgets?' }],
 			},
 			{
-				id: 'm5',
-				createdAt: new Date(now + 4),
+				id: 'm6',
+				createdAt: new Date(),
 				role: 'assistant',
 				content: [
 					{ type: 'text', text: 'Let me check.' },
-					{
-						type: 'tool-call',
-						toolCallId: 'call_2',
-						toolName: 'lookup',
-						input: { id: 'gadgets' },
-						state: 'resolved',
-						output: { count: 5 },
-					},
+					{ type: 'tool-call', toolCallId: 'call_2', toolName: 'lookup', input: { id: 'gadgets' } },
 				],
 			},
 			{
-				id: 'm6',
-				createdAt: new Date(now + 5),
+				id: 'm7',
+				createdAt: new Date(),
+				role: 'tool',
+				content: [
+					{ type: 'tool-result', toolCallId: 'call_2', toolName: 'lookup', result: { count: 5 } },
+				],
+			},
+			{
+				id: 'm8',
+				createdAt: new Date(),
 				role: 'assistant',
 				content: [{ type: 'text', text: 'There are 5 gadgets in stock.' }],
 			},
 		];
 	}

-	it('handles partial history window when earlier messages are truncated', async () => {
+	it('handles orphaned tool results when tool-call message is truncated from history', async () => {
 		const { memory, cleanup } = createSqliteMemory();
 		cleanups.push(cleanup);

 		const threadId = 'thread-orphan-result';

-		// Seed 6 messages into the thread
+		// Seed 8 messages into the thread
 		await memory.saveMessages({ threadId, messages: buildSeedMessages() });

-		// lastMessages=4 → loads messages 2–5
-		// Each tool-call block carries its own result (state:'resolved'), so there
-		// are no orphan issues regardless of window boundaries.
-		const mem = new Memory().storage(memory).lastMessages(4);
+		// lastMessages=6 → loads messages 2–7
+		// Message at index 2 is a tool-result for call_1, but the matching
+		// assistant+tool-call (index 1) is truncated. This is an orphaned tool result.
+		const mem = new Memory().storage(memory).lastMessages(6);

 		const agent = new Agent('orphan-result-test')
 			.model(getModel('anthropic'))

@@ -129,7 +132,7 @@ describe('orphaned tool messages in memory', () => {
 		expect(result.finishReason).toBe('stop');
 	});

-	it('handles pending tool-call blocks (interrupted turn) in history', async () => {
+	it('handles orphaned tool calls when tool-result message is truncated from history', async () => {
 		const { memory, cleanup } = createSqliteMemory();
 		cleanups.push(cleanup);

@@ -137,9 +140,8 @@ describe('orphaned tool messages in memory', () => {
 		const now = Date.now();

 		// Store a conversation where the last saved message is an assistant
-		// with a pending tool-call block (simulating a partial save / interrupted turn).
-		// stripOrphanedToolMessages will drop the pending block so the LLM receives
-		// only the user message.
+		// with a tool-call but the tool-result was never persisted (simulating
+		// a partial save / interrupted turn).
 		const messages: AgentDbMessage[] = [
 			{
 				id: 'm1',

@@ -158,7 +160,6 @@ describe('orphaned tool messages in memory', () => {
 						toolCallId: 'call_orphan',
 						toolName: 'lookup',
 						input: { id: 'widgets' },
-						state: 'pending',
 					},
 				],
 			},

@@ -183,7 +183,7 @@ describe('external abort signal', () => {
 		});

 		expect(result.finishReason).toBe('error');
-		expect(agent.getState().status).toBe('cancelled');
+		expect(result.getState().status).toBe('cancelled');
 	});

 	it('cancels a stream() call via external AbortSignal', async () => {

@@ -55,8 +55,10 @@ describe('provider tools integration', () => {
 		const lastFinish = finishChunks[finishChunks.length - 1];
 		expect(lastFinish?.type === 'finish' && lastFinish.finishReason).toBe('stop');

-		// Tool calls now ride their own discrete `tool-call` chunks
-		const toolCalls = chunksOfType(chunks, 'tool-call');
+		// Collect tool calls from message chunks
+		const messageChunks = chunksOfType(chunks, 'message');
+		const allMessages = messageChunks.map((c) => c.message);
+		const toolCalls = findAllToolCalls(allMessages);
 		const webSearchCall = toolCalls.find((tc) => tc.toolName.includes('web_search'));
 		expect(webSearchCall).toBeDefined();

@@ -102,8 +104,9 @@ describe('provider tools integration', () => {
 		expect(suspended.runId).toBeTruthy();
 		expect(suspended.toolCallId).toBeTruthy();

-		// The web search provider tool call should appear as a discrete tool-call chunk
-		const toolCalls = chunksOfType(chunks, 'tool-call');
+		// The web search provider tool call should appear in the message history
+		const messageChunks = chunksOfType(chunks, 'message');
+		const toolCalls = findAllToolCalls(messageChunks.map((c) => c.message));
 		const webSearchCall = toolCalls.find((tc) => tc.toolName.includes('web_search'));
 		expect(webSearchCall).toBeDefined();

@@ -112,8 +115,8 @@ describe('provider tools integration', () => {
 			'stream',
 			{ approved: true },
 			{
-				runId: suspended.runId,
-				toolCallId: suspended.toolCallId,
+				runId: suspended.runId!,
+				toolCallId: suspended.toolCallId!,
 			},
 		);
 		const resumeChunks = await collectStreamChunks(resumeStream.stream);

@@ -155,8 +155,16 @@ describe('state restore after suspension', () => {
 		const errorChunks = resumedChunks.filter((c) => c.type === 'error');
 		expect(errorChunks).toHaveLength(0);

-		// Stream must contain a discrete tool-result chunk for the resumed call
-		const toolResultChunks = chunksOfType(resumedChunks, 'tool-result');
+		// Stream must contain the tool result message
+		const toolResultChunks = resumedChunks.filter(
+			(c) =>
+				c.type === 'message' &&
+				'message' in c &&
+				'content' in (c.message as object) &&
+				(c.message as { content: Array<{ type: string }> }).content.some(
+					(part) => part.type === 'tool-result',
+				),
+		);
 		expect(toolResultChunks.length).toBeGreaterThan(0);

 		// Stream must end with a finish chunk (not error)

@@ -7,7 +7,7 @@ import { Agent, Tool } from '../../index';
 const describe = describeIf('anthropic');

 describe('stream timing', () => {
-	it('tool-input-delta chunks arrive incrementally (not all buffered)', async () => {
+	it('tool-call-delta chunks arrive incrementally (not all buffered)', async () => {
 		const agent = new Agent('timing-test')
 			.model(getModel('anthropic'))
 			.instructions(

@@ -31,21 +31,16 @@ describe('stream timing', () => {

 		const reader = result.stream.getReader();

-		// Track timestamps of each reader.read() that returns a tool-input-delta
-		// for the set_code tool. We seed `setCodeToolCallId` from the matching
-		// tool-input-start so subsequent deltas can be filtered by toolCallId.
+		// Track timestamps of each reader.read() that returns a tool-call-delta
 		// This measures when the reader YIELDS each chunk, not when the agent enqueues it.
 		const deltaReadTimes: number[] = [];
 		const start = Date.now();
-		let setCodeToolCallId: string | undefined;

 		while (true) {
 			const { done, value } = await reader.read();
 			if (done) break;
 			const chunk = value;
-			if (chunk.type === 'tool-input-start' && chunk.toolName === 'set_code') {
-				setCodeToolCallId = chunk.toolCallId;
-			} else if (chunk.type === 'tool-input-delta' && chunk.toolCallId === setCodeToolCallId) {
+			if (chunk.type === 'tool-call-delta' && (chunk as { name?: string }).name === 'set_code') {
 				deltaReadTimes.push(Date.now() - start);
 			}
 		}

@@ -5,8 +5,10 @@ import {
 	collectStreamChunks,
 	collectTextDeltas,
 	describeIf,
+	findAllToolResults,
 	getModel,
 } from './helpers';
+import type { StreamChunk } from '../../index';
 import { Agent } from '../../index';

 const describe = describeIf('anthropic');

@@ -31,7 +33,10 @@ describe('sub-agent (asTool) integration', () => {

 		const chunks = await collectStreamChunks(fullStream);
 		const text = collectTextDeltas(chunks);
-		const toolResults = chunksOfType(chunks, 'tool-result');
+		const messageChunks = chunksOfType(chunks, 'message') as Array<
+			StreamChunk & { type: 'message' }
+		>;
+		const toolResults = findAllToolResults(messageChunks.map((c) => c.message));

 		// The orchestrator should have called the sub-agent tool
 		expect(toolResults.length).toBeGreaterThan(0);

@@ -39,7 +44,7 @@ describe('sub-agent (asTool) integration', () => {
 		expect(mathCall).toBeDefined();

 		// The output should contain the sub-agent's response
-		expect(mathCall!.output).toBeDefined();
+		expect(mathCall!.result).toBeDefined();

 		// The final text should reference 60
 		expect(text).toBeTruthy();

@@ -75,7 +80,10 @@ describe('sub-agent (asTool) integration', () => {
 			'Translate "hello" to French and then make it uppercase.',
 		);
 		const chunks = await collectStreamChunks(fullStream);
-		const toolResults = chunksOfType(chunks, 'tool-result');
+		const messageChunks = chunksOfType(chunks, 'message') as Array<
+			StreamChunk & { type: 'message' }
+		>;
+		const toolResults = findAllToolResults(messageChunks.map((c) => c.message));

 		// Should have called both tools
 		expect(toolResults.length).toBeGreaterThanOrEqual(2);

@@ -63,12 +63,11 @@ describe('toModelOutput integration', () => {
 		expect(rawOutput.total).toBe(3);
 		expect(rawOutput.records[0].data).toBe('x'.repeat(200));

-		// Tool-call block in messages stores the transformed output (what the LLM saw)
+		// ContentToolResult in messages stores the transformed output (what the LLM saw)
 		const toolResults = findAllToolResults(result.messages);
 		const searchToolResult = toolResults.find((tr) => tr.toolName === 'search_db');
 		expect(searchToolResult).toBeDefined();
-		expect(searchToolResult!.state).toBe('resolved');
-		const modelOutput = (searchToolResult as unknown as { output: { summary: string } }).output;
+		const modelOutput = searchToolResult!.result as { summary: string };
 		expect(modelOutput.summary).toContain('Found 3 records');
 		expect(modelOutput.summary).toContain('Widget A');
 	});

@@ -107,14 +106,15 @@ describe('toModelOutput integration', () => {
 		const { stream } = await agent.stream('Get report RPT-001');
 		const chunks = await collectStreamChunks(stream);

-		// The discrete tool-result chunks in the stream contain the transformed output
-		const toolResults = chunksOfType(chunks, 'tool-result');
+		// The tool result messages in the stream contain the transformed output
+		const messageChunks = chunksOfType(chunks, 'message');
+		const toolResults = findAllToolResults(messageChunks.map((c) => c.message));

 		const reportResult = toolResults.find((tr) => tr.toolName === 'fetch_report');
 		expect(reportResult).toBeDefined();

 		// The model output (transformed) should have the truncated fields
-		const modelOutput = reportResult!.output as { id: string; title: string; pageCount: number };
+		const modelOutput = reportResult!.result as { id: string; title: string; pageCount: number };
 		expect(modelOutput.id).toBe('RPT-001');
 		expect(modelOutput.title).toBe('Q4 Sales Report');
 		expect(modelOutput.pageCount).toBe(42);

@@ -140,14 +140,11 @@ describe('toModelOutput integration', () => {

 		const result = await agent.generate('Echo the message "hello world"');

-		// Without toModelOutput, tool-call block in messages has the raw output
+		// Without toModelOutput, tool result in messages should have the raw output
 		const toolResults = findAllToolResults(result.messages);
 		const echoResult = toolResults.find((tr) => tr.toolName === 'echo');
 		expect(echoResult).toBeDefined();
-		expect(echoResult!.state).toBe('resolved');
-		expect((echoResult as unknown as { output: { echoed: string } }).output.echoed).toBe(
-			'hello world',
-		);
+		expect((echoResult!.result as { echoed: string }).echoed).toBe('hello world');

 		// And toolCalls should also have the same raw output
 		expect(result.toolCalls).toBeDefined();

@@ -199,14 +196,11 @@ describe('toModelOutput integration', () => {
 		expect(multiplyEntry).toBeDefined();
 		expect((multiplyEntry!.output as { result: number }).result).toBe(56);

-		// Tool-call block in messages stores the transformed output for the LLM
+		// Tool result in messages stores the transformed output for the LLM
 		const toolResults = findAllToolResults(result.messages);
 		const multiplyToolResult = toolResults.find((tr) => tr.toolName === 'multiply');
 		expect(multiplyToolResult).toBeDefined();
-		expect(multiplyToolResult!.state).toBe('resolved');
-		const modelOutput = (
-			multiplyToolResult as unknown as { output: { answer: number; note: string } }
-		).output;
+		const modelOutput = multiplyToolResult!.result as { answer: number; note: string };
 		expect(modelOutput.answer).toBe(56);
 		expect(modelOutput.note).toBe('multiplication complete');

@@ -1,222 +0,0 @@
-/**
- * Upsert contract: after a HITL suspend/resume cycle backed by SqliteMemory,
- * the thread must contain exactly ONE assistant message with the tool-call
- * block (no duplicate rows), and that block must have state: 'resolved'.
- *
- * The upsert matters because on resume the runtime calls saveToMemory with
- * turnDelta() which includes the now-resolved assistant message restored from
- * the checkpoint. Without upsert-by-id, a second row would be inserted for
- * the same message, breaking the thread ordering contract.
- *
- * Note: messages with state:'pending' are transient and are NOT written to
- * memory during suspension — they only live in the checkpoint. Memory only
- * receives the final settled state after resume completes.
- */
-import { afterEach, expect, it } from 'vitest';
-import { z } from 'zod';
-
-import { describeIf, createSqliteMemory, getModel } from './helpers';
-import { Agent, filterLlmMessages, Memory, Tool } from '../../index';
-import type { AgentDbMessage } from '../../index';
-import type { ContentToolCall, Message } from '../../types/sdk/message';
-
-const describe = describeIf('anthropic');
-
-describe('tool-call upsert via suspend/resume (SqliteMemory)', () => {
-	const cleanups: Array<() => void> = [];
-
-	afterEach(() => {
-		for (const fn of cleanups) fn();
-		cleanups.length = 0;
-	});
-
-	function extractToolCallBlocks(messages: AgentDbMessage[]): ContentToolCall[] {
-		return filterLlmMessages(messages).flatMap((m) =>
-			m.content.filter((c): c is ContentToolCall => c.type === 'tool-call'),
-		);
-	}
-
-	function buildInterruptibleAgent(memory: ReturnType<typeof createSqliteMemory>['memory']): Agent {
-		const deleteTool = new Tool('delete_file')
-			.description('Delete a file at the given path')
-			.input(z.object({ path: z.string().describe('File path to delete') }))
-			.output(z.object({ deleted: z.boolean(), path: z.string() }))
-			.suspend(z.object({ message: z.string(), severity: z.string() }))
-			.resume(z.object({ approved: z.boolean() }))
-			.handler(async ({ path }, ctx) => {
-				if (!ctx.resumeData) {
-					return await ctx.suspend({ message: `Delete "${path}"?`, severity: 'destructive' });
-				}
-				if (!ctx.resumeData.approved) return { deleted: false, path };
-				return { deleted: true, path };
-			});
-
-		return new Agent('upsert-test-agent')
-			.model(getModel('anthropic'))
-			.instructions(
-				'You are a file manager. When asked to delete a file, use the delete_file tool. Be concise.',
-			)
-			.tool(deleteTool)
-			.memory(new Memory().storage(memory))
-			.checkpoint('memory');
-	}
-
-	it('after resume, thread has exactly one resolved tool-call block (no duplicate rows)', async () => {
-		const { memory, cleanup } = createSqliteMemory();
-		cleanups.push(cleanup);
-
-		const threadId = 'thread-upsert-resolved';
-		const resourceId = 'res-1';
-		const persistence = { threadId, resourceId };
-
-		const agent = buildInterruptibleAgent(memory);
-
-		// Turn 1: trigger the suspend — messages with pending tool-call are
-		// stored in the checkpoint only, NOT in SqliteMemory yet.
-		const suspendResult = await agent.generate('Please delete /tmp/foo.txt', {
-			persistence,
-		});
-
-		expect(suspendResult.finishReason).toBe('tool-calls');
-		expect(suspendResult.pendingSuspend).toBeDefined();
-		const { runId, toolCallId } = suspendResult.pendingSuspend![0];
-
-		// Before resume: no tool-call blocks in memory (pending stays in checkpoint)
-		const msgsBefore = await memory.getMessages(threadId);
-		const blocksBefore = extractToolCallBlocks(msgsBefore);
-		expect(blocksBefore).toHaveLength(0);
-
-		// Turn 2: resume with approval — on completion saveToMemory is called and
-		// the assistant message (now resolved) is written for the first time.
-		const resumeResult = await agent.resume(
-			'generate',
-			{ approved: true },
-			{
-				runId,
-				toolCallId,
-			},
-		);
-
-		expect(resumeResult.finishReason).toBe('stop');
-
-		// After resume: exactly one resolved tool-call block, no duplicate rows
-		const msgsAfter = await memory.getMessages(threadId);
-		const blocksAfter = extractToolCallBlocks(msgsAfter);
-
-		expect(blocksAfter).toHaveLength(1);
-		expect(blocksAfter[0].state).toBe('resolved');
-		expect(blocksAfter[0].toolCallId).toBe(toolCallId);
-		expect((blocksAfter[0] as ContentToolCall & { state: 'resolved' }).output).toMatchObject({
-			deleted: true,
-		});
-
-		// No duplicate assistant messages with tool-call blocks
-		const assistantMsgsWithToolCalls = filterLlmMessages(msgsAfter).filter(
-			(m) => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
-		);
-		expect(assistantMsgsWithToolCalls).toHaveLength(1);
-	});
-
-	it('after resume with denial, thread has exactly one resolved tool-call block', async () => {
-		const { memory, cleanup } = createSqliteMemory();
-		cleanups.push(cleanup);
-
-		const threadId = 'thread-upsert-denied';
-		const resourceId = 'res-2';
-		const persistence = { threadId, resourceId };
-
-		const agent = buildInterruptibleAgent(memory);
-
-		const suspendResult = await agent.generate('Please delete /tmp/bar.txt', {
-			persistence,
-		});
-		expect(suspendResult.finishReason).toBe('tool-calls');
-		const { runId, toolCallId } = suspendResult.pendingSuspend![0];
-
-		// Before resume: no messages in memory
-		const msgsBefore = await memory.getMessages(threadId);
-		expect(extractToolCallBlocks(msgsBefore)).toHaveLength(0);
-
-		const resumeResult = await agent.resume(
-			'generate',
-			{ approved: false },
-			{
-				runId,
-				toolCallId,
-			},
-		);
-		expect(resumeResult.finishReason).toBe('stop');
-
-		const msgsAfter = await memory.getMessages(threadId);
-		const blocksAfter = extractToolCallBlocks(msgsAfter);
-
-		// Tool ran and returned {deleted: false} — still resolved, not rejected
-		expect(blocksAfter).toHaveLength(1);
-		expect(blocksAfter[0].state).toBe('resolved');
-		const output = (blocksAfter[0] as ContentToolCall & { state: 'resolved' }).output;
-		expect(output).toMatchObject({ deleted: false });
-
-		// No duplicate rows
-		const assistantMsgsWithToolCalls = filterLlmMessages(msgsAfter).filter(
-			(m) => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
-		);
-		expect(assistantMsgsWithToolCalls).toHaveLength(1);
-	});
-
-	it('if same thread is resumed twice (re-suspend then resume again), still no duplicate rows', async () => {
-		const { memory, cleanup } = createSqliteMemory();
-		cleanups.push(cleanup);
-
-		const threadId = 'thread-upsert-double';
-		const resourceId = 'res-3';
-		const persistence = { threadId, resourceId };
-
-		// Use a tool that always re-suspends on first call and approves on second
-		let callCount = 0;
-		const confirmTool = new Tool('confirm')
-			.description('Confirm an action')
-			.input(z.object({ action: z.string() }))
-			.output(z.object({ done: z.boolean() }))
-			.suspend(z.object({ question: z.string() }))
-			.resume(z.object({ yes: z.boolean() }))
-			.handler(async ({ action }, ctx) => {
-				callCount++;
-				if (!ctx.resumeData) {
-					return await ctx.suspend({ question: `Confirm: ${action}?` });
-				}
-				return { done: ctx.resumeData.yes };
-			});
-
-		const agent = new Agent('double-upsert-agent')
-			.model(getModel('anthropic'))
-			.instructions('Use confirm tool for every action. Be concise.')
-			.tool(confirmTool)
-			.memory(new Memory().storage(memory))
-			.checkpoint('memory');
-
-		// Turn 1: suspend
-		const r1 = await agent.generate('confirm action: foo', { persistence });
-		expect(r1.finishReason).toBe('tool-calls');
-		const { runId, toolCallId } = r1.pendingSuspend![0];
-
-		// No messages in memory yet
-		expect(await memory.getMessages(threadId)).toHaveLength(0);
-
-		// Resume: completes
-		const r2 = await agent.resume('generate', { yes: true }, { runId, toolCallId });
-		expect(r2.finishReason).toBe('stop');
-
-		const finalMessages = await memory.getMessages(threadId);
-		const toolCallBlocks = extractToolCallBlocks(finalMessages);
-
-		// Exactly one tool-call block, no duplicates
-		expect(toolCallBlocks).toHaveLength(1);
-		expect(toolCallBlocks[0].state).toBe('resolved');
-
-		// And the assistant message with the tool-call appears exactly once
-		const assistantMsgsWithCalls = filterLlmMessages(finalMessages).filter(
-			(m): m is Message => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
-		);
-		expect(assistantMsgsWithCalls).toHaveLength(1);
-	});
-});

@@ -5,6 +5,7 @@ import {
 	collectStreamChunks,
 	chunksOfType,
 	collectTextDeltas,
+	findAllToolResults,
 	createAgentWithAlwaysErrorTool,
 	createAgentWithFlakyTool,
 } from './helpers';

@@ -54,20 +55,20 @@ describe('tool error handling integration', () => {
 		expect(mentionsFailure).toBe(true);
 	});

-	it('error tool-result appears in the stream', async () => {
+	it('error tool-result appears in the message list', async () => {
 		const agent = createAgentWithAlwaysErrorTool('anthropic');

 		const { stream } = await agent.stream('Fetch the data for id "abc123".');
 		const chunks = await collectStreamChunks(stream);

-		// There should be a discrete tool-result chunk for the failed call
-		const toolResults = chunksOfType(chunks, 'tool-result');
+		// There should be a tool-result message in the stream
+		const messageChunks = chunksOfType(chunks, 'message');
+		const toolResults = findAllToolResults(messageChunks.map((c) => c.message));

 		// The tool should have been called and produced a result (even if it errored)
 		expect(toolResults.length).toBeGreaterThan(0);
 		const brokenResult = toolResults.find((r) => r.toolName === 'broken_tool');
 		expect(brokenResult).toBeDefined();
 		expect(brokenResult!.isError).toBe(true);
 	});

 	it('LLM can self-correct by retrying a flaky tool', async () => {

@@ -8,7 +8,7 @@ import {
 	createAgentWithMixedTools,
 	createAgentWithParallelInterruptibleCalls,
 } from './helpers';
-import type { StreamChunk } from '../../index';
+import { isLlmMessage, type StreamChunk } from '../../index';

 const describe = describeIf('anthropic');

@@ -36,8 +36,13 @@ describe('tool interrupt integration', () => {
 		);

 		// No tool-result should appear (tool is suspended)
-		const toolResultChunks = chunksOfType(chunks, 'tool-result');
-		expect(toolResultChunks).toHaveLength(0);
+		const contentChunks = chunks.filter(
+			(c) =>
+				c.type === 'message' &&
+				'content' in c &&
+				(c.content as { type: string }).type === 'tool-result',
+		);
+		expect(contentChunks).toHaveLength(0);
 	});

 	it('resumes the stream after resume with approval', async () => {

@@ -53,14 +58,19 @@ describe('tool interrupt integration', () => {
 		const resumedStream = await agent.resume(
 			'stream',
 			{ approved: true },
-			{ runId: suspended.runId, toolCallId: suspended.toolCallId },
+			{ runId: suspended.runId!, toolCallId: suspended.toolCallId! },
 		);

 		const resumedChunks = await collectStreamChunks(resumedStream.stream);
 		const resumedTypes = resumedChunks.map((c) => c.type);

-		// After approval, a discrete tool-result chunk should appear
-		const toolResultChunks = chunksOfType(resumedChunks, 'tool-result');
+		// After approval, tool-result should appear as content chunk
+		const toolResultChunks = resumedChunks.filter(
+			(c) =>
+				c.type === 'message' &&
+				isLlmMessage(c.message) &&
+				c.message.content.some((c) => c.type === 'tool-result'),
+		);
 		expect(toolResultChunks.length).toBeGreaterThan(0);

 		expect(resumedTypes).toContain('text-delta');

@@ -79,7 +89,7 @@ describe('tool interrupt integration', () => {
 		const resumedStream = await agent.resume(
 			'stream',
 			{ approved: false },
-			{ runId: suspended.runId, toolCallId: suspended.toolCallId },
+			{ runId: suspended.runId!, toolCallId: suspended.toolCallId! },
 		);

 		const resumedChunks = await collectStreamChunks(resumedStream.stream);

@@ -109,7 +119,7 @@ describe('tool interrupt integration', () => {
 		const stream2 = await agent.resume(
 			'stream',
 			{ approved: true },
-			{ runId: suspended1.runId, toolCallId: suspended1.toolCallId },
+			{ runId: suspended1.runId!, toolCallId: suspended1.toolCallId! },
 		);

 		const chunks2 = await collectStreamChunks(stream2.stream);

@@ -126,7 +136,7 @@ describe('tool interrupt integration', () => {
 		const stream3 = await agent.resume(
 			'stream',
 			{ approved: true },
-			{ runId: suspended2.runId, toolCallId: suspended2.toolCallId },
+			{ runId: suspended2.runId!, toolCallId: suspended2.toolCallId! },
 		);

 		const chunks3 = await collectStreamChunks(stream3.stream);

@ -152,8 +162,13 @@ describe('tool interrupt integration', () => {
|
|||
|
||||
const chunks = await collectStreamChunks(fullStream);
|
||||
|
||||
// list_files should auto-execute — its result should appear as a discrete tool-result chunk
|
||||
const toolResultChunks = chunksOfType(chunks, 'tool-result');
|
||||
// list_files should auto-execute — its result should appear as content
|
||||
const toolResultChunks = chunks.filter(
|
||||
(c) =>
|
||||
c.type === 'message' &&
|
||||
isLlmMessage(c.message) &&
|
||||
c.message.content.some((c) => c.type === 'tool-result'),
|
||||
);
|
||||
expect(toolResultChunks.length).toBeGreaterThan(0);
|
||||
|
||||
// delete_file should be suspended
|
||||
|
|
|
|||
|
|
```diff
@@ -69,10 +69,7 @@ describe('workspace agent integration', () => {
 
 		const readResult = toolResults.find((tr) => tr.toolName === 'workspace_read_file');
 		expect(readResult).toBeDefined();
-		expect(readResult!.state).toBe('resolved');
-		expect((readResult as unknown as { output: { content: string } }).output.content).toContain(
-			'Hello from n8n!',
-		);
+		expect((readResult!.result as { content: string }).content).toContain('Hello from n8n!');
 
 		expect(memFs.getFileContent('/greeting.txt')).toBe('Hello from n8n!');
 	});
 
@@ -106,8 +103,7 @@ describe('workspace agent integration', () => {
 		const toolResults = findAllToolResults(result.messages);
 		const execResult = toolResults.find((tr) => tr.toolName === 'workspace_execute_command');
 		expect(execResult).toBeDefined();
-		expect(execResult!.state).toBe('resolved');
-		expect((execResult as unknown as { output: { success: boolean } }).output.success).toBe(true);
+		expect((execResult!.result as { success: boolean }).success).toBe(true);
 	});
 
 	it('agent uses workspace_mkdir and workspace_list_files together', async () => {
 
@@ -134,8 +130,7 @@ describe('workspace agent integration', () => {
 		const toolResults = findAllToolResults(result.messages);
 		const listResult = toolResults.find((tr) => tr.toolName === 'workspace_list_files');
 		expect(listResult).toBeDefined();
-		expect(listResult!.state).toBe('resolved');
-		const entries = (listResult as unknown as { output: { entries: FileEntry[] } }).output.entries;
+		const entries = (listResult!.result as unknown as { entries: FileEntry[] }).entries;
 		const names = entries.map((e) => e.name);
 		expect(names).toContain('index.ts');
 		expect(names).toContain('README.md');
 
@@ -206,8 +201,7 @@ describe('workspace agent integration', () => {
 		const toolResults = findAllToolResults(result.messages);
 		const statResult = toolResults.find((tr) => tr.toolName === 'workspace_file_stat');
 		expect(statResult).toBeDefined();
-		expect(statResult!.state).toBe('resolved');
-		const stat = (statResult as unknown as { output: { type: string; size: number } }).output;
+		const stat = statResult!.result as { type: string; size: number };
 		expect(stat.type).toBe('file');
 		expect(stat.size).toBe(29);
 	});
 
@@ -239,10 +233,7 @@ describe('workspace agent integration', () => {
 
 		const readResult = toolResults.find((tr) => tr.toolName === 'workspace_read_file');
 		expect(readResult).toBeDefined();
-		expect(readResult!.state).toBe('resolved');
-		expect((readResult as unknown as { output: { content: string } }).output.content).toContain(
-			'export default {}',
-		);
+		expect((readResult!.result as { content: string }).content).toContain('export default {}');
 
 		expect(memFs.getFileContent('/app/config.ts')).toBe('export default {}');
 	});
```
```diff
@@ -45,12 +45,12 @@ describe('Zod validation errors surface to LLM and allow self-correction', () =>
 		expect(result.finishReason).toBe('stop');
 		expect(result.error).toBeUndefined();
 
-		// At least two tool-call messages: one rejected, one resolved
+		// At least two tool-result messages: one error, one success
 		const allMessages = filterLlmMessages(result.messages);
-		const toolCallMessages = allMessages.filter((m) =>
-			m.content.some((c) => c.type === 'tool-call'),
+		const toolResultMessages = allMessages.filter((m) =>
+			m.content.some((c) => c.type === 'tool-result'),
 		);
-		expect(toolCallMessages.length).toBeGreaterThanOrEqual(2);
+		expect(toolResultMessages.length).toBeGreaterThanOrEqual(2);
 
 		// The final response should mention a user (age 25 or similar)
 		const text = findLastTextContent(result.messages);
```
```diff
@@ -1,201 +0,0 @@
-const mockExporterConfigs: unknown[] = [];
-const mockBatchProcessorInputs: unknown[] = [];
-const mockBatchProcessorInstances: Array<{
-	forceFlush: jest.Mock<Promise<void>, []>;
-	onStart: jest.Mock<void, [unknown, unknown]>;
-	onEnd: jest.Mock<void, [unknown]>;
-	shutdown: jest.Mock<Promise<void>, []>;
-}> = [];
-const mockProviderConfigs: unknown[] = [];
-const mockAwaitPendingTraceBatches = jest.fn(async () => await Promise.resolve());
-const mockTracer = { startSpan: jest.fn() };
-const mockProvider = {
-	getTracer: jest.fn(() => mockTracer),
-	register: jest.fn(),
-	forceFlush: jest.fn(),
-	shutdown: jest.fn(),
-};
-
-jest.mock('langsmith/experimental/otel/exporter', () => ({
-	LangSmithOTLPTraceExporter: jest.fn((config: unknown) => {
-		mockExporterConfigs.push(config);
-		return { type: 'exporter' };
-	}),
-}));
-
-jest.mock('@opentelemetry/sdk-trace-base', () => ({
-	BatchSpanProcessor: jest.fn((exporter: unknown) => {
-		mockBatchProcessorInputs.push(exporter);
-		const processor = {
-			forceFlush: jest.fn(async () => await Promise.resolve()),
-			onStart: jest.fn(),
-			onEnd: jest.fn(),
-			shutdown: jest.fn(async () => await Promise.resolve()),
-		};
-		mockBatchProcessorInstances.push(processor);
-		return processor;
-	}),
-}));
-
-jest.mock('langsmith', () => ({
-	RunTree: {
-		getSharedClient: jest.fn(() => ({
-			awaitPendingTraceBatches: mockAwaitPendingTraceBatches,
-		})),
-	},
-}));
-
-jest.mock('@opentelemetry/sdk-trace-node', () => ({
-	NodeTracerProvider: jest.fn((config: unknown) => {
-		mockProviderConfigs.push(config);
-		return mockProvider;
-	}),
-}));
-
-import { LangSmithTelemetry } from '../integrations/langsmith';
-
-describe('LangSmithTelemetry', () => {
-	const previousTracingV2 = process.env.LANGCHAIN_TRACING_V2;
-
-	beforeEach(() => {
-		mockExporterConfigs.length = 0;
-		mockBatchProcessorInputs.length = 0;
-		mockBatchProcessorInstances.length = 0;
-		mockProviderConfigs.length = 0;
-		mockAwaitPendingTraceBatches.mockClear();
-		mockProvider.getTracer.mockClear();
-		mockProvider.register.mockClear();
-		mockProvider.forceFlush.mockClear();
-		mockProvider.shutdown.mockClear();
-		delete process.env.LANGCHAIN_TRACING_V2;
-	});
-
-	afterAll(() => {
-		if (previousTracingV2 === undefined) {
-			delete process.env.LANGCHAIN_TRACING_V2;
-		} else {
-			process.env.LANGCHAIN_TRACING_V2 = previousTracingV2;
-		}
-	});
-
-	it('passes proxy headers and derived OTLP URL to the LangSmith exporter', async () => {
-		const transformExportedSpan = (span: unknown) => span;
-		const getHeaders = jest.fn(async () => {
-			await Promise.resolve();
-			return { Authorization: 'Bearer proxy-token' } satisfies Record<string, string>;
-		});
-		const built = await new LangSmithTelemetry({
-			apiKey: '-',
-			project: 'instance-ai',
-			endpoint: 'https://ai-proxy.test/langsmith',
-			headers: getHeaders,
-			transformExportedSpan,
-		}).build();
-
-		expect(getHeaders).toHaveBeenCalledTimes(1);
-		expect(mockExporterConfigs).toEqual([
-			{
-				apiKey: '-',
-				projectName: 'instance-ai',
-				headers: { Authorization: 'Bearer proxy-token' },
-				transformExportedSpan,
-				url: 'https://ai-proxy.test/langsmith/otel/v1/traces',
-			},
-		]);
-		expect(mockBatchProcessorInputs).toEqual([{ type: 'exporter' }]);
-		expect(mockProviderConfigs).toHaveLength(1);
-		const providerConfig = mockProviderConfigs[0] as { spanProcessors: unknown[] };
-		expect(providerConfig.spanProcessors).toHaveLength(1);
-		const spanProcessor = providerConfig.spanProcessors[0] as Record<string, unknown>;
-		expect(typeof spanProcessor.forceFlush).toBe('function');
-		expect(typeof spanProcessor.onStart).toBe('function');
-		expect(typeof spanProcessor.onEnd).toBe('function');
-		expect(typeof spanProcessor.shutdown).toBe('function');
-		expect(mockProvider.register).toHaveBeenCalledWith({ propagator: null });
-		expect(mockProvider.getTracer).toHaveBeenCalledWith('@n8n/agents');
-		expect(built.tracer).toBe(mockTracer);
-		expect(built.provider).toBe(mockProvider);
-		expect(process.env.LANGCHAIN_TRACING_V2).toBe('true');
-	});
-
-	it('does not allow endpoint overrides when using an engine-resolved key', async () => {
-		const telemetry = new LangSmithTelemetry({
-			project: 'instance-ai',
-			endpoint: 'https://should-not-be-used.test',
-		});
-		telemetry.resolvedApiKey = 'resolved-key';
-
-		await telemetry.build();
-
-		expect(mockExporterConfigs).toEqual([
-			{
-				apiKey: 'resolved-key',
-				projectName: 'instance-ai',
-			},
-		]);
-	});
-
-	it('filters noisy AI SDK operation wrappers while preserving provider and tool spans', async () => {
-		await new LangSmithTelemetry({
-			apiKey: 'ls-test-key',
-			project: 'instance-ai',
-		}).build();
-
-		const processor = mockProviderConfigs[0] as {
-			spanProcessors: Array<{
-				onStart(span: unknown, parentContext: unknown): void;
-				onEnd(span: unknown): void;
-			}>;
-		};
-		const filteredProcessor = processor.spanProcessors[0];
-		const delegate = mockBatchProcessorInstances[0];
-		const makeSpan = (
-			spanId: string,
-			attributes: Record<string, unknown>,
-			parentSpanId?: string,
-		) => ({
-			attributes,
-			spanContext: () => ({ traceId: 'trace-1', spanId }),
-			...(parentSpanId ? { parentSpanContext: { spanId: parentSpanId } } : {}),
-		});
-
-		const root = makeSpan('1111111111111111', { 'langsmith.traceable': 'true' });
-		const streamWrapper = makeSpan(
-			'2222222222222222',
-			{ 'ai.operationId': 'ai.streamText' },
-			'1111111111111111',
-		);
-		const providerRequest = makeSpan(
-			'3333333333333333',
-			{ 'ai.operationId': 'ai.streamText.doStream' },
-			'2222222222222222',
-		);
-		const toolCall = makeSpan(
-			'4444444444444444',
-			{ 'ai.operationId': 'ai.toolCall' },
-			'2222222222222222',
-		);
-
-		filteredProcessor.onStart(root, {});
-		filteredProcessor.onStart(streamWrapper, {});
-		filteredProcessor.onStart(providerRequest, {});
-		filteredProcessor.onStart(toolCall, {});
-		filteredProcessor.onEnd(toolCall);
-		filteredProcessor.onEnd(providerRequest);
-		filteredProcessor.onEnd(streamWrapper);
-		filteredProcessor.onEnd(root);
-
-		expect(delegate.onStart).toHaveBeenCalledTimes(3);
-		expect(delegate.onStart).toHaveBeenNthCalledWith(1, root, {});
-		expect(delegate.onStart).toHaveBeenNthCalledWith(2, providerRequest, {});
-		expect(delegate.onStart).toHaveBeenNthCalledWith(3, toolCall, {});
-		expect(providerRequest.attributes).toEqual(
-			expect.objectContaining({
-				'langsmith.span.parent_id': '00000000-0000-0000-1111-111111111111',
-				'langsmith.traceable_parent_otel_span_id': '1111111111111111',
-			}),
-		);
-		expect(delegate.onEnd).toHaveBeenCalledTimes(3);
-		expect(delegate.onEnd).not.toHaveBeenCalledWith(streamWrapper);
-	});
-});
```
```diff
@@ -1,28 +0,0 @@
-import type {
-	BuiltMemory,
-	MemoryConfig,
-	ObservationCapableMemory,
-	ObservationalMemoryConfig,
-} from '../types';
-
-type AssertMemoryConfig<T extends MemoryConfig> = T;
-
-type PlainMemoryConfig = AssertMemoryConfig<{
-	memory: BuiltMemory;
-	lastMessages: 10;
-}>;
-
-type ObservationCapableMemoryConfig = AssertMemoryConfig<{
-	memory: ObservationCapableMemory;
-	lastMessages: 10;
-	observationalMemory: ObservationalMemoryConfig;
-}>;
-
-// @ts-expect-error Observational memory requires a backend that also implements BuiltObservationStore.
-type InvalidObservationalMemoryConfig = AssertMemoryConfig<{
-	memory: BuiltMemory;
-	lastMessages: 10;
-	observationalMemory: ObservationalMemoryConfig;
-}>;
-
-export type { InvalidObservationalMemoryConfig, ObservationCapableMemoryConfig, PlainMemoryConfig };
```
```diff
@@ -1,11 +1,6 @@
-import { isLlmMessage } from '../../sdk/message';
-import type {
-	AgentDbMessage,
-	AgentMessage,
-	ContentToolCall,
-	Message,
-} from '../../types/sdk/message';
-import { AgentMessageList } from '../message-list';
+import { AgentMessageList } from '../runtime/message-list';
+import { isLlmMessage } from '../sdk/message';
+import type { AgentDbMessage, AgentMessage, Message } from '../types/sdk/message';
 
 function makeUserMsg(text: string): AgentMessage {
 	return { role: 'user', content: [{ type: 'text', text }] };
 
@@ -179,118 +174,3 @@ describe('AgentMessageList — deserialize', () => {
 		expect(newMsg.createdAt.getTime()).toBeGreaterThan(futureTs.getTime());
 	});
 });
-
-// ---------------------------------------------------------------------------
-// setToolCallResult / setToolCallError
-// ---------------------------------------------------------------------------
-
-function makePendingToolCallMsg(toolCallId: string): AgentMessage {
-	return {
-		role: 'assistant',
-		content: [
-			{
-				type: 'tool-call',
-				toolCallId,
-				toolName: 'my_tool',
-				input: { x: 1 },
-				state: 'pending',
-			},
-		],
-	};
-}
-
-describe('AgentMessageList — setToolCallResult', () => {
-	it('sets state and output on the matching tool-call block', () => {
-		const list = new AgentMessageList();
-		list.addResponse([makePendingToolCallMsg('id-1')]);
-
-		const host = list.setToolCallResult('id-1', { ok: true });
-		expect(host).toBeDefined();
-
-		const block = (host as Message).content.find((c) => c.type === 'tool-call') as ContentToolCall;
-		expect(block.state).toBe('resolved');
-		expect((block as ContentToolCall & { state: 'resolved' }).output).toEqual({ ok: true });
-	});
-
-	it('promotes a history-only message into responseDelta after setToolCallResult', () => {
-		const list = new AgentMessageList();
-		const histMsg: AgentDbMessage = {
-			id: 'hist-1',
-			createdAt: new Date('2024-01-01T00:00:01.000Z'),
-			role: 'assistant',
-			content: [
-				{
-					type: 'tool-call',
-					toolCallId: 'tc-hist',
-					toolName: 'my_tool',
-					input: {},
-					state: 'pending',
-				},
-			],
-		};
-		list.addHistory([histMsg]);
-
-		// Before: not in responseDelta (history only)
-		expect(list.responseDelta()).toHaveLength(0);
-
-		list.setToolCallResult('tc-hist', { done: true });
-
-		// After: promoted to responseDelta
-		const delta = list.responseDelta();
-		expect(delta).toHaveLength(1);
-		const block = (delta[0] as Message).content.find(
-			(c) => c.type === 'tool-call',
-		) as ContentToolCall;
-		expect(block.state).toBe('resolved');
-	});
-
-	it('is a no-op when toolCallId is unknown', () => {
-		const list = new AgentMessageList();
-		list.addResponse([makePendingToolCallMsg('id-1')]);
-
-		const result = list.setToolCallResult('unknown-id', { x: 1 });
-		expect(result).toBeUndefined();
-		// List unchanged
-		expect(list.responseDelta()).toHaveLength(1);
-	});
-
-	it('Set semantics make repeated calls idempotent (no duplicate messages)', () => {
-		const list = new AgentMessageList();
-		list.addResponse([makePendingToolCallMsg('id-1')]);
-
-		list.setToolCallResult('id-1', { ok: true });
-		list.setToolCallResult('id-1', { ok: true });
-
-		expect(list.responseDelta()).toHaveLength(1);
-	});
-});
-
-describe('AgentMessageList — setToolCallError', () => {
-	it('stringifies errors and clears any prior output', () => {
-		const list = new AgentMessageList();
-		list.addResponse([
-			{
-				role: 'assistant',
-				content: [
-					{
-						type: 'tool-call',
-						toolCallId: 'id-1',
-						toolName: 'my_tool',
-						input: {},
-						state: 'resolved',
-						output: { prev: true },
-					},
-				],
-			},
-		]);
-
-		const host = list.setToolCallError('id-1', new Error('boom'));
-		expect(host).toBeDefined();
-
-		const block = (host as Message).content.find((c) => c.type === 'tool-call') as ContentToolCall;
-		expect(block.state).toBe('rejected');
-		expect((block as ContentToolCall & { state: 'rejected' }).error).toBe('Error: boom');
-		// output should be gone
-		expect((block as unknown as { output?: unknown }).output).toBeUndefined();
-	});
-});
```
```diff
@@ -1,5 +1,5 @@
-import type { AgentMessage } from '../../types/sdk/message';
-import { getCreatedAt } from '../message';
+import { getCreatedAt } from '../sdk/message';
+import type { AgentMessage } from '../types/sdk/message';
 
 function userMessage(partial: Partial<AgentMessage> & { createdAt?: unknown }): AgentMessage {
 	return partial as AgentMessage;
```
packages/@n8n/agents/src/__tests__/model-factory.test.ts (new file, 133 lines)
```diff
@@ -0,0 +1,133 @@
+import type { LanguageModel } from 'ai';
+
+import { createModel } from '../runtime/model-factory';
+
+type ProviderOpts = {
+	apiKey?: string;
+	baseURL?: string;
+	fetch?: typeof globalThis.fetch;
+	headers?: Record<string, string>;
+};
+
+jest.mock('@ai-sdk/anthropic', () => ({
+	createAnthropic: (opts?: ProviderOpts) => (model: string) => ({
+		provider: 'anthropic',
+		modelId: model,
+		apiKey: opts?.apiKey,
+		baseURL: opts?.baseURL,
+		fetch: opts?.fetch,
+		headers: opts?.headers,
+		specificationVersion: 'v3',
+	}),
+}));
+
+jest.mock('@ai-sdk/openai', () => ({
+	createOpenAI: (opts?: ProviderOpts) => (model: string) => ({
+		provider: 'openai',
+		modelId: model,
+		apiKey: opts?.apiKey,
+		baseURL: opts?.baseURL,
+		fetch: opts?.fetch,
+		headers: opts?.headers,
+		specificationVersion: 'v3',
+	}),
+}));
+
+const mockProxyAgent = jest.fn();
+jest.mock('undici', () => ({
+	ProxyAgent: mockProxyAgent,
+}));
+
+describe('createModel', () => {
+	const originalEnv = process.env;
+
+	beforeEach(() => {
+		process.env = { ...originalEnv };
+		delete process.env.HTTPS_PROXY;
+		delete process.env.HTTP_PROXY;
+		mockProxyAgent.mockClear();
+	});
+
+	afterAll(() => {
+		process.env = originalEnv;
+	});
+
+	it('should accept a string config', () => {
+		const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
+		expect(model.provider).toBe('anthropic');
+		expect(model.modelId).toBe('claude-sonnet-4-5');
+	});
+
+	it('should accept an object config with url', () => {
+		const model = createModel({
+			id: 'openai/gpt-4o',
+			apiKey: 'sk-test',
+			url: 'https://custom.endpoint.com/v1',
+		}) as unknown as Record<string, unknown>;
+		expect(model.provider).toBe('openai');
+		expect(model.modelId).toBe('gpt-4o');
+		expect(model.apiKey).toBe('sk-test');
+		expect(model.baseURL).toBe('https://custom.endpoint.com/v1');
+	});
+
+	it('should pass through a prebuilt LanguageModel', () => {
+		const prebuilt = {
+			doGenerate: jest.fn(),
+			doStream: jest.fn(),
+			specificationVersion: 'v2' as const,
+			modelId: 'custom-model',
+			provider: 'custom',
+			defaultObjectGenerationMode: undefined,
+		} as unknown as LanguageModel;
+
+		const result = createModel(prebuilt);
+		expect(result).toBe(prebuilt);
+	});
+
+	it('should handle model IDs with multiple slashes', () => {
+		const model = createModel('openai/ft:gpt-4o:my-org:custom:abc123') as unknown as Record<
+			string,
+			unknown
+		>;
+		expect(model.provider).toBe('openai');
+		expect(model.modelId).toBe('ft:gpt-4o:my-org:custom:abc123');
+	});
+
+	it('should not pass fetch when no proxy env vars are set', () => {
+		const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
+		expect(model.fetch).toBeUndefined();
+	});
+
+	it('should pass proxy-aware fetch when HTTPS_PROXY is set', () => {
+		process.env.HTTPS_PROXY = 'http://proxy:8080';
+		const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
+		expect(model.fetch).toBeInstanceOf(Function);
+		expect(mockProxyAgent).toHaveBeenCalledWith('http://proxy:8080');
+	});
+
+	it('should pass proxy-aware fetch when HTTP_PROXY is set', () => {
+		process.env.HTTP_PROXY = 'http://proxy:9090';
+		const model = createModel('openai/gpt-4o') as unknown as Record<string, unknown>;
+		expect(model.fetch).toBeInstanceOf(Function);
+		expect(mockProxyAgent).toHaveBeenCalledWith('http://proxy:9090');
+	});
+
+	it('should forward custom headers to the provider factory', () => {
+		const model = createModel({
+			id: 'anthropic/claude-sonnet-4-5',
+			apiKey: 'sk-test',
+			headers: { 'x-proxy-auth': 'Bearer abc', 'anthropic-beta': 'tools-2024' },
+		}) as unknown as Record<string, unknown>;
+		expect(model.headers).toEqual({
+			'x-proxy-auth': 'Bearer abc',
+			'anthropic-beta': 'tools-2024',
+		});
+	});
+
+	it('should prefer HTTPS_PROXY over HTTP_PROXY', () => {
+		process.env.HTTPS_PROXY = 'http://https-proxy:8080';
+		process.env.HTTP_PROXY = 'http://http-proxy:9090';
+		createModel('anthropic/claude-sonnet-4-5');
+		expect(mockProxyAgent).toHaveBeenCalledWith('http://https-proxy:8080');
+	});
+});
```
```diff
@@ -578,7 +578,7 @@ describe('SqliteMemory — queryEmbeddings', () => {
 describe('SqliteMemory — namespace', () => {
 	it('rejects invalid namespace characters', () => {
 		expect(() => new SqliteMemory({ url: 'file::memory:', namespace: 'bad-ns!' })).toThrow(
-			/invalid_string/,
+			/Invalid namespace/,
 		);
 	});
```
```diff
@@ -0,0 +1,150 @@
+import { stripOrphanedToolMessages } from '../runtime/strip-orphaned-tool-messages';
+import type { AgentMessage, Message } from '../types/sdk/message';
+
+describe('stripOrphanedToolMessages', () => {
+	it('returns messages unchanged when all tool pairs are complete', () => {
+		const messages: AgentMessage[] = [
+			{ role: 'user', content: [{ type: 'text', text: 'Hello' }] },
+			{
+				role: 'assistant',
+				content: [
+					{ type: 'text', text: 'Looking up...' },
+					{ type: 'tool-call', toolCallId: 'c1', toolName: 'lookup', input: {} },
+				],
+			},
+			{
+				role: 'tool',
+				content: [{ type: 'tool-result', toolCallId: 'c1', toolName: 'lookup', result: 42 }],
+			},
+			{ role: 'assistant', content: [{ type: 'text', text: 'Done.' }] },
+		];
+
+		const result = stripOrphanedToolMessages(messages);
+		expect(result).toBe(messages);
+	});
+
+	it('strips orphaned tool-result when matching tool-call is missing', () => {
+		const messages: AgentMessage[] = [
+			{
+				role: 'tool',
+				content: [{ type: 'tool-result', toolCallId: 'c1', toolName: 'lookup', result: 42 }],
+			},
+			{ role: 'assistant', content: [{ type: 'text', text: 'There are 42.' }] },
+			{ role: 'user', content: [{ type: 'text', text: 'Thanks' }] },
+		];
+
+		const result = stripOrphanedToolMessages(messages) as Message[];
+
+		expect(result).toHaveLength(2);
+		expect(result[0].role).toBe('assistant');
+		expect(result[1].role).toBe('user');
+	});
+
+	it('strips orphaned tool-call when matching tool-result is missing', () => {
+		const messages: AgentMessage[] = [
+			{ role: 'user', content: [{ type: 'text', text: 'Check it' }] },
+			{
+				role: 'assistant',
+				content: [
+					{ type: 'text', text: 'Checking...' },
+					{ type: 'tool-call', toolCallId: 'c1', toolName: 'lookup', input: {} },
+				],
+			},
+		];
+
+		const result = stripOrphanedToolMessages(messages) as Message[];
+
+		expect(result).toHaveLength(2);
+		const assistantMsg = result[1];
+		expect(assistantMsg.role).toBe('assistant');
+		expect(assistantMsg.content).toHaveLength(1);
+		expect(assistantMsg.content[0].type).toBe('text');
+	});
+
+	it('drops assistant message entirely if it only contained an orphaned tool-call', () => {
+		const messages: AgentMessage[] = [
+			{ role: 'user', content: [{ type: 'text', text: 'Do it' }] },
+			{
+				role: 'assistant',
+				content: [{ type: 'tool-call', toolCallId: 'c1', toolName: 'action', input: {} }],
+			},
+		];
+
+		const result = stripOrphanedToolMessages(messages) as Message[];
+
+		expect(result).toHaveLength(1);
+		expect(result[0].role).toBe('user');
+	});
+
+	it('handles mixed scenario: one complete pair and one orphaned result', () => {
+		const messages: AgentMessage[] = [
+			{
+				role: 'tool',
+				content: [
+					{ type: 'tool-result', toolCallId: 'orphan', toolName: 'lookup', result: 'stale' },
+				],
+			},
+			{ role: 'assistant', content: [{ type: 'text', text: 'Old result' }] },
+			{ role: 'user', content: [{ type: 'text', text: 'New question' }] },
+			{
+				role: 'assistant',
+				content: [
+					{ type: 'text', text: 'Looking up...' },
+					{ type: 'tool-call', toolCallId: 'c2', toolName: 'lookup', input: {} },
+				],
+			},
+			{
+				role: 'tool',
+				content: [{ type: 'tool-result', toolCallId: 'c2', toolName: 'lookup', result: 99 }],
+			},
+			{ role: 'assistant', content: [{ type: 'text', text: '99 items' }] },
+		];
+
+		const result = stripOrphanedToolMessages(messages) as Message[];
+
+		expect(result).toHaveLength(5);
+		expect(result[0].role).toBe('assistant');
+		expect(result[0].content[0]).toEqual(
+			expect.objectContaining({ type: 'text', text: 'Old result' }),
+		);
+
+		const toolCallMsg = result.find(
+			(m) => m.role === 'assistant' && m.content.some((c) => c.type === 'tool-call'),
+		);
+		expect(toolCallMsg).toBeDefined();
+		const toolResultMsg = result.find((m) => m.role === 'tool');
+		expect(toolResultMsg).toBeDefined();
+	});
+
+	it('preserves custom (non-LLM) messages', () => {
+		const customMsg: AgentMessage = {
+			id: 'custom-1',
+			type: 'custom',
+			messageType: 'notification',
+			data: { info: 'hello' },
+		} as unknown as AgentMessage;
+
+		const messages: AgentMessage[] = [
+			customMsg,
+			{
+				role: 'tool',
+				content: [{ type: 'tool-result', toolCallId: 'orphan', toolName: 'x', result: null }],
+			},
+		];
+
+		const result = stripOrphanedToolMessages(messages);
+
+		expect(result).toHaveLength(1);
+		expect(result[0]).toBe(customMsg);
+	});
+
+	it('returns same array reference when no orphans exist (no-op fast path)', () => {
+		const messages: AgentMessage[] = [
+			{ role: 'user', content: [{ type: 'text', text: 'Hi' }] },
+			{ role: 'assistant', content: [{ type: 'text', text: 'Hello!' }] },
+		];
+
+		const result = stripOrphanedToolMessages(messages);
+		expect(result).toBe(messages);
+	});
+});
```
```diff
@@ -1,6 +1,6 @@
 import type { TelemetryIntegration } from 'ai';
 
-import { Telemetry } from '../telemetry';
+import { Telemetry } from '../sdk/telemetry';
 
 describe('Telemetry builder', () => {
 	it('builds with defaults', async () => {
 
@@ -8,7 +8,6 @@ describe('Telemetry builder', () => {
 		expect(built.enabled).toBe(true);
 		expect(built.recordInputs).toBe(true);
 		expect(built.recordOutputs).toBe(true);
-		expect(built.runtimeRootSpanEnabled).toBe(true);
 		expect(built.functionId).toBeUndefined();
 		expect(built.metadata).toBeUndefined();
 		expect(built.integrations).toEqual([]);
 
@@ -23,7 +22,6 @@ describe('Telemetry builder', () => {
 			.metadata({ team: 'platform', version: 2 })
 			.recordInputs(false)
 			.recordOutputs(false)
-			.runtimeRootSpan(false)
 			.build();
 
 		expect(built.enabled).toBe(false);
 
@@ -31,7 +29,6 @@ describe('Telemetry builder', () => {
 		expect(built.metadata).toEqual({ team: 'platform', version: 2 });
 		expect(built.recordInputs).toBe(false);
 		expect(built.recordOutputs).toBe(false);
-		expect(built.runtimeRootSpanEnabled).toBe(false);
 	});
 
 	it('accepts a pre-built tracer', async () => {
```
```diff
@@ -1,12 +1,10 @@
-import type * as AiImport from 'ai';
 import type { LanguageModel } from 'ai';
 
-import type { BuiltTelemetry } from '../../types';
-import { generateTitleFromMessage } from '../title-generation';
+import { generateTitleFromMessage } from '../runtime/title-generation';
 
 type GenerateTextCall = {
 	messages: Array<{ role: string; content: string }>;
 	experimental_telemetry?: Record<string, unknown>;
 };
 
 const mockGenerateText = jest.fn<Promise<{ text: string }>, [GenerateTextCall]>();
 
@@ -123,34 +121,6 @@ describe('generateTitleFromMessage', () => {
 		expect(call.messages[0].content).toBe('Custom system prompt');
 	});
 
-	it('passes generic telemetry to the title LLM call', async () => {
-		mockGenerateText.mockResolvedValue({ text: 'Berlin rain alert' });
-		const telemetry: BuiltTelemetry = {
-			enabled: true,
-			functionId: 'instance-ai.thread-title',
-			metadata: { thread_id: 'thread-1' },
-			recordInputs: true,
-			recordOutputs: false,
-			runtimeRootSpanEnabled: false,
-			integrations: [],
-		};
-
-		await generateTitleFromMessage(fakeModel, 'Build a daily Berlin rain alert workflow', {
-			telemetry,
-		});
-
-		const call = mockGenerateText.mock.calls[0][0];
-		expect(call.experimental_telemetry).toEqual({
-			isEnabled: true,
-			functionId: 'instance-ai.thread-title',
-			metadata: { thread_id: 'thread-1' },
-			recordInputs: true,
-			recordOutputs: false,
-			tracer: undefined,
-			integrations: undefined,
-		});
-	});
-
 	it('wraps the user message in a title-generation instruction so the model does not answer it', async () => {
 		mockGenerateText.mockResolvedValue({ text: 'Berlin rain alert' });
 		await generateTitleFromMessage(fakeModel, 'Build a daily Berlin rain alert workflow');
```
@ -1,8 +1,8 @@
|
|||
import type { JSONSchema7 } from 'json-schema';
|
||||
import { z } from 'zod';
|
||||
|
||||
import type { BuiltTool } from '../../types';
|
||||
import { toAiSdkTools } from '../tool-adapter';
|
||||
import { toAiSdkTools } from '../runtime/tool-adapter';
|
||||
import type { BuiltTool } from '../types';
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Module mocks
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
import { z } from 'zod';
|
||||
|
||||
import type { BuiltTelemetry, BuiltTool, InterruptibleToolContext, ToolContext } from '../../types';
|
||||
import { Tool, wrapToolForApproval } from '../tool';
|
||||
import { Tool, wrapToolForApproval } from '../sdk/tool';
|
||||
import type { BuiltTelemetry, BuiltTool, InterruptibleToolContext, ToolContext } from '../types';
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Test helpers
|
||||
|
|
@ -123,37 +123,6 @@ describe('Tool builder — without approval', () => {
|
|||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Tool builder — .systemInstruction()
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('Tool builder — .systemInstruction()', () => {
|
||||
it('build() carries the systemInstruction onto the BuiltTool', () => {
|
||||
const tool = new Tool('fetch')
|
||||
.description('Fetch data')
|
||||
.systemInstruction('Always fetch with the cache disabled.')
|
||||
.input(z.object({ id: z.string() }))
|
||||
.handler(async ({ id }) => {
|
||||
return await Promise.resolve({ data: id });
|
||||
})
|
||||
.build();
|
||||
|
||||
expect(tool.systemInstruction).toBe('Always fetch with the cache disabled.');
|
||||
});
|
||||
|
||||
it('build() leaves systemInstruction undefined when not set', () => {
|
||||
const tool = new Tool('fetch')
|
||||
.description('Fetch data')
|
||||
.input(z.object({ id: z.string() }))
|
||||
.handler(async ({ id }) => {
|
||||
return await Promise.resolve({ data: id });
|
||||
})
|
||||
.build();
|
||||
|
||||
expect(tool.systemInstruction).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// wrapToolForApproval — requireApproval: true
|
||||
// ---------------------------------------------------------------------------
|
||||
packages/@n8n/agents/src/__tests__/working-memory.test.ts (new file, 207 lines)
@@ -0,0 +1,207 @@
import { z } from 'zod';

import {
buildWorkingMemoryInstruction,
buildWorkingMemoryTool,
templateFromSchema,
UPDATE_WORKING_MEMORY_TOOL_NAME,
WORKING_MEMORY_DEFAULT_INSTRUCTION,
} from '../runtime/working-memory';

describe('buildWorkingMemoryInstruction', () => {
it('mentions the updateWorkingMemory tool name', () => {
const result = buildWorkingMemoryInstruction('# Context\n- Name:', false);
expect(result).toContain(UPDATE_WORKING_MEMORY_TOOL_NAME);
});

it('instructs the model to call the tool only when something changed', () => {
const result = buildWorkingMemoryInstruction('# Context\n- Name:', false);
expect(result).toContain('Only call it when something has actually changed');
});

it('includes the template in the instruction', () => {
const template = '# Context\n- Name:\n- City:';
const result = buildWorkingMemoryInstruction(template, false);
expect(result).toContain(template);
});

it('mentions JSON for structured variant', () => {
const result = buildWorkingMemoryInstruction('{"name": ""}', true);
expect(result).toContain('JSON');
});

describe('custom instruction', () => {
it('replaces the default instruction body when provided', () => {
const custom = 'Always update working memory after every message.';
const result = buildWorkingMemoryInstruction('# Template', false, custom);
expect(result).toContain(custom);
expect(result).not.toContain(WORKING_MEMORY_DEFAULT_INSTRUCTION);
});

it('still includes the ## Working Memory heading', () => {
const result = buildWorkingMemoryInstruction('# Template', false, 'Custom text.');
expect(result).toContain('## Working Memory');
});

it('still includes the template block', () => {
const template = '# Context\n- Name:\n- City:';
const result = buildWorkingMemoryInstruction(template, false, 'Custom text.');
expect(result).toContain(template);
});

it('still includes the format hint for structured memory', () => {
const result = buildWorkingMemoryInstruction('{}', true, 'Custom text.');
expect(result).toContain('JSON');
});

it('still includes the format hint for freeform memory', () => {
const result = buildWorkingMemoryInstruction('# Template', false, 'Custom text.');
expect(result).toContain('Update the template with any new information learned');
});

it('uses the default instruction when undefined is passed explicitly', () => {
const withDefault = buildWorkingMemoryInstruction('# Template', false, undefined);
const withoutArg = buildWorkingMemoryInstruction('# Template', false);
expect(withDefault).toBe(withoutArg);
});

it('WORKING_MEMORY_DEFAULT_INSTRUCTION appears in the output when no custom instruction is set', () => {
const result = buildWorkingMemoryInstruction('# Template', false);
expect(result).toContain(WORKING_MEMORY_DEFAULT_INSTRUCTION);
});
});
});

describe('templateFromSchema', () => {
it('converts Zod schema to JSON template', () => {
const schema = z.object({
userName: z.string().optional().describe("The user's name"),
favoriteColor: z.string().optional().describe('Favorite color'),
});
const result = templateFromSchema(schema);
expect(result).toContain('userName');
expect(result).toContain('favoriteColor');
let parsed: unknown;
try {
parsed = JSON.parse(result);
} catch {
parsed = undefined;
}
expect(parsed).toHaveProperty('userName');
});
});

describe('buildWorkingMemoryTool — freeform', () => {
it('returns a BuiltTool with the correct name', () => {
const tool = buildWorkingMemoryTool({
structured: false,
persist: async () => {},
});
expect(tool.name).toBe(UPDATE_WORKING_MEMORY_TOOL_NAME);
});

it('has a description', () => {
const tool = buildWorkingMemoryTool({
structured: false,
persist: async () => {},
});
expect(tool.description).toBeTruthy();
});

it('has a freeform input schema with a memory field', () => {
const tool = buildWorkingMemoryTool({
structured: false,
persist: async () => {},
});
expect(tool.inputSchema).toBeDefined();
const schema = tool.inputSchema as z.ZodObject<z.ZodRawShape>;
const result = schema.safeParse({ memory: 'hello' });
expect(result.success).toBe(true);
});

it('rejects input without memory field', () => {
const tool = buildWorkingMemoryTool({
structured: false,
persist: async () => {},
});
const schema = tool.inputSchema as z.ZodObject<z.ZodRawShape>;
const result = schema.safeParse({ other: 'value' });
expect(result.success).toBe(false);
});

it('handler calls persist with the memory string', async () => {
const persisted: string[] = [];
const tool = buildWorkingMemoryTool({
structured: false,
// eslint-disable-next-line @typescript-eslint/require-await
persist: async (content) => {
persisted.push(content);
},
});
const result = await tool.handler!({ memory: 'test content' }, {} as never);
expect(persisted).toEqual(['test content']);
expect(result).toMatchObject({ success: true });
});
});

describe('buildWorkingMemoryTool — structured', () => {
const schema = z.object({
userName: z.string().optional().describe("The user's name"),
location: z.string().optional().describe('Where the user lives'),
});

it('uses the Zod schema as input schema', () => {
const tool = buildWorkingMemoryTool({
structured: true,
schema,
persist: async () => {},
});
const inputSchema = tool.inputSchema as typeof schema;
const result = inputSchema.safeParse({ userName: 'Alice', location: 'Berlin' });
expect(result.success).toBe(true);
});

it('handler serializes input to JSON and calls persist', async () => {
const persisted: string[] = [];
const tool = buildWorkingMemoryTool({
structured: true,
schema,
// eslint-disable-next-line @typescript-eslint/require-await
persist: async (content) => {
persisted.push(content);
},
});

const input = { userName: 'Alice', location: 'Berlin' };
await tool.handler!(input, {} as never);

expect(persisted).toHaveLength(1);
let parsed: unknown;
try {
parsed = JSON.parse(persisted[0]) as unknown;
} catch {
parsed = undefined;
}
expect(parsed).toMatchObject(input);
});

it('handler returns success confirmation', async () => {
const tool = buildWorkingMemoryTool({
structured: true,
schema,
persist: async () => {},
});
const result = await tool.handler!({ userName: 'Alice' }, {} as never);
expect(result).toMatchObject({ success: true });
});

it('falls back to freeform when no schema provided despite structured:true', () => {
const tool = buildWorkingMemoryTool({
structured: true,
persist: async () => {},
});
const inputSchema = tool.inputSchema as z.ZodObject<z.ZodRawShape>;
const result = inputSchema.safeParse({ memory: 'fallback text' });
expect(result.success).toBe(true);
});
});
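The tests in this new file pin down the persist contract: the freeform tool stores the raw memory string, while the structured tool serializes validated fields to JSON before persisting. A minimal self-contained sketch of that flow, with no n8n imports (`updateWorkingMemory` is a hypothetical stand-in for the real handler, not the actual implementation):

```typescript
// Hypothetical standalone sketch of the working-memory persist flow the
// tests above exercise: structured input is serialized to JSON, freeform
// input is stored verbatim.
type PersistFn = (content: string) => void;

function updateWorkingMemory(
	input: { memory: string } | Record<string, string>,
	structured: boolean,
	persist: PersistFn,
): { success: boolean } {
	if (structured) {
		// structured: store the validated fields as a JSON document
		persist(JSON.stringify(input));
	} else {
		// freeform: store the raw memory string
		persist((input as { memory: string }).memory);
	}
	return { success: true };
}

const log: string[] = [];
updateWorkingMemory({ memory: 'test content' }, false, (c) => log.push(c));
updateWorkingMemory({ userName: 'Alice' }, true, (c) => log.push(c));
console.log(log); // ['test content', '{"userName":"Alice"}']
```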
@@ -45,8 +45,6 @@ describe('Workspace integration with fakes', () => {
const names = tools.map((t) => t.name);

expect(names).toContain('workspace_read_file');
expect(names).toContain('workspace_str_replace_file');
expect(names).toContain('workspace_batch_str_replace_file');
expect(names).toContain('workspace_write_file');
expect(names).toContain('workspace_list_files');
expect(names).toContain('workspace_file_stat');
@@ -1,4 +1,3 @@
import { zodToJsonSchema } from '../../utils/zod';
import { createWorkspaceTools } from '../../workspace/tools/workspace-tools';
import type { WorkspaceFilesystem, WorkspaceSandbox, CommandResult } from '../../workspace/types';
@@ -63,8 +62,6 @@ describe('createWorkspaceTools', () => {

expect(names).toEqual([
'workspace_read_file',
'workspace_str_replace_file',
'workspace_batch_str_replace_file',
'workspace_write_file',
'workspace_list_files',
'workspace_file_stat',
@@ -100,10 +97,8 @@ describe('createWorkspaceTools', () => {
const names = tools.map((t) => t.name);

expect(names).toContain('workspace_read_file');
expect(names).toContain('workspace_str_replace_file');
expect(names).toContain('workspace_batch_str_replace_file');
expect(names).toContain('workspace_execute_command');
expect(names).toHaveLength(13);
expect(names).toHaveLength(11);
});

describe('tool handlers', () => {
@@ -118,124 +113,6 @@ describe('createWorkspaceTools', () => {
expect(result).toEqual({ content: 'file content' });
});

it('targeted edit input schemas serialize with a top-level object type', () => {
const tools = createWorkspaceTools({ filesystem: makeFakeFilesystem() });
const strReplaceTool = tools.find((t) => t.name === 'workspace_str_replace_file')!;
const batchStrReplaceTool = tools.find((t) => t.name === 'workspace_batch_str_replace_file')!;

expect(zodToJsonSchema(strReplaceTool.inputSchema)).toMatchObject({ type: 'object' });
expect(zodToJsonSchema(batchStrReplaceTool.inputSchema)).toMatchObject({
type: 'object',
});
});

it('str_replace_file handler reads then writes changed content', async () => {
const fs = makeFakeFilesystem({
readFile: jest.fn().mockResolvedValue('first\nsecond'),
});
const tools = createWorkspaceTools({ filesystem: fs });
const strReplaceTool = tools.find((t) => t.name === 'workspace_str_replace_file')!;

const result = await strReplaceTool.handler!(
{
path: '/test.txt',
old_str: 'second',
new_str: 'changed',
},
{} as never,
);

expect(fs.writeFile).toHaveBeenCalledWith('/test.txt', 'first\nchanged', {
overwrite: true,
});
expect(result).toEqual({ success: true, result: 'Edit applied successfully.' });
});

it('str_replace_file handler returns errors without writing when replacement is not unique', async () => {
const fs = makeFakeFilesystem({
readFile: jest.fn().mockResolvedValue('same\nsame'),
});
const tools = createWorkspaceTools({ filesystem: fs });
const strReplaceTool = tools.find((t) => t.name === 'workspace_str_replace_file')!;

const result = await strReplaceTool.handler!(
{
path: '/test.txt',
old_str: 'same',
new_str: 'changed',
},
{} as never,
);

expect(fs.writeFile).not.toHaveBeenCalled();
expect(result).toEqual({
success: false,
error: 'Found 2 matches. Please provide more context to make the replacement unique.',
});
});

it('batch_str_replace_file handler applies all replacements atomically', async () => {
const fs = makeFakeFilesystem({
readFile: jest.fn().mockResolvedValue('const a = 1;\nconst b = 2;'),
});
const tools = createWorkspaceTools({ filesystem: fs });
const batchStrReplaceTool = tools.find((t) => t.name === 'workspace_batch_str_replace_file')!;

const result = await batchStrReplaceTool.handler!(
{
path: '/test.ts',
replacements: [
{ old_str: 'const a = 1;', new_str: 'const a = 10;' },
{ old_str: 'const b = 2;', new_str: 'const b = 20;' },
],
},
{} as never,
);

expect(fs.writeFile).toHaveBeenCalledWith('/test.ts', 'const a = 10;\nconst b = 20;', {
overwrite: true,
});
expect(result).toEqual({
success: true,
result: 'All 2 replacements applied successfully.',
});
});

it('batch_str_replace_file handler does not write when any replacement fails', async () => {
const fs = makeFakeFilesystem({
readFile: jest.fn().mockResolvedValue('const a = 1;\nconst b = 2;'),
});
const tools = createWorkspaceTools({ filesystem: fs });
const batchStrReplaceTool = tools.find((t) => t.name === 'workspace_batch_str_replace_file')!;

const result = await batchStrReplaceTool.handler!(
{
path: '/test.ts',
replacements: [
{ old_str: 'const a = 1;', new_str: 'const a = 10;' },
{ old_str: 'const missing = 0;', new_str: 'const missing = 1;' },
],
},
{} as never,
);

expect(fs.writeFile).not.toHaveBeenCalled();
expect(result).toEqual({
success: false,
error: 'Batch replacement failed.',
results: [
{ index: 0, old_str: 'const a = 1;', status: 'success' },
{
index: 1,
old_str: 'const missing = 0;',
status: 'failed',
error:
'No exact match found for str_replace. The old_str content was not found in the file.',
},
],
});
});

it('write_file handler calls filesystem.writeFile', async () => {
const fs = makeFakeFilesystem();
const tools = createWorkspaceTools({ filesystem: fs });
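The str_replace tests above pin down a uniqueness rule: an edit is written only when `old_str` occurs exactly once; zero or multiple matches fail without touching the file. A minimal standalone sketch of that check (`strReplaceOnce` is a hypothetical name, not the actual n8n handler):

```typescript
// Sketch of the uniqueness check that workspace_str_replace_file enforces:
// the edit is applied only when old_str occurs exactly once in the file.
function strReplaceOnce(
	content: string,
	oldStr: string,
	newStr: string,
): { success: boolean; result?: string; error?: string } {
	// Count non-overlapping occurrences of old_str.
	const count = content.split(oldStr).length - 1;
	if (count === 0) {
		return { success: false, error: 'No exact match found for str_replace.' };
	}
	if (count > 1) {
		return {
			success: false,
			error: `Found ${count} matches. Please provide more context to make the replacement unique.`,
		};
	}
	return { success: true, result: content.replace(oldStr, newStr) };
}

console.log(strReplaceOnce('first\nsecond', 'second', 'changed').result); // first\nchanged
console.log(strReplaceOnce('same\nsame', 'same', 'changed').error);
```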
@@ -275,8 +275,6 @@ describe('Workspace', () => {

const names = tools.map((t) => t.name);
expect(names).toContain('workspace_read_file');
expect(names).toContain('workspace_str_replace_file');
expect(names).toContain('workspace_batch_str_replace_file');
expect(names).toContain('workspace_write_file');
expect(names).toContain('workspace_list_files');
expect(names).toContain('workspace_file_stat');
packages/@n8n/agents/src/codegen/generate-agent-code.ts (new file, 217 lines)
@@ -0,0 +1,217 @@
import type prettier from 'prettier';

import type {
AgentSchema,
EvalSchema,
GuardrailSchema,
MemorySchema,
ToolSchema,
} from '../types/sdk/schema';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function escapeTemplateLiteral(str: string): string {
return str.replace(/\\/g, '\\\\').replace(/`/g, '\\`').replace(/\$/g, '\\$');
}

function escapeSingleQuote(str: string): string {
return JSON.stringify(str).slice(1, -1).replace(/'/g, "\\'");
}

let prettierInstance: typeof prettier | undefined;

/**
* Format TypeScript source code using Prettier.
* Loaded lazily to avoid startup cost when not generating code.
*/
async function formatCode(code: string): Promise<string> {
prettierInstance ??= await import('prettier');
return await prettierInstance.format(code, {
parser: 'typescript',
singleQuote: true,
useTabs: true,
trailingComma: 'all',
printWidth: 100,
});
}

/**
* Compile-time exhaustive check. If a new property is added to AgentSchema
* but not handled in generateAgentCode(), TypeScript will report an error
* here because the destructured rest object won't be empty.
*/
function assertAllHandled(_: Record<string, never>): void {
// intentionally empty — this is a compile-time-only check
}

// ---------------------------------------------------------------------------
// Section builders — each returns `.method(...)` chain fragments
// ---------------------------------------------------------------------------

function modelParts(model: AgentSchema['model']): string[] {
if (model.provider && model.name) {
return [`.model('${escapeSingleQuote(model.provider)}', '${escapeSingleQuote(model.name)}')`];
}
if (model.name) {
return [`.model('${escapeSingleQuote(model.name)}')`];
}
return [];
}

function toolPart(tool: ToolSchema): { part: string; usesWorkflowTool: boolean } {
if (!tool.editable) {
return {
part: `.tool(new WorkflowTool('${escapeSingleQuote(tool.name)}'))`,
usesWorkflowTool: true,
};
}
const parts = [`new Tool('${escapeSingleQuote(tool.name)}')`];
parts.push(`.description('${escapeSingleQuote(tool.description)}')`);
if (tool.inputSchemaSource) parts.push(`.input(${tool.inputSchemaSource})`);
if (tool.outputSchemaSource) parts.push(`.output(${tool.outputSchemaSource})`);
if (tool.suspendSchemaSource) parts.push(`.suspend(${tool.suspendSchemaSource})`);
if (tool.resumeSchemaSource) parts.push(`.resume(${tool.resumeSchemaSource})`);
if (tool.handlerSource) parts.push(`.handler(${tool.handlerSource})`);
if (tool.toMessageSource) parts.push(`.toMessage(${tool.toMessageSource})`);
if (tool.requireApproval) parts.push('.requireApproval()');
if (tool.needsApprovalFnSource) parts.push(`.needsApprovalFn(${tool.needsApprovalFnSource})`);
return { part: `.tool(${parts.join('')})`, usesWorkflowTool: false };
}

function evalPart(ev: EvalSchema): string {
const parts = [`new Eval('${escapeSingleQuote(ev.name)}')`];
if (ev.description) parts.push(`.description('${escapeSingleQuote(ev.description)}')`);
if (ev.modelId) parts.push(`.model('${escapeSingleQuote(ev.modelId)}')`);
if (ev.credentialName) parts.push(`.credential('${escapeSingleQuote(ev.credentialName)}')`);
if (ev.handlerSource) {
parts.push(ev.type === 'check' ? `.check(${ev.handlerSource})` : `.judge(${ev.handlerSource})`);
}
return `.eval(${parts.join('')})`;
}

function guardrailPart(g: GuardrailSchema): string {
const method = g.position === 'input' ? 'inputGuardrail' : 'outputGuardrail';
return `.${method}(${g.source})`;
}

function memoryPart(memory: MemorySchema): string {
if (memory.source) {
return `.memory(${memory.source})`;
}
return `.memory(new Memory().lastMessages(${memory.lastMessages ?? 10}))`;
}

function thinkingPart(thinking: NonNullable<AgentSchema['config']['thinking']>): string {
const props: string[] = [];
if (thinking.budgetTokens !== undefined) props.push(`budgetTokens: ${thinking.budgetTokens}`);
if (thinking.reasoningEffort) props.push(`reasoningEffort: '${thinking.reasoningEffort}'`);
if (props.length > 0) {
return `.thinking('${thinking.provider}', { ${props.join(', ')} })`;
}
return `.thinking('${thinking.provider}')`;
}

function buildImports(schema: AgentSchema, needsWorkflowTool: boolean): string {
const agentImports = new Set<string>(['Agent']);
if (schema.tools.some((t) => t.editable)) agentImports.add('Tool');
if (needsWorkflowTool) agentImports.add('WorkflowTool');
if (schema.memory) agentImports.add('Memory');
if (schema.mcp && schema.mcp.length > 0) agentImports.add('McpClient');
if (schema.evaluations.length > 0) agentImports.add('Eval');

const toolsNeedZod = schema.tools.some(
(t) =>
(t.inputSchemaSource?.includes('z.') ?? false) ||
(t.outputSchemaSource?.includes('z.') ?? false),
);
const structuredOutputNeedsZod =
schema.config.structuredOutput.schemaSource?.includes('z.') ?? false;

let imports = `import { ${Array.from(agentImports).sort().join(', ')} } from '@n8n/agents';`;
if (toolsNeedZod || structuredOutputNeedsZod) imports += "\nimport { z } from 'zod';";
return imports;
}

// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------

export async function generateAgentCode(schema: AgentSchema, agentName: string): Promise<string> {
// Destructure every top-level property. If a new property is added to
// AgentSchema, TypeScript will error on assertAllHandled below until
// you handle it here AND add it to the destructure.
const {
model,
credential,
instructions,
description: _description, // entity-level, not in code
tools,
providerTools,
memory,
evaluations,
guardrails,
mcp,
telemetry,
checkpoint,
config,
...rest
} = schema;

// If this errors, you added a property to AgentSchema but didn't
// destructure it above. Add it to the destructure and handle it below.
assertAllHandled(rest);

const { thinking, toolCallConcurrency, requireToolApproval, structuredOutput, ...configRest } =
config;
assertAllHandled(configRest);

// No manual indentation — Prettier formats at the end.
const parts: string[] = [];
let needsWorkflowTool = false;

parts.push(`export default new Agent('${escapeSingleQuote(agentName)}')`);
parts.push(...modelParts(model));

if (credential) parts.push(`.credential('${escapeSingleQuote(credential)}')`);
if (instructions) parts.push(`.instructions(\`${escapeTemplateLiteral(instructions)}\`)`);

for (const tool of tools) {
const { part, usesWorkflowTool } = toolPart(tool);
if (usesWorkflowTool) needsWorkflowTool = true;
parts.push(part);
}

for (const pt of providerTools) {
parts.push(`.providerTool(${pt.source})`);
}

if (memory) parts.push(memoryPart(memory));

for (const ev of evaluations) {
parts.push(evalPart(ev));
}

for (const g of guardrails) {
parts.push(guardrailPart(g));
}

if (mcp && mcp.length > 0) {
const configs = mcp.map((s) => s.configSource).join(', ');
parts.push(`.mcp(new McpClient([${configs}]))`);
}

if (telemetry) parts.push(`.telemetry(${telemetry.source})`);
if (checkpoint) parts.push(`.checkpoint('${escapeSingleQuote(checkpoint)}')`);
if (thinking) parts.push(thinkingPart(thinking));
if (toolCallConcurrency) parts.push(`.toolCallConcurrency(${toolCallConcurrency})`);
if (requireToolApproval) parts.push('.requireToolApproval()');
if (structuredOutput.enabled && structuredOutput.schemaSource) {
parts.push(`.structuredOutput(${structuredOutput.schemaSource})`);
}

const imports = buildImports(schema, needsWorkflowTool);
const raw = `${imports}\n\n${parts.join('')};\n`;
return await formatCode(raw);
}
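generateAgentCode embeds user-supplied strings inside single-quoted literals and template literals of the emitted source, so the two escape helpers at the top of the file carry the injection-safety argument. Restated standalone so their behavior can be checked outside the package (same bodies as in the diff above):

```typescript
// The two string-escaping helpers from generate-agent-code.ts, restated
// standalone. escapeTemplateLiteral neutralizes backslashes, backticks,
// and ${...} interpolation; escapeSingleQuote leans on JSON.stringify for
// control characters and then escapes single quotes.
function escapeTemplateLiteral(str: string): string {
	return str.replace(/\\/g, '\\\\').replace(/`/g, '\\`').replace(/\$/g, '\\$');
}

function escapeSingleQuote(str: string): string {
	return JSON.stringify(str).slice(1, -1).replace(/'/g, "\\'");
}

console.log(escapeTemplateLiteral('run `${cmd}` now'));
console.log(escapeSingleQuote("it's \"fine\""));
```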
@@ -28,8 +28,6 @@ export type {
SerializableAgentState,
AgentRunState,
MemoryConfig,
MemoryDescriptor,
ObservationCapableMemory,
TitleGenerationConfig,
Thread,
SemanticRecallConfig,

@@ -41,29 +39,12 @@ export type {
PersistedExecutionOptions,
BuiltTelemetry,
AttributeValue,
BuiltObservationStore,
CompactFn,
NewObservation,
Observation,
ObservationCategory,
ObservationCursor,
ObservationGapContext,
ObservationLockHandle,
ObservationalMemoryConfig,
ObservationalMemoryTrigger,
ObserveFn,
ScopeKind,
} from './types';
export type { ProviderOptions } from '@ai-sdk/provider-utils';
export { AgentEvent } from './types';
export type { AgentEventData, AgentEventHandler } from './types';
export {
DEFAULT_OBSERVATION_GAP_THRESHOLD_MS,
OBSERVATION_CATEGORIES,
OBSERVATION_SCHEMA_VERSION,
} from './types';

export { Tool, wrapToolForApproval } from './sdk/tool';
export { Tool } from './sdk/tool';
export { Memory } from './sdk/memory';
export { Guardrail } from './sdk/guardrail';
export { Eval } from './sdk/eval';

@@ -74,7 +55,6 @@ export { Telemetry } from './sdk/telemetry';
export { LangSmithTelemetry } from './integrations/langsmith';
export type { LangSmithTelemetryConfig } from './integrations/langsmith';
export { Agent } from './sdk/agent';
export type { AgentSnapshot } from './sdk/agent';
export type {
AgentBuilder,
CredentialProvider,

@@ -93,6 +73,7 @@ export type {
ContentReasoning,
ContentText,
ContentToolCall,
ContentToolResult,
Message,
MessageContent,
MessageRole,

@@ -101,10 +82,19 @@ export type {
AgentDbMessage,
} from './types/sdk/message';
export type { HandlerExecutor } from './types/sdk/handler-executor';
export {
filterLlmMessages,
isLlmMessage,
} from './sdk/message';
export type {
AgentSchema,
ToolSchema,
MemorySchema,
EvalSchema,
ThinkingSchema,
ProviderToolSchema,
GuardrailSchema,
McpServerSchema,
TelemetrySchema,
} from './types/sdk/schema';
export { generateAgentCode } from './codegen/generate-agent-code';
export { filterLlmMessages, isLlmMessage } from './sdk/message';
export { fetchProviderCatalog } from './sdk/catalog';
export { providerCapabilities } from './sdk/provider-capabilities';
export type { ProviderCapability } from './sdk/provider-capabilities';

@@ -115,20 +105,14 @@ export type {
ModelCost,
ModelLimits,
} from './sdk/catalog';
export { SqliteMemory, SqliteMemoryConfigSchema } from './storage/sqlite-memory';
export { WORKING_MEMORY_DEFAULT_INSTRUCTION } from './runtime/working-memory';
export { SqliteMemory } from './storage/sqlite-memory';
export {
DEFAULT_COMPACTOR_PROMPT,
DEFAULT_OBSERVER_PROMPT,
} from './runtime/observational-cycle';
UPDATE_WORKING_MEMORY_TOOL_NAME,
WORKING_MEMORY_DEFAULT_INSTRUCTION,
} from './runtime/working-memory';
export type { SqliteMemoryConfig } from './storage/sqlite-memory';
export { PostgresMemory } from './storage/postgres-memory';
export type {
PostgresConnectionOptions,
PostgresConstructorOptions,
} from './storage/postgres-memory';
export { BaseMemory } from './storage/base-memory';
export type { ToolDescriptor } from './types/sdk/tool-descriptor';
export type { PostgresMemoryConfig } from './storage/postgres-memory';

export { createModel } from './runtime/model-factory';
export { generateTitleFromMessage } from './runtime/title-generation';

@@ -167,7 +151,3 @@ export type {
SpawnProcessOptions,
ProcessInfo,
} from './workspace';

export type { JSONObject, JSONArray, JSONValue } from './types/utils/json';

export { isZodSchema, zodToJsonSchema } from './utils/zod';
@@ -1,167 +1,6 @@
import { Telemetry } from '../sdk/telemetry';
import type { BuiltTelemetry, OpaqueTracer, OpaqueTracerProvider } from '../types/telemetry';

let registeredOtelContext = false;

const LANGSMITH_TRACEABLE = 'langsmith.traceable';
const LANGSMITH_IS_ROOT = 'langsmith.is_root';
const LANGSMITH_PARENT_RUN_ID = 'langsmith.span.parent_id';
const LANGSMITH_TRACEABLE_PARENT_OTEL_SPAN_ID = 'langsmith.traceable_parent_otel_span_id';
const AI_OPERATION_ID = 'ai.operationId';
const TRACEABLE_AI_SDK_OPERATIONS = new Set([
	'ai.generateText.doGenerate',
	'ai.streamText.doStream',
	'ai.generateObject.doGenerate',
	'ai.streamObject.doStream',
	'ai.toolCall',
]);

interface OtelSpanLike {
	attributes: Record<string, unknown>;
	spanContext(): {
		traceId: string;
		spanId: string;
	};
	parentSpanId?: string;
	parentSpanContext?: {
		spanId?: string;
	};
}

interface SpanProcessorLike {
	forceFlush(): Promise<void>;
	onStart(span: unknown, parentContext: unknown): void;
	onEnd(span: unknown): void;
	shutdown(): Promise<void>;
}

interface BatchSpanProcessorConstructor {
	new (exporter: unknown): SpanProcessorLike;
}

interface LangSmithRunTree {
	getSharedClient(): {
		awaitPendingTraceBatches(): Promise<void>;
	};
}

function isOtelSpanLike(value: unknown): value is OtelSpanLike {
	return (
		value !== null &&
		typeof value === 'object' &&
		typeof Reflect.get(value, 'spanContext') === 'function' &&
		typeof Reflect.get(value, 'attributes') === 'object'
	);
}

function getParentSpanId(span: OtelSpanLike): string | undefined {
	return span.parentSpanId ?? span.parentSpanContext?.spanId;
}

function getUuidFromOtelSpanId(spanId: string): string {
	const paddedHex = spanId.padStart(16, '0');
	return `00000000-0000-0000-${paddedHex.substring(0, 4)}-${paddedHex.substring(4, 16)}`;
}

function isTraceableSpan(span: OtelSpanLike): boolean {
	const operationId = span.attributes[AI_OPERATION_ID];
	return (
		span.attributes[LANGSMITH_TRACEABLE] === 'true' ||
		(typeof operationId === 'string' && TRACEABLE_AI_SDK_OPERATIONS.has(operationId))
	);
}

function createLangSmithSpanProcessor(options: {
	exporter: unknown;
	BatchSpanProcessor: BatchSpanProcessorConstructor;
	RunTree: LangSmithRunTree;
}): SpanProcessorLike {
	const delegate = new options.BatchSpanProcessor(options.exporter);
	const traceMap: Record<
		string,
		{
			spanCount: number;
			spanInfo: Record<string, { isTraceable: boolean; parentSpanId?: string }>;
		}
	> = {};

	return {
		async forceFlush() {
			await delegate.forceFlush();
		},

		onStart(span, parentContext) {
			if (!isOtelSpanLike(span)) {
				delegate.onStart(span, parentContext);
				return;
			}

			const spanContext = span.spanContext();
			traceMap[spanContext.traceId] ??= {
				spanCount: 0,
				spanInfo: {},
			};

			const traceInfo = traceMap[spanContext.traceId];
			traceInfo.spanCount++;
			const traceable = isTraceableSpan(span);
			const parentSpanId = getParentSpanId(span);
			traceInfo.spanInfo[spanContext.spanId] = {
				isTraceable: traceable,
				...(parentSpanId ? { parentSpanId } : {}),
			};

			let currentCandidateParentSpanId = parentSpanId;
			let traceableParentSpanId: string | undefined;
			while (currentCandidateParentSpanId) {
				const currentSpanInfo = traceInfo.spanInfo[currentCandidateParentSpanId];
				if (currentSpanInfo?.isTraceable) {
					traceableParentSpanId = currentCandidateParentSpanId;
					break;
				}
				currentCandidateParentSpanId = currentSpanInfo?.parentSpanId;
			}

			if (!traceableParentSpanId) {
				span.attributes[LANGSMITH_IS_ROOT] = true;
			} else {
				span.attributes[LANGSMITH_PARENT_RUN_ID] = getUuidFromOtelSpanId(traceableParentSpanId);
				span.attributes[LANGSMITH_TRACEABLE_PARENT_OTEL_SPAN_ID] = traceableParentSpanId;
			}

			if (traceable) {
				delegate.onStart(span, parentContext);
			}
		},

		onEnd(span) {
			if (!isOtelSpanLike(span)) {
				delegate.onEnd(span);
				return;
			}

			const spanContext = span.spanContext();
			const traceInfo = traceMap[spanContext.traceId];
			const spanInfo = traceInfo?.spanInfo[spanContext.spanId];
			if (!traceInfo || !spanInfo) return;

			traceInfo.spanCount--;
			if (traceInfo.spanCount <= 0) {
				delete traceMap[spanContext.traceId];
			}

			if (spanInfo.isTraceable) {
				delegate.onEnd(span);
			}
		},

		async shutdown() {
			await options.RunTree.getSharedClient().awaitPendingTraceBatches();
			await delegate.shutdown();
		},
	};
}

export interface LangSmithTelemetryConfig {
	/** LangSmith API key. If omitted, resolved via `.credential()` or LANGSMITH_API_KEY env var. */
	apiKey?: string;
@@ -174,10 +13,6 @@ export interface LangSmithTelemetryConfig {
	 * as `${endpoint}/otel/v1/traces`. Use this for custom collectors or testing.
	 */
	url?: string;
	/** Default headers to send with LangSmith OTLP export requests. */
	headers?: Record<string, string> | (() => Promise<Record<string, string>>);
	/** Optional hook for redacting or annotating spans before LangSmith export. */
	transformExportedSpan?: (span: unknown) => unknown;
}

/**
@@ -194,7 +29,6 @@ async function createLangSmithTracer(
		spanProcessors?: unknown[];
	}) => OpaqueTracerProvider & {
		getTracer(name: string): OpaqueTracer;
		register(config?: { propagator?: null }): void;
	};
};

@@ -202,16 +36,14 @@ async function createLangSmithTracer(
		LangSmithOTLPTraceExporter: new (cfg?: {
			apiKey?: string;
			projectName?: string;
			url?: string;
			headers?: Record<string, string>;
			transformExportedSpan?: (span: unknown) => unknown;
			endpoint?: string;
		}) => unknown;
	};
	const { BatchSpanProcessor } = (await import('@opentelemetry/sdk-trace-base')) as {
		BatchSpanProcessor: BatchSpanProcessorConstructor;
	};
	const { RunTree } = (await import('langsmith')) as {
		RunTree: LangSmithRunTree;

	const { LangSmithOTLPSpanProcessor } = (await import(
		'langsmith/experimental/otel/processor'
	)) as {
		LangSmithOTLPSpanProcessor: new (exporter: unknown) => unknown;
	};

	// SECURITY: When the engine-resolved credential is the active key (i.e. no
@@ -223,34 +55,19 @@ async function createLangSmithTracer(
			? undefined
			: (config?.url ??
				(config?.endpoint ? `${config.endpoint.replace(/\/$/, '')}/otel/v1/traces` : undefined));
	const headers = typeof config?.headers === 'function' ? await config.headers() : config?.headers;

	const exporter = new LangSmithOTLPTraceExporter({
		apiKey,
		projectName: config?.project,
		...(headers ? { headers } : {}),
		...(config?.transformExportedSpan
			? { transformExportedSpan: config.transformExportedSpan }
			: {}),
		...(url ? { url } : {}),
	});

	const processor = createLangSmithSpanProcessor({
		exporter,
		BatchSpanProcessor,
		RunTree,
	});
	const processor = new LangSmithOTLPSpanProcessor(exporter);

	const provider = new NodeTracerProvider({
		spanProcessors: [processor],
	});
	if (!registeredOtelContext) {
		// AI SDK creates nested operation/provider/tool spans through the active
		// OpenTelemetry context. Without the Node context manager these spans are
		// exported as separate root traces even when an explicit tracer is passed.
		provider.register({ propagator: null });
		registeredOtelContext = true;
	}
	// Do NOT call provider.register() — avoid polluting the global tracer provider.

	return { tracer: provider.getTracer('@n8n/agents'), provider };
}
@@ -1,71 +0,0 @@
import { BackgroundTaskTracker } from '../background-task-tracker';

describe('BackgroundTaskTracker', () => {
	it('flushes a single in-flight promise', async () => {
		const tracker = new BackgroundTaskTracker();
		let resolveInner!: () => void;
		const inner = new Promise<void>((resolve) => {
			resolveInner = resolve;
		});
		tracker.track(inner);
		expect(tracker.pendingCount).toBe(1);

		const flush = tracker.flush();
		resolveInner();
		await flush;
		expect(tracker.pendingCount).toBe(0);
	});

	it('waits for all tracked promises in flush()', async () => {
		const tracker = new BackgroundTaskTracker();
		const events: string[] = [];
		const a = new Promise<void>((resolve) =>
			setTimeout(() => {
				events.push('a');
				resolve();
			}, 10),
		);
		const b = new Promise<void>((resolve) =>
			setTimeout(() => {
				events.push('b');
				resolve();
			}, 5),
		);
		tracker.track(a);
		tracker.track(b);

		await tracker.flush();
		expect(events.sort()).toEqual(['a', 'b']);
	});

	it('flush() does not throw on rejected tracked promises', async () => {
		const tracker = new BackgroundTaskTracker();
		const rejected = Promise.reject(new Error('boom'));
		// Suppress unhandled-rejection warning by attaching a no-op handler before track.
		rejected.catch(() => {});
		tracker.track(rejected);
		await expect(tracker.flush()).resolves.toBeUndefined();
	});

	it('flush() is a no-op when nothing is tracked', async () => {
		const tracker = new BackgroundTaskTracker();
		await expect(tracker.flush()).resolves.toBeUndefined();
	});

	it('removes promises from pendingCount after they settle', async () => {
		const tracker = new BackgroundTaskTracker();
		const inner = Promise.resolve();
		tracker.track(inner);
		await inner;
		// One microtask is needed for the .then cleanup to run.
		await Promise.resolve();
		expect(tracker.pendingCount).toBe(0);
	});

	it('flush() called twice in a row both resolve', async () => {
		const tracker = new BackgroundTaskTracker();
		tracker.track(Promise.resolve());
		await tracker.flush();
		await expect(tracker.flush()).resolves.toBeUndefined();
	});
});
@@ -1,95 +0,0 @@
import type { AgentDbMessage, AgentMessage, Message } from '../../types/sdk/message';
import { InMemoryMemory } from '../memory-store';

function makeMsg(role: 'user' | 'assistant', text: string, createdAt = new Date()): AgentDbMessage {
	return {
		id: crypto.randomUUID(),
		createdAt,
		role,
		content: [{ type: 'text', text }],
	};
}

function textOf(msg: AgentMessage): string {
	const m = msg as Message;
	return (m.content[0] as { text: string }).text;
}

describe('InMemoryMemory — message keyset reads', () => {
	it('returns messages ordered by (createdAt, id) ascending', async () => {
		const mem = new InMemoryMemory();
		const t = Date.now();
		await mem.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one', new Date(t)), makeMsg('assistant', 'two', new Date(t + 1))],
		});
		await mem.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'three', new Date(t + 2))],
		});

		const all = await mem.getMessages('t-1');
		expect(all.map(textOf)).toEqual(['one', 'two', 'three']);
	});

	it('upsert by id preserves identity (re-saving the same id does not duplicate)', async () => {
		const mem = new InMemoryMemory();
		const original = makeMsg('user', 'original');
		await mem.saveMessages({ threadId: 't-1', resourceId: 'u-1', messages: [original] });

		const edited: AgentDbMessage = {
			id: original.id,
			createdAt: original.createdAt,
			role: 'user',
			content: [{ type: 'text', text: 'edited' }],
		};
		await mem.saveMessages({ threadId: 't-1', resourceId: 'u-1', messages: [edited] });

		const all = await mem.getMessages('t-1');
		expect(all).toHaveLength(1);
		expect(textOf(all[0])).toBe('edited');
	});

	it('filters by since (createdAt, id) keyset', async () => {
		const mem = new InMemoryMemory();
		const t = Date.now();
		await mem.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [
				makeMsg('user', 'a', new Date(t)),
				makeMsg('assistant', 'b', new Date(t + 1)),
				makeMsg('user', 'c', new Date(t + 2)),
			],
		});

		const all = await mem.getMessages('t-1');

		const tail = await mem.getMessages('t-1', {
			since: { sinceCreatedAt: all[0].createdAt, sinceMessageId: all[0].id },
		});
		expect(tail.map(textOf)).toEqual(['b', 'c']);

		const empty = await mem.getMessages('t-1', {
			since: { sinceCreatedAt: all[2].createdAt, sinceMessageId: all[2].id },
		});
		expect(empty).toEqual([]);
	});

	it('keyset since includes rows sharing createdAt with the anchor when id is greater', async () => {
		const mem = new InMemoryMemory();
		const at = new Date();
		const m1 = makeMsg('user', 'a', at);
		const m2 = makeMsg('user', 'b', at);
		await mem.saveMessages({ threadId: 't-1', resourceId: 'u-1', messages: [m1, m2] });

		const [low, high] = [m1, m2].sort((a, b) => (a.id < b.id ? -1 : 1));
		const tail = await mem.getMessages('t-1', {
			since: { sinceCreatedAt: low.createdAt, sinceMessageId: low.id },
		});
		expect(tail).toHaveLength(1);
		expect(tail[0].id).toBe(high.id);
	});
});
@@ -1,221 +0,0 @@
import type { AgentDbMessage, Message } from '../../types/sdk/message';
import { InMemoryMemory } from '../memory-store';

describe('InMemoryMemory working memory', () => {
	it('returns null for unknown key', async () => {
		const mem = new InMemoryMemory();
		expect(
			await mem.getWorkingMemory({
				threadId: 'thread-x',
				resourceId: 'unknown',
				scope: 'resource',
			}),
		).toBeNull();
	});

	it('saves and retrieves working memory keyed by resourceId', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory(
			{ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' },
			'# Context\n- Name: Alice',
		);
		expect(
			await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' }),
		).toBe('# Context\n- Name: Alice');
	});

	it('overwrites on subsequent save', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory(
			{ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' },
			'v1',
		);
		await mem.saveWorkingMemory(
			{ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' },
			'v2',
		);
		expect(
			await mem.getWorkingMemory({ threadId: 'thread-1', resourceId: 'user-1', scope: 'resource' }),
		).toBe('v2');
	});

	it('isolates by resourceId (resource scope)', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory(
			{ threadId: 'thread-a', resourceId: 'user-1', scope: 'resource' },
			'Alice data',
		);
		await mem.saveWorkingMemory(
			{ threadId: 'thread-b', resourceId: 'user-2', scope: 'resource' },
			'Bob data',
		);
		expect(
			await mem.getWorkingMemory({ threadId: 'thread-a', resourceId: 'user-1', scope: 'resource' }),
		).toBe('Alice data');
		expect(
			await mem.getWorkingMemory({ threadId: 'thread-b', resourceId: 'user-2', scope: 'resource' }),
		).toBe('Bob data');
	});

	it('returns null for unknown threadId (thread scope)', async () => {
		const mem = new InMemoryMemory();
		expect(await mem.getWorkingMemory({ threadId: 'unknown', scope: 'thread' })).toBeNull();
	});

	it('saves and retrieves working memory keyed by threadId', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'thread-1', scope: 'thread' }, '# Thread Notes');
		expect(await mem.getWorkingMemory({ threadId: 'thread-1', scope: 'thread' })).toBe(
			'# Thread Notes',
		);
	});

	it('isolates by threadId (thread scope)', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'thread-1', scope: 'thread' }, 'data for thread 1');
		await mem.saveWorkingMemory({ threadId: 'thread-2', scope: 'thread' }, 'data for thread 2');
		expect(await mem.getWorkingMemory({ threadId: 'thread-1', scope: 'thread' })).toBe(
			'data for thread 1',
		);
		expect(await mem.getWorkingMemory({ threadId: 'thread-2', scope: 'thread' })).toBe(
			'data for thread 2',
		);
	});

	it('isolates entries by scope when threadId and resourceId match', async () => {
		const mem = new InMemoryMemory();
		await mem.saveWorkingMemory({ threadId: 'shared-id', scope: 'thread' }, 'thread memory');
		await mem.saveWorkingMemory(
			{ threadId: 'thread-1', resourceId: 'shared-id', scope: 'resource' },
			'resource memory',
		);

		expect(await mem.getWorkingMemory({ threadId: 'shared-id', scope: 'thread' })).toBe(
			'thread memory',
		);
		expect(
			await mem.getWorkingMemory({
				threadId: 'thread-1',
				resourceId: 'shared-id',
				scope: 'resource',
			}),
		).toBe('resource memory');
	});
});

// ---------------------------------------------------------------------------
// Message persistence — createdAt correctness
// ---------------------------------------------------------------------------

function makeDbMsg(id: string, createdAt: Date, text: string): AgentDbMessage {
	return { id, createdAt, role: 'user', content: [{ type: 'text', text }] };
}

describe('InMemoryMemory — message createdAt', () => {
	it('before filter uses each message createdAt, not a shared batch timestamp', async () => {
		const mem = new InMemoryMemory();

		// Use dates clearly in the past so the batch wall-clock time (≈ now)
		// never accidentally falls inside the range we're filtering.
		const t1 = new Date('2020-01-01T00:00:01.000Z');
		const t2 = new Date('2020-01-01T00:00:02.000Z');
		const t3 = new Date('2020-01-01T00:00:03.000Z');

		await mem.saveMessages({
			threadId: 't1',
			messages: [
				makeDbMsg('m1', t1, 'first'),
				makeDbMsg('m2', t2, 'second'),
				makeDbMsg('m3', t3, 'third'),
			],
		});

		// before: t3 should return only the two earlier messages
		const result = await mem.getMessages('t1', { before: t3 });

		// Pre-fix: saveMessages stores StoredMessage.createdAt = new Date() (wall clock,
		// much later than t3), so the before filter excludes all messages → length 0.
		// Post-fix: each StoredMessage.createdAt = dbMsg.createdAt, so t1 and t2 pass.
		expect(result).toHaveLength(2);
		expect(result[0].id).toBe('m1');
		expect(result[1].id).toBe('m2');
	});

	it('getMessages returns createdAt from the stored record (consistent with before filter)', async () => {
		const mem = new InMemoryMemory();

		const t1 = new Date('2020-06-01T10:00:00.000Z');
		const t2 = new Date('2020-06-01T10:00:01.000Z');

		await mem.saveMessages({
			threadId: 't1',
			messages: [makeDbMsg('a', t1, 'alpha'), makeDbMsg('b', t2, 'beta')],
		});

		const loaded = await mem.getMessages('t1');

		// Pre-fix: getMessages returns s.message whose createdAt is from toDbMessage
		// (correct), but StoredMessage.createdAt is 'now' — the two are inconsistent.
		// Post-fix: both use the same authoritative value, so this is always consistent.
		expect(loaded[0].createdAt).toBeInstanceOf(Date);
		expect(loaded[0].createdAt.getTime()).toBe(t1.getTime());
		expect(loaded[1].createdAt).toBeInstanceOf(Date);
		expect(loaded[1].createdAt.getTime()).toBe(t2.getTime());
	});
});

// ---------------------------------------------------------------------------
// Upsert contract
// ---------------------------------------------------------------------------

describe('InMemoryMemory — saveMessages upsert by id', () => {
	it('upserts by id (no duplicate rows after a re-save)', async () => {
		const mem = new InMemoryMemory();
		const t1 = new Date('2020-01-01T00:00:01.000Z');

		await mem.saveMessages({
			threadId: 't1',
			messages: [makeDbMsg('msg-1', t1, 'original')],
		});

		const updated = { ...makeDbMsg('msg-1', t1, 'updated content') };
		await mem.saveMessages({ threadId: 't1', messages: [updated] });

		const result = await mem.getMessages('t1');
		expect(result).toHaveLength(1);
		expect(((result[0] as Message).content[0] as { type: string; text: string }).text).toBe(
			'updated content',
		);
	});

	it('preserves insertion order on upsert', async () => {
		const mem = new InMemoryMemory();
		const t1 = new Date('2020-01-01T00:00:01.000Z');
		const t2 = new Date('2020-01-01T00:00:02.000Z');
		const t3 = new Date('2020-01-01T00:00:03.000Z');

		await mem.saveMessages({
			threadId: 't1',
			messages: [
				makeDbMsg('m1', t1, 'first'),
				makeDbMsg('m2', t2, 'second'),
				makeDbMsg('m3', t3, 'third'),
			],
		});

		// Update m2 in place
		await mem.saveMessages({
			threadId: 't1',
			messages: [makeDbMsg('m2', t2, 'second-updated')],
		});

		const result = await mem.getMessages('t1');
		expect(result).toHaveLength(3);
		// Original order preserved
		expect(result[0].id).toBe('m1');
		expect(result[1].id).toBe('m2');
		expect(result[2].id).toBe('m3');
		// Updated content
		expect(((result[1] as Message).content[0] as { text: string }).text).toBe('second-updated');
	});
});
@@ -1,304 +0,0 @@
import {
	OBSERVATION_SCHEMA_VERSION,
	type NewObservation,
	type ObservationCursor,
} from '../../types/sdk/observation';
import { InMemoryMemory } from '../memory-store';

function makeRow(overrides: Partial<NewObservation> = {}): NewObservation {
	return {
		scopeKind: 'thread',
		scopeId: 't-1',
		kind: 'observation',
		payload: { text: 'hello' },
		durationMs: null,
		schemaVersion: OBSERVATION_SCHEMA_VERSION,
		createdAt: new Date(),
		...overrides,
	};
}

describe('InMemoryMemory — observations', () => {
	it('appends rows with assigned ids', async () => {
		const mem = new InMemoryMemory();
		const persisted = await mem.appendObservations([makeRow(), makeRow(), makeRow()]);

		expect(persisted).toHaveLength(3);
		const ids = persisted.map((r) => r.id);
		expect(new Set(ids).size).toBe(3);
		expect(ids.every((id) => typeof id === 'string' && id.length > 0)).toBe(true);
	});

	it('getObservations returns rows in (createdAt, id) ascending', async () => {
		const mem = new InMemoryMemory();
		const t = Date.now();
		await mem.appendObservations([
			makeRow({ payload: 'first', createdAt: new Date(t) }),
			makeRow({ payload: 'second', createdAt: new Date(t + 1) }),
		]);
		const rows = await mem.getObservations({ scopeKind: 'thread', scopeId: 't-1' });
		expect(rows.map((r) => r.payload)).toEqual(['first', 'second']);
	});

	it('filters by since (keyset), kindIs, schemaVersionAtMost, limit', async () => {
		const mem = new InMemoryMemory();
		const t = Date.now();
		const [r1, r2, r3, r4] = await mem.appendObservations([
			makeRow({ kind: 'observation', payload: 'one', createdAt: new Date(t) }),
			makeRow({ kind: 'summary', payload: 'mid', createdAt: new Date(t + 1) }),
			makeRow({
				kind: 'observation',
				payload: 'two',
				schemaVersion: 99,
				createdAt: new Date(t + 2),
			}),
			makeRow({ kind: 'observation', payload: 'three', createdAt: new Date(t + 3) }),
		]);

		expect(
			(
				await mem.getObservations({
					scopeKind: 'thread',
					scopeId: 't-1',
					since: { sinceCreatedAt: r1.createdAt, sinceObservationId: r1.id },
				})
			).map((r) => r.payload),
		).toEqual(['mid', 'two', 'three']);

		expect(
			(await mem.getObservations({ scopeKind: 'thread', scopeId: 't-1', kindIs: 'summary' })).map(
				(r) => r.payload,
			),
		).toEqual(['mid']);

		expect(
			(
				await mem.getObservations({
					scopeKind: 'thread',
					scopeId: 't-1',
					schemaVersionAtMost: OBSERVATION_SCHEMA_VERSION,
				})
			).map((r) => r.payload),
		).toEqual(['one', 'mid', 'three']);

		expect(
			(await mem.getObservations({ scopeKind: 'thread', scopeId: 't-1', limit: 2 })).map(
				(r) => r.payload,
			),
		).toEqual(['one', 'mid']);

		expect(r2.id).toBeDefined();
		expect(r3.id).toBeDefined();
		expect(r4.id).toBeDefined();
	});

	it('keyset since includes rows sharing createdAt with the anchor when id is greater', async () => {
		const mem = new InMemoryMemory();
		const t = new Date();
		const [first, second] = await mem.appendObservations([
			makeRow({ payload: 'a', createdAt: t }),
			makeRow({ payload: 'b', createdAt: t }),
		]);
		// Sort the two by id so we know which is the anchor.
		const [low, high] = [first, second].sort((a, b) => (a.id < b.id ? -1 : 1));
		const rows = await mem.getObservations({
			scopeKind: 'thread',
			scopeId: 't-1',
			since: { sinceCreatedAt: low.createdAt, sinceObservationId: low.id },
		});
		expect(rows).toHaveLength(1);
		expect(rows[0].id).toBe(high.id);
	});

	it('deleteObservations removes the named rows and is idempotent', async () => {
		const mem = new InMemoryMemory();
		const [r1, r2] = await mem.appendObservations([makeRow(), makeRow()]);

		await mem.deleteObservations([r1.id, 'unknown-id']);
		await mem.deleteObservations([r1.id]);

		const remaining = await mem.getObservations({ scopeKind: 'thread', scopeId: 't-1' });
		expect(remaining.map((r) => r.id)).toEqual([r2.id]);
	});

	it('deleteObservations is a no-op for an empty id list', async () => {
		const mem = new InMemoryMemory();
		const [r1] = await mem.appendObservations([makeRow()]);
		await mem.deleteObservations([]);
		const rows = await mem.getObservations({ scopeKind: 'thread', scopeId: 't-1' });
		expect(rows.map((r) => r.id)).toEqual([r1.id]);
	});

	it('deleteThread removes only the deleted thread observation state', async () => {
		const mem = new InMemoryMemory();
		await mem.appendObservations([
			makeRow({ scopeKind: 'thread', scopeId: 't-1', payload: 'deleted-thread' }),
			makeRow({ scopeKind: 'thread', scopeId: 't-2', payload: 'other-thread' }),
			makeRow({ scopeKind: 'resource', scopeId: 't-1', payload: 'resource-scope' }),
		]);
		await mem.setCursor({
			scopeKind: 'thread',
			scopeId: 't-1',
			lastObservedMessageId: 'm-1',
			lastObservedAt: new Date(),
			updatedAt: new Date(),
		});
		await mem.acquireObservationLock('thread', 't-1', { ttlMs: 60_000, holderId: 'A' });

		await mem.deleteThread('t-1');

		await expect(mem.getObservations({ scopeKind: 'thread', scopeId: 't-1' })).resolves.toEqual([]);
		await expect(mem.getCursor('thread', 't-1')).resolves.toBeNull();
		await expect(
			mem.acquireObservationLock('thread', 't-1', { ttlMs: 60_000, holderId: 'B' }),
		).resolves.toEqual(expect.objectContaining({ holderId: 'B' }));
		await expect(mem.getObservations({ scopeKind: 'thread', scopeId: 't-2' })).resolves.toEqual([
			expect.objectContaining({ payload: 'other-thread' }),
		]);
		await expect(mem.getObservations({ scopeKind: 'resource', scopeId: 't-1' })).resolves.toEqual([
			expect.objectContaining({ payload: 'resource-scope' }),
		]);
	});
});

describe('InMemoryMemory — cursors', () => {
	it('returns null when no cursor has been written', async () => {
		const mem = new InMemoryMemory();
		expect(await mem.getCursor('thread', 't-1')).toBeNull();
	});

	it('round-trips cursor-advance fields and overwrites on re-set', async () => {
		const mem = new InMemoryMemory();
		const first: ObservationCursor = {
			scopeKind: 'thread',
			scopeId: 't-1',
			lastObservedMessageId: 'm-1',
			lastObservedAt: new Date(2026, 0, 1, 0, 0, 0, 5),
			updatedAt: new Date(2026, 0, 1),
		};
		await mem.setCursor(first);
		expect(await mem.getCursor('thread', 't-1')).toEqual(first);

		const second: ObservationCursor = {
			...first,
			lastObservedMessageId: 'm-2',
			lastObservedAt: new Date(2026, 0, 2),
			updatedAt: new Date(),
		};
		await mem.setCursor(second);
		expect(await mem.getCursor('thread', 't-1')).toEqual(second);
	});

	it('isolates cursors by scope', async () => {
		const mem = new InMemoryMemory();
		await mem.setCursor({
			scopeKind: 'thread',
			scopeId: 'A',
			lastObservedMessageId: 'm-A',
			lastObservedAt: new Date(),
			updatedAt: new Date(),
		});
		expect(await mem.getCursor('thread', 'B')).toBeNull();
	});

	it('returns cursor copies so callers cannot mutate stored state', async () => {
		const mem = new InMemoryMemory();
		const cursor: ObservationCursor = {
			scopeKind: 'thread',
			scopeId: 't-1',
			lastObservedMessageId: 'm-1',
			lastObservedAt: new Date(2026, 0, 1),
			updatedAt: new Date(2026, 0, 2),
		};
		await mem.setCursor(cursor);

		const loaded = await mem.getCursor('thread', 't-1');
		expect(loaded).not.toBeNull();
		loaded!.lastObservedMessageId = 'mutated';
		loaded!.lastObservedAt.setTime(new Date(2030, 0, 1).getTime());

		expect(await mem.getCursor('thread', 't-1')).toEqual(cursor);
	});
});

describe('InMemoryMemory — observation locks', () => {
	it('grants the lock when free and refuses a different holder while held', async () => {
		const mem = new InMemoryMemory();
		const a = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'A',
		});
		expect(a).not.toBeNull();

		const b = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'B',
		});
		expect(b).toBeNull();
	});

	it('reclaims an expired lock for a new holder', async () => {
		const mem = new InMemoryMemory();
		const a = await mem.acquireObservationLock('thread', 't-1', { ttlMs: 1, holderId: 'A' });
		expect(a).not.toBeNull();

		await new Promise((resolve) => setTimeout(resolve, 5));

		const b = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'B',
		});
		expect(b).not.toBeNull();
		expect(b?.holderId).toBe('B');
	});

	it('lets the same holder re-acquire (refresh) an active lock', async () => {
		const mem = new InMemoryMemory();
		const first = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'A',
		});
		const second = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'A',
		});
		expect(first).not.toBeNull();
		expect(second).not.toBeNull();
		expect(second?.heldUntil.getTime()).toBeGreaterThanOrEqual(first!.heldUntil.getTime());
	});

	it('release frees the lock and tolerates double-release', async () => {
		const mem = new InMemoryMemory();
		const a = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'A',
		});
		await mem.releaseObservationLock(a!);
		await mem.releaseObservationLock(a!);

		const b = await mem.acquireObservationLock('thread', 't-1', {
			ttlMs: 60_000,
			holderId: 'B',
		});
		expect(b).not.toBeNull();
	});

	it('release by stale handle does not displace a fresh holder', async () => {
		const mem = new InMemoryMemory();
		const stale = await mem.acquireObservationLock('thread', 't-1', { ttlMs: 1, holderId: 'A' });
		await new Promise((resolve) => setTimeout(resolve, 5));
|
||||
const fresh = await mem.acquireObservationLock('thread', 't-1', {
|
||||
ttlMs: 60_000,
|
||||
holderId: 'B',
|
||||
});
|
||||
expect(fresh).not.toBeNull();
|
||||
|
||||
await mem.releaseObservationLock(stale!);
|
||||
|
||||
const bClaim = await mem.acquireObservationLock('thread', 't-1', {
|
||||
ttlMs: 60_000,
|
||||
holderId: 'C',
|
||||
});
|
||||
expect(bClaim).toBeNull();
|
||||
});
|
||||
});
|
||||
|
|
@@ -1,355 +0,0 @@
import type { LanguageModel } from 'ai';

import { createModel } from '../model-factory';

type ProviderOpts = {
	apiKey?: string;
	baseURL?: string;
	fetch?: typeof globalThis.fetch;
	headers?: Record<string, string>;
};

// All providers are mocked via jest.mock so require() inside the registry entries
// returns these stubs instead of the real packages.
jest.mock('@ai-sdk/anthropic', () => ({
	createAnthropic: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'anthropic',
		modelId: model,
		apiKey: opts?.apiKey,
		baseURL: opts?.baseURL,
		fetch: opts?.fetch,
		headers: opts?.headers,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/openai', () => ({
	createOpenAI: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'openai',
		modelId: model,
		apiKey: opts?.apiKey,
		baseURL: opts?.baseURL,
		fetch: opts?.fetch,
		headers: opts?.headers,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/google', () => ({
	createGoogleGenerativeAI: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'google',
		modelId: model,
		apiKey: opts?.apiKey,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/xai', () => ({
	createXai: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'xai',
		modelId: model,
		apiKey: opts?.apiKey,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/groq', () => ({
	createGroq: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'groq',
		modelId: model,
		apiKey: opts?.apiKey,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/deepseek', () => ({
	createDeepSeek: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'deepseek',
		modelId: model,
		apiKey: opts?.apiKey,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/cohere', () => ({
	createCohere: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'cohere',
		modelId: model,
		apiKey: opts?.apiKey,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/mistral', () => ({
	createMistral: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'mistral',
		modelId: model,
		apiKey: opts?.apiKey,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/gateway', () => ({
	createGateway: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'vercel',
		modelId: model,
		apiKey: opts?.apiKey,
		baseURL: opts?.baseURL,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/azure', () => ({
	createAzure:
		(opts?: { apiKey?: string; resourceName?: string; apiVersion?: string; baseURL?: string }) =>
		(model: string) => ({
			provider: 'azure-openai',
			modelId: model,
			apiKey: opts?.apiKey,
			resourceName: opts?.resourceName,
			apiVersion: opts?.apiVersion,
			specificationVersion: 'v3',
		}),
}));

jest.mock('@openrouter/ai-sdk-provider', () => ({
	createOpenRouter: (opts?: ProviderOpts) => (model: string) => ({
		provider: 'openrouter',
		modelId: model,
		apiKey: opts?.apiKey,
		baseURL: opts?.baseURL,
		fetch: opts?.fetch,
		specificationVersion: 'v3',
	}),
}));

jest.mock('@ai-sdk/amazon-bedrock', () => ({
	createAmazonBedrock:
		(opts?: {
			region?: string;
			accessKeyId?: string;
			secretAccessKey?: string;
			sessionToken?: string;
		}) =>
		(model: string) => ({
			provider: 'aws-bedrock',
			modelId: model,
			region: opts?.region,
			accessKeyId: opts?.accessKeyId,
			secretAccessKey: opts?.secretAccessKey,
			specificationVersion: 'v3',
		}),
}));

const mockProxyAgent = jest.fn();
jest.mock('undici', () => ({
	ProxyAgent: mockProxyAgent,
}));

describe('createModel', () => {
	const originalEnv = process.env;

	beforeEach(() => {
		process.env = { ...originalEnv };
		delete process.env.HTTPS_PROXY;
		delete process.env.HTTP_PROXY;
		mockProxyAgent.mockClear();
	});

	afterAll(() => {
		process.env = originalEnv;
	});

	it('should accept a string config', () => {
		const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
		expect(model.provider).toBe('anthropic');
		expect(model.modelId).toBe('claude-sonnet-4-5');
	});

	it('should accept an object config with baseURL', () => {
		const model = createModel({
			id: 'openai/gpt-4o',
			apiKey: 'sk-test',
			baseURL: 'https://custom.endpoint.com/v1',
		}) as unknown as Record<string, unknown>;
		expect(model.provider).toBe('openai');
		expect(model.baseURL).toBe('https://custom.endpoint.com/v1');
	});

	it('should pass through a prebuilt LanguageModel', () => {
		const prebuilt = {
			doGenerate: jest.fn(),
			doStream: jest.fn(),
			specificationVersion: 'v2' as const,
			modelId: 'custom-model',
			provider: 'custom',
			defaultObjectGenerationMode: undefined,
		} as unknown as LanguageModel;

		const result = createModel(prebuilt);
		expect(result).toBe(prebuilt);
	});

	it('should handle model IDs with multiple slashes', () => {
		const model = createModel('openai/ft:gpt-4o:my-org:custom:abc123') as unknown as Record<
			string,
			unknown
		>;
		expect(model.provider).toBe('openai');
		expect(model.modelId).toBe('ft:gpt-4o:my-org:custom:abc123');
	});

	it('should not pass fetch when no proxy env vars are set', () => {
		const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
		expect(model.fetch).toBeUndefined();
	});

	it('should pass proxy-aware fetch when HTTPS_PROXY is set', () => {
		process.env.HTTPS_PROXY = 'http://proxy:8080';
		const model = createModel('anthropic/claude-sonnet-4-5') as unknown as Record<string, unknown>;
		expect(model.fetch).toBeInstanceOf(Function);
		expect(mockProxyAgent).toHaveBeenCalledWith('http://proxy:8080');
	});

	it('should pass proxy-aware fetch when HTTP_PROXY is set', () => {
		process.env.HTTP_PROXY = 'http://proxy:9090';
		const model = createModel('openai/gpt-4o') as unknown as Record<string, unknown>;
		expect(model.fetch).toBeInstanceOf(Function);
		expect(mockProxyAgent).toHaveBeenCalledWith('http://proxy:9090');
	});

	it('should forward custom headers to the provider factory', () => {
		const model = createModel({
			id: 'anthropic/claude-sonnet-4-5',
			apiKey: 'sk-test',
			headers: { 'x-proxy-auth': 'Bearer abc', 'anthropic-beta': 'tools-2024' },
		}) as unknown as Record<string, unknown>;
		expect(model.headers).toEqual({
			'x-proxy-auth': 'Bearer abc',
			'anthropic-beta': 'tools-2024',
		});
	});

	it('should prefer HTTPS_PROXY over HTTP_PROXY', () => {
		process.env.HTTPS_PROXY = 'http://https-proxy:8080';
		process.env.HTTP_PROXY = 'http://http-proxy:9090';
		createModel('anthropic/claude-sonnet-4-5');
		expect(mockProxyAgent).toHaveBeenCalledWith('http://https-proxy:8080');
	});

	describe('standard providers', () => {
		it.each(['groq', 'deepseek', 'cohere', 'mistral', 'google', 'xai'])(
			'should create model for %s',
			(provider) => {
				const model = createModel({
					id: `${provider}/some-model`,
					apiKey: 'test-key',
				}) as unknown as Record<string, unknown>;
				expect(model.provider).toBe(provider);
				expect(model.modelId).toBe('some-model');
				expect(model.apiKey).toBe('test-key');
			},
		);

		it('should create model for vercel gateway', () => {
			const model = createModel({
				id: 'vercel/gpt-4o',
				apiKey: 'vk-test',
			}) as unknown as Record<string, unknown>;
			expect(model.provider).toBe('vercel');
			expect(model.modelId).toBe('gpt-4o');
		});

		it('should create model for openrouter', () => {
			const model = createModel({
				id: 'openrouter/openai/gpt-4o',
				apiKey: 'or-test',
			}) as unknown as Record<string, unknown>;
			expect(model.provider).toBe('openrouter');
			expect(model.modelId).toBe('openai/gpt-4o');
			expect(model.apiKey).toBe('or-test');
		});
	});

	describe('azure-openai', () => {
		it('should create model with resourceName', () => {
			const model = createModel({
				id: 'azure-openai/gpt-4o',
				apiKey: 'az-key',
				resourceName: 'my-resource',
				apiVersion: '2024-02-01',
			}) as unknown as Record<string, unknown>;
			expect(model.provider).toBe('azure-openai');
			expect(model.modelId).toBe('gpt-4o');
			expect(model.apiKey).toBe('az-key');
			expect(model.resourceName).toBe('my-resource');
			expect(model.apiVersion).toBe('2024-02-01');
		});

		it('should throw if resourceName is missing', () => {
			expect(() => createModel({ id: 'azure-openai/gpt-4o', apiKey: 'az-key' })).toThrow(
				/Invalid credentials for provider "azure-openai"/,
			);
		});
	});

	describe('aws-bedrock', () => {
		it('should create model with AWS credentials', () => {
			const model = createModel({
				id: 'aws-bedrock/amazon.titan-text-lite-v1',
				region: 'us-east-1',
				accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
				secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
			}) as unknown as Record<string, unknown>;
			expect(model.provider).toBe('aws-bedrock');
			expect(model.modelId).toBe('amazon.titan-text-lite-v1');
			expect(model.region).toBe('us-east-1');
			expect(model.accessKeyId).toBe('AKIAIOSFODNN7EXAMPLE');
		});

		it('should throw if region is missing', () => {
			expect(() =>
				createModel({
					id: 'aws-bedrock/amazon.titan-text-lite-v1',
					accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
					secretAccessKey: 'secret',
				}),
			).toThrow(/Invalid credentials for provider "aws-bedrock"/);
		});

		it('should throw if accessKeyId is missing', () => {
			expect(() =>
				createModel({
					id: 'aws-bedrock/amazon.titan-text-lite-v1',
					region: 'us-east-1',
					secretAccessKey: 'secret',
				}),
			).toThrow(/Invalid credentials for provider "aws-bedrock"/);
		});
	});

	describe('unsupported provider', () => {
		it('should throw for ollama', () => {
			expect(() => createModel('ollama/llama3')).toThrow(/Unsupported provider: "ollama"/);
		});

		it('should include supported providers in the error message', () => {
			expect(() => createModel('unknown-provider/some-model')).toThrow(/Supported providers:/);
		});

		it('should throw when no model ID is provided', () => {
			expect(() => createModel('')).toThrow(/Model ID is required/);
		});

		it('should throw when model has no slash', () => {
			expect(() => createModel('anthropic-only')).toThrow(/expected "provider\/model-name"/);
		});
	});
});
@@ -1,170 +0,0 @@
import type { AgentDbMessage, AgentMessage, Message } from '../../types/sdk/message';
import { InMemoryMemory } from '../memory-store';
import { advanceCursor, getDeltaSinceCursor } from '../observation-cursor';

function makeMsg(role: 'user' | 'assistant', text: string, createdAt = new Date()): AgentDbMessage {
	return {
		id: crypto.randomUUID(),
		createdAt,
		role,
		content: [{ type: 'text', text }],
	};
}

function textOf(msg: AgentMessage): string {
	const m = msg as Message;
	return (m.content[0] as { text: string }).text;
}

describe('getDeltaSinceCursor', () => {
	it('returns the full thread history when no cursor exists', async () => {
		const store = new InMemoryMemory();
		const t = Date.now();
		await store.saveThread({ id: 't-1', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one', new Date(t)), makeMsg('assistant', 'two', new Date(t + 1))],
		});

		const { messages, cursor } = await getDeltaSinceCursor(store, 'thread', 't-1');
		expect(cursor).toBeNull();
		expect(messages.map(textOf)).toEqual(['one', 'two']);
	});

	it('returns only messages strictly after the cursor keyset', async () => {
		const store = new InMemoryMemory();
		const t = Date.now();
		await store.saveThread({ id: 't-1', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one', new Date(t)), makeMsg('assistant', 'two', new Date(t + 1))],
		});
		const [first] = await store.getMessages('t-1');
		await store.setCursor({
			scopeKind: 'thread',
			scopeId: 't-1',
			lastObservedMessageId: first.id,
			lastObservedAt: first.createdAt,
			updatedAt: new Date(),
		});
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'three', new Date(t + 2))],
		});

		const { messages, cursor } = await getDeltaSinceCursor(store, 'thread', 't-1');
		expect(cursor?.lastObservedMessageId).toBe(first.id);
		expect(messages.map(textOf)).toEqual(['two', 'three']);
	});

	it('returns an empty delta when the cursor is at the latest message', async () => {
		const store = new InMemoryMemory();
		await store.saveThread({ id: 't-1', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one')],
		});
		const [only] = await store.getMessages('t-1');
		await store.setCursor({
			scopeKind: 'thread',
			scopeId: 't-1',
			lastObservedMessageId: only.id,
			lastObservedAt: only.createdAt,
			updatedAt: new Date(),
		});

		const { messages } = await getDeltaSinceCursor(store, 'thread', 't-1');
		expect(messages).toEqual([]);
	});

	it('isolates cursors by scope', async () => {
		const store = new InMemoryMemory();
		const t = Date.now();
		await store.saveThread({ id: 't-A', resourceId: 'u-1' });
		await store.saveThread({ id: 't-B', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-A',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'a-1', new Date(t)), makeMsg('user', 'a-2', new Date(t + 1))],
		});
		await store.saveMessages({
			threadId: 't-B',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'b-1', new Date(t + 2))],
		});
		const aMessages = await store.getMessages('t-A');
		await store.setCursor({
			scopeKind: 'thread',
			scopeId: 't-A',
			lastObservedMessageId: aMessages[0].id,
			lastObservedAt: aMessages[0].createdAt,
			updatedAt: new Date(),
		});

		const aDelta = await getDeltaSinceCursor(store, 'thread', 't-A');
		expect(aDelta.messages.map(textOf)).toEqual(['a-2']);

		// Thread B has no cursor; should still return its full history.
		const bDelta = await getDeltaSinceCursor(store, 'thread', 't-B');
		expect(bDelta.cursor).toBeNull();
		expect(bDelta.messages.map(textOf)).toEqual(['b-1']);
	});
});

describe('advanceCursor', () => {
	it('writes a cursor row matching the message id and createdAt', async () => {
		const store = new InMemoryMemory();
		await store.saveThread({ id: 't-1', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one')],
		});
		const [only] = await store.getMessages('t-1');

		const written = await advanceCursor(store, 'thread', 't-1', only);
		expect(written.lastObservedMessageId).toBe(only.id);
		expect(written.lastObservedAt.getTime()).toBe(only.createdAt.getTime());

		const reread = await store.getCursor('thread', 't-1');
		expect(reread?.lastObservedMessageId).toBe(only.id);
		expect(reread?.lastObservedAt.getTime()).toBe(only.createdAt.getTime());
	});

	it('uses the provided `now` for updatedAt', async () => {
		const store = new InMemoryMemory();
		await store.saveThread({ id: 't-1', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one')],
		});
		const [only] = await store.getMessages('t-1');
		const now = new Date('2026-05-05T12:00:00Z');

		const cursor = await advanceCursor(store, 'thread', 't-1', only, now);
		expect(cursor.updatedAt.getTime()).toBe(now.getTime());
	});

	it('overwrites a prior cursor (advance is upsert, not append)', async () => {
		const store = new InMemoryMemory();
		const t = Date.now();
		await store.saveThread({ id: 't-1', resourceId: 'u-1' });
		await store.saveMessages({
			threadId: 't-1',
			resourceId: 'u-1',
			messages: [makeMsg('user', 'one', new Date(t)), makeMsg('user', 'two', new Date(t + 1))],
		});
		const [first, second] = await store.getMessages('t-1');

		await advanceCursor(store, 'thread', 't-1', first);
		await advanceCursor(store, 'thread', 't-1', second);

		const reread = await store.getCursor('thread', 't-1');
		expect(reread?.lastObservedMessageId).toBe(second.id);
	});
});
@@ -1,97 +0,0 @@
import { InMemoryMemory } from '../memory-store';
import { withObservationLock } from '../observation-lock';

describe('withObservationLock', () => {
	it('runs fn and returns its value when the lock is free', async () => {
		const store = new InMemoryMemory();
		const result = await withObservationLock(
			store,
			'thread',
			't-1',
			{ ttlMs: 60_000 },
			async () => await Promise.resolve(42),
		);
		expect(result).toEqual({ status: 'ran', value: 42 });
	});

	it('skips when another holder is currently holding the lock', async () => {
		const store = new InMemoryMemory();
		await store.acquireObservationLock('thread', 't-1', { ttlMs: 60_000, holderId: 'external' });

		const fn = jest.fn().mockResolvedValue(undefined);
		const result = await withObservationLock(store, 'thread', 't-1', { ttlMs: 60_000 }, fn);

		expect(result).toEqual({ status: 'skipped' });
		expect(fn).not.toHaveBeenCalled();
	});

	it('releases the lock so a subsequent caller can acquire it', async () => {
		const store = new InMemoryMemory();
		await withObservationLock(
			store,
			'thread',
			't-1',
			{ ttlMs: 60_000 },
			async () => await Promise.resolve(),
		);
		const second = await withObservationLock(
			store,
			'thread',
			't-1',
			{ ttlMs: 60_000 },
			async () => await Promise.resolve('after'),
		);
		expect(second).toEqual({ status: 'ran', value: 'after' });
	});

	it('releases the lock even when fn throws', async () => {
		const store = new InMemoryMemory();
		const boom = new Error('boom');
		await expect(
			withObservationLock(store, 'thread', 't-1', { ttlMs: 60_000 }, async () => {
				await Promise.resolve();
				throw boom;
			}),
		).rejects.toBe(boom);

		// Lock should be released — a fresh acquire by a different holder succeeds.
		const followup = await withObservationLock(
			store,
			'thread',
			't-1',
			{ ttlMs: 60_000 },
			async () => await Promise.resolve('post-throw'),
		);
		expect(followup).toEqual({ status: 'ran', value: 'post-throw' });
	});

	it('tolerates the lock having already been released by the time fn returns', async () => {
		const store = new InMemoryMemory();
		const failing = {
			...store,
			releaseObservationLock: jest.fn().mockRejectedValue(new Error('already gone')),
		} as unknown as InMemoryMemory;
		Object.setPrototypeOf(failing, InMemoryMemory.prototype);

		const result = await withObservationLock(
			failing,
			'thread',
			't-1',
			{ ttlMs: 60_000 },
			async () => await Promise.resolve('done'),
		);
		expect(result).toEqual({ status: 'ran', value: 'done' });
	});

	it('passes the granted handle to fn', async () => {
		const store = new InMemoryMemory();
		const result = await withObservationLock(
			store,
			'thread',
			't-1',
			{ ttlMs: 60_000, holderId: 'caller-A' },
			async (handle) => await Promise.resolve(handle.holderId),
		);
		expect(result).toEqual({ status: 'ran', value: 'caller-A' });
	});
});