n8n and Notion: A Developer's Descent into the Automation Rabbit Hole
Table of Contents
- 1. Preamble
- 2. Low-Code and No-Code: A View from the Other Side
- 3. Setting Up n8n on NixOS
- 4. Notion as a Data Backend: The Good Parts First
- 5. Building the Workflow: API Plumbing with a Visual Interface
- 6. Introspecting the Runtime from the JSON
- 7. The V8/Node.js Decision
- 8. Zapier vs. n8n: What Each Actually Provides
- 9. The Vision That Would Have Been Interesting
- 10. Remaining Questions, Unresolved
- 11. Footnotes and Further Reading
1. Preamble
This article is not a tutorial. It is a field report from someone who wanted
to wire a Notion database to a Telegram bot, found n8n, and spent the next
two weeks reading source code, CVE advisories, and NixOS module patches instead.
The question I started with was simple: does n8n actually close the gap between "I want automation" and "I want to write a script for it"? Spoiler: it depends on which side of that gap you stand on.
The question I ended with was different: what does the V8/Node.js runtime actually buy us that justifies its costs? This article is the search for that answer. Each section peels back another layer, hoping to find the killer feature that makes the architecture make sense. The facts are presented without evaluative adjectives — the implications should be clear enough.
The article is written for people who have opinions about build systems and find the phrase "low-code platform" mildly suspicious. If you just want to ship a Zap, close the tab.
2. Low-Code and No-Code: A View from the Other Side
The pitch is compelling. Non-technical users can build automation workflows without writing code. Technical users get a visual canvas that saves time compared to gluing APIs together with curl. Both groups supposedly meet in the middle.
In practice, the category splits into two very different products sold under the same label.
2.1. What the category is solving
The actual problem is integration. Modern SaaS products each implement their own API, authentication scheme, and data model. To make two of them talk to each other requires:
- Auth (OAuth flows, API keys, token refresh)
- HTTP client code (request, error handling, retry)
- Data transformation (shape A → shape B)
- Scheduling or event triggering
- State persistence if needed
This is not intellectually interesting work. It is also not zero work. Low-code platforms offer a pre-built library of connectors that handle (1) and (2), a visual node graph for (3) and (4), and managed infrastructure for (5). The value proposition is real.
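As a sketch of what chore (3), data transformation, looks like in plain code — the names and shapes below are illustrative, not taken from any real API:

```javascript
// Shape A (a hypothetical tracker row) → Shape B (a hypothetical
// messenger payload). A pure function: trivially unit-testable.
function rowToMessage(row) {
  return {
    chat_id: row.notifyChannel,
    text: `${row.company}: status changed to ${row.status}`,
  };
}

const msg = rowToMessage({
  company: "Acme Corp",
  status: "Interview",
  notifyChannel: 42,
});
// msg.text → "Acme Corp: status changed to Interview"
```

Each low-code "node" ultimately wraps a function of this shape; the platform's value is that someone else wrote and maintains it.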
2.2. The cost structure most articles skip
| Concern | Zapier (Cloud) | n8n (Self-hosted) | Custom Script |
|---|---|---|---|
| Vendor lock-in | High | Medium | None |
| Portability | None | Low (JSON + runtime) | Full |
| Versionability | Limited | Limited | Git |
| Testability | Manual | Manual | Unit tests possible |
| CI integration | No | No 1 | Yes |
| Cost at scale | Per-task pricing | Infrastructure cost | Time |
| Debugging experience | Logs in UI | Logs in UI | Your tools |
2.3. The identity crisis: No-Code or Low-Code?
n8n's marketing often positions it as a "no-code" alternative to Zapier. The product reality is different.
No-Code aspects:                 Low-Code aspects:
─────────────────                ─────────────────
Visual canvas                    Code nodes (JS/Python)
Dropdown configuration           Expression syntax: {{ $json.field }}
Pre-built integrations           Custom nodes via npm
No programming required          Self-hosting (requires ops knowledge)
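The expression syntax is where the two columns meet. A minimal sketch of what an interpolator for `{{ $json.field }}` has to do — an illustration of the idea, not n8n's actual implementation:

```javascript
// Resolve {{ $json.key }} placeholders against the current item,
// the way a low-code expression layer must. Illustrative only.
function interpolate(template, $json) {
  return template.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g,
    (_, key) => String($json[key] ?? ""));
}

const out = interpolate("New status: {{ $json.status }}", { status: "Applied" });
// out → "New status: Applied"
```

The moment a user writes one of these expressions, they are programming — in a template dialect with no debugger, which is the identity crisis in miniature.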
The result is a product that markets to non-developers but provides features only developers would want — and then withholds the developer features (Git integration, CI, headless execution) that would make it genuinely useful for that audience.
The phrase "sweet spot between developers and non-developers" appears in n8n marketing. In practice, it may be a gap rather than a spot:
- Non-developers hit walls when the visual nodes don't cover their use case
- Developers get a visual canvas but lose their toolchain
This is not necessarily a failure — it may simply be that the "sweet spot" audience is smaller than anticipated, or that the product is in transition between two identities.
2.4. The Emacs parallel
Emacs is the canonical example of software where the distinction between
"user" and "developer" collapses. The configuration language is the extension
language is the runtime. You can inspect any function at runtime with C-h f,
modify it, and the change takes effect immediately without restart.
;; This is not pseudocode. Evaluating this block redefines a live function.
(defun my/greeting (name)
  (format "Hello, %s. You are now in the rabbit hole." name))

(my/greeting "n8n evaluator")
The point is not that everyone should use Emacs. The point is that malleable software — software that adapts to the user rather than forcing the user into predefined paths — is a real design principle with an established research history 2. Notion approaches this for a certain class of users. n8n claims to approach it for automation. Whether it succeeds is what this article is about.
3. Setting Up n8n on NixOS
Pragmatics first. I run NixOS on my server. The standard installation path is Docker, but NixOS has a native module.
# /etc/nixos/n8n.nix — the complete service configuration
_: {
services.n8n = {
enable = true;
environment = {
N8N_HOST = "127.0.0.1";
N8N_PORT = "5678";
N8N_PROTOCOL = "http";
WEBHOOK_URL = "https://n8n.wolfhard.net";
N8N_EDITOR_BASE_URL = "https://n8n.wolfhard.net";
};
};
services.nginx.virtualHosts."n8n.wolfhard.net" = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://127.0.0.1:5678";
proxyWebsockets = true;
};
};
}
Twenty lines, reverse proxy included. Compare to the official Docker Compose setup, which involves environment variable files, volume mounts, and explicit port mapping. The NixOS module handles systemd unit generation, service dependencies, and basic hardening automatically.
3.1. What the NixOS module actually does
The module at nixos/modules/services/misc/n8n.nix contains some systemd
hardening that is worth reading:
# From nixpkgs: nixos/modules/services/misc/n8n.nix
serviceConfig = {
NoNewPrivileges = "yes";
PrivateTmp = "yes";
PrivateDevices = "yes";
DynamicUser = "true";
ProtectSystem = "strict";
ProtectHome = "read-only";
RestrictNamespaces = "yes";
RestrictRealtime = "yes";
# ...
MemoryDenyWriteExecute = "no"; # v8 JIT requires W^X exception
};
The comment on MemoryDenyWriteExecute is doing a lot of work. A Node.js
process running V8 requires memory pages that are simultaneously writable and
executable for JIT compilation. This is a known trade-off in the
W^X security model — it is not configurable away.
DynamicUser = true means n8n runs under a dynamically allocated UID that may
differ between restarts. The state directory /var/lib/n8n handles persistence
via StateDirectory, but any manual permission management on files in that
directory will be fragile.
3.2. The patch that ships with every NixOS n8n installation
The module includes this block unconditionally when n8n is enabled:
# workaround for CVE-2025-68668
# https://github.com/NixOS/nixpkgs/pull/477422
services.n8n.environment = {
N8N_RUNNERS_ENABLED = "true";
N8N_NATIVE_PYTHON_RUNNER = "true";
};
These two environment variables enable the Task Runner architecture — a separate sandboxed process for executing JavaScript and Python code nodes. The CVE in question (CVE-2025-68668, CVSS 9.9) was a sandbox bypass in the Python execution path:
An authenticated user with permission to create or modify workflows can exploit this vulnerability to execute arbitrary commands on the host system running n8n, using the same privileges as the n8n process. — Cyera Research Labs
The Task Runner was introduced in n8n 1.111.0 as an optional feature precisely to address this class of vulnerability. It became the default in 2.0.0. The NixOS module applies it as a workaround for older packaged versions.
This is not unusual for a complex Node.js application, but it is an interesting detail to encounter before writing a single workflow. We note it and move on, because surely the killer feature is just around the corner.
4. Notion as a Data Backend: The Good Parts First
Before getting into n8n's execution model, the use case: I maintain a Notion database of job applications. Each row is a company, with columns for application date, contact, status, notes, and a URL. When a row is added or updated, I want a Telegram notification.
This is, on paper, about as simple as automation gets.
4.1. Why Notion at all
Notion publicly launched around 2018 after a private beta starting around 2016. The core primitive is "everything is a block, everything is a page." From this single abstraction, it builds databases, wikis, and task lists.
What makes Notion databases interesting from a data modeling perspective is where they sit on the spectrum:
Schemaless (MongoDB)          Notion Databases             Normalized RDBMS
|-----------------------------|----------------------------|----------------|
Any shape, no enforcement     Typed columns + free         Fixed schema,
                              page content per row         enforced FK
A Notion database row has typed columns (text, number, date, multi-select, relation) that give you type safety and UI affordances — a multi-select field renders as a tag picker, not a text input. But each row also is a page, so anything that does not fit the schema goes into the page body. This is genuinely useful. A job application with an unusual note does not require a schema change.
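The typed-column model is visible in the API payloads: each property value is nested under its type. The sketch below extracts plain values from objects shaped like the public API's responses — the nesting mirrors the real API, but the example data is hand-written, not a captured response:

```javascript
// Flatten Notion-style typed properties into plain values.
// Property shapes follow the public API's nesting; the data is invented.
function plainValue(prop) {
  switch (prop.type) {
    case "title":  return prop.title.map((t) => t.plain_text).join("");
    case "select": return prop.select ? prop.select.name : null;
    case "date":   return prop.date ? prop.date.start : null;
    default:       return null; // many more types exist: relation, multi_select, ...
  }
}

const status = plainValue({ type: "select", select: { name: "Applied" } });
// status → "Applied"
```

Every tool that consumes the Notion API ends up carrying a function like this; the typed nesting is the price of the schema guarantees.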
For technical users with org-mode habits, the comparison is obvious:
* TODO Apply to Acme Corp
  :PROPERTIES:
  :company: Acme Corp
  :date:    2026-03-12
  :status:  Applied
  :END:

  Debug session notes from the technical interview:
  [...three pages of stream-of-consciousness]
The org-mode version gives you everything: executable code blocks, full version control, no vendor dependency. The Notion version gives your mother a reasonable chance of using the same document.
This is not a joke. Collaboration accessibility is a real constraint. I will not be editing org-mode files with people whose workflow does not involve a terminal.
4.2. The Internal Integration: An API Key with Extra Steps
To access a Notion database from any external tool, you create an Internal Integration. This generates a secret token — functionally an API key — scoped to your Notion workspace.
The detail that surprises most people: creating the integration is not sufficient to read a database. You must then navigate to each database you want to access and explicitly add the integration as a "connection" via the database's context menu.
The functional effect: a read-only API key that cannot read a database until the database explicitly grants access to the key.
This design has a security rationale — granular permission scoping. In practice for a single-user setup, it means remembering to connect every new database to your integration before any automation touching it will work.
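Once the integration is connected, reading the database is a single authenticated POST. A sketch of constructing that request — the endpoint and headers follow the public Notion API, the token and database ID are placeholders:

```javascript
// Build the request for POST /v1/databases/{id}/query.
// Endpoint and header names follow the public Notion API; token is fake.
function buildQueryRequest(databaseId, secret) {
  return {
    url: `https://api.notion.com/v1/databases/${databaseId}/query`,
    method: "POST",
    headers: {
      "Authorization": `Bearer ${secret}`,
      "Notion-Version": "2022-06-28", // pin an API version explicitly
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ page_size: 100 }),
  };
}

const req = buildQueryRequest("4c0bdb50-3389-4d97-a105-9591fa5fc0cb", "secret_xxx");
// If the integration is not connected to the database, this request
// returns a 404 — indistinguishable from the database not existing.
```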
4.2.1. Internal vs. Public Integration: The Auth Architecture Difference
Notion offers two integration types. The difference is not cosmetic.
An internal integration is tied to a single workspace. It authenticates via
a static secret token in the Authorization: Bearer header. The token does not
expire, does not refresh, and carries exactly the permissions of the integration
as configured. If you rotate the token, every system using it breaks immediately.
A public integration follows OAuth 2.0. The authorization flow:
1. Redirect the user to `https://api.notion.com/v1/oauth/authorize` with your `client_id` and `redirect_uri`
2. The user selects which pages to grant access to via a page picker
3. Notion redirects back with a temporary authorization `code`
4. Your server exchanges the `code` for an `access_token` and `refresh_token` via `POST /v1/oauth/token`, using HTTP Basic auth with `CLIENT_ID:CLIENT_SECRET`
5. The `access_token` is used for subsequent API requests; the `refresh_token` generates new tokens when the access token expires
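The code-for-token exchange is the step people most often get wrong, because it mixes Basic auth (for the client credentials) with a JSON body (for the grant). A sketch of constructing it — client ID, secret, and code are placeholders:

```javascript
// Build the POST /v1/oauth/token exchange request.
// Basic auth is base64(CLIENT_ID + ":" + CLIENT_SECRET).
function buildTokenExchange(code, clientId, clientSecret, redirectUri) {
  const basic = Buffer.from(`${clientId}:${clientSecret}`).toString("base64");
  return {
    url: "https://api.notion.com/v1/oauth/token",
    method: "POST",
    headers: {
      "Authorization": `Basic ${basic}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      grant_type: "authorization_code",
      code,
      redirect_uri: redirectUri,
    }),
  };
}

const ex = buildTokenExchange("tmp_code", "my-id", "my-secret", "https://example.com/cb");
```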
sequenceDiagram
participant User
participant App as Your App
participant Notion as Notion OAuth
User->>App: Click "Connect Notion"
App->>Notion: Redirect to /oauth/authorize
Note over Notion: User selects pages
Notion->>App: Redirect with ?code=xxx
App->>Notion: POST /oauth/token (code + client_secret)
Notion->>App: access_token + refresh_token
App->>Notion: API calls with Bearer token
The key operational difference: the OAuth flow lets the user select which pages to grant access to during authorization, rather than requiring per-database manual connection after the fact. For multi-user integrations (n8n used by multiple workspace members), OAuth is the appropriate model. For personal automation of your own workspace, internal integrations are simpler.
Internal Integration             Public Integration (OAuth 2.0)
─────────────────────            ──────────────────────────────
Static token                     access_token + refresh_token
Single workspace                 Any user's workspace
Per-database manual grant        Page picker during auth flow
No token rotation API            Token refresh endpoint
Simple setup                     Redirect URI, client secret mgmt
4.2.2. The Per-Database Grant: Why It Exists
The requirement to manually connect each database to your integration is Notion's implementation of the least-privilege principle at the resource level. The integration token carries workspace-level credentials; the per-database connection grant is the authorization step that says "this specific resource is accessible."
The practical consequence: a Notion workspace can have dozens of databases. An integration that should only read your job application tracker cannot accidentally read your private notes, your financial planning database, or anything else — unless you explicitly connect those databases. API requests to unconnected databases return a 404 rather than a 403, which obscures whether the database exists at all.
This is more granular than Airtable's PAT model. Airtable Personal Access Tokens scope by workspace and base at token creation time:
| Auth model | Notion (Internal) | Airtable (PAT) | Google Sheets API |
|---|---|---|---|
| Auth mechanism | Bearer token (static) | Bearer token (scoped) | OAuth 2.0 |
| Scope granularity | Per-database grant | Per-base at token time | Drive file picker |
| Token expiry | Never | Never (revocable) | access_token expires |
| Token rotation | Manual (regenerate) | Manual (regenerate) | refresh_token flow |
| Multi-user support | Public integration | OAuth for third-party | Standard OAuth |
| Legacy auth status | N/A | API keys removed 2024 | Service accounts |
Two notes on the Airtable column: Airtable deprecated and fully removed user API keys in February 2024. The migration path is PATs for personal use and OAuth 2.0 for third-party integrations. Additionally, Airtable webhooks created via PAT/OAuth expire after 7 days without a refresh call — an operational detail requiring active lifecycle management.
4.3. Push vs. Poll: What the Notion API actually supports
Before choosing any automation tool, the question to answer is: does the data source support push notification, or must the client poll?
| Method | Trigger | Latency | Server load |
|---|---|---|---|
| Webhooks (push) | Provider initiates | ~ms | Minimal |
| Server-Sent Events | Provider streams | ~ms | Connection |
| WebSocket | Bidirectional | ~ms | Connection |
| Polling | Client requests | ~delay | Linear |
Notion's API supports integration webhooks. The Notion API reference documents
a full webhook subscription flow: register an endpoint, receive a
verification_token challenge, and Notion will subsequently deliver events
such as page.content_updated and page.created as HTTP POST requests within
roughly a minute of the change. Event payloads contain metadata — page ID,
event type, timestamp — but not full page content; the receiving server fetches
current state via the REST API after the event arrives. This is the standard
push-then-pull pattern used by most production webhook systems.
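The receiving side of that pattern splits into exactly two cases: the one-time `verification_token` challenge and regular metadata-only events. A sketch of the routing as a pure function — the `verification_token` and event-type names come from the flow described above, but the `entity.id` field layout is an assumption, not a confirmed payload schema:

```javascript
// Route a parsed Notion webhook body. Pure, so testable without a server.
// Field layout beyond verification_token/type is assumed, not documented here.
function routeWebhook(body) {
  if (body.verification_token) {
    // First delivery after subscribing: store/echo the token to verify.
    return { kind: "verify", token: body.verification_token };
  }
  // Regular events carry metadata only — a page ID, never page content.
  // The receiver must fetch current state via the REST API (push-then-pull).
  return { kind: "event", type: body.type, pageId: body.entity?.id };
}

const r = routeWebhook({ type: "page.created", entity: { id: "abc-123" } });
// r → { kind: "event", type: "page.created", pageId: "abc-123" }
```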
Despite this, n8n's official Notion Trigger node does not use these webhooks. Both available triggers remain polling-based:
- `On page added to database` — polls every N minutes for new rows
- `On page updated in database` — polls every N minutes for changed rows
A feature request to implement Notion webhook support in the n8n trigger node was opened in the n8n community forum in March 2025 and has received no response from the n8n team as of this writing 3. The capability gap is on the n8n side, not the Notion side.
5. Building the Workflow: API Plumbing with a Visual Interface
The workflow I want:
[Notion DB update] ──→ [Telegram message]
[Notion DB insert] ──→ [Telegram message]
Two triggers, one action. The workflow JSON representation is instructive:
{
"nodes": [
{
"type": "n8n-nodes-base.notionTrigger",
"parameters": {
"pollTimes": {
"item": [{ "mode": "everyX", "value": 5, "unit": "minutes" }]
},
"databaseId": {
"__rl": true,
"value": "4c0bdb50-3389-4d97-a105-9591fa5fc0cb",
"mode": "list",
"cachedResultName": "Stellenangebote",
"cachedResultUrl": "https://www.notion.so/4c0bdb5033894d97a1059591fa5fc0cb"
}
}
}
]
}
Two observations from the JSON:
- `cachedResultName` and `cachedResultUrl` are UI state embedded in the workflow definition. When you select a database from the dropdown in the editor, n8n writes the display name and URL into the JSON to avoid re-fetching on every editor load. This is presentation state living in the domain model. It does not affect execution. It does mean the JSON is not a pure declarative description of the workflow.
- The credential reference is an opaque ID (`"id": "OMAhOdhnbEmwT3KM"`) pointing to n8n's internal credential store. The workflow JSON without a running n8n instance that contains this credential ID is not executable. It is a description of a workflow that could run if the right infrastructure is present.
5.1. What "Publishing" does
Workflows in n8n exist in two states: draft and active (published). Draft workflows do not execute their trigger nodes. Active workflows do.
The activation mechanism for polling triggers is: n8n registers the workflow in its scheduler. Every N minutes, n8n calls the Notion API with the integration secret, compares the result to the previous poll, and fires the downstream nodes for any detected changes.
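The compare-to-previous-poll step is the heart of any polling trigger. A minimal sketch of the detection logic — the state shape (a map of page ID to `last_edited_time`) is illustrative, not n8n's internal representation:

```javascript
// Given the previous poll's last_edited_time per page and the current
// results, return the pages that are new or changed. Illustrative sketch.
// ISO-8601 timestamps compare correctly as strings.
function detectChanges(prevSeen, currentRows) {
  const changed = [];
  for (const row of currentRows) {
    const before = prevSeen.get(row.id);
    if (before === undefined || before < row.last_edited_time) {
      changed.push(row);
    }
    prevSeen.set(row.id, row.last_edited_time);
  }
  return changed;
}

const seen = new Map([["p1", "2026-03-01T00:00:00Z"]]);
const hits = detectChanges(seen, [
  { id: "p1", last_edited_time: "2026-03-02T00:00:00Z" }, // updated
  { id: "p2", last_edited_time: "2026-03-02T00:00:00Z" }, // new
]);
// hits.length → 2; polling the same state again would return 0
```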
This runs inside the n8n server process. There is no separate execution context
per workflow. In the default configuration (EXECUTIONS_MODE=regular), all
workflow executions run in-process within the main server.
n8n main process (Node.js / V8)
├── HTTP server (Express)
├── WebSocket server (editor live updates)
├── Workflow scheduler (Bull)
└── WorkflowRunner
├── Execution 1: Notion poll → Telegram
├── Execution 2: some other workflow
└── ...
The consequence: a runaway workflow that consumes CPU or memory affects the entire n8n instance, including the editor UI.
Queue mode changes this. In queue mode, the main process dispatches workflow executions to separate worker processes via Redis/Bull. Each worker is a full Node.js process that can handle multiple concurrent executions. Horizontal scaling is possible by adding workers. This is the production-ready architecture — and it requires Redis as a dependency, drops SQLite support, and increases operational complexity considerably.
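For reference, switching the earlier NixOS configuration to queue mode is mostly environment variables. The variable names below follow n8n's environment-variable documentation; the Redis service line and the assumption that workers (`n8n worker` processes) are started separately are my sketch, not a tested configuration:

```nix
# Sketch: queue mode for the earlier module. Untested; worker processes
# must be run separately (each executes `n8n worker`).
services.n8n.environment = {
  EXECUTIONS_MODE = "queue";           # dispatch executions via Redis/Bull
  QUEUE_BULL_REDIS_HOST = "127.0.0.1";
  QUEUE_BULL_REDIS_PORT = "6379";
  DB_TYPE = "postgresdb";              # queue mode drops SQLite support
};
services.redis.servers.n8n.enable = true;  # assumption: local Redis via NixOS
```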
For a single-user Hetzner setup running a five-node Telegram notification workflow, queue mode is overkill. But the architectural question stands: why is the execution boundary a process and not something with stronger isolation guarantees?
graph TB
subgraph "Regular Mode (Default)"
A[n8n Main Process] --> B[HTTP Server]
A --> C[Scheduler]
A --> D[WorkflowRunner]
D --> E[Execution 1]
D --> F[Execution 2]
D --> G[Execution N]
style A fill:#f96,stroke:#333
style E fill:#ff9,stroke:#333
style F fill:#ff9,stroke:#333
style G fill:#ff9,stroke:#333
end
subgraph "Queue Mode (Production)"
H[Main Process] --> I[Redis/Bull Queue]
I --> J[Worker 1]
I --> K[Worker 2]
I --> L[Worker N]
J --> M[Executions]
K --> N[Executions]
L --> O[Executions]
style H fill:#9f9,stroke:#333
style J fill:#9ff,stroke:#333
style K fill:#9ff,stroke:#333
style L fill:#9ff,stroke:#333
end
Interesting observation: The n8n architecture documentation is comprehensive and well-written. The DeepWiki analysis of the workflow execution engine provides source-level references. Whatever the design choices, the codebase is not undocumented.
6. Introspecting the Runtime from the JSON
The AI-enhanced workflow example I found in the n8n template gallery (38 nodes, 12 of which are sticky notes — documentation, because without it the graph is not comprehensible) contains this node:
{
"type": "n8n-nodes-base.httpRequest",
"name": "Create Notion Page"
}
Not n8n-nodes-base.notion. A raw HTTP request. Because the 14 official Notion
actions and 2 triggers did not cover the required operation. The abstraction
leaked at precisely the point it was supposed to save time.
This is the general pattern. The low-code layer is valuable when your use case fits within the provided node's scope. When it does not, you drop to an HTTP request node — which requires knowing the API anyway — except now you are constructing the request inside a JSON editor embedded in a canvas instead of in your editor of choice.
6.1. Comparing execution models
The functional abstraction n8n provides can be described mathematically. A workflow is a directed acyclic graph where each node is a function:
\[f: \text{Item}[] \rightarrow \text{Item}[]\]
Data flows as arrays of items. Each node receives the output of its predecessors and produces output for its successors. Branching is explicit.
This is dataflow programming, a paradigm with a history going back to the 1970s. n8n's contribution is a visual editor for constructing these graphs and a library of pre-built node types.
The model maps cleanly to functional programming concepts. A pure node function has no side effects and its output depends only on its input — analogous to a pure function. The Notion trigger node is not pure (it performs IO), but the Set node or Filter node can be. The distinction matters for testability.
In Emacs Lisp, the equivalent of a pure n8n transformation node:
;; A pure transformation: add a field to each item
(defun workflow/add-timestamp (items)
  (mapcar (lambda (item)
            (append item `((timestamp . ,(current-time-string)))))
          items))

(workflow/add-timestamp
 '(((company . "Acme") (status . "Applied"))
   ((company . "Globex") (status . "Interview"))))
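The same Item[] → Item[] shape in the runtime n8n actually uses — a hedged sketch of the dataflow model, not n8n's internals (the fixed timestamp keeps the example deterministic):

```javascript
// Each "node" is a pure function Item[] → Item[]; a workflow is composition.
const addTimestamp = (items) =>
  items.map((it) => ({ ...it, timestamp: "2026-03-12T00:00:00Z" }));

const onlyApplied = (items) =>
  items.filter((it) => it.status === "Applied");

// Compose left-to-right, like edges in the node graph.
const pipeline = (...nodes) => (items) =>
  nodes.reduce((acc, node) => node(acc), items);

const run = pipeline(onlyApplied, addTimestamp);
const result = run([
  { company: "Acme", status: "Applied" },
  { company: "Globex", status: "Interview" },
]);
// result keeps only the "Applied" item, with a timestamp added
```

Everything in the list of hypothetical benefits below follows from this shape: pure functions compose, and composed pure functions can be type-checked, unit-tested, and run headlessly.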
The hypothetical: if n8n workflows were representable as composable pure functions — where each node type is a function with a typed signature — you would get:
- Static analysis: type-check the graph before running it
- Unit testing: test each node function in isolation
- Linting: detect unreachable nodes, type mismatches, missing credentials
- Headless execution: `workflow-runner run my-workflow.json` without a web server
- CI integration: run workflows in a pipeline against a test environment
- WASM compilation: execute workflows in a browser, on an embedded device, or in a serverless function without a Node.js runtime
None of these exist in n8n as of this writing. The JSON representation looks like it could support them. The runtime model does not.
7. The V8/Node.js Decision
n8n is named "nodemation" — Node.js automation. The runtime choice is not incidental; it is in the name.
7.1. The pragmatic origin
Jan Oberhauser came from visual effects, where he built tools to simplify artists' workflows. He started n8n as a side project in 2018, released the prototype on GitHub in June 2019, and launched on Product Hunt in October. The name "nodemation" (Node + Automation, abbreviated like Kubernetes → k8s) reflects what it was: a JavaScript developer building an API integration tool with the stack he knew.
This is the standard startup trajectory. The early technical decisions are made by whoever is capable of shipping, not by an architecture committee. The decisions that would be different with hindsight are generally not the ones that determined success or failure. Node.js was pragmatic, not strategic.
The question this article keeps asking — what do we gain from V8 that justifies its costs? — may have no satisfying answer. The architecture was not chosen for its benefits; it was chosen because it was available. The benefits would need to be discovered post-hoc, and they may not exist in the form a developer would want.
That said: the V8/Node.js choice has concrete downstream consequences.
7.2. What V8 as a runtime requires
| Property | V8/Node.js | WASM runtime | Native binary |
|---|---|---|---|
| W^X exception needed | Yes (JIT) | No | No |
| Startup time | ~100-500ms | <10ms 4 | <1ms |
| Memory baseline | ~50-100MB | ~1-5MB | Minimal |
| Cross-platform | Yes (via Node) | Yes (by spec) | Requires cross-comp |
| Browser execution | No | Yes | No |
| Sandboxing | Process-level | By spec | OS-level |
| Code node isolation | Patched (CVE-driven) | By design | N/A |
7.3. The sandbox problem
The Code node in n8n allows writing arbitrary JavaScript (or Python) in a workflow. This code runs in the n8n process. The history of CVEs around this feature:
- CVE-2025-68668 (CVSS 9.9): Python Code node sandbox bypass via Pyodide, allowing OS command execution with n8n process privileges. Fixed in v2.0.0 by making the Task Runner the default.
- CVE-2026-25049 (CVSS 9.4): Bypass for the fix of CVE-2025-68613 via expression evaluation. The researcher who found it noted: "they could be considered the same vulnerability, as the second one is just a bypass for the initial fix."
- CVE-2026-21858 (CVSS 10.0): Unauthenticated RCE via Content-Type confusion in form-based workflows, enabling file read and credential extraction from the SQLite database.
The CVE-2026-21858 exploit chain illustrates the blast radius:
graph LR
A[Unauthenticated Request] --> B[Content-Type Confusion]
B --> C[File Read Primitive]
C --> D[Read SQLite Database]
D --> E[Extract Encryption Key]
E --> F[Decrypt All Credentials]
F --> G[Access to All Connected Services]
style A fill:#f99,stroke:#333
style G fill:#f00,stroke:#333,color:#fff
In 2012, a conference talk called Wat catalogued the surprising behavior of
JavaScript's type coercion — results like {} + [] evaluating to 0 and
[] + {} to "[object Object]". Three minutes of examples that should not
work. The n8n expression evaluator CVEs suggest that JavaScript's surprises
extend beyond type coercion to security boundaries. The pattern of
CVE-2026-25049 being characterized as "the same vulnerability" as
CVE-2025-68613 — because it was a bypass of the fix rather than a distinct
attack surface — is characteristic of patching evaluation-layer vulnerabilities
on an unsafe foundation rather than restructuring the isolation architecture.
The Task Runner (documented here) is the architectural response. Code nodes run in a separate process with limited capabilities. The NixOS CVE workaround we saw earlier activates this feature for all installations.
This is a reasonable mitigation. It is also a pattern: the isolation that should have been a design primitive became a bolt-on security feature driven by a series of critical CVEs.
A WASM-based execution model would have had this isolation by specification. Each workflow execution would be a WASM module instance with explicit capability grants — file system access, network access — rather than running with the full privileges of the parent process. The WASI specification (WebAssembly System Interface) exists precisely to define these host bindings in a portable way.
This is not a criticism of the n8n team. The WASI ecosystem was not mature in 2019. But it is worth noting that the security architecture of 2026-era n8n is largely a series of patches on a 2019 design decision.
7.4. The WASM alternative: current state (2026)
The argument for WASM as a workflow execution runtime rests on three properties: isolation by specification, cross-platform portability, and startup performance. Each deserves examination against the current state of the ecosystem.
7.4.1. The Component Model
The WASM Component Model standardizes how separate WASM modules expose and consume interfaces. Instead of each module being an isolated binary communicating via shared memory or raw integers, components declare typed interfaces using WebAssembly Interface Types (WIT):
package myorg:notifier;
interface notifier {
use wasi:http/outgoing-handler.{outgoing-request, future-incoming-response};
send-notification: func(message: string) -> result<_, string>;
}
The runtime resolves these imports at composition time. A Notion-fetch component
declares it needs wasi:http and a credential of a specific type; the runtime
provides exactly that, nothing else. A component that declares no filesystem
import cannot access the filesystem — not because it is configured not to,
but because the capability was never granted and the module's instruction set
has no path to it.
The current development state:
| Milestone | Status (2026) |
|---|---|
| WASI 0.1 | Shipped 2019, widely supported |
| WASI 0.2 | Stable, production-ready (Wasmtime, Wasmer) |
| WASI 0.3 (Preview 3) | In active development; async I/O focus |
| WASI 1.0 | Planned 2026, W3C standardization track |
WASI 0.2 includes wasi-http and wasi-cli worlds. WASI 0.3's primary feature
is native async: futures and streams built into the component model. The
"Docker moment" for WebAssembly — when WASM+WASI becomes the obvious unit of
portable deployment — has not arrived. The gap between the theoretical
architecture and practical availability has narrowed considerably since 2019.
Production-grade WASM runtimes exist (Wasmtime, Wasmer). Both support module instantiation in microseconds and per-instance isolation by specification. Yet no production workflow automation tool comparable to n8n or Zapier uses WASM as its execution runtime. The nearest examples are Spin (Fermyon) for serverless microservices and Cloudflare Workers using V8 isolates with similar per-request isolation properties.
The likely reasons: the connector library of any competitive tool requires significant effort to reimplement as WASM components, and the pre-2024 WASM tooling was not mature enough to make the investment worthwhile. The 2026 ecosystem is more capable. The investment cost has not decreased.
8. Zapier vs. n8n: What Each Actually Provides
The comparison is frequently made in terms of "open source vs. proprietary" or "self-hosted vs. cloud." These are real differences. There are also architectural differences that matter.
8.1. Execution isolation
Zapier runs each Zap execution in an isolated cloud environment. The isolation model is Zapier's infrastructure concern, not the user's. You cannot affect it, but you also cannot compromise it from within a workflow.
n8n in default mode runs all executions in one process. In queue mode, each worker handles multiple executions in one process. Code nodes now run in Task Runner processes (post-2.0.0). Isolation is present but layered, and the boundary between layers has historically been the attack surface.
8.2. Credential handling
In Zapier, OAuth is the primary mechanism. You authorize Zapier to act on your behalf through the provider's OAuth flow. Zapier stores the tokens in its managed infrastructure.
In n8n, credentials are stored in the n8n database, encrypted with an encryption key that must be shared across all worker nodes in queue mode and kept in the main instance for single-node deployments. A compromised n8n instance exposes all stored credentials. The Cyera research on CVE-2026-21858 demonstrated this exploit path end-to-end: read the SQLite database, extract the encryption key, decrypt credentials for all connected services. The blast radius, as Cyera noted, is not one system — it is every service whose credentials are stored in n8n.
Self-hosting moves the security responsibility to the operator. This is not unique to n8n, but it is worth stating plainly before recommending n8n as a "more secure" alternative to Zapier.
8.3. The architectural contrast
Zapier made different decisions at the foundation level:
- No full runtime exposure: Zapier's code steps are sandboxed JavaScript and Python with limited capabilities. You cannot require('child_process'). This is a deliberate restriction, not a limitation — it closes the attack surface that n8n's Code node opened.
- No self-hosting option: By keeping all execution on Zapier's infrastructure, Zapier owns the security model end-to-end. The user cannot misconfigure it. This trades flexibility for safety.
- OAuth-first credential model: Credentials flow through Zapier's OAuth proxy. The user never sees or stores raw API tokens. A compromised Zapier account does not yield extractable secrets.
The trade-off is real: Zapier is less flexible, more expensive at scale, and fully proprietary. But the security posture is fundamentally different — not because Zapier developers are better, but because the architecture has fewer moving parts exposed to the user.
The question for n8n is whether the flexibility of full Node.js runtime access is worth the security surface it creates. For the use case of "poll Notion, send Telegram message," it is not clear that it is.
8.4. Feature comparison
| Feature | Zapier | n8n Community | n8n Enterprise |
|---|---|---|---|
| Self-hosting | ❌ | ✅ | ✅ |
| Source available | ❌ | ✅ (fair-code) | ✅ |
| SSO / SAML / OIDC | ✅ (Teams+) | ❌ | ✅ |
| Credential sharing | ✅ | ❌ | ✅ |
| Multiple admin users | ✅ | ❌ | ✅ |
| External secret vaults | ❌ | ❌ | ✅ |
| Git-based version control | ❌ | ❌ | ✅ |
| Execution isolation | Cloud-managed | Process-level | Process-level |
| Webhook triggers | ✅ | ✅ | ✅ |
| Code nodes | Limited | ✅ (JS/Python) | ✅ (JS/Python) |
| Custom nodes | No | ✅ (npm) | ✅ (npm) |
| Headless CLI execution | ❌ | ❌ | ❌ |
The row that stands out in a developer context: Git-based version control is an Enterprise feature. In 2026. For a tool whose primary artifact is a JSON file.
For the self-hosted Community Edition: you can export the JSON manually and commit it yourself. n8n will not help you with conflict resolution, diffing, or branching. The JSON contains opaque credential IDs and UI cache that make diffs noisy:
```diff
@@ -12,7 +12,7 @@
       "databaseId": {
         "__rl": true,
         "value": "4c0bdb50-3389-4d97-a105-9591fa5fc0cb",
-        "cachedResultName": "Stellenangebote",
+        "cachedResultName": "Job Applications",
         "cachedResultUrl": "https://www.notion.so/4c0bdb5033894d97a1059591fa5fc0cb"
       }
```
The actual change: someone renamed a Notion database. The diff: five lines of context, one of which is a cached display name that has no effect on execution. Multiply this by every node in a complex workflow.
9. The Vision That Would Have Been Interesting
What follows is speculative. It is also the answer to "what would have made this category genuinely compelling for developers."
9.1. Workflows as pure functions
The conceptual model from React is useful here. React's distinction between pure functional components and stateful components maps cleanly onto workflow node types:
```javascript
// Pure node: output depends only on input.
// No credentials, no IO, fully testable.
function FilterNode({ items, predicate }) {
  return items.filter(predicate);
}

// Impure node: performs IO, has side effects.
// Requires credential injection, cannot be unit tested.
async function NotionFetchNode({ items, credentials, databaseId }) {
  return Promise.all(items.map(item =>
    fetch(`https://api.notion.com/v1/databases/${databaseId}/query`, {
      method: "POST",
      headers: { Authorization: `Bearer ${credentials.secret}` },
    })
  ));
}
```
If the workflow runtime enforced this distinction — pure nodes declared as pure, IO nodes declared with their required capabilities — you would have:
- A type system for workflows
- Testable node units
- Auditable capability grants (this workflow requests: Notion read, Telegram write)
- The basis for static analysis and linting
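What declared capabilities could look like, sketched as a manifest plus an enforcement check — the format is invented for illustration and nothing like it exists in n8n today:

```javascript
// Hypothetical capability manifest for the Notion→Telegram workflow from
// this article. The field names and capability strings are invented; the
// point is that grants are declared per node and auditable before execution.
const manifest = {
  nodes: {
    notionFetch: {
      capabilities: ["http:api.notion.com", "credential:notion-api-key"],
    },
    filterItems: {
      capabilities: [], // pure node: no grants needed
    },
    telegramSend: {
      capabilities: ["http:api.telegram.org", "credential:telegram-bot-token"],
    },
  },
};

// A runtime enforcing the model denies anything outside the declared grants.
function isAllowed(manifest, node, capability) {
  const entry = manifest.nodes[node];
  return entry ? entry.capabilities.includes(capability) : false;
}
```

Under this model the filter node can never reach the network, and the Telegram node can never read the Notion credential — exactly the ambient-authority problem the n8n Code node has.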
The capability-based security model from the 1970s (Dennis and Van Horn, 1966; later Hydra OS, Eros) provides the theoretical framework. A workflow node should receive only the capabilities it declares, not ambient access to the host environment.
This is what WASM's component model is building toward. Each component declares
its imports and exports. Capabilities are granted explicitly at composition time.
A Notion-fetch component declares: "I need an HTTP client and a credential of
type notion-api-key." The runtime provides exactly that, nothing more.
9.2. Headless execution
```shell
# This does not exist, but it could.
n8n-runner execute --workflow ./my-workflow.json \
  --credentials ./credentials.enc \
  --input '{"company": "Acme", "status": "Applied"}' \
  --output json
```
With a headless runner:
- CI pipelines could execute workflows in test mode
- Embedded devices could run workflows without a web server
- Multiple instances could execute the same workflow in parallel without shared state
- Linting and type-checking could run before deployment
The JSON representation exists. The runtime is coupled to the web server by design. The two facts are technically independent, yet currently inseparable in practice.
9.3. The browser as workflow runtime
Most workflows are API plumbing: read from Notion, transform, write to Telegram. The data never needs to touch a server. If the workflow runtime were browser-native:
- Credentials stay in the browser (or a local credential vault)
- No server to maintain, no attack surface to harden
- Offline-capable for cached data
- The "self-hosted vs. cloud" question dissolves
This is not hypothetical architecture. Service Workers exist. IndexedDB exists. The Fetch API exists. The browser is already a capable runtime for exactly this class of task. For the 80% of workflows that are "poll API A, transform, call API B" — why does a server need to be involved at all?
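A browser-native version of exactly that workflow can be sketched. The endpoint paths follow the public Notion and Telegram Bot APIs; everything else is an assumption — scheduling, credential storage, and above all CORS, since the Notion API does not currently permit cross-origin browser requests, so a real browser runtime would need vendor cooperation or a local proxy:

```javascript
// Pure transform step: testable without any network access.
function pageToMessage(page, chatId) {
  return { chat_id: chatId, text: `Notion page updated: ${page.id}` };
}

// Impure plumbing, runnable from a tab or Service Worker in principle
// (CORS caveat noted above). Error handling omitted for brevity.
async function runWorkflow({ notionToken, databaseId, botToken, chatId }) {
  const res = await fetch(
    `https://api.notion.com/v1/databases/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${notionToken}`,
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
      },
    }
  );
  const { results } = await res.json();
  for (const page of results) {
    await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(pageToMessage(page, chatId)),
    });
  }
}
```

Note the structure: the transform is a pure function, the IO is fenced off — the same pure/impure split argued for in section 9.1.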
The n8n architecture chose differently. The visual editor runs in the browser; the execution engine runs on Node.js. This split creates the security surface we've been examining throughout this article.
9.4. The WASM path
A WASM-based workflow runner would have the following properties:
```
Workflow definition (JSON/WASM component)
            │
            ▼
WASM module (compiled from node definitions)
            │
      ┌─────┴──────────────────┐
      │                        │
      ▼                        ▼
Browser (WASM runtime)   Server (wasmtime/wasmer)
      │                        │
      ▼                        ▼
Offline workflows        Production execution
Scheduled in browser     Horizontally scalable
No server required       True isolation per run
```
The isolation is by specification: a WASM module cannot access memory outside its linear memory. Capability grants are explicit imports. There is no analog to the n8n Code node sandbox escape because there is no ambient access to the host to escape to.
10. Remaining Questions, Unresolved
10.1. TL;DR for skimmers
If you scrolled here first: n8n is a visual workflow automation tool that is more flexible than Zapier but inherits significant security surface from its Node.js runtime. The community edition lacks Git integration, CI support, and headless execution — features behind the enterprise paywall. For developers, writing a script may be faster and more maintainable. For non-developers, n8n delivers on its promise if your use case fits the available nodes.
10.2. The full picture
After all of this, the workflow works. Notion database changes arrive in Telegram within five minutes. The setup took roughly two hours:
- Reading n8n documentation and understanding the node model: ~30 minutes
- Creating the Notion internal integration and connecting it to the database: ~20 minutes
- Configuring the n8n workflow (trigger, transformation, Telegram node): ~25 minutes
- Debugging why polling wasn't triggering (stale cache, credential issues): ~35 minutes
- Researching why webhooks weren't available (leading to this article): ~10 minutes
Two hours for a five-node workflow is not a win for the "saves time" argument. A Python script using the Notion API directly would have taken forty minutes and been trivially testable — the API documentation is clear, the authentication is a single header, and the polling logic is ten lines of code.
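For scale, here is what that polling core might look like. The article posits Python; this sketch uses Node.js to match the document's other examples — the structure is identical. The filter shape follows the public Notion API documentation; retries, pagination, and scheduling are omitted:

```javascript
// Fetch pages edited after the given ISO-8601 cursor, using Notion's
// documented last_edited_time timestamp filter.
async function pollChanges(notionToken, databaseId, since) {
  const res = await fetch(
    `https://api.notion.com/v1/databases/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${notionToken}`,
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        filter: {
          timestamp: "last_edited_time",
          last_edited_time: { after: since },
        },
      }),
    }
  );
  return (await res.json()).results;
}

// Pure cursor update: the next poll starts after the newest edit seen.
// ISO-8601 timestamps compare correctly as strings.
function nextCursor(pages, prev) {
  return pages.reduce(
    (max, p) => (p.last_edited_time > max ? p.last_edited_time : max),
    prev
  );
}
```

Loop those two functions on a timer and you have the entire trigger side of the workflow — which is the point of the forty-minute estimate above.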
The honest summary:
- n8n delivers on its core promise for non-developers who need API integration without writing code
- For developers, it trades code flexibility for a visual canvas without compensating with testability, CI integration, or headless execution
- The security history reflects the V8/Node.js architectural choice: sandbox isolation has been the recurring challenge
- The community edition feature set is significantly below what the enterprise edition offers, including the most basic developer workflow (version control)
- Zapier provides similar capabilities with better managed security and worse flexibility; the trade-off is real
What n8n could be — a composable, versionable, headless-capable workflow runtime with capability-based security and a visual editor as one of several interfaces — is more interesting than what n8n currently is. The gap between the two is not unbridgeable. It is, however, large enough to notice.
11. Footnotes and Further Reading
- n8n documentation: https://docs.n8n.io/
- n8n architecture (DeepWiki): https://deepwiki.com/n8n-io/n8n/1.2-architecture-overview
- CVE-2025-68668 (N8scape): https://thehackernews.com/2026/01/new-n8n-vulnerability-99-cvss-lets.html
- CVE-2026-21858 (Ni8mare, CVSS 10.0): https://www.cyera.com/research/ni8mare-unauthenticated-remote-code-execution-in-n8n-cve-2026-21858
- NixOS n8n module: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/n8n.nix
- Notion API documentation: https://developers.notion.com/
- Notion webhooks reference: https://developers.notion.com/reference/webhooks
- Notion authorization guide: https://developers.notion.com/docs/authorization
- WASI specification: https://wasi.dev/
- WASM component model: https://github.com/WebAssembly/component-model
- Wasmtime performance (Bytecode Alliance): https://bytecodealliance.org/articles/wasmtime-10-performance
- Capability-based security: https://en.wikipedia.org/wiki/Capability-based_security
- Dataflow programming: https://en.wikipedia.org/wiki/Dataflow_programming
- Malleable software (Ink & Switch): https://www.inkandswitch.com/malleable-software/
- Malleable software thesis (Tchernavskij, 2019): https://theses.hal.science/tel-02612943
- Wat (Gary Bernhardt, 2012): https://www.destroyallsoftware.com/talks/wat
- Sequoia interview with Jan Oberhauser: https://sequoiacap.com/podcast/training-data-jan-oberhauser/
Footnotes:
n8n workflows can be exported as JSON and theoretically managed in git, but there is no official CLI to execute them outside the server runtime. See 9.2 for what this could look like.
The "vendor lock-in" row for n8n deserves nuance. You own the data and can self-host. But your workflows are coupled to n8n's execution model, credential store, and node API. Migrating to a different tool means rebuilding workflows, not porting code.
Philip Tchernavskij's 2019 PhD thesis "Designing and Programming Malleable Software" coined the term. The Ink & Switch research lab has since published accessible work on the concept: https://www.inkandswitch.com/malleable-software/
Link may require verification — search the n8n community forum for "Notion webhook trigger" if the link has changed.
The updated picture:
| Trigger method | Notion API | n8n Notion Trigger node |
|---|---|---|
| Integration webhooks | ✅ Supported | ❌ Not implemented |
| Polling fallback | ✅ Supported | ✅ Only available option |
A workaround exists: configure a generic n8n Webhook node as the trigger, then register that endpoint manually as a Notion webhook subscription via the integration settings. You receive the raw Notion event payload and must parse it yourself — no built-in credential mapping, no dropdown database selector, but genuine push latency instead of N-minute polling intervals.
Airtable, for comparison, has supported webhooks since 2021, though with a 7-day expiry requiring periodic renewal. Neither Airtable's expiry requirement nor n8n's missing integration is elegant. They are inconvenient in different directions.
These figures conflate two different metrics. The Node.js figure is process cold-start time (V8 init + module loading). The WASM figure refers to module instantiation time within an already-running runtime process — a measurement the Bytecode Alliance reduced from milliseconds to microseconds in Wasmtime via copy-on-write virtual memory mapping of the initial heap. A fairer comparison for per-execution workflow isolation: if each workflow run were a fresh process, Node.js pays ~100–500ms per cold start; a WASM runtime embedded in a persistent server process can instantiate a new module in microseconds, making per-execution isolation practically free. No public benchmark replicating the specific n8n execution model versus a WASM equivalent was found for this article.
The MemoryDenyWriteExecute = "no" line we saw earlier in the NixOS module (see 3.1) is a direct consequence of V8's JIT compiler, which must write machine code into memory and then execute it — an architectural property that cannot be hardened away.