Everything AI

Last updated Nov 27, 2025

Multi-input, multi-output AI node that automatically generates and executes code from natural-language instructions

Included Nodes

Everything AI

Description

n8n-nodes-everything-ai

A multi-input, multi-output, natural language-driven AI node for n8n that automatically generates and executes code based on your requirements.

Features

  • Natural Language Processing: Describe your data processing requirements in natural language
  • Dynamic Input/Output Ports: Configure 1-10 input and output ports dynamically
  • Automatic Code Generation: Uses an LLM to generate JavaScript code based on your requirements
  • Smart Caching: Automatically caches generated code and regenerates only when requirements change
  • Multiple LLM Support: Supports OpenAI-compatible APIs (OpenAI, Anthropic, etc.)
  • Extended Package Support: Supports Node.js built-in modules and external NPM packages (currently includes Cheerio for HTML/DOM parsing)
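
For illustration, code generated with the bundled Cheerio package in scope might look like the sketch below. This is not the node's actual output; the function wrapper and variable names are placeholders, and only the return shape (an object keyed by output port letters, described under Usage and Limitations) reflects the node's contract.

// Illustrative sketch only: extract links from HTML arriving on input 1
// and route the results to output A. Variable names are placeholders,
// not the node's real runtime variables.
const cheerio = require('cheerio');

function extractLinks(input1Items) {
  const results = input1Items.map((item) => {
    const $ = cheerio.load(item.json.html || '');
    const links = $('a').map((_, el) => $(el).attr('href')).get();
    return { json: { links } };
  });
  // The node requires an object keyed by output port letters.
  return { A: results };
}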

Roadmap / Milestones

We have several exciting features planned for future releases:

  • 📊 Data Complexity Level Control: Allow users to control how much actual input data the AI can see when generating code. Provides 6 levels (0-5) from structure-only to full data, with automatic text truncation to manage token usage. This helps balance code generation quality with API costs.

  • 🎯 AI Code Viewer: Implement a real-time viewer that lets users inspect the currently generated code for every AI node, with code wrapping so the code can be copied into their own Code nodes for verification and reliability testing before it is used in production workflows.

  • 🎨 Logo Design: Find or create a more attractive and professional logo for the project to improve the visual identity.

  • 👥 Community Building: Recruit interested developers and contributors to join the project and build a community around it to share ideas, report issues, and contribute improvements.

  • 🌍 Multi-language Support: Support multiple languages for natural language instructions (Chinese, English, and more). Localize the node interface and documentation to support users from different language backgrounds, making the node accessible to a global audience.

  • 🔍 AI Code Review & Validation: Implement AI-powered code review that checks generated code for potential issues, bugs, and security vulnerabilities, plus comprehensive validation to ensure the code is safe, efficient, and follows best practices, adding an extra layer of assurance for AI-generated code.

  • 🔌 Additional LLM Provider Support: Integrate with other major LLM providers (Anthropic Claude, Google Gemini, local models, etc.), letting users choose a provider based on their needs and preferences behind a single unified interface.

  • 📦 Extended NPM Package Support: Support more popular NPM packages in generated code.

    • Lightweight Libraries (e.g., DOM parsing libraries like Cheerio, jsdom): These will be compiled and bundled locally for easy use.
    • Heavy Libraries – Playwright: ✅ Completed! Playwright browser automation is now supported through a public Docker image (loopsaaage/playwright-remote-execution-server:latest) that bundles Playwright with Chromium. Remote execution uses dnode RPC to connect to the Docker container, enabling AI-generated code to perform web scraping, browser automation, and DOM manipulation tasks. See the Playwright documentation for setup instructions.
    • Other Heavy Libraries (e.g., Puppeteer): Similar Docker-based approach planned for future releases.
  • 🔧 Intelligent Data Preprocessing & Context Optimization: For large structured data (e.g., JSON, XML), implement a two-step intelligent preprocessing pipeline:

    • Step 1 – Data Analysis & Preprocessing Logic Generation: First, call the LLM to analyze a sample of the input data and understand its structure. The LLM will then generate preprocessing logic to simplify and structure the data (e.g., extract key fields, summarize long text, normalize nested structures).
    • Step 2 – Data Preprocessing & Code Generation: Execute the generated preprocessing logic to transform the raw data into a more concise, structured format. Then use this optimized data as context for the main code generation step.
    • This two-step approach will let the node handle very large or complex structured data more efficiently by first understanding and simplifying it before generating the final processing code. The preprocessing step is independent of the code generation step, allowing better token usage and more accurate code generation.
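
As a rough illustration of this planned flow (not an implementation; the helper is hypothetical and the LLM call is left abstract):

// Step 1: ask an LLM to derive preprocessing logic from a small data sample.
// Step 2: apply that logic to all items to build compact context for code generation.
async function buildCompactContext(items, callLLM) {
  const sample = items.slice(0, 3);
  const logicSource = await callLLM(
    'Given this sample, return a JavaScript arrow function that keeps only the key fields of one item:\n' +
    JSON.stringify(sample, null, 2)
  );
  // Evaluate the returned preprocessing function (sketch only; a real
  // implementation would validate and sandbox this code first).
  const simplify = new Function(`return (${logicSource});`)();
  return items.map(simplify);
}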

Installation

Install this community node package in your n8n instance:

npm install n8n-nodes-everything-ai

Or install it directly in n8n:

  1. Go to Settings > Community Nodes
  2. Click Install a community node
  3. Enter n8n-nodes-everything-ai
  4. Click Install

Configuration

Credentials

This node requires OpenAI API credentials (or compatible API):

  1. Go to Credentials > Add Credential
  2. Select OpenAI API
  3. Enter your API Key
  4. (Optional) Enter a custom API Base URL if using a compatible service

Node Settings

  • Number of Inputs: Select the number of input ports (1-10)
  • Number of Outputs: Select the number of output ports (1-10)
  • Your Requirement: Describe your data processing requirements in natural language
    • Use numbers 1, 2, 3… to refer to inputs
    • Use uppercase letters A, B, C… to refer to outputs
    • Example: "When status='paid' in input 1 data, send to output A, otherwise send to output B"
  • Data Complexity Level: Control how much actual input data the AI can see (0-5)
    • Level 0 (default): Only data structure/types, no actual values
    • Level 1-4: Sample data with increasing amounts and truncation limits
    • Level 5: Full data (not recommended for large datasets)
  • Model: Select the LLM model to use (dynamically loaded from API)
  • Custom Model Name: Enter custom model name when "Custom" is selected

Advanced Settings

  • Custom Prompt Template: Override the default system prompt template
    • Use {{instruction}}, {{inputCount}}, {{outputCount}} as placeholders (see the example at the end of this section)
  • Force Reset Node: Force reset and regenerate code (normally auto-regenerates when requirements change)
  • Playwright Session Controls (visible when Playwright is enabled):
    • Keep Browser Instance: When enabled, the Playwright browser session stays alive after execution so downstream nodes can reuse it.
    • Browser Instance ID: Use an existing browser instance ID (returned from a previous node) to reuse the same browser session.
    • When an instance is kept alive, the node adds __playwrightInstanceId to every output item so it can be passed to downstream nodes.
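
A minimal custom prompt template using these placeholders might look like the following (wording is illustrative, not the node's default prompt):

You are a code generator for an n8n node with {{inputCount}} inputs and {{outputCount}} outputs.
Write JavaScript that fulfills the requirement below and returns an object keyed by output letters (A, B, C, ...):
{{instruction}}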

Usage

Basic Example

  1. Add the Everything AI node to your workflow
  2. Configure the number of inputs and outputs
  3. Enter your requirement in natural language:
    When status="paid" in input 1 data, send to output A, otherwise send to output B
    
  4. Select an LLM model
  5. Execute the workflow

The node will:

  • Generate JavaScript code based on your requirement
  • Cache the generated code for future executions
  • Automatically regenerate if you change the requirement
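
For the example requirement above, the generated code might look roughly like the sketch below. The actual code produced by the LLM will differ, and the function wrapper and variable names are placeholders; only the return shape (an object keyed by output port letters) is part of the node's contract.

// Illustrative sketch: route items from input 1 with status "paid" to output A,
// everything else to output B.
function route(input1Items) {
  const A = [];
  const B = [];
  for (const item of input1Items) {
    if (item.json.status === 'paid') {
      A.push(item);
    } else {
      B.push(item);
    }
  }
  return { A, B };
}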

Routing Examples

Route to specific output:

If the first item's language is txt, go to route B (don't forward any data)

Stop workflow:

If the first item's language is txt, stop at this node

Modify data:

Add a new field myNewField=1 to all data from input 1, then output to A
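
A sketch of what the last example could produce (placeholder names; the real generated code will vary):

// Illustrative sketch: add myNewField=1 to every item from input 1 and send to output A.
function addField(input1Items) {
  const A = input1Items.map((item) => ({
    json: { ...item.json, myNewField: 1 },
  }));
  return { A };
}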

How It Works

  1. First Execution: The node analyzes your requirement and input data structure, then uses LLM to generate JavaScript code
  2. Code Caching: Generated code is saved locally and reused for subsequent executions
  3. Auto-Regeneration: Code is automatically regenerated when:
    • The requirement (instruction) changes
    • The number of inputs/outputs changes
    • Force reset is enabled
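
The regeneration conditions above amount to a cache-key check. The sketch below is an illustration of that rule, not the node's actual implementation; the helper name and hashing choice are assumptions.

// Regenerate when the instruction or port counts change, or when a reset is forced.
const crypto = require('crypto');

function needsRegeneration(previousKey, instruction, inputCount, outputCount, forceReset) {
  const currentKey = crypto
    .createHash('sha256')
    .update(`${instruction}|${inputCount}|${outputCount}`)
    .digest('hex');
  return forceReset || currentKey !== previousKey;
}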

Requirements

  • n8n version 1.0.0 or higher
  • Node.js file system access (for local code storage)
  • OpenAI API credentials (or compatible API)

Limitations

  • Designed for self-hosted n8n instances (uses local file system)
  • May not work in n8n Cloud due to file system restrictions
  • Generated code must return an object with output port letters as keys

Development

# Install dependencies
npm install

# Build
npm run build

# Lint
npm run lint

# Development mode
npm run dev

License

MIT

Repository

https://github.com/MaskerPRC/n8n-nodes-everything-ai