# Fusion AI Node for n8n

A production-ready n8n community node package that provides seamless integration with Fusion AI’s NeuroSwitch multi-provider orchestration platform.
## 🚀 Features
- Multi-Provider AI Access: Connect to OpenAI, Anthropic, Google, and other AI providers through a single unified interface
- NeuroSwitch Auto-Routing: Intelligent automatic provider selection based on availability and performance
- LangChain Integration: Full compatibility with n8n’s AI Agent workflows and tool calling
- Secure Credentials: API keys are stored securely with `noData: true` protection
- Production Ready: Built with TypeScript, strict type checking, and comprehensive error handling
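Every provider above is reached through the same Fusion AI endpoint, whether you pin a model or let NeuroSwitch decide. As a rough illustration only (the `/v1/chat` path, payload fields, and response shape below are assumptions, not the documented Fusion AI API), a direct call could look like this:

```typescript
// Hypothetical sketch of a direct Fusion AI chat request.
// The /v1/chat path, request fields, and response field are assumptions
// for illustration; consult the Fusion AI documentation for the real contract.
const FUSION_BASE_URL = 'https://api.mcp4.ai';

async function fusionChat(prompt: string, model = 'neuroswitch'): Promise<string> {
  const res = await fetch(`${FUSION_BASE_URL}/v1/chat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.FUSION_API_KEY}`,
    },
    body: JSON.stringify({ model, prompt, temperature: 0.3, max_tokens: 1024 }),
  });
  if (!res.ok) throw new Error(`Fusion AI request failed with HTTP ${res.status}`);
  const data = await res.json();
  return data.response; // response field name is assumed
}
```

Inside n8n you never write this call yourself; the Fusion Chat Model node handles it for you.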
## 📦 Installation
### From npm (Recommended)
```bash
npm install fusion-node
```
### From Source
```bash
git clone https://github.com/Fusionaimcp4/n8n-nodes-fusion.git
cd n8n-nodes-fusion
npm install
npm run build
```
### In n8n
1. Install the package in your n8n instance
2. Restart n8n
3. The Fusion AI nodes will appear in the node palette
## 🔧 Setup
### 1. Get Your Fusion API Key
1. Visit Fusion AI Platform
2. Sign up for an account
3. Generate your API key from the dashboard
### 2. Configure Credentials in n8n
#### Step-by-Step Credential Setup:
1. Open n8n: Navigate to your n8n instance
2. Go to Credentials: Click on “Credentials” in the left sidebar
3. Add New Credential: Click the “+” button or “Add Credential”
4. Search for Fusion: Type “Fusion API” in the search box
5. Select Fusion API: Click on “Fusion API” from the results
6. Enter Your Details:
   - API Key: Paste your Fusion AI API key
   - Base URL: Leave as the default (https://api.mcp4.ai) or enter a custom URL
7. Test Connection: Click “Test” to verify your credentials work
8. Save: Click “Save” to store your credentials
9. Name Your Credential: Give it a descriptive name like “Fusion AI Production”
#### Credential Fields Explained:
API Key:
- Get this from your Fusion AI dashboard
- Keep this secure and never share it publicly
- The field is password-protected for security

Base URL:
- Default: https://api.mcp4.ai
- Only change it if you are using a custom Fusion AI instance
- Must include the protocol (https://)
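For reference, the credential type shipped in `FusionApi.credentials.ts` is declared roughly along these lines. This is a simplified sketch; the exact property names and options in the real file may differ:

```typescript
import type { ICredentialType, INodeProperties } from 'n8n-workflow';

// Simplified sketch of the Fusion API credential definition.
// Field names and defaults are assumptions based on the setup steps above.
export class FusionApi implements ICredentialType {
  name = 'fusionApi';
  displayName = 'Fusion API';
  properties: INodeProperties[] = [
    {
      displayName: 'API Key',
      name: 'apiKey',
      type: 'string',
      typeOptions: { password: true }, // masks the key in the n8n UI
      default: '',
    },
    {
      displayName: 'Base URL',
      name: 'baseUrl',
      type: 'string',
      default: 'https://api.mcp4.ai',
    },
  ];
}
```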
#### Security Best Practices:
See the 🔒 Security section below for recommended practices (environment variables, limited key scopes, usage monitoring, and key rotation).
#### Troubleshooting Credentials:
- “Invalid API Key” Error: Re-copy the key from your Fusion AI dashboard and make sure no extra whitespace was pasted; generate a new key if the problem persists.
- “Connection Failed” Error: Check that the Base URL includes the protocol (https://) and that your n8n instance can reach it.
- “Test Connection Failed”: Verify both the API key and the Base URL, then run the test again.
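If the test keeps failing, it can help to confirm that the machine running n8n can reach the API at all. The short check below is only a sketch (run it however you normally run TypeScript, e.g. ts-node); it checks reachability of the Base URL and does not validate your key:

```typescript
// Reachability check for the Fusion AI Base URL.
// This only confirms the host answers over HTTPS; it does not validate the API key.
const baseUrl = process.env.FUSION_BASE_URL ?? 'https://api.mcp4.ai';

fetch(baseUrl)
  .then((res) => console.log(`Reached ${baseUrl}: HTTP ${res.status}`))
  .catch((err) => console.error(`Could not reach ${baseUrl}:`, err));
```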
## 🎯 Usage
### Fusion Chat Model Node (AI Agent Integration)
The primary node for AI Agent workflows:
1. Add Node: Drag “Fusion Chat Model” from the node palette
2. Select Model: Choose from available providers:
   - NeuroSwitch (auto-routing) – Recommended
   - OpenAI: GPT-4, GPT-3.5-turbo
   - Anthropic: Claude 3 Sonnet, Claude 3 Haiku
   - Google: Gemini Pro, Gemini Pro Vision
3. Configure Options:
   - Temperature: 0.0-1.0 (default: 0.3)
   - Max Tokens: 1-4096 (default: 1024)
4. Connect to AI Agent: Use as a Language Model in AI Agent workflows
### Example AI Agent Workflow
```json
{
  "nodes": [
    {
      "name": "Fusion Chat Model",
      "type": "fusionChatModel",
      "parameters": {
        "model": "neuroswitch",
        "options": {
          "temperature": 0.3,
          "maxTokens": 1024
        }
      }
    },
    {
      "name": "AI Agent",
      "type": "aiAgent",
      "parameters": {
        "languageModel": "={{ $('Fusion Chat Model').item.json.response }}"
      }
    }
  ]
}
```
## 🔒 Security
### Credential Protection
API keys are stored with `noData: true` protection.
### Best Practices
1. Environment Variables: Use environment variables for API keys in production (see the sketch after this list)
2. Access Control: Limit API key permissions to necessary scopes
3. Monitoring: Monitor API usage and costs through Fusion AI dashboard
4. Rotation: Regularly rotate API keys for enhanced security
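A minimal illustration of item 1, assuming a conventional variable name such as FUSION_API_KEY (the name itself is not mandated by Fusion AI or n8n):

```typescript
// Read the Fusion AI key from the environment instead of hard-coding it.
// FUSION_API_KEY is an example name, not a required convention.
const apiKey = process.env.FUSION_API_KEY;
if (!apiKey) {
  throw new Error('FUSION_API_KEY is not set; refusing to start without credentials.');
}
```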
### Reporting Security Issues
If you discover a security vulnerability, please do not open a public GitHub issue. Instead:
Email security@mcp4.ai with the details.
## 🛠️ Development
### Prerequisites
- Node.js and npm
- A local n8n instance for testing
### Building from Source
```bash
# Clone the repository
git clone https://github.com/Fusionaimcp4/n8n-nodes-fusion.git
cd n8n-nodes-fusion

# Install dependencies
npm install

# Build the project
npm run build

# Run linting
npm run lint

# Format code
npm run format
```
### Project Structure
```
├── dist/                        # Built files (production)
├── nodes/Fusion/                # Source TypeScript files
│   ├── FusionChatModel.node.ts  # Main AI Agent node
│   ├── FusionApi.credentials.ts # Credential configuration
│   └── fusion.svg               # Node icon
├── package.json                 # Package configuration
├── tsconfig.json                # TypeScript configuration
└── README.md                    # This file
```
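For contributors who have not written an n8n language-model node before, the overall shape of `FusionChatModel.node.ts` is roughly the following. This is a simplified sketch against the `n8n-workflow` interfaces; the real file's properties, routing logic, and the LangChain model it returns are not shown, and exact interface names can vary between n8n versions:

```typescript
import {
  NodeConnectionType,
  type INodeType,
  type INodeTypeDescription,
  type ISupplyDataFunctions,
  type SupplyData,
} from 'n8n-workflow';

// Simplified sketch only: the real node returns a LangChain-compatible
// chat model from supplyData() so the AI Agent can call it.
export class FusionChatModel implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'Fusion Chat Model',
    name: 'fusionChatModel',
    icon: 'file:fusion.svg',
    group: ['transform'],
    version: 1,
    description: 'Chat model backed by Fusion AI / NeuroSwitch',
    defaults: { name: 'Fusion Chat Model' },
    inputs: [],
    outputs: [NodeConnectionType.AiLanguageModel],
    credentials: [{ name: 'fusionApi', required: true }],
    properties: [
      { displayName: 'Model', name: 'model', type: 'string', default: 'neuroswitch' },
    ],
  };

  async supplyData(this: ISupplyDataFunctions, itemIndex: number): Promise<SupplyData> {
    const credentials = await this.getCredentials('fusionApi');
    const model = this.getNodeParameter('model', itemIndex) as string;
    // Placeholder: the actual implementation constructs and returns the chat model here.
    return { response: { credentials, model } };
  }
}
```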
## 📊 Supported Models
- OpenAI: GPT-4, GPT-3.5-turbo
- Anthropic: Claude 3 Sonnet, Claude 3 Haiku
- Auto-Routing (NeuroSwitch): automatically selects a provider based on availability and performance
## 🐛 Troubleshooting
### Common Issues
“Invalid model ID” Error
- When selecting a provider model explicitly, use the provider:model_id format.

“Could not resolve parameter dependencies” Error
- Make sure the node’s credentials and model parameter are configured before connecting it to an AI Agent.

Connection Timeout
- Check that your n8n instance can reach the configured Base URL (default: https://api.mcp4.ai).
### Debug Mode
Enable debug logging in n8n to see detailed error messages:
```bash
N8N_LOG_LEVEL=debug npm start
```
## 📈 Performance
### Optimization Tips
1. Use NeuroSwitch: Automatic routing optimizes for speed and cost
2. Batch Requests: Process multiple items in a single workflow execution
3. Cache Results: Store frequently used responses (see the sketch after this list)
4. Monitor Usage: Track token consumption and costs
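A minimal illustration of tip 3 is an in-memory cache keyed by prompt; the key scheme and TTL below are arbitrary choices for the sketch, not something the Fusion AI node does for you:

```typescript
// Tiny in-memory response cache with a time-to-live, as an example of tip 3.
const responseCache = new Map<string, { value: string; expiresAt: number }>();

function getCachedResponse(prompt: string): string | undefined {
  const entry = responseCache.get(prompt);
  if (!entry || Date.now() > entry.expiresAt) {
    responseCache.delete(prompt);
    return undefined;
  }
  return entry.value;
}

function cacheResponse(prompt: string, value: string, ttlMs = 5 * 60_000): void {
  responseCache.set(prompt, { value, expiresAt: Date.now() + ttlMs });
}
```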
### Rate Limits
Rate limits depend on your Fusion AI account; monitor your usage and costs through the Fusion AI dashboard.
## 🤝 Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
### Development Setup
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## 📄 License
This project is licensed under the MIT License – see the LICENSE.txt file for details.
## 🆘 Support
For questions and bug reports, please open an issue on the GitHub repository (https://github.com/Fusionaimcp4/n8n-nodes-fusion).
## 🔄 Changelog
### v0.1.4
---
Made with ❤️ by the Fusion AI team