PromptLock Guard for n8n
🛡️ AI-powered security guardrails for your n8n workflows
PromptLock Guard is a community node that adds content analysis and sensitive data detection to your n8n automations. Detect prompt injection attacks, identify PII/PHI patterns, and route content based on risk assessment.
✨ Features
- 🔍 Sensitive Data Detection: Identifies patterns associated with HIPAA, GDPR, and PCI requirements
- 🚦 4-Output Routing: Route workflows based on risk assessment (Allow/Flag/Redact/Block)
- 🔒 Fail-Closed Security: Secure by default with configurable error handling
- ⚡ Real-time Analysis: Fast API integration with configurable timeouts
- 🎯 Flexible Targeting: Support for nested field paths with dot notation
- 📊 Rich Metadata: Detailed analysis results attached to each item
🚀 Installation
Community Nodes (Recommended)
1. In n8n, go to Settings → Community Nodes
2. Click Install a community node
3. Enter: n8n-nodes-promptlock-guard
4. Click Install
5. Restart n8n
npm Installation
```bash
# Install globally for n8n
npm install -g n8n-nodes-promptlock-guard
```

Or install in your n8n user directory:

```bash
cd ~/.n8n/
npm install n8n-nodes-promptlock-guard
```

Then restart n8n.
⚙️ Setup
1. Create Credentials
In n8n, create new PromptLock API Key credentials:
- Base URL: https://api.promptlock.io
- API Key: your PromptLock key (prefixed `ps_`)
- Authentication: `X-API-Key` header (preferred) or Bearer Token

Get your API key at promptlock.io.
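Once the key is created, you can sanity-check it outside n8n before wiring up the node. The sketch below is a hypothetical TypeScript example: the `/v1/analyze` path and the request body shape are assumptions for illustration, not the documented PromptLock API, so adjust them to match the official API reference.

```typescript
// Hypothetical sketch: verify a PromptLock API key with a direct request.
// The endpoint path and payload shape are assumptions; consult the API docs.
const BASE_URL = "https://api.promptlock.io";
const API_KEY = process.env.PROMPTLOCK_API_KEY ?? ""; // a key prefixed with "ps_"

async function checkKey(): Promise<void> {
  const res = await fetch(`${BASE_URL}/v1/analyze`, { // assumed endpoint path
    method: "POST",
    headers: {
      "X-API-Key": API_KEY, // preferred header; a Bearer token should also work
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: "ping", frameworks: ["HIPAA"] }), // assumed body
  });
  console.log(res.status === 401 ? "API key rejected" : `Reachable, HTTP ${res.status}`);
}

checkKey().catch(console.error);
```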
2. Add the Node
1. Search for “PromptLock Guard” in the node panel
2. Configure:
   - Text Field: Path to your text data (e.g., `text`, `payload.message`); a dot-notation sketch follows these steps
   - Frameworks: Select detection frameworks (HIPAA, GDPR, PCI)
   - Credentials: Select your PromptLock API Key
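For reference, dot notation in the Text Field resolves nested keys one segment at a time. The sketch below is illustrative only; `getByPath` is a hypothetical helper, not part of the node's source.

```typescript
// Illustrative only: how a Text Field path such as "payload.message" selects
// a nested value from an item's JSON. `getByPath` is a hypothetical helper.
function getByPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (cur, key) =>
      cur && typeof cur === "object" ? (cur as Record<string, unknown>)[key] : undefined,
    obj,
  );
}

const item = { json: { payload: { message: "Patient John Smith needs treatment" } } };
console.log(getByPath(item.json, "payload.message"));
// -> "Patient John Smith needs treatment"
```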
3. Wire the Outputs
The node provides four distinct outputs: Allow, Flag, Redact, and Block. Items on the Redact output carry the redacted text in the `cleanText` field.

📋 Quick Example
```
Webhook → PromptLock Guard
├─ Allow → Process Normally
├─ Flag → Send to Review Queue
├─ Redact → Process with Clean Text
└─ Block → Return 403 Error
```
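The routing decision itself happens inside the node, driven by the API's risk assessment. Purely as an illustration of the four-way split, a decision function might look like the sketch below; the thresholds are invented for the example and are not the node's actual policy.

```typescript
// Purely illustrative: deriving a four-way route from a risk score.
// The thresholds are made up for this example, NOT the node's real policy.
type Route = "allow" | "flag" | "redact" | "block";

function routeFor(riskScore: number, hasRedactions: boolean): Route {
  if (riskScore >= 80) return "block"; // assumed threshold
  if (hasRedactions) return "redact";
  if (riskScore >= 40) return "flag"; // assumed threshold
  return "allow";
}

console.log(routeFor(56, true)); // -> "redact" (cf. the metadata example below)
```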
🔧 Configuration Options
Core Settings
- Text Field: Path to the text to analyze; dot notation reaches nested data (e.g., `data.message.text`)
- Frameworks: Detection frameworks to apply (HIPAA, GDPR, PCI)

Advanced Settings
- Clean Text Field: Field that receives the redacted text (default: `cleanText`)
- Metadata Key: Key under which analysis results are attached (default: `promptLock`)
- Timeout: Request timeout for the analysis call
- Error Handling: Fail-closed by default, with configurable behavior on errors

📊 Metadata Structure
The node attaches analysis data to each item:
```json
{
  "promptLock": {
    "risk_score": 56,
    "action_taken": "redact",
    "clean_text": "Patient [HIPAA_PERSON_NAME] (SSN: [HIPAA_SSN]) needs treatment",
    "violations": [
      {
        "type": "pii_detection",
        "category": "person_name",
        "confidence": 0.95,
        "position": [8, 18],
        "text": "John Smith",
        "placeholder": "[HIPAA_PERSON_NAME]",
        "compliance_frameworks": ["HIPAA"]
      }
    ],
    "compliance_status": {
      "HIPAA": 2,
      "GDPR": 0,
      "PCI": 0
    },
    "usage": {
      "tokens_analyzed": 8,
      "processing_time_ms": 78
    }
  }
}
```
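Downstream nodes can read these fields to drive their own logic, for example deciding whether a flagged item really needs human review. The TypeScript sketch below uses our own approximate types that mirror the structure above; it is not an official SDK.

```typescript
// Approximate types mirroring the metadata example above; not an official SDK.
interface Violation {
  type: string;
  category: string;
  confidence: number;
  position: [number, number];
  text: string;
  placeholder: string;
  compliance_frameworks: string[];
}

interface PromptLockMeta {
  risk_score: number;
  action_taken: string;
  clean_text: string;
  violations: Violation[];
  compliance_status: Record<string, number>;
  usage: { tokens_analyzed: number; processing_time_ms: number };
}

// Count high-confidence violations per framework, e.g. to decide whether an
// item on the Flag output should go to a human review queue.
function highConfidenceByFramework(
  meta: PromptLockMeta,
  minConfidence = 0.9,
): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const v of meta.violations) {
    if (v.confidence < minConfidence) continue;
    for (const fw of v.compliance_frameworks) {
      counts[fw] = (counts[fw] ?? 0) + 1;
    }
  }
  return counts;
}
```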
🔒 Security Best Practices
⚠️ Important Note
PromptLock is a security tool that helps detect sensitive data patterns and prompt injection attempts. It is not a compliance certification and does not guarantee regulatory compliance. You remain responsible for your own compliance obligations.
📞 Support
📜 License
MIT License – see LICENSE file for details.
---
Built by the PromptLock team