Open Registry for AI Guardrails
Discover, test, and deploy production-ready guardrails for your AI applications. Community-driven benchmarks. Enterprise-grade security. Fully integrated with EthicalZen.
Browse by Category
PII Protection: SSN, Email, Phone, Credit Card (12 guardrails)
Prompt Injection: Jailbreak, instruction override (8 guardrails)
Toxicity & Harm: Hate speech, harassment (6 guardrails)
Hallucination: Factuality, citation verification (4 guardrails)
Compliance: HIPAA, GDPR, SOC2, PCI-DSS (10 guardrails)
Industry-Specific: Healthcare, Finance, Legal (15 guardrails)

Featured Guardrails
View All →

pii-blocker-v1
Detects and blocks PII including SSN, email, phone, credit card, DOB, and IP addresses.
prompt-injection-blocker-v1
Detects jailbreaks, instruction overrides, delimiter injection, and system prompt extraction.
medical-advice-blocker-v2
Prevents AI from providing medical diagnoses or treatment recommendations.
financial-advice-blocker-v1
Blocks unauthorized financial advice, investment recommendations, and trading signals.
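To make concrete what a guardrail like pii-blocker-v1 checks for, here is a minimal, hypothetical sketch of pattern-based PII detection. The patterns and function names are illustrative assumptions, not the guardrail's actual implementation:

```typescript
// Hypothetical sketch of the checks a PII guardrail performs.
// These regexes are simplified illustrations, not production patterns.
const PII_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,              // e.g. 123-45-6789
  email: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/,     // e.g. alice@example.com
  phone: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/,  // e.g. 555-123-4567
  creditCard: /\b(?:\d[ -]?){13,16}\b/,      // 13-16 digit card numbers
};

// Returns the names of all PII categories detected in the text.
function detectPII(text: string): string[] {
  return Object.entries(PII_PATTERNS)
    .filter(([, pattern]) => pattern.test(text))
    .map(([name]) => name);
}
```

A real guardrail adds context awareness and validation (e.g. Luhn checks on card numbers) on top of raw pattern matching, which is why false-positive rates matter in the benchmarks above.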
Leaderboard
Full Leaderboard →

Alex's Journey: From Discovery to Production
See how Alex, a developer building a Pediatric Health Assistant, went from exploring guardrails to deploying a custom Smart Guardrail in production.
1. Explore: Search GuardrailHub for "medical advice" and discover existing guardrails.
2. Try It: Test with safe and unsafe examples to see how the guardrail performs.
3. Clone: Copy the Medical Advice Blocker as a starting point for customization.
4. Describe: Write requirements in plain English: "Block medication advice for children."
5. Design: Smart Guardrail AI generates training examples and adversarial variants.
6. Fine-Tune: Iterate automatically until hitting the 90% accuracy and recall targets.
7. Validate: Review the confusion matrix: 93% recall and 85% specificity achieved.
8. Publish: Submit to GuardrailHub for others to discover and use.
9. Deploy: One API call deploys to the production gateway.
10. Protect: The Pediatric Health Assistant now blocks unsafe medical advice.
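The Validate step reports recall and specificity from a confusion matrix. As a sketch of how those numbers are computed (the counts below are made up to reproduce Alex's 93% / 85% figures, not real evaluation data):

```typescript
// Confusion matrix for a blocking guardrail: "positive" = unsafe input.
interface ConfusionMatrix {
  tp: number; // unsafe inputs correctly blocked
  fn: number; // unsafe inputs missed
  tn: number; // safe inputs correctly allowed
  fp: number; // safe inputs wrongly blocked
}

// Recall: share of unsafe content actually caught.
function recall(m: ConfusionMatrix): number {
  return m.tp / (m.tp + m.fn);
}

// Specificity: share of safe content allowed through.
function specificity(m: ConfusionMatrix): number {
  return m.tn / (m.tn + m.fp);
}

// Illustrative counts matching the reported 93% recall, 85% specificity:
const m: ConfusionMatrix = { tp: 93, fn: 7, tn: 85, fp: 15 };
```

High recall matters most for a safety guardrail (missing unsafe content is costly), while specificity keeps the guardrail from blocking legitimate questions.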
Deploy in Minutes
Install guardrails with a single command. Works with any LLM provider.
import { EthicalZen } from '@ethicalzen/sdk';
import OpenAI from 'openai';

// Initialize with guardrails from GuardrailHub
const ez = new EthicalZen({
  apiKey: 'sk-...',
  guardrails: [
    'ethicalzen/pii-blocker-v1',
    'ethicalzen/prompt-injection-blocker-v1'
  ]
});

// Wrap your OpenAI client
const openai = ez.protectOpenAI(new OpenAI());

// Use normally - guardrails run automatically
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userInput }]
});
Ready to Secure Your AI?
Join thousands of developers using GuardrailHub to build safer AI applications.