# ai-driven

An AI-powered content analysis and moderation toolkit using the Claude API.

`ai-driven` is a TypeScript module that provides easy-to-use functions for content moderation, text translation, and image analysis. It leverages Claude AI to perform tasks such as:
- Text translation
- Offensive language detection
- Profanity checking
- Violence detection in images
- Pornographic content detection in images
## Installation

To install the `ai-driven` module, run the following command:

```bash
npm install ai-driven
```
## Configuration

- Create a `.env` file in the root of your project.
- Add your Claude API key and URL to the `.env` file:

```env
CLAUDE_API_KEY=your_api_key_here
CLAUDE_API_URL=https://api.anthropic.com/v1/messages
```
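These variables are typically loaded into `process.env` at startup, most commonly with the `dotenv` package. As a self-contained illustration of what that loading step does (this hand-rolled parser is a sketch, not part of `ai-driven`):

```typescript
import fs from 'fs';

// Minimal .env loader: reads KEY=value lines into process.env.
// Existing variables win, mirroring dotenv's default behaviour.
function loadEnv(path: string): void {
  if (!fs.existsSync(path)) return;
  for (const line of fs.readFileSync(path, 'utf8').split('\n')) {
    const match = line.match(/^([A-Z_]+)=(.*)$/);
    if (match && process.env[match[1]] === undefined) {
      process.env[match[1]] = match[2].trim();
    }
  }
}

loadEnv('.env');
// process.env.CLAUDE_API_KEY and process.env.CLAUDE_API_URL
// are now available to the rest of the application.
```

In a real project you would just call `require('dotenv').config()` (or `import 'dotenv/config'`) instead.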
## Usage

Here's a basic example of how to use the `ai-driven` module:

```typescript
import Assistant from 'ai-driven';
import fs from 'fs/promises';

async function main() {
  const assistant = new Assistant();

  // Translate text
  const translatedText = await assistant.translateText('Hello, world!');
  console.log('Translated text:', translatedText);

  // Check for offensive language
  const offensiveLevel = await assistant.checkForOffensiveLanguage('You are stupid!');
  console.log('Offensive level:', offensiveLevel);

  // Check for profanity
  const profanityLevel = await assistant.checkForProfanity('Damn it!');
  console.log('Profanity level:', profanityLevel);

  // Check an image for violence
  const imageBuffer = await fs.readFile('path/to/your/image.jpg');
  const violenceLevel = await assistant.checkImageForViolence(imageBuffer);
  console.log('Violence level in image:', violenceLevel);

  // Check an image for pornography
  const pornographyLevel = await assistant.checkImageForPornography(imageBuffer);
  console.log('Pornography level in image:', pornographyLevel);
}

main().catch(console.error);
```
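The checks return a raw 1–10 score, so the calling application decides what to do with it. One common pattern is to map the score onto a moderation decision; the helper and thresholds below are illustrative assumptions, not part of `ai-driven`:

```typescript
type Decision = 'allow' | 'review' | 'block';

// Hypothetical helper: map a 1-10 risk score onto a moderation decision.
// The cut-off points are assumptions; tune them for your own content policy.
function decide(score: number): Decision {
  if (score <= 3) return 'allow';  // low risk: publish automatically
  if (score <= 6) return 'review'; // borderline: queue for a human moderator
  return 'block';                  // high risk: reject outright
}

// e.g. decide(await assistant.checkForProfanity(comment))
```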
## API

The `ai-driven` module provides the following methods:

- `translateText(text: string): Promise<string>` - Translates the given text to English.
- `checkForOffensiveLanguage(text: string): Promise<number>` - Checks the given text for offensive language and returns a score from 1 to 10.
- `checkForProfanity(text: string): Promise<number>` - Checks the given text for profanity and returns a score from 1 to 10.
- `checkImageForViolence(imageBuffer: Buffer): Promise<number>` - Analyzes the given image for violent content and returns a score from 1 to 10.
- `checkImageForPornography(imageBuffer: Buffer): Promise<number>` - Analyzes the given image for pornographic content and returns a score from 1 to 10.
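Since each method returns an independent `Promise<number>`, the text checks can run concurrently. A sketch of combining them and keeping the worst score (the `TextChecker` interface here is an illustrative structural type, not exported by `ai-driven` — any object with these two methods, such as an `Assistant` instance, satisfies it):

```typescript
// Structural interface matching the two text-scoring methods above.
interface TextChecker {
  checkForOffensiveLanguage(text: string): Promise<number>;
  checkForProfanity(text: string): Promise<number>;
}

// Run both checks concurrently and return the highest (worst) score,
// so one combined threshold can gate the content.
async function worstTextScore(checker: TextChecker, text: string): Promise<number> {
  const [offensive, profanity] = await Promise.all([
    checker.checkForOffensiveLanguage(text),
    checker.checkForProfanity(text),
  ]);
  return Math.max(offensive, profanity);
}
```

Image checks could be folded in the same way, at the cost of one extra API call per method.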
## Note

This module requires a valid Claude API key to function. Ensure you have the necessary permissions and comply with Claude's terms of service when using this module.