Production-ready React hooks for OpenAI integration with full TypeScript support, secure API key management, and comprehensive error handling.
- 🚀 Production-Ready: Built with enterprise-grade patterns and best practices
- 🔐 Secure API Management: Support for backend proxies to protect API keys
- 📝 Full TypeScript Support: Complete type definitions for excellent DX
- 🔄 Advanced Error Handling: Automatic retries, cancellation, and error recovery
- ⚡ Performance Optimized: Tree-shakeable, minimal bundle size
- 🧪 Well Tested: Comprehensive test coverage
- 🎯 Modern React: Supports React 16.8+, 17, and 18
npm install genai-hooks
# or
yarn add genai-hooks
# or
pnpm add genai-hooks
import { AIHooksProvider, useTextGeneration } from 'genai-hooks';
function App() {
return (
<AIHooksProvider config={{ apiKey: process.env.REACT_APP_OPENAI_API_KEY }}>
<TextGenerator />
</AIHooksProvider>
);
}
function TextGenerator() {
const { generateText, data, isLoading, error } = useTextGeneration();
const handleGenerate = () => {
generateText('Write a short poem about React hooks');
};
return (
<div>
<button onClick={handleGenerate} disabled={isLoading}>
{isLoading ? 'Generating...' : 'Generate Text'}
</button>
{error && <p>Error: {error.message}</p>}
{data && <p>{data}</p>}
</div>
);
}
For production applications, use a backend proxy to secure your API keys:
import { AIHooksProvider } from 'genai-hooks';
function App() {
return (
<AIHooksProvider config={{
baseURL: '/api/ai', // Your backend proxy endpoint
headers: {
'Authorization': `Bearer ${userToken}` // Your auth token
}
}}>
<YourApp />
</AIHooksProvider>
);
}
Generate text using OpenAI's completion models.
const {
generateText, // Function to trigger generation
data, // Generated text
isLoading, // Loading state
error, // Error object
cancel, // Cancel ongoing request
reset // Reset hook state
} = useTextGeneration();
// With options
generateText('Your prompt', {
model: 'gpt-3.5-turbo-instruct',
maxTokens: 150,
temperature: 0.7,
topP: 1,
frequencyPenalty: 0,
presencePenalty: 0,
stop: ['\n']
});
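As a small illustrative sketch (the component, prompt, and handler names are our own, not part of the library), these options can be combined with user input:

// Illustrative only: `Summarizer` and its prompt are example code, not library API.
import { useState } from 'react';
import { useTextGeneration } from 'genai-hooks';

function Summarizer() {
  const { generateText, data, isLoading, error } = useTextGeneration();
  const [input, setInput] = useState('');

  const summarize = () => {
    // Options mirror the reference above; tune maxTokens/temperature to taste
    generateText(`Summarize in one sentence:\n\n${input}`, {
      model: 'gpt-3.5-turbo-instruct',
      maxTokens: 60,
      temperature: 0.3,
    });
  };

  return (
    <div>
      <textarea value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={summarize} disabled={isLoading || !input}>Summarize</button>
      {error && <p>Error: {error.message}</p>}
      {data && <p>{data}</p>}
    </div>
  );
}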
Generate images using DALL-E.
const {
generateImage,
data, // Image URL or base64 data
isLoading,
error,
cancel,
reset
} = useImageGeneration();
// With options
generateImage('A futuristic city at sunset', {
model: 'dall-e-3',
size: '1024x1024',
quality: 'hd',
style: 'vivid',
n: 1,
responseFormat: 'url' // or 'b64_json'
});
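A short sketch of rendering the result; it assumes data is a plain string in both response formats (a URL for 'url', raw base64 for 'b64_json'), so check the actual return shape in your version:

// Sketch only — assumes `data` is a string (URL or base64), per the reference above.
import { useImageGeneration } from 'genai-hooks';

function CityScene() {
  const { generateImage, data, isLoading, error } = useImageGeneration();

  const src = data
    ? (data.startsWith('http') ? data : `data:image/png;base64,${data}`)
    : null;

  return (
    <div>
      <button
        onClick={() => generateImage('A futuristic city at sunset', { size: '1024x1024' })}
        disabled={isLoading}
      >
        {isLoading ? 'Generating…' : 'Generate Image'}
      </button>
      {error && <p>Error: {error.message}</p>}
      {src && <img src={src} alt="A futuristic city at sunset" />}
    </div>
  );
}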
Get real-time text suggestions as users type.
const {
suggestions, // Array of suggestions
isLoading,
error,
fetchSuggestions, // Debounced function
cancel,
reset
} = usePredictiveCompletion({
maxSuggestions: 3,
debounceMs: 300,
temperature: 0.7
});
// In your input handler
const handleInputChange = (e) => {
setText(e.target.value);
fetchSuggestions(e.target.value);
};
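Putting it together as a complete, illustrative component (it assumes suggestions is an array of strings):

// Illustrative component; assumes `suggestions` is a string array.
import { useState } from 'react';
import { usePredictiveCompletion } from 'genai-hooks';

function SmartInput() {
  const [text, setText] = useState('');
  const { suggestions, fetchSuggestions, isLoading } = usePredictiveCompletion({
    maxSuggestions: 3,
    debounceMs: 300,
  });

  const handleInputChange = (e) => {
    setText(e.target.value);
    fetchSuggestions(e.target.value); // debounced internally
  };

  return (
    <div>
      <input value={text} onChange={handleInputChange} />
      {!isLoading && suggestions.length > 0 && (
        <ul>
          {suggestions.map((suggestion, i) => (
            <li key={i} onClick={() => setText(text + suggestion)}>
              {suggestion}
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}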
Translate text between languages.
const {
translateText,
data, // Translated text
isLoading,
error,
cancel,
reset
} = useLanguageTranslation();
// Translate
translateText('Hello world', 'Spanish', {
model: 'gpt-3.5-turbo',
temperature: 0.3
});
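For example (the component and language list are illustrative, not part of the library), the target language can come straight from UI state:

// Illustrative sketch: component and language list are examples only.
import { useState } from 'react';
import { useLanguageTranslation } from 'genai-hooks';

function Translator() {
  const { translateText, data, isLoading, error } = useLanguageTranslation();
  const [text, setText] = useState('');
  const [target, setTarget] = useState('Spanish');

  return (
    <div>
      <textarea value={text} onChange={(e) => setText(e.target.value)} />
      <select value={target} onChange={(e) => setTarget(e.target.value)}>
        {['Spanish', 'French', 'German', 'Japanese'].map((lang) => (
          <option key={lang}>{lang}</option>
        ))}
      </select>
      <button onClick={() => translateText(text, target, { temperature: 0.3 })} disabled={isLoading || !text}>
        Translate
      </button>
      {error && <p>Error: {error.message}</p>}
      {data && <p>{data}</p>}
    </div>
  );
}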
The provider accepts the following configuration options:
<AIHooksProvider config={{
// API Configuration
apiKey: string, // OpenAI API key (use env vars)
baseURL: string, // API base URL (default: OpenAI)
// Request Configuration
timeout: number, // Request timeout in ms (default: 30000)
retries: number, // Number of retries (default: 3)
retryDelay: number, // Delay between retries (default: 1000)
// Custom headers
headers: Record<string, string>,
// Error handler
onError: (error: AIHookError) => void
}}>
Configuration can also be updated at runtime with the useAIHooksConfig hook:
function App() {
const { updateConfig } = useAIHooksConfig();
// Update configuration at runtime
const switchToProduction = () => {
updateConfig({
baseURL: 'https://api.mycompany.com/ai',
headers: { 'X-API-Version': 'v2' }
});
};
}
Never expose your OpenAI API key in client-side code. Use a backend proxy:
// Express.js example — the API key stays on the server
const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post('/api/ai/completions', authenticate, async (req, res) => {
  try {
    // Forward the client's request body; credentials never come from the client
    const response = await openai.completions.create(req.body);
    res.json(response);
  } catch (err) {
    res.status(err.status || 500).json({ error: 'AI request failed' });
  }
});
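The authenticate middleware above is your own; a minimal sketch (verifyToken stands in for your real session or JWT check) could look like this:

// Minimal sketch of the `authenticate` middleware used above.
// `verifyToken` is a placeholder for your real session/JWT validation.
function authenticate(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;

  if (!token || !verifyToken(token)) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
}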
If you must use client-side API keys (development only):
# .env.local
REACT_APP_OPENAI_API_KEY=your_api_key_here
Implement rate limiting on your backend:
const rateLimit = require('express-rate-limit');
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // limit each IP to 100 requests per windowMs
});
app.use('/api/ai', limiter);
Handle errors globally by passing an onError callback to the provider:
<AIHooksProvider config={{
onError: (error) => {
// Log to error tracking service
console.error('AI Hook Error:', error);
// Show user-friendly message
if (error.statusCode === 429) {
toast.error('Rate limit exceeded. Please try again later.');
}
}
}}>
In-flight requests can be cancelled at any time:
function InterruptibleGenerator() {
const { generateText, cancel, isLoading } = useTextGeneration();
return (
<>
<button onClick={() => generateText('Long prompt...')}>
Generate
</button>
{isLoading && (
<button onClick={cancel}>Cancel</button>
)}
</>
);
}
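Cancellation also pairs well with unmount cleanup. This sketch assumes cancel is referentially stable and safe to call when nothing is pending; adjust if your version differs:

// Sketch: cancel any in-flight request when the component unmounts.
import { useEffect } from 'react';
import { useTextGeneration } from 'genai-hooks';

function AutoCancellingGenerator() {
  const { generateText, cancel, data, isLoading } = useTextGeneration();

  useEffect(() => {
    return () => cancel(); // cleanup on unmount
  }, [cancel]);

  return (
    <div>
      <button onClick={() => generateText('Long prompt...')} disabled={isLoading}>
        Generate
      </button>
      {data && <p>{data}</p>}
    </div>
  );
}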
Stream long responses with useTextStream and react to each chunk as it arrives:
const { streamText, data, isStreaming } = useTextStream();
streamText('Write a story...', {
onChunk: (chunk) => console.log('Received:', chunk),
onComplete: (fullText) => console.log('Complete:', fullText)
});
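A small illustrative component, assuming data accumulates the text streamed so far:

// Illustrative sketch; assumes `data` holds the streamed text to date.
import { useTextStream } from 'genai-hooks';

function StoryStreamer() {
  const { streamText, data, isStreaming } = useTextStream();

  return (
    <div>
      <button onClick={() => streamText('Write a story...')} disabled={isStreaming}>
        {isStreaming ? 'Streaming…' : 'Write a story'}
      </button>
      <p>{data}</p>
    </div>
  );
}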
All hooks are fully typed. Import types as needed:
import type {
TextGenerationOptions,
ImageGenerationOptions,
AIHookError,
AIHooksConfig
} from 'genai-hooks';
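For example, the exported types can annotate your own option objects and error handlers (field names follow the options shown above):

import type { TextGenerationOptions, AIHookError } from 'genai-hooks';

// Reusable, typed generation options
const poemOptions: TextGenerationOptions = {
  model: 'gpt-3.5-turbo-instruct',
  maxTokens: 150,
  temperature: 0.7,
};

// Typed error handler suitable for the provider's onError callback
function logAIError(error: AIHookError): void {
  console.error(error.code, error.statusCode, error.message);
}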
The library provides detailed error information:
interface AIHookError {
message: string;
code?: string;
statusCode?: number;
details?: unknown;
}
Common error codes:
- `429`: Rate limit exceeded
- `401`: Invalid API key
- `500`: Server error
- `ECONNABORTED`: Request timeout
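As an illustrative helper (the function and wording are our own, not library behavior), these codes can be mapped to user-facing messages:

// Illustrative helper — the mapping and wording are examples only.
import type { AIHookError } from 'genai-hooks';

function toUserMessage(error: AIHookError): string {
  if (error.statusCode === 429) return 'Rate limit exceeded. Please try again later.';
  if (error.statusCode === 401) return 'The AI service is not configured correctly.';
  if (error.statusCode === 500) return 'The AI service had a problem. Please retry.';
  if (error.code === 'ECONNABORTED') return 'The request timed out. Please retry.';
  return 'Something went wrong while contacting the AI service.';
}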
We welcome contributions! Please see our Contributing Guide for details.
MIT © Harsh Joshi