A plug-and-play React component that lets users search either by typing or using voice input (speech-to-text). Built for reusability, flexibility, and quick integration into any modern React application.
- 🎤 Voice Input Support – Capture audio from the microphone and transcribe it using a backend speech-to-text service (e.g., Vosk).
- ⌨️ Text Input Support – Supports both voice and manual text input in the same field.
- 🔍 Real-Time Search – Automatically queries a backend search API as you speak or type (with debounce).
- 🎯 Keyword Highlighting – Matched words are highlighted in the search results and transcript text.
- ⚛️ Modular and Reusable – Packaged as a standalone React component that can be embedded in any React app.
- 💬 Error Handling – Includes a fallback for unsupported browsers and clear messages when audio isn't captured or no results are found.
Install the component via NPM:
```bash
npm install react-voice-search-widget
# or
yarn add react-voice-search-widget
```
```tsx
import React from 'react';
import VoiceInput from 'react-voice-search-widget';

function App() {
  return (
    <div style={{ maxWidth: 800, margin: '0 auto', padding: '1rem' }}>
      <VoiceInput />
    </div>
  );
}

export default App;
```
`VoiceInput` – the main component. It handles:
- Microphone access
- Recording and sending audio to the backend
- Managing both typed and voice input
- Displaying real-time search results

`SearchResults` – displays the result list and highlights matched keywords.

`UnsupportedBrowserFallback` – renders a fallback message if the browser doesn't support MediaRecorder or microphone access.
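As a rough illustration, a capability check along these lines is enough to decide whether to render the fallback. This is a minimal sketch, not the component's actual internal code:

```ts
// Minimal sketch of a browser-support check (illustrative only; the
// widget's internal logic may differ).
function isVoiceInputSupported(): boolean {
  // MediaRecorder is needed to capture audio chunks...
  const hasMediaRecorder =
    typeof window !== 'undefined' && 'MediaRecorder' in window;

  // ...and getUserMedia is needed to access the microphone.
  const hasGetUserMedia =
    typeof navigator !== 'undefined' &&
    !!navigator.mediaDevices &&
    typeof navigator.mediaDevices.getUserMedia === 'function';

  return hasMediaRecorder && hasGetUserMedia;
}

// Example: choose which component to render.
// const content = isVoiceInputSupported() ? <VoiceInput /> : <UnsupportedBrowserFallback />;
```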
Your backend should expose two endpoints:
- A transcription endpoint that accepts an audio file (`multipart/form-data`) and returns JSON of the form `{ "text": "transcribed text" }`.
- A search endpoint that accepts a search query and returns results in the following format (see the client-side sketch after the example response):
```json
{
  "results": [
    {
      "id": "1",
      "name": "<highlighted text>",
      "category": "<highlighted category>",
      "matchedWords": ["keyword1", "keyword2"]
    }
  ]
}
```
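For reference, here is a minimal TypeScript sketch of how a client might call such a backend. The endpoint paths (`/api/transcribe`, `/api/search`), the form field name, and the `q` query parameter are placeholders, not part of this package's documented API; adapt them to whatever your backend exposes.

```ts
// Hypothetical endpoint paths -- adjust to match your backend.
const TRANSCRIBE_URL = '/api/transcribe';
const SEARCH_URL = '/api/search';

// Shape of the search response shown above.
interface SearchResult {
  id: string;
  name: string;
  category: string;
  matchedWords: string[];
}

interface SearchResponse {
  results: SearchResult[];
}

// Send recorded audio as multipart/form-data and read back { text: "..." }.
async function transcribe(audio: Blob): Promise<string> {
  const form = new FormData();
  form.append('audio', audio, 'recording.webm'); // field name is an assumption
  const res = await fetch(TRANSCRIBE_URL, { method: 'POST', body: form });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);
  const data: { text: string } = await res.json();
  return data.text;
}

// Query the search endpoint; the "q" parameter name is an assumption.
async function search(query: string): Promise<SearchResult[]> {
  const res = await fetch(`${SEARCH_URL}?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  const data: SearchResponse = await res.json();
  return data.results;
}
```

Keep in mind that the widget debounces requests as you speak or type, so a search endpoint should expect frequent short queries.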
🔗 Example backend repo: speech-to-text (GitHub)
To work on the component locally and customize it:
```bash
# Start dev server
npm run dev

# Lint the code
npm run lint

# Build for production
npm run build
```
```text
react-voice-search-widget/
├── src/
│   ├── components/
│   │   ├── VoiceInput.tsx
│   │   ├── SearchResults.tsx
│   │   └── UnsupportedBrowserFallback.tsx
│   ├── types/
│   │   └── types.ts
│   ├── assets/
│   │   └── microphone-icon.webp
│   ├── App.tsx
│   ├── main.tsx
│   └── style.css
├── dist/              # Compiled output
├── package.json
├── tsconfig.json
├── vite.config.ts
└── README.md
```
Tailwind CSS is used for styling. If your host app uses Tailwind, the widget's styles will integrate seamlessly; otherwise, you may need to adjust or override them.
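If the host app does use Tailwind, one thing worth checking is that Tailwind's content scan also covers the widget's compiled output so none of its utility classes get purged. The sketch below assumes a Tailwind v3 `tailwind.config.ts` and that the package's classes live in its `dist/` JavaScript; verify against your own setup:

```ts
// tailwind.config.ts (host app) -- a sketch; the node_modules glob is an
// assumption about where the package's compiled files live.
import type { Config } from 'tailwindcss';

const config: Config = {
  content: [
    './index.html',
    './src/**/*.{ts,tsx}',
    // Include the widget's compiled output so its Tailwind classes are generated.
    './node_modules/react-voice-search-widget/dist/**/*.{js,mjs}',
  ],
  theme: { extend: {} },
  plugins: [],
};

export default config;
```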
Planned for future versions:
- Props to override API endpoints
- Custom result renderers
- Adjustable debounce delay
- Theming via CSS variables
MIT License
@aarathi01
Contributions and feedback are welcome!
- 🔧 speech-to-text (Backend Repo) – Node.js + Vosk + MongoDB backend for voice search