🚀 Ultimate Standalone Terminal Browser & Web Scraper - Browse any website, extract content & links directly in your terminal with zero file pollution. 100% standalone with smart fallbacks, enhanced by optional terminal browsers. Perfect for documentation reading, API exploration & web scraping.
# Install
npm install -g nascoder-terminal-browser-mcp
# Use immediately
browse https://example.com
browse https://docs.python.org --format summary
browse https://news.ycombinator.com --format links
# Install
npm install -g nascoder-terminal-browser-mcp
Edit `~/.config/amazonq/mcp.json`:
{
"mcpServers": {
"nascoder-terminal-browser": {
"command": "npx",
"args": ["nascoder-terminal-browser-mcp"],
"timeout": 30000,
"disabled": false
}
}
}
# Exit Q CLI
/quit
# Start again
q chat
Browse https://example.com and show me the content
| Feature | Standard Tools | NasCoder Terminal Browser |
|---|---|---|
| File Downloads | ❌ Creates files | ✅ No files - terminal only |
| Dependencies | ❌ Requires external tools | ✅ 100% standalone |
| Browser Support | Limited | ✅ Multiple engines + fallback |
| Fallback Method | None | ✅ Built-in fetch+html-to-text |
| Link Extraction | Manual | ✅ Automatic link parsing |
| Content Formatting | Raw HTML | ✅ Clean terminal formatting |
| Error Handling | Basic | ✅ Advanced retry & fallback |
| Output Control | Fixed | ✅ Multiple format options |
- Zero external dependencies - Works on any system with Node.js
- Built-in fallback - Uses fetch+html-to-text when no terminal browsers are available
- Smart enhancement - Automatically uses lynx/w3m/links if installed for better formatting
- Always functional - Never fails due to missing system tools
- No file pollution - Everything displayed directly in terminal
- Multiple browser engines - lynx, w3m, links, elinks with auto-selection
- Smart fallback - Uses fetch+html-to-text if no terminal browsers are available
- Clean formatting - Optimized for terminal reading
- Link extraction - Automatically find and list all page links
- Content truncation - Prevent overwhelming output with length limits
- Multiple formats - Choose between full, content-only, links-only, or summary
- Error resilience - Multiple fallback methods ensure success
- Zero configuration - Works out of the box
- Comprehensive logging - Debug issues easily
- Flexible options - Customize behavior per request
- MCP standard - Integrates with any MCP-compatible system
Browse websites and display content directly in terminal.
Parameters:
- `url` (required) - Website URL to browse
- `browser` - Terminal browser to use (auto, lynx, w3m, links, elinks)
- `format` - Output format (full, content-only, links-only, summary)
- `extractLinks` - Extract page links (true/false)
- `maxLength` - Maximum content length to prevent overwhelming output
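Put together, a `terminal_browse` call combining these parameters might look like this (the values are illustrative):

```json
{
  "url": "https://docs.github.com",
  "format": "summary",
  "extractLinks": true,
  "maxLength": 2000
}
```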
Example:
Use terminal_browse to visit https://docs.github.com with format=summary
Check which terminal browsers are available on your system.
Example:
Check what terminal browsers are available
Extract all links from a webpage without showing full content.
Parameters:
- `url` (required) - Website URL to extract links from
- `maxLinks` - Maximum number of links to return (default: 50)
Example:
Extract all links from https://news.ycombinator.com
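Internally the package parses HTML with cheerio; as a rough illustration of the idea only (not the actual implementation), a minimal extractor with a `maxLinks` cap could look like this:

```javascript
// Minimal sketch of link extraction: collect href values from anchor
// tags and stop once maxLinks is reached. The real package uses cheerio
// for robust HTML parsing; this regex version is illustrative only.
function extractLinks(html, maxLinks = 50) {
  const links = [];
  const re = /<a\b[^>]*\bhref=["']([^"']+)["']/gi;
  let match;
  while ((match = re.exec(html)) !== null && links.length < maxLinks) {
    links.push(match[1]);
  }
  return links;
}

const sample = '<a href="https://example.com/a">A</a> <a href="/b">B</a>';
console.log(extractLinks(sample)); // [ 'https://example.com/a', '/b' ]
```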
# Browse any website
browse https://example.com
# Get page summary with stats
browse https://docs.python.org --format summary
# Extract all links
browse https://news.ycombinator.com --format links
# Full content with metadata
browse https://github.com/trending --format full
# Limit content length
browse https://very-long-page.com --max-length 1000
# Use specific browser
browse https://example.com --browser lynx
- `content` - Clean page text (default)
- `summary` - Brief overview with stats
- `links` - All extracted links
- `full` - Complete content with links
browse <url> [options]
Options:
--format, -f Output format (content, summary, links, full)
--max-length, -l Maximum content length [default: 2000]
--browser, -b Browser to use (auto, lynx, w3m, links)
--help, -h Show help
Browse https://example.com
Browse https://docs.python.org/3/ with format=content-only
Extract links from https://github.com/trending
Browse https://news.ycombinator.com with format=summary
Browse https://example.com using lynx browser
- full - Complete page content, all extracted links, metadata and statistics, and the browsing method used
- content - Just the page text, with no links or metadata, for a clean reading experience
- links - Only the extracted links in a numbered list format, perfect for navigation
- summary - Brief content preview with key statistics for a quick overview
The package includes everything needed to work:
- `@modelcontextprotocol/sdk` - MCP protocol support
- `node-fetch` - HTTP requests
- `cheerio` - HTML parsing
- `html-to-text` - HTML to text conversion
- `winston` - Logging
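As a rough stand-in for what the `html-to-text` fallback does (the real library handles tables, links, and line wrapping far more carefully), tag stripping can be sketched as:

```javascript
// Rough sketch of HTML-to-text conversion: drop script/style blocks,
// strip remaining tags, and collapse whitespace. Illustrative only;
// the actual html-to-text dependency is much more thorough.
function htmlToText(html) {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

console.log(htmlToText("<h1>Hi</h1><p>There</p>")); // "Hi There"
```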
For even better text formatting, install terminal browsers:
# macOS (Homebrew)
brew install lynx w3m links
# Ubuntu/Debian
sudo apt install lynx w3m links elinks
# CentOS/RHEL
sudo yum install lynx w3m links elinks
- First Choice: Uses terminal browsers (lynx, w3m, links) if available
- Automatic Fallback: Uses built-in fetch+html-to-text if no browsers found
- Always Works: Never fails due to missing dependencies
- lynx - Best text formatting, recommended
- w3m - Good table support, images in some terminals
- links - Interactive features, mouse support
- elinks - Enhanced links with more features
- lynx (best formatting)
- w3m (good compatibility)
- links (interactive features)
- elinks (enhanced features)
- fetch+html-to-text (always available fallback)
{
"url": "https://very-long-page.com",
"maxLength": 5000
}
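The `maxLength` behavior can be pictured as a simple cut-and-mark truncation (a hypothetical sketch, not the package's exact output):

```javascript
// Hypothetical sketch of maxLength-style truncation: cut the text at
// the limit and append a marker so the reader knows content was trimmed.
function truncate(text, maxLength = 2000) {
  if (text.length <= maxLength) return text;
  return text.slice(0, maxLength) + "\n... [content truncated]";
}

console.log(truncate("a".repeat(10), 5)); // "aaaaa\n... [content truncated]"
```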
{
"url": "https://documentation-site.com",
"format": "links-only",
"maxLinks": 100
}
{
"url": "https://example.com",
"browser": "w3m",
"extractLinks": false
}
# Install at least one terminal browser
brew install lynx # macOS
sudo apt install lynx # Ubuntu
- The tool automatically falls back to fetch+html-to-text
- Check internet connectivity
- Some sites may block terminal browsers
Use maxLength parameter to limit output:
Browse https://long-page.com with maxLength=2000
- Check `~/.config/amazonq/mcp.json` syntax
- Restart Q CLI (`/quit` then `q chat`)
- Verify package installation: `npm list -g nascoder-terminal-browser-mcp`
- No file system caching (by design)
- Memory-efficient processing
- Fast response times
- Multiple fallback methods
- Graceful degradation
- Comprehensive error messages
- 30-second timeout protection
- Memory-conscious content truncation
- Efficient link extraction
"Finally, a way to browse documentation without cluttering my filesystem with temp files!" - Developer
"The automatic fallback from lynx to fetch+html-to-text saved my workflow when lynx wasn't available." - DevOps Engineer
"Perfect for scraping API docs directly in my terminal. The link extraction is incredibly useful." - API Developer
| Tool | Files Created | Browser Support | Link Extraction | Fallback Method |
|---|---|---|---|---|
| NasCoder Terminal Browser | ✅ None | ✅ 4 browsers | ✅ Automatic | ✅ fetch+html-to-text |
| curl + html2text | ❌ Temp files | ❌ None | ❌ Manual | ❌ None |
| wget + pandoc | ❌ Downloads | ❌ None | ❌ Manual | ❌ None |
| lynx alone | ❌ Can save files | ❌ lynx only | ❌ Manual | ❌ None |
- NPM Package: https://www.npmjs.com/package/nascoder-terminal-browser-mcp
- GitHub: https://github.com/freelancernasimofficial/nascoder-terminal-browser-mcp
- Issues: https://github.com/freelancernasimofficial/nascoder-terminal-browser-mcp/issues
- Author: @freelancernasimofficial
MIT - Feel free to use, modify, and distribute
🚀 Ready to browse the web in your terminal without file clutter?
Install now and experience the difference!
npm install -g nascoder-terminal-browser-mcp
Built with ❤️ by NasCoder (@freelancernasimofficial)