nascoder-terminal-browser-mcp

1.0.5 • Public • Published

🌐 NasCoder Terminal Browser MCP

🚀 Ultimate standalone terminal browser & web scraper: browse any website and extract content and links directly in your terminal with zero file pollution. 100% standalone with smart fallbacks, enhanced by optional terminal browsers. Perfect for documentation reading, API exploration, and web scraping.


⚡ Quick Start (2 minutes)

Method 1: Simple CLI Tool (Easiest)

# Install
npm install -g nascoder-terminal-browser-mcp

# Use immediately
browse https://example.com
browse https://docs.python.org --format summary
browse https://news.ycombinator.com --format links

Method 2: Amazon Q CLI Integration

1. Install the Package

# Install
npm install -g nascoder-terminal-browser-mcp

2. Add to Q CLI

Edit ~/.config/amazonq/mcp.json:

{
  "mcpServers": {
    "nascoder-terminal-browser": {
      "command": "npx",
      "args": ["nascoder-terminal-browser-mcp"],
      "timeout": 30000,
      "disabled": false
    }
  }
}

3. Restart Q CLI

# Exit Q CLI
/quit

# Start again
q chat

4. Try It!

Browse https://example.com and show me the content

🔥 Why Choose This MCP?

| Feature | Standard Tools | NasCoder Terminal Browser |
| --- | --- | --- |
| File downloads | ❌ Creates files | ✅ No files, terminal only |
| Dependencies | ❌ Requires external tools | ✅ 100% standalone |
| Browser support | Limited | ✅ Multiple engines + fallback |
| Fallback method | None | ✅ Built-in fetch+html-to-text |
| Link extraction | Manual | ✅ Automatic link parsing |
| Content formatting | Raw HTML | ✅ Clean terminal formatting |
| Error handling | Basic | ✅ Advanced retry & fallback |
| Output control | Fixed | ✅ Multiple format options |

🎯 What You Get

🚀 Standalone Operation

  • Zero external dependencies - Works on any system with Node.js
  • Built-in fallback - Uses fetch+html-to-text when no terminal browsers available
  • Smart enhancement - Automatically uses lynx/w3m/links if installed for better formatting
  • Always functional - Never fails due to missing system tools

Terminal Web Browsing

  • No file pollution - Everything displayed directly in terminal
  • Multiple browser engines - lynx, w3m, links, elinks with auto-selection
  • Smart fallback - Uses fetch+html-to-text if no terminal browsers available
  • Clean formatting - Optimized for terminal reading

Advanced Features

  • Link extraction - Automatically find and list all page links
  • Content truncation - Prevent overwhelming output with length limits
  • Multiple formats - Choose between full, content-only, links-only, or summary
  • Error resilience - Multiple fallback methods ensure success

Developer Friendly

  • Zero configuration - Works out of the box
  • Comprehensive logging - Debug issues easily
  • Flexible options - Customize behavior per request
  • MCP standard - Integrates with any MCP-compatible system

πŸ› οΈ Available Tools

1. terminal_browse

Browse websites and display content directly in terminal.

Parameters:

  • url (required) - Website URL to browse
  • browser - Terminal browser to use (auto, lynx, w3m, links, elinks)
  • format - Output format (full, content-only, links-only, summary)
  • extractLinks - Extract page links (true/false)
  • maxLength - Maximum content length to prevent overwhelming output

Example:

Use terminal_browse to visit https://docs.github.com with format=summary
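
Under the hood, an MCP client issues this as a JSON-RPC tools/call request over stdio. An illustrative message (sent after the standard MCP initialize handshake) might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "terminal_browse",
    "arguments": {
      "url": "https://docs.github.com",
      "format": "summary"
    }
  }
}
```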

2. check_browsers

Check which terminal browsers are available on your system.

Example:

Check what terminal browsers are available

3. extract_links

Extract all links from a webpage without showing full content.

Parameters:

  • url (required) - Website URL to extract links from
  • maxLinks - Maximum number of links to return (default: 50)

Example:

Extract all links from https://news.ycombinator.com

🚀 Usage Examples

🎯 Simple CLI Commands

# Browse any website
browse https://example.com

# Get page summary with stats
browse https://docs.python.org --format summary

# Extract all links
browse https://news.ycombinator.com --format links

# Full content with metadata
browse https://github.com/trending --format full

# Limit content length
browse https://very-long-page.com --max-length 1000

# Use specific browser
browse https://example.com --browser lynx

📋 Available Formats

  • content - Clean page text (default)
  • summary - Brief overview with stats
  • links - All extracted links
  • full - Complete content with links

🔧 CLI Options

browse <url> [options]

Options:
  --format, -f     Output format (content, summary, links, full)
  --max-length, -l Maximum content length [default: 2000]
  --browser, -b    Browser to use (auto, lynx, w3m, links)
  --help, -h       Show help

🤖 Amazon Q CLI Integration

Basic Browsing

Browse https://example.com

Documentation Reading

Browse https://docs.python.org/3/ with format=content-only

Link Discovery

Extract links from https://github.com/trending

Quick Summary

Browse https://news.ycombinator.com with format=summary

Specific Browser

Browse https://example.com using lynx browser

📊 Output Formats

Full Format (default)

  • Complete page content
  • All extracted links
  • Metadata and statistics
  • Method used for browsing

Content-Only

  • Just the page text content
  • No links or metadata
  • Clean reading experience

Links-Only

  • Only the extracted links
  • Perfect for navigation
  • Numbered list format

Summary

  • Brief content preview
  • Key statistics
  • Quick overview

🔧 Dependencies & Installation

📦 What's Included (Standalone)

The package includes everything needed to work:

  • @modelcontextprotocol/sdk - MCP protocol support
  • node-fetch - HTTP requests
  • cheerio - HTML parsing
  • html-to-text - HTML to text conversion
  • winston - Logging

🚀 Optional Enhancements

For even better text formatting, install terminal browsers:

# macOS (Homebrew)
brew install lynx w3m links

# Ubuntu/Debian  
sudo apt install lynx w3m links elinks

# CentOS/RHEL
sudo yum install lynx w3m links elinks

💡 How It Works

  1. First Choice: Uses terminal browsers (lynx, w3m, links) if available
  2. Automatic Fallback: Uses built-in fetch+html-to-text if no browsers found
  3. Always Works: Never fails due to missing dependencies

πŸ› οΈ Terminal Browser Support

Supported Browsers

  • lynx - Best text formatting, recommended
  • w3m - Good table support, images in some terminals
  • links - Interactive features, mouse support
  • elinks - Enhanced links with more features

Installation Commands

Same commands as under "Optional Enhancements" above (brew install lynx w3m links on macOS, sudo apt install lynx w3m links elinks on Ubuntu/Debian, sudo yum install lynx w3m links elinks on CentOS/RHEL).

Auto-Selection Priority

  1. lynx (best formatting)
  2. w3m (good compatibility)
  3. links (interactive features)
  4. elinks (enhanced features)
  5. fetch+html-to-text (always available fallback)

🎨 Advanced Usage

Custom Content Length

{
  "url": "https://very-long-page.com",
  "maxLength": 5000
}

Links Only Mode

{
  "url": "https://documentation-site.com",
  "format": "links-only",
  "maxLinks": 100
}

Specific Browser

{
  "url": "https://example.com",
  "browser": "w3m",
  "extractLinks": false
}

πŸ” Troubleshooting

"No terminal browsers found"

# Install at least one terminal browser
brew install lynx  # macOS
sudo apt install lynx  # Ubuntu

"Browser failed" errors

  • The tool automatically falls back to fetch+html-to-text
  • Check internet connectivity
  • Some sites may block terminal browsers

Content too long

Use maxLength parameter to limit output:
Browse https://long-page.com with maxLength=2000

Q CLI doesn't see MCP

  1. Check ~/.config/amazonq/mcp.json syntax
  2. Restart Q CLI (/quit then q chat)
  3. Verify package installation: npm list -g nascoder-terminal-browser-mcp

📈 Performance Features

Caching Behavior

  • No file-system caching (by design)
  • Memory-efficient processing
  • Fast response times

Error Handling

  • Multiple fallback methods
  • Graceful degradation
  • Comprehensive error messages

Resource Management

  • 30-second timeout protection
  • Memory-conscious content truncation
  • Efficient link extraction
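
The 30-second timeout can be expressed as a generic wrapper that races a request against a timer (withTimeout is a hypothetical helper, not part of the package's public API):

```javascript
// Sketch: reject any promise that takes longer than `ms` milliseconds.
function withTimeout(promise, ms, label = "operation") {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch: cap a page fetch at 30 seconds (Node 18+ global fetch)
// withTimeout(fetch("https://example.com"), 30000, "terminal_browse");
```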

🎉 Success Stories

"Finally, a way to browse documentation without cluttering my filesystem with temp files!" - Developer

"The automatic fallback from lynx to fetch+html-to-text saved my workflow when lynx wasn't available." - DevOps Engineer

"Perfect for scraping API docs directly in my terminal. The link extraction is incredibly useful." - API Developer

📋 Comparison

| Tool | Files Created | Browser Support | Link Extraction | Fallback Method |
| --- | --- | --- | --- | --- |
| NasCoder Terminal Browser | ✅ None | ✅ 4 browsers | ✅ Automatic | ✅ fetch+html-to-text |
| curl + html2text | ❌ Temp files | ❌ None | ❌ Manual | ❌ None |
| wget + pandoc | ❌ Downloads | ❌ None | ❌ Manual | ❌ None |
| lynx alone | ❌ Can save files | ✅ lynx only | ❌ Manual | ❌ None |

📄 License

MIT - Feel free to use, modify, and distribute


🚀 Ready to browse the web in your terminal without file clutter?

Install now and experience the difference!

npm install -g nascoder-terminal-browser-mcp

Built with ❤️ by NasCoder (@freelancernasimofficial)
