@evg4b/donkey_linux_amd64

0.0.3 • Public • Published

Donkey 🫏

A small utility for batch file processing using AI.

Get started

Prerequisites

  • Install Ollama
  • Download a model for processing: `ollama pull mistral-small` (you can change the model in `~/.donkey.toml`)
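As a hedged illustration of switching models, a `~/.donkey.toml` might look like the sketch below. The key name is an assumption for illustration only; check the project documentation for the keys your version actually supports.

```toml
# ~/.donkey.toml — hypothetical sketch; the `model` key name is assumed,
# not confirmed by the project docs.
model = "mistral-small"   # any model you have pulled with `ollama pull`
```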

Then you can install the application in one of the following ways:

Homebrew (macOS | Linux)

brew install evg4b/tap/donkey

Scoop (Windows)

scoop bucket add evg4b https://github.com/evg4b/scoop-bucket.git
scoop install evg4b/donkey

NPM (Cross-platform)

npx -y @evg4b/donkey ...

Stew (Cross-platform)

stew install evg4b/donkey

Binary (Cross-platform)

Download the appropriate version for your platform from the donkey releases page. Once downloaded, the binary can be run from anywhere; you don’t need to install it into a global location. This works well for shared hosts and other systems where you don’t have a privileged account.

Ideally, you should install it somewhere in your PATH for easy use. /usr/local/bin is the most common location.
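For example, a manual install might look like the sketch below. It uses a user-writable directory instead of /usr/local/bin so no root access is needed; the actual asset name you download depends on your platform (e.g. donkey_linux_amd64), and the placeholder step only stands in for the real downloaded binary.

```shell
# Sketch: install a downloaded `donkey` binary without root access.
install_dir="$HOME/.local/bin"        # pick any directory on your PATH
mkdir -p "$install_dir"

# For this sketch we create a placeholder if no binary is present; in
# practice `donkey` is the file you downloaded from the releases page.
[ -f ./donkey ] || printf '#!/bin/sh\necho donkey\n' > ./donkey

chmod +x ./donkey                     # make it executable
mv ./donkey "$install_dir/donkey"     # move it onto your PATH
"$install_dir/donkey"                 # verify it runs
```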

[!CAUTION]

This program is a simple utility for batch processing of files using AI. The final result depends on the model used and on your prompt. By running it, you accept responsibility for any changes made to your file system.
