This project was developed with assistance from AI coding tools, including Claude Sonnet 4 and Gemini 2.5 Pro.
DL is a command-line tool written in Go for downloading multiple files concurrently from a list of URLs or a Hugging Face repository. It features a dynamic progress bar for each download, showing speed, percentage, and downloaded/total size. The tool supports advanced Hugging Face repository handling, including interactive selection of specific `.gguf` files or series.

Auto-update is available with `-update`.
- Build the tool for all major platforms: `./build.sh`. Binaries will be placed in the `build/` directory for macOS (Intel/ARM), Windows (x64/ARM), and Linux (x64/ARM).
- Download from a URL list: `./dl -f ../download_links.txt -c 4` (see the sample URL file after this list).
- Download from a Hugging Face repo: `./dl -hf "Qwen/Qwen3-30B-A3B"`
- Select a GGUF file/series from a Hugging Face repo: `./dl -hf "unsloth/DeepSeek-R1-0528-GGUF" -select`
- Download a pre-defined model by alias: `./dl -m qwen3-0.6b`
- Search for models on Hugging Face: `./dl model search llama 7b gguf`
- Install, update, or remove llama.cpp binaries: `./dl install llama-mac-arm`, `./dl update llama`, `./dl remove llama-win-cuda`
- Show system hardware info: `./dl -t`
- Self-update the tool: `./dl --update`
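The file passed to `-f` (such as `download_links.txt` above) is a plain text file with one URL per line. The URLs below are placeholders for illustration only:

```text
https://example.com/models/model-part1.bin
https://example.com/models/model-part2.bin
https://example.com/datasets/data.zip
```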
- Concurrent Downloads: Download multiple files at once, with concurrency caps for file lists and Hugging Face downloads.
- Multiple Input Sources: Download from a URL list (`-f`), a Hugging Face repo (`-hf`), or direct URLs.
- Model Registry: Use `-m <alias>` to download popular models by shortcut (see below).
- Model Search: Search Hugging Face models from the command line.
- Resume: Resuming of interrupted downloads is supported.
- Llama.cpp App Management: Install, update, or remove pre-built llama.cpp binaries for your platform.
- Hugging Face GGUF Selection: Use `-select` to interactively choose `.gguf` files or series from Hugging Face repos.
- Dynamic Progress Bars: Per-download progress bars with speed, ETA, and more.
- Pre-scanning: HEAD requests to determine file size before download.
- Organized Output: Downloads go to `downloads/`, with subfolders for Hugging Face repos and models.
- Error Handling: Clear error messages and robust handling of download issues.
- Filename Derivation: Smart filename handling for URLs and Hugging Face files.
- Clean UI: ANSI escape codes for a tidy terminal interface.
- Debug Logging: Enable with `-debug` (logs to `log.log`); see the example after this list.
- System Info: Show hardware info with `-t`.
- Self-Update: Update the tool with `-update`.
- Cross-Platform: Windows, macOS, and Linux supported.
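For example, to troubleshoot a Hugging Face download you can enable debug logging and then inspect `log.log` afterwards (the repo name is just the one used in the examples above):

```sh
./dl -hf "Qwen/Qwen3-30B-A3B" -debug
# After the run, review the debug output:
cat log.log
```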
Note: You must provide only one of the following: `-f`, `-hf`, `-m`, or direct URLs.
- `-c <concurrency_level>`: (Optional) Number of concurrent downloads. Defaults to 3. Capped at 4 for Hugging Face, 100 for file lists.
- `-f <path_to_urls_file>`: Download from a text file of URLs (one per line).
- `-hf <repo_input>`: Download all files from a Hugging Face repo (`owner/repo_name` or full URL).
- `-m <model_alias>`: Download a pre-defined model by alias (see Model Registry below).
- `--token`: Use the `HF_TOKEN` environment variable for Hugging Face API requests and downloads. Necessary for gated or private repositories. The `HF_TOKEN` variable must be set in your environment.
- `-select`: (Hugging Face only) Interactively select `.gguf` files or series.
- `-debug`: Enable debug logging to `log.log`.
- `-update`: Self-update the tool.
- `-t`: Show system hardware info.
- `install <app_name>`: Install a pre-built llama.cpp binary (see below).
- `update <app_name>`: Update a llama.cpp binary.
- `remove <app_name>`: Remove a llama.cpp binary.
- `model search <query>`: Search Hugging Face models from the command line. Can be used with `--token`.
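For example, downloading from a gated or private repository might look like the following; the token value and repository name are placeholders:

```sh
# Export your Hugging Face access token first (placeholder value shown):
export HF_TOKEN=hf_your_token_here
# Pass --token so the HF_TOKEN value is used for API requests and downloads:
./dl -hf "your-org/your-gated-repo" --token
```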
You can use the `-m` flag with the following aliases to quickly download popular models:

`qwen3-4b`, `qwen3-8b`, `qwen3-14b`, `qwen3-32b`, `qwen3-30b-moe`, `gemma3-27b`
Search for models on Hugging Face directly from the command line:

```sh
./dl model search llama 7b gguf
```

Install, update, or remove official pre-built llama.cpp binaries for your platform from GitHub:

```sh
./dl install llama-mac-arm
./dl update llama
./dl remove llama-win-cuda
```

Show system hardware information:

```sh
./dl -t
```

Update the tool to the latest version:

```sh
./dl -update
```

To build the tool for all supported platforms, run:

```sh
./build.sh
```

This will produce binaries for macOS (Intel/ARM), Windows (x64/ARM), and Linux (x64/ARM) in the `build/` directory.
This project is licensed under the MIT License. See the LICENSE file for details.
