From first crawl to portable offline library.
argiope has a small command surface, but each command covers a different workflow. This guide follows the same progression as the README, then expands on the hosted install scripts and report-friendly automation use cases.
Getting started
The CLI exposes three commands: `check`, `images`, and `library`. In practice, that maps neatly to site validation, downloading content, and rebuilding the offline browser for an existing folder tree.
argiope check https://example.com
argiope images https://example.com/gallery -o ./images
argiope library ./images

If you are brand new to the project, start with `check` first. It gives you immediate output and confirms the install, network access, and crawl defaults in a single run.
Check websites for broken links
`argiope check` crawls a starting URL, stays within the same origin, and reports broken responses, request errors, and timeouts. The default mode prints a concise terminal summary, which is enough for local QA.
argiope check https://example.com
argiope check https://example.com --depth 5 --timeout 15
argiope check https://example.com --verbose

When to use it
- Validate a marketing or docs site before release.
- Check whether a redirect or URL migration left stale links behind.
- Generate a report artifact in CI without noisy console output.
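The same-origin crawl scope described above can be sketched in a few lines. This is a hypothetical helper for illustration, not argiope's actual code:

```python
from urllib.parse import urlparse

def same_origin(start_url: str, candidate: str) -> bool:
    """True when candidate shares scheme and host with the starting URL.

    Links outside this origin are not crawled further.
    """
    a, b = urlparse(start_url), urlparse(candidate)
    return (a.scheme, a.netloc) == (b.scheme, b.netloc)

print(same_origin("https://example.com/docs", "https://example.com/blog"))   # True
print(same_origin("https://example.com/docs", "https://cdn.example.com/x"))  # False
```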
Generate reports for review and CI
Report mode writes directly to a file and suppresses normal console output. That makes it useful for artifact uploads, PR comments, or publishing a human-readable HTML result after a workflow run.
- `--report report.txt`: Default format. Good for logs and quick terminal-first review.
- `--report report.md --report-format markdown`: Useful for GitHub comments, summaries, or wiki-style output.
- `--report report.html --report-format html`: Self-contained file with inline CSS and a presentation-friendly layout.

argiope check https://example.com --report report.md --report-format markdown
argiope check https://example.com --report report.html --report-format html --include-positives

By default the report contains only broken results. Add `--include-positives` when the point is a complete crawl record instead of a failure-focused summary.
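The broken-only default versus `--include-positives` boils down to a filter over crawl results. A minimal sketch, assuming results arrive as `(url, status)` pairs (this is not argiope's actual report layout):

```python
def markdown_report(results, include_positives=False):
    """Render crawl results as a small markdown table.

    A status >= 400, or 0 standing in for a network failure, counts as
    broken; only those rows appear unless include_positives is set.
    """
    rows = [(u, s) for u, s in results
            if include_positives or s == 0 or s >= 400]
    lines = ["| URL | Status |", "| --- | --- |"]
    lines += [f"| {u} | {s or 'error'} |" for u, s in rows]
    return "\n".join(lines)

report = markdown_report([("https://example.com/", 200),
                          ("https://example.com/gone", 404)])
```

With the default filter only the 404 row survives; passing `include_positives=True` keeps the 200 row as a complete crawl record.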
Download images and build an offline browser
`argiope images` stores images in organized directories, then generates a small static browsing layer across the output tree. The result works well for archiving galleries, collecting media from a known site, or producing a portable HTML deliverable.
argiope images https://example.com/gallery -o ./images
xdg-open ./images/library.html

Generated files
- `library.html` at the output root as the main landing page.
- `index.html` in nested folders for navigation and thumbnail overviews.
- `reader.html` in image folders for ordered prev/next reading.
The generated pages keep links relative and percent-encode names for local file browsing, so the archive stays usable even after moving it somewhere else.
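Relative, percent-encoded links are what keep the archive portable. The idea can be sketched with the standard library (a hypothetical helper, not the generator's actual code):

```python
from urllib.parse import quote

def relative_href(*parts: str) -> str:
    """Join path segments into a relative href, percent-encoding each one
    so spaces and non-ASCII names survive file:// browsing."""
    return "/".join(quote(p) for p in parts)

print(relative_href("Chapter 01", "page 1.jpg"))  # Chapter%2001/page%201.jpg
```

Because the href never embeds an absolute path, the whole tree can be moved or zipped and the links keep working.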
Download MangaFox chapter ranges
When the input URL points at `fanfox.net`, `images` switches into MangaFox-aware chapter discovery. The primary source is the manga RSS feed, which is especially helpful for titles where the HTML chapter list is incomplete or JavaScript-rendered.
# Download all chapters
argiope images https://fanfox.net/manga/naruto -o ./manga
# Download a filtered range
argiope images https://fanfox.net/manga/naruto --chapters 1-10 -o ./manga
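A `1-10` style filter reduces to an inclusive range check. This sketch is illustrative only; the real `--chapters` flag may accept more forms:

```python
def parse_chapter_range(spec: str):
    """Parse '1-10' or a single '7' into an inclusive (start, end) pair.

    Floats allow fractional chapter numbers like 5.5.
    """
    lo, _, hi = spec.partition("-")
    start = float(lo)
    end = float(hi) if hi else start
    return start, end

def keep(chapter_number: float, spec: str) -> bool:
    start, end = parse_chapter_range(spec)
    return start <= chapter_number <= end

print(keep(5.5, "1-10"))  # True
print(keep(12, "1-10"))   # False
```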
# Inspect chapter discovery output
argiope images https://fanfox.net/manga/naruto --verbose

Regenerate the browser for existing folders
Use `argiope library` when you already have a directory of downloaded images and only want to rebuild the HTML browser layer.
argiope library ./images
argiope library ./manga

This is useful after manual file moves, after upgrading to a newer browser UI, or when the original `images` run is long finished but the folder still exists.
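In spirit, the rebuild walks the tree and writes one page per folder. The sketch below approximates that under stated assumptions; the real pages add thumbnails, `library.html`, and `reader.html`:

```python
import os
from urllib.parse import quote

def write_indexes(root: str) -> None:
    """Write a bare-bones index.html in every folder, linking children
    with percent-encoded relative hrefs."""
    for folder, dirs, files in os.walk(root):
        entries = sorted(dirs) + sorted(f for f in files if f != "index.html")
        links = "\n".join(f'<a href="{quote(e)}">{e}</a><br>' for e in entries)
        path = os.path.join(folder, "index.html")
        with open(path, "w", encoding="utf-8") as fh:
            fh.write(f"<!doctype html><title>{os.path.basename(folder)}</title>\n{links}\n")
```

Because it only reads the directory layout, a regeneration like this is safe to run repeatedly over a finished download.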
Tune crawl behavior
The crawler keeps the options simple: depth, timeout, delay, verbose mode, and optional parallel crawling.
- `--depth`: Maximum crawl depth. Default is `3`.
- `--timeout`: Request timeout in seconds. Default is `10`.
- `--delay`: Delay between requests in milliseconds. Default is `100`.
- `--parallel`: Enable concurrent crawling for faster runs on larger sites.
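Under those defaults, the serial crawl amounts to a politeness loop like this (a sketch of the behavior, not argiope's internals; the `fetch` callable is a stand-in for the HTTP request, which `--timeout` would cap):

```python
import time

def crawl(urls, fetch, delay_ms: int = 100):
    """Fetch URLs one at a time, sleeping delay_ms between requests."""
    results = []
    for i, url in enumerate(urls):
        if i:
            time.sleep(delay_ms / 1000)  # --delay is given in milliseconds
        results.append((url, fetch(url)))
    return results
```

`--parallel` trades a loop like this for a worker pool, so the effective request rate rises; raising `--delay` is the lever when the target site needs gentler pacing.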
argiope check https://example.com --parallel --depth 5 --timeout 20
argiope images https://example.com/archive --delay 250 -o ./archive

Troubleshooting
Chapters are missing
Run the MangaFox download with `--verbose` first. That shows the discovered chapter list and the order it will use.
`argiope` is not found after install
Open a new terminal and recheck the PATH. The Windows installer updates the user PATH, but active shells do not refresh themselves.
I only want a static browser refresh
Use `argiope library ./your-folder` instead of re-running a full download.