RetroCode UK

Published May 8, 2021 · Linux Command Line

Download All Linked Files From a Web Page

Here is a simple method to download all files linked from a web page:

wget -r -np -l 6 -nd -A zip 'https://example.com/downloads'
  • -r – Recursive retrieval (follow links into subdirectories)
  • -np – No parent (never ascend to a parent directory while recursing)
  • -l 6 – Do not go deeper than 6 levels of link recursion
  • -nd – No directories (save all fetched files into the current folder instead of recreating the remote directory tree)
  • -A zip – Accept list of suffixes (in this case, only .zip files)
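If you need more than one file type, the accept list takes comma-separated suffixes, and wget also has built-in rate-limiting options that make the recursion gentler on the server. A sketch of such a variant (the URL is the same placeholder as above, and the wait and bandwidth values are arbitrary examples to tune):

wget -r -np -l 6 -nd -A zip,pdf --wait=2 --random-wait --limit-rate=500k 'https://example.com/downloads'

Here --wait=2 pauses between requests, --random-wait varies that pause, and --limit-rate=500k caps the download bandwidth.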

It should go without saying, but be mindful of how you use this: the information may be public, but your access can still be blocked, and scraping a website without express permission sits in dubious legal and ethical territory.
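A cheap courtesy check before running something like this is to read the site's robots.txt, which states what the operator wants crawlers to avoid; for example (again with the placeholder domain):

curl 'https://example.com/robots.txt'

Note that wget honours robots.txt during recursive retrievals by default, so if the command above pulls down less than you expected, that file is the first place to look.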