Download an entire website with wget
Source: Linux Journal
```sh
# Mirror the site so the copy browses offline: recurse through links,
# grab page requisites (CSS, images), rewrite links for local viewing,
# and stay within the listed domains without ascending above the start URL.
# (--html-extension is the older spelling of --adjust-extension.)
wget \
    --recursive \
    --no-clobber \
    --page-requisites \
    --html-extension \
    --convert-links \
    --restrict-file-names=windows \
    --domains example1.com,example2.com \
    --no-parent \
    example1.com
```
You may also want to add `--limit-rate=10k` (or some similarly slow speed) so that you don't trip the site's rate limiting and get blocked.
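A minimal sketch of the throttled invocation, assuming the command above is what you want to slow down (the `--wait=1` pause between retrievals is an extra politeness flag, not part of the original tip):

```sh
# --limit-rate throttles bandwidth; --wait pauses between requests
# (the --wait flag is an addition, not from the original tip)
wget \
    --limit-rate=10k \
    --wait=1 \
    --recursive \
    --no-clobber \
    --page-requisites \
    --html-extension \
    --convert-links \
    --restrict-file-names=windows \
    --domains example1.com,example2.com \
    --no-parent \
    example1.com
```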
Checking the downloaded files into a git repo and publishing them with GitHub Pages is a good way of archiving sites.
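A rough sketch of that archiving step, assuming the mirror above landed in `./example1.com/` and using a placeholder remote URL you would replace with your own repository:

```sh
cd example1.com
git init
git add -A
git commit -m "Archive of example1.com"
# Placeholder remote; point this at your own empty GitHub repository
git remote add origin git@github.com:YOUR_USER/example1.com-archive.git
git branch -M main
git push -u origin main
# Then enable GitHub Pages in the repository settings (e.g. serve the
# root of the main branch) so the archived HTML is browsable.
```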