Fang Talks

This is incredibly silly!

Being able to browse documentation and other important and useful files offline is actually a huge convenience!

There are already a couple of applications out there that can provide you with a database of Wikipedia’s content (great stuff, really), but what if you need offline access to a more obscure site? Documentation for your favorite program, a couple of long articles you want to read, or, God forbid, some dude’s Facebook statuses.
A quick search reveals a really awesome tool for this: HTTrack. There’s a GUI available in addition to the usual command-line tool, which is what I’m using right now. And I love it!

It’s great in that it allows a nice amount of customization regarding what is and isn’t downloaded. You can go the easy route and have it run with its default values, or you can specify a link depth for it to follow, tell it to include or exclude pages outside of your given website, include or exclude specific filetypes (you don’t want to download a bunch of nasty Flash files if all you care about is the text), and so on.
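To give you an idea of what that looks like in practice, here’s a rough sketch of an invocation; the URL and paths are placeholders, and the flags are the ones HTTrack’s manual documents for output directory, depth, and wildcard filters:

```shell
# Hypothetical example -- substitute your own site and output path.
# -O sets the output directory, -r<N> caps the link depth,
# and "+"/"-" wildcard patterns include or exclude matching URLs.
httrack "https://example.com/docs/" \
  -O ./docs-mirror \
  -r3 \
  "+*.example.com/docs/*" \
  "-*.swf"
```

The `+` pattern keeps the crawl inside the docs section, while the `-` pattern skips Flash files outright.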

Since I’m in for a long car ride later today (scheduled post ahoy) and I want to remain productive, I went and downloaded a nice “web-book” on game programming patterns. Got myself the latest version of the LÖVE docs too, since I’ll probably want to try a couple of ideas out as well.

With great power comes great responsibility though. Be selective in what you download, since ripping all the pages from a site can put quite a bit of strain on its server.
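If you do need a larger mirror, HTTrack has throttling flags for exactly this; a hedged sketch (again with a placeholder URL), using the rate and connection limits its manual describes:

```shell
# Polite mirroring: -A caps transfer rate in bytes/sec, -%c caps
# new connections per second, -c limits simultaneous connections.
httrack "https://example.com/docs/" -O ./docs-mirror \
  -A25000 -%c2 -c2
```

Slower for you, but much kinder to whoever pays the server bill.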
~ Fang


  • 26/04/2014 (4:12 AM)

    I haven’t really found much of a need to access a website offline yet. Typically I just access caches. Of course I’d need a web connection to find those in the first place. I should probably get a dictionary program though. Just in case. Or a dictionary.
