Being able to browse documentation and other important and useful files offline is actually a huge convenience!
There are already a couple of applications out there that can provide you with a database of Wikipedia’s content (great stuff, really), but what if you need offline access to a more obscure site? Documentation for your favorite program, a few long articles you want to read or, God forbid, some dude’s Facebook statuses.
A quick search will reveal a really awesome tool for this: HTTrack. There’s a GUI available in addition to the usual command-line tool, which is what I’m using right now. And I love it!
It’s great in that it allows a nice amount of customizability over what is and isn’t downloaded. You can go the easy route and stick with the defaults, or you can specify a link depth for it to follow, tell it to include or exclude pages outside of your given website, include or exclude specific filetypes (no need to download a bunch of nasty Flash files if all you care about is the text), and so on.
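To give you an idea, a basic command-line invocation covering those options might look something like this (the URL and output directory are placeholders; `-r` sets the link depth, and the `+`/`-` patterns are HTTrack’s include/exclude filters):

```shell
# Mirror a site into ./mirror, following links at most three levels deep (-r3),
# staying within the site itself (the + filter) and skipping Flash files (the - filter).
# Site URL and output path here are placeholders -- adjust to taste.
httrack "https://example.com/" -O ./mirror -r3 "+*.example.com/*" "-*.swf"
```

Run `httrack --help` for the full list of options; there are a lot of them.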
Since I’m in for a long car ride later today (scheduled post ahoy) and I want to remain productive, I went and downloaded a nice “web-book” on game programming patterns. Got myself the latest version of the LÖVE docs too, since I’ll probably want to try a couple of ideas out.
With great power comes great responsibility, though. Be selective in what you download, since ripping every page from a site can put quite a bit of strain on its server.
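HTTrack has a few flags to help with exactly this, if I’m reading its manual right: `-A` caps the transfer rate in bytes per second, `-c` limits the number of simultaneous connections, and `-s` controls whether robots.txt rules are respected. A polite mirror run could look something like:

```shell
# Be gentle on the server: cap bandwidth at roughly 25 KB/s (-A25000),
# open at most 2 simultaneous connections (-c2), and always obey robots.txt (-s2).
# The URL and output path are placeholders again.
httrack "https://example.com/" -O ./mirror -A25000 -c2 -s2
```

A slower mirror is a small price to pay for not hammering someone’s server.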