Episode: 4293
Title: HPR4293: HTTrack website copier software
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr4293/hpr4293.mp3
Transcribed: 2025-10-25 22:35:45

---

This is Hacker Public Radio Episode 4293 for Wednesday the 15th of January 2025. Today's show is entitled "HTTrack website copier software". It is hosted by Henry Cameron and is about four minutes long. It carries a clean flag. The summary is: I use the HTTrack software to get my own copy of websites.

Welcome to Hacker Public Radio. My name is Henry Cameron and I'm your host today.

The Wayback Machine by the Internet Archive is a very good resource for websites that no longer exist, or for older revisions of them. However, sometimes I have also found it nice and useful to have my own copy of a website. It means I have control over the copy, it can be accessed offline, and I do not have to wait for pages to load over the Internet.

My most typical use case is websites that I manage myself. For one reason or another, I want to keep a snapshot of the site. I have also used it for fact-based sites that I always want access to, much like a reference book. One of my recent use cases was a magazine that has closed down and announced that its website will also soon be terminated. Although it is available in the Wayback Machine, I wanted to have a copy myself for a short period of time.

The software I use for this is HTTrack. It is available for Windows, Android, Linux and Unix-like systems, and at least on some platforms it comes with a graphical user interface. I have myself only used HTTrack from the terminal on Linux. HTTrack is free and open source software.

The simplest way to operate it is just to type httrack followed by the URL of the start page of the site to be copied. In many cases this works well and I get a perfect copy. In other cases it works less well. First of all, I of course do not copy very big websites, both because of the amount of time it takes and because of the disk space. What is stated in the robots.txt file on the site can also affect the result. Another issue can be the folder structure of the site: with the default setup HTTrack may not find all folders, for example where images are stored. I have also had copies where menus and links do not work normally, and I instead have to right-click to open a link.

The HTTrack website has quite a lot of information in its documentation, and it also has a forum. In the terminal there is also good help describing all the additional available options. In my general use I have found that the simple first attempt to copy a site gives a perfect or good-enough result directly, without any need to research the details.

So when I want to preserve a snapshot of earlier releases of my own sites, or when I want an offline, preserved copy of an important site, I consider HTTrack to be an easy-to-use yet powerful tool. I am aware that similar tools exist, but this is the one I currently use.

Thank you for listening, take care and goodbye.

You have been listening to Hacker Public Radio at HackerPublicRadio.org. Today's show was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hosting for HPR has been kindly provided by AnHonestHost.com, the Internet Archive and rsync.net. Unless otherwise stated, today's show is released under a Creative Commons Attribution 4.0 International license.
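
Purely as an illustration of the simple invocation described in the episode, a minimal command-line sketch might look like the following. The URL, the filter pattern and the output directory ./example-mirror are placeholders, not details taken from the show:

    # Copy a site, starting from its start page, into a local folder
    # (the URL and ./example-mirror are illustrative placeholders)
    httrack "https://www.example.com/" -O "./example-mirror" "+*.example.com/*" -r6

    # -O sets where the mirror and its logs are stored, the "+..." filter
    # accepts links under example.com hosts, and -r6 limits the link depth.
    # Running "httrack --help" in the terminal lists the other available options.

The copied site can then be browsed offline by opening the index.html file that HTTrack writes into the output directory.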