I recently read through this thread from the IETF HTTP working group, circa 1995, on Netscape’s then-current proposal to add byte-range support to HTTP. What I found fascinating about the thread is that at the time, there was great resistance to the idea. Netscape wanted support mainly so they could retrieve individual pages of embedded PDF documents, but many people objected that general byte range support was too expensive and complex to support this one use, and that byte ranges were the wrong mechanism for this anyway (which is true).
What’s amazing is that while the resumption of interrupted transfers—which is probably what 99% of byte range requests are used for today—was mentioned as a potential use (and Netscape Navigator 2.0 did implement it), no one seemed to consider it a worthy goal. Many people pointed out that a lot of documents (server-translated HTML, CGI output, etc.) could not be reliably byte-served, or could only be byte-served at prohibitive expense. Static binary files were mentioned, but mostly in the context of, e.g., a 100k image, where resumption would be a handy convenience rather than a huge time-saver.
It is remarkable today to remember that in the mid-’90s, HTTP simply wasn’t considered a viable way to transfer files. If you wanted to download a large file, you switched to FTP. Nine years later, of course, I find myself routinely using HTTP to transfer multi-gigabyte files, for which the possibility of resuming a failed transfer without starting over is much appreciated.
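At the protocol level, the resumption mechanism that eventually shipped (and was later standardized in the HTTP range-request specification) is simple: the client sends a `Range: bytes=N-` header asking for everything from the first byte it doesn't have, and a cooperating server replies `206 Partial Content` with a `Content-Range` header describing the slice it returned. A minimal sketch in Python of the two header manipulations involved (the helper names here are my own, not from any particular library):

```python
# Sketch of client-side byte-range resumption, per HTTP range-request
# semantics (RFC 7233). Helper names are illustrative, not a real API.

def range_header(bytes_already_have: int) -> dict:
    """Build the request header asking the server for everything
    from the given byte offset onward (an open-ended range)."""
    return {"Range": f"bytes={bytes_already_have}-"}

def parse_content_range(value: str) -> tuple[int, int, int]:
    """Parse a 206 response's Content-Range value, e.g.
    'bytes 500-999/1000', into (start, end, total_length)."""
    unit, _, rest = value.partition(" ")
    if unit != "bytes":
        raise ValueError(f"unsupported range unit: {unit!r}")
    span, _, total = rest.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)

if __name__ == "__main__":
    # Resume a transfer after the first 1 MiB has already arrived.
    print(range_header(1048576))
    print(parse_content_range("bytes 1048576-2097151/2097152"))
```

A server that doesn't support ranges simply answers `200 OK` with the whole file, so a careful client must check for the `206` status before appending to a partial download rather than overwriting it.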