It’s dangerous to go alone

January 25, 2001

Browsing Style

When you browse into a web site and want to get back to the top, do you use the site’s internal navigation or the “Back” button? Do you tend to check the destination URL for a link before you follow it?

In theory, the only time you need to know the URL of a given page is if you’re visiting it based on a recommendation from outside the web, such as a billboard. Yet I find that I get uncomfortable if I can’t see the destination of a link before I follow it. Why should that be?

It may be a desire to know what sort of file is being pointed at (HTML? PDF? QuickTime?) and how it relates to the current page (Part of the same site? In the same path?). Alternatively, it may just be a habit.

I also tend to use “Back” a lot when browsing within a site rather than following return links within the page. This, I think, relates to the less-than-stellar history functionality in modern web browsers. Things are a lot better than the bad old days when Netscape Navigator only remembered sites you had visited in the current session, but browsers still aren’t able to display your browsing history in any form more complex than a sorted list. What I want is a tree that shows how I got to any given page. For example, if I enter a site at page b and follow a link to the home page, a, which then leads me to sub-pages c and d, I would like a tree that shows b pointing to a, and then a pointing to c and d.

Trees don’t deal well with cyclical connections, but it isn’t the purpose of this view to show every connection between the pages, merely to show that a given page was first accessed after following a link on another page. (I say “first accessed” because you may reach a page twice from two different locations. Only the first one is important here.)
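The "first accessed" rule is what keeps the structure a tree despite the web's cycles. As a rough sketch (class and method names are my own invention, not any browser's API), the whole scheme reduces to: record a parent only on the first visit to a URL, and ignore the referring page on every later visit.

```python
# Hypothetical sketch of a browsing history kept as a tree:
# each page remembers the page from which it was *first* reached.

class HistoryNode:
    def __init__(self, url, parent=None):
        self.url = url
        self.parent = parent      # page we first arrived from, or None
        self.children = []        # pages first reached from here

class HistoryTree:
    def __init__(self):
        self.nodes = {}           # url -> HistoryNode (first visit only)
        self.roots = []           # entry points (reached from outside the web)

    def visit(self, url, from_url=None):
        if url in self.nodes:     # a revisit never moves the node
            return self.nodes[url]
        parent = self.nodes.get(from_url)
        node = HistoryNode(url, parent)
        self.nodes[url] = node
        if parent is not None:
            parent.children.append(node)
        else:
            self.roots.append(node)
        return node

# The example from the text: enter the site at b, follow a link to the
# home page a, then from a visit sub-pages c and d.
h = HistoryTree()
h.visit("b")
h.visit("a", from_url="b")
h.visit("c", from_url="a")
h.visit("d", from_url="a")
# The tree now shows b pointing to a, and a pointing to c and d,
# even if we later reach a again by some other route.
```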

The advantage of this scheme is that your whole browsing history is preserved, but it's still fairly easy to see the connections between pages. All the pages within a site will usually be located close together in the history, which is a more effective way of grouping things than Internet Explorer's ability to group pages by domain name. (What about sites that are spread across multiple machines? Or machines that host multiple sites?)

Being able to sort by time is also important, but I often find it less useful than I would hope. This raises the issue of marking pages for later reference. As I see it, these pages fall into three major categories:

  1. Pages that get visited frequently. These are the pages you put into the “Bookmarks” or “Favorites” menu. They’re sites you visit often enough that you’re willing to spend some time giving them names for the menus and perhaps even organizing them into groups.

  2. Pages that don’t get visited frequently, but that you don’t want to lose track of. The best way to handle these is probably with a searchable list. For it to really work, your bookmark tool should store more information about a page than just its title and address. This is a sort of miniature search engine that runs locally and only returns pages that you have previously marked interesting.

  3. Pages that you want to revisit soon but don’t necessarily care about in the long run. A good example of this is a news article that links to several sites you’re interested in. After browsing around the first site for a while, it may be difficult or time-consuming to get back to the article so you can visit the second. If you could mark pages as checkpoints, you could just say “Go back to the most recent checkpoint” and you’d be back to the news page. These pages might show up in the history as highlighted pages for quick reference.
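The checkpoint idea in the third category is essentially a stack of markers laid over the visit trail. Here is a minimal sketch, assuming invented names throughout (no browser exposed anything like this): `mark` remembers the current position, and `back_to_checkpoint` jumps to the most recent mark without discarding anything you browsed in between.

```python
# Hypothetical sketch of "checkpoint" marks on a browsing trail.

class CheckpointHistory:
    def __init__(self):
        self.trail = []        # every page visited, in order
        self.checkpoints = []  # indices into the trail, most recent last

    def visit(self, url):
        self.trail.append(url)

    def mark(self):
        """Mark the current page as a checkpoint."""
        self.checkpoints.append(len(self.trail) - 1)

    def back_to_checkpoint(self):
        """Return the most recently checkpointed page, keeping the trail intact."""
        if not self.checkpoints:
            return None
        return self.trail[self.checkpoints[-1]]

# The news-article scenario from the text: mark the article,
# wander deep into the first linked site, then jump straight back.
h = CheckpointHistory()
h.visit("news-article")
h.mark()
h.visit("site-one")
h.visit("site-one/deep/page")
```

A history view could then highlight the checkpointed entries, as suggested above, since they are just indices into the same trail the history already stores.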

This sort of functionality is unlikely to get implemented in web browsers in the near future. That’s one of the reasons I argued last September that browsers should be split into modules. #


John Gilmore has posted a cleaned-up copy of his essay on the problems of copy protection that prompted my rant on Tuesday.

Curious, I poked around Mr Gilmore’s site and found the usual signs of a Libertarian mindset (extreme distrust of government coupled with a strong belief that the free market can solve all problems). He has recommended, for instance, that people lie on census forms to void the data, pointing out that the U.S. government has previously used the “confidential” census data to assist in the rounding up of Japanese-American citizens during World War II.

I won’t get into a discussion of why I think Libertarianism is naïve, since it’s getting late and this entry is already much longer than I thought it was going to be, so I’ll just mention the Clean Water Act. Yeah, in a perfect free market, the long-term drawbacks of pollution would cause companies to naturally avoid it, but since when do companies look at the long term?

Still, these concerns are valid, which is why I think Mr Gilmore’s involvement in projects like creating inexpensive implementations of secure IP will ultimately benefit us all. #

Two additional points

An article about the attempts to build copy protection into hard drives (lots of links to past news) and another about the backlash against Warner Brothers by fans angry about the company bullying Harry Potter fan sites. (via Swaine) #