This is an update of my older article touching the same subject, only with some of the outdated things removed (thank goodness some nuisances have practically disappeared from the internet), and some new ones added. Compared to the old article, I also reordered the points such that they are still roughly sorted from most annoying and most relevant to least.
Unless you are here for historical reasons though, you will probably want to read the latest version of this article instead.
My hope is that many people will read this and consider it when making or updating their websites, or at least complain to webmasters when they encounter these annoyances on other sites. But the main purpose of this text is mostly to spit my bile and vent my (and many others') frustrations. This means it will contain a large amount of sarcasm, and may seem threatening or insulting to sensitive readers. Read it at your own risk and don't come complaining afterwards. It also means these are my personal opinions, and if you think you have good reasons to do any of the things mentioned here, just go ahead. After all, I'm not the Pope or anything.
I search for something in a search engine, I find a page with the answer, I click on it, it loads… and then POOF! It vanishes into thin air and I get the main page shoved up my nose. It is tedious and sometimes impossible to find that specific page again by starting from the main page. The only solution is to disable JavaScript to thwart this mechanism, and go back. I really would like to know why people put this kind of nonsense in their sites. Is their main page really so neat that they want everyone to see it? Mostly, it isn't. Maybe they believe people will look at other parts of the site when sent back to the main page. Well, I personally tend never to visit a site again if it denies me the viewing of a page unless I manually dig for it.
The only reason why this could be justified is to prevent ‘deep linking’, i.e. other sites linking directly to pages deep within the site. Then again, there should be nothing wrong with that: it is the whole idea of the internet. Any properly designed site shows its identity on each webpage, with a navigation system that allows the visitor to go to the home page. Even in sites with frames, where a deep link loads only the inner page without the surrounding frameset and leaves the visitor stranded without navigation, this technique is still not justified. Of course frames are utterly obsolete, but assume someone is still forced to use them. It only takes a little bit of scripting to detect a deep link and reload the entire frameset with the linked page inside it, instead of going for the lazy solution of kicking the visitor to the home page.
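A minimal sketch of that little bit of scripting, assuming a hypothetical frameset.html that accepts a ‘content’ query parameter (both names are placeholders for this example only):

    <!-- Inside the deep-linked content page: if it is not being shown
         inside a frame, reload the frameset with this page as content. -->
    <script>
    if (window.top === window.self) {
      window.location.replace('/frameset.html?content=' +
          encodeURIComponent(window.location.pathname));
    }
    </script>

The frameset page then reads the ‘content’ parameter and points its content frame at that URL instead of the default start page.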
Imagine you are cozily surfing the web, with your desktop neatly arranged, and your browser window perfectly tuned so you can see enough of the websites while still keeping an eye on other windows next to the browser. Then suddenly you enter a site and POP! It blows up the browser to full screen, making it explode in your face. I really, really, cannot think of any reason why anyone should do this. Except maybe to express: hey, my site is so great that I am sure you'll want to view it full screen, so I make it full screen for you!
Unfortunately, all the sites I encountered that did this sucked. Moreover, I want to decide for myself how large my browser window should be. If a site only looks good in full screen, then it sucks from the start.
Please, do not use this silly JavaScript trick to emphasize that your site sucks. It should only be used where it could actually be useful, and where the user really wants it, i.e. in a button which is clearly marked “clicking here will resize your browser window.” The ‘onLoad’ property of a webpage is not where the user wants it!
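For what it's worth, the acceptable variant amounts to something like the sketch below; note that modern browsers generally only honour window.resizeTo for windows that were opened by a script in the first place:

    <button onclick="window.resizeTo(1024, 768)">
      Clicking here will resize your browser window to 1024×768
    </button>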
With the advent of CSS, the possibility arose to use “text-decoration: none”. This means that people can actually turn off the underline for hyperlinks, which has been the default since the invention of the web browser. This is not bad as such, as it allows the appearance of links to be tailored to a custom web design. However, many people like to use this to make links totally identical to the surrounding text. Only when the user hovers over them with the mouse do they become visible, and sometimes even that has been disabled! In the latter case, the only hint that a word is clickable is the cursor changing to a hand icon.
Now tell me, do you believe anyone wants to scan every word in every webpage with the mouse, in order to detect where the wonderfully hidden links are? No! Links must be visible at first glance, whether through an underline, a different style, a colour, or whatever. Blue underlined text is burned so deeply into the common knowledge of people that it is the ideal way to indicate links. On any webpage where the hyperlinks are the main feature (for instance, a search results page), both underlines and the blue colour must be used. On other pages it is OK to drop either the blue colour or the underline, but never both. Neither underlines nor blue text should be used for anything other than links, unless their meaning is clearly indicated. There is never a good reason to make links invisible, except in a game of “find the hidden links”. There is one hidden link in this very paragraph. Did you find it?
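Keeping links unmistakable boils down to a couple of stylesheet rules like these (the colours are roughly the traditional browser defaults, nothing more):

    a         { color: #0000ee; text-decoration: underline; }
    a:visited { color: #551a8b; }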
In this regard, the new layout that Google rolled out for its search results around March 2014 baffled me. They removed the underlines, in what seems to me a change purely for the sake of change. They appear to have tried to bring the layout in line with the current awful trend of whiteout-inducing user interfaces, where text floats around in a sea of white space with no separators anywhere. The new results page rubs me the wrong way so badly that I am starting to actively avoid Google. Bad move, guys.
This is one of the newest fads, one that started around 2012. I load a page and see no indication of page numbers or how many items there are. The only indication is a moderately sized scroll bar in my browser. So I happily start scrolling, assuming I can quickly get an overview of the entire page. But when I almost reach the bottom, the page suddenly sprouts new items and the scroll bar jumps back up. As I continue scrolling, this happens over and over again. It never ends, and I get the feeling I am inside Zeno's paradox, or one of those feverish nightmares that seem to have no end.
In ancient times, people stuck sheets of paper together until they had one immensely long ribbon. Terribly unwieldy, so they rolled it up into scrolls. Accessing information inside a scroll involved transferring paper from one roll to the other until the desired information showed up: impractical, time-consuming, and it was difficult to estimate how much information a scroll contained. Then people invented books with pages. Scrolls went out of fashion very quickly, because in a book one can place a bookmark and skip to the right page immediately after looking it up in an index. A similar evolution happened with audio and video. A cassette is basically a scroll in a box, and has the same problems. This is why, since the advent of random-access media like CDs and hard drives, almost nobody uses cassettes anymore.
The web never had a prehistoric sequential data period. It has always been a collection of separate pages with hyperlinks. Bookmarking a page used to be trivial and a quick glance at the scroll bar when opening a page revealed how much information it contained. For some reason web designers decided that this lack of a prehistoric period has to be compensated for. To be trendy and hip, a modern website must now mimic ancient text and multimedia storage systems. Just imagine that when viewing YouTube, you need to fast-forward through a dozen videos of kittens and people performing dumb antics until you find what you want. Why would something similar be justified for text-based content?
It is not hard to figure out where it all went wrong and why history seems to be running in reverse here. The two main culprits are touchscreen devices, and the general inability of humans to cope with more than a single user interface. Infinite scrolling makes some sense on touchscreen devices that can only be controlled by pushing and swiping one's meat sticks across the display. Scroll bars are awkward on such devices and eat away at the often scarce screen real estate, so Apple Inc. got the idea of almost entirely obliterating them. On a device with the aforementioned limitations this was an acceptable trade-off between ease of use and ease of retrieving data. On a desktop PC or laptop, it makes no sense. If you want to offer infinite scrolling on mobile devices, fine. But please at least give people with less constrained computing devices the option of pagination.
Besides making content difficult to find, infinite scrolling pages also risk choking the browser. Ironically, this risk is highest on mobile devices, which generally have limited computing power compared to a desktop PC. Therefore the option to view content in paginated form is useful even on a mobile website.
What bothers me a lot about the much-hyped era of “web 2.0” is that it has spawned an endless series of very similar websites where form is favoured over function. There is the typical blog with some photo at the top, which might be intended to give the website an identity. Unfortunately, this identity gets completely lost due to the same site template being used on a gazillion other sites. Even if the photo was taken by the authors themselves and is not a stock photo, it obediently follows Photography 101 rules, making it utterly forgettable like the rest of the site.
Then there are variations on this theme where more crap is added that makes the site even more forgettable. Tag clouds, photo streams, tweets, social media ‘like’ buttons and random faces of people who liked this page scattered everywhere. Ads or surveys that pop up after viewing the page for ten seconds. Somewhere amidst this crap might be the actual content, but sometimes it is a challenge to find it, and people might have clicked one of the dozen distracting elements before getting to that point.
Believe it or not, there has been an era when Flash and Java were considered cool, and they kind of were. Flash was used for funny animations or simple games. Java was used for educational applets or custom interfaces on devices with a built-in web server. It was all used in moderation and all was fine. When both technologies became too popular for their own good however, people started using them for the most pointless things like animated buttons, navigation bars in websites, and sometimes even entire websites.
One of the worst applications of Flash is the dreadful ‘Flash intro’. Before loading the main content of a website, the visitor is all but forced to sit through a sometimes shoddily created animation, just because Flash is supposed to be cool. In the very worst cases, this animation lacked a ‘skip intro’ button. I have skipped many an entire website because I could not skip its dumb Flash intro. If you still believe we're around the year 2000 and you badly want to create a Flash intro, make sure to accompany it with a prominent ‘skip intro’ button.
The general public gradually became aware that both technologies are bloated constructs that force webpage visitors to download large binary chunks that can only be viewed through proprietary plug-ins that do not work on all platforms. Then came the hackers who found exploits in these technologies, allowing them to install rootkits and steal confidential information. The developers of both technologies responded with an endless series of updates, with the more recent updates enforcing ever more stringent security failsafes, making the end-user experience annoying at best and impossible at worst.
The nice thing is, both Flash and Java seem condemned to certain death by the companies that currently own them. Adobe has announced that it will at some point cease developing Flash, because HTML5 is increasingly taking over the same functionality in a much more flexible manner. Oracle has not announced anything about wanting to kill Java in the browser, but they sure seem to be doing their utmost to achieve that goal. Loading a single Java applet nowadays is an ordeal, and getting an applet to work in a non-annoying way is a royal pain.
Yet, despite their impending obsolescence, both technologies are still being used in cases where pure HTML perfectly suffices, often without even requiring any new HTML5 features. Here is an example from the old days, when people already did simple things in amazingly convoluted ways. To create buttons that changed colour when the user moused over them, some web designers used a separate Java applet for each button. That's right, an entire Java applet for a simple stupid button. Often there were dozens of these buttons crammed into a navigation frame: dozens of Java applets eating memory and increasing the probability of your browser crashing. When Flash became popular, of course there was a Flash variation on this theme. The buttons typically looked like this:
When JavaScript was introduced, it became possible to implement these buttons in a much simpler way. Then came CSS, which made it even simpler. If you look in this page's source, you'll see that the above button actually is a plain hyperlink of a certain class, defined in the style sheet. This is faster, safer, cleaner, easier to modify, it separates the content of the page from its presentation, and makes the page much easier to navigate both by people and search engines. This was only a simple example. Much more complex designs can also be made with nothing but CSS and perhaps a dash of JavaScript, all technologies that pretty much every browser supports without having to install cumbersome plug-ins.
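For illustration, a minimal version of such a CSS-only button could look like this (the class name and colours are arbitrary examples, not necessarily what this page's stylesheet actually uses):

    /* In the stylesheet: */
    a.navbutton {
      display: inline-block;
      padding: 0.3em 1em;
      background: #336699;
      color: #ffffff;
      text-decoration: none;
    }
    a.navbutton:hover { background: #cc3300; }

    <!-- In the page: -->
    <a class="navbutton" href="products.html">Products</a>

No applet, no plug-in, and the link still works even with styling or scripting disabled.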
Some sites somehow manage to include banner ads or other dynamic content implemented so badly that browsers are unable to render the rest of the page before this content has loaded. If the content cannot load at all, visitors see nothing until the request times out. If the ‘stop’ button is pressed before this timeout, the page will often stay blank. I consider banner ads pure noise, and I do not like it when the noise prevents me from looking at the real content of a site. Mostly I just give up and go to another site if I can't see anything within a reasonable time, and most other people will do the same. So whether this ‘no content until the ad has loaded’ behaviour is on purpose or just due to incompetence, whoever has a website that exhibits it is shooting themselves in the foot.
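Avoiding this is usually trivial: do not load the ad script synchronously in the middle of the page. The ‘async’ (or ‘defer’) attribute, or simply moving the script to the end of the body, lets the rest of the page render while the ad server takes its time (the URL below is of course just a placeholder):

    <!-- Blocks rendering of everything below it until the ad server responds: -->
    <script src="https://ads.example.com/banner.js"></script>

    <!-- Lets the rest of the page render first: -->
    <script async src="https://ads.example.com/banner.js"></script>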
I wonder how many rotting corpses of dead epilepsy patients are lying behind computer monitors right now, with an ad still flashing: you are the 10000th visitor of this site!
There's also a variation where you are the 999999th visitor, and the ad is throbbing and shaking as if it is being held in the hands of someone who drank ten too many cups of coffee. Of course everyone is the ‘10000th’ or ‘999999th’ visitor of that site, but that doesn't matter. What matters is that flashing and shaking things are an ideal visitor repellent, and the repulsion force is proportional to the area and frequency of the flashing thing. Especially on pages where the user is supposed to read even a small piece of text, fast-moving and flashing things have no place. If there is nothing else of any importance on a page, and the page is only meant to be seen for a short period, then it may be OK to put something flashing on it. But please, no flashing junk on any page with more than two sentences of text!
Suppose I enter a website through a hyperlink or a search engine, but I land at too deep a level, e.g.
www.site.com/products/rubberduckies/yellow/aggressive/model2.html.
Now imagine I want to see all rubber duckies, but I can't immediately find a navigation link to that higher level. The logical thing to do is to cut off the parts of the URL after ‘rubberduckies’. In other words, go to
www.site.com/products/rubberduckies/.
In a well-designed website, I would then arrive at the page with the overview of all rubber duckies. But in many cases I get ‘permission denied’, even though I am pretty certain that I do have permission to see all rubber duckies. This forces me to go back all the way to the main page and re-traverse the navigation structure. That sucks. Preventing it is as simple as naming the main page of each subdirectory ‘index.html’ (or ‘index.htm’ or ‘index.php’), or making such a URL redirect to the actual page.
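If the real overview page has a different name, even a one-line index.html dropped into the directory will do as a redirect (the file name overview.html is just an example here):

    <!-- www.site.com/products/rubberduckies/index.html -->
    <meta http-equiv="refresh" content="0; url=overview.html">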
During the height of the famous Browser Wars, some genius at Netscape or Microsoft added functionality to JavaScript for intercepting right-clicks. The original idea was probably to let web designers implement custom contextual menus. Many people, however, used it to simply disable right-clicking altogether. I have seen anti-right-click scripts on sites where it really did not make any sense at all. There are only two plausible explanations for why someone would do such a thing.
The first is to prevent visitors from right-clicking images to save them. This is a childish and dumb idea. First of all, the image has already been downloaded, otherwise the user wouldn't see it. By right-clicking and choosing ‘save as’, the user merely moves the already downloaded copy somewhere else. Prohibiting this insults those visitors, treating them like criminals who would not understand a warning like: it is forbidden to use images on this site for any other purpose than personal viewing.
Saving an image somewhere to be able to view it offline when your site is long gone is still personal viewing, and is a fair right.
The second reason could be to prevent visitors from viewing the HTML source. This is again the staple of the amateurish web designer, who has probably copied chunks of HTML and JavaScript from elsewhere and is so proud of the result that they believe they can enforce copyright by copying yet another chunk of JavaScript. Or maybe it is simply to hide the fact that they stole someone else's source.
The catch is, this silly mechanism is trivial to circumvent. For images, people can probably drag and drop them, or take a screenshot. And in all cases, simply disabling JavaScript in the browser completely disables the interception of right-clicks. So why go through the effort of adding this stupid protection mechanism? The visitors who have the technical proficiency to do something you don't like with your image or page source are also the ones most likely to thwart your inferior protection, so why bother?
Moreover, right-clicking is used for many things other than saving images or viewing HTML source: opening links in a new tab or window, bookmarking them, looking up text, and so on. Sabotaging these abilities is a sure way to piss off your visitors and chase them away.
Ahh, another JavaScript classic! You probably know them: the sites where a swirling, rotating heap of garbage stubbornly follows the cursor. It is as if the mouse has been dipped into a hot steaming pile of excrement, causing a swarm of flies to keep following it. This is a certain way of screaming out: hey, I'm a beginning amateurish webmaster who just pastes together all the supercool scripts I can find, without thinking about it for two seconds.
And it is also a certain way to make browsing your site a nuisance. Just imagine that, whatever you are doing the whole day, a random pile of junk is constantly orbiting your hands. People have had mental breakdowns for less.
The same goes for anything else that is distracting on a website where the main goal is to let the visitor read some text or watch images or a movie. Cut down on moving and blinking stuff if you want people to actually read your text or watch your images, instead of getting annoyed and fleeing away from your site.
In the old days of the Internet it was simple: the ‘certain operating system’ was Windows. The number of sites whose main content was usable only in Mac OS or Linux was negligible, if not nonexistent. But that did not matter, because they would have sucked for exactly the same core reasons. When someone designs a website in such a way that it only works in operating system A with browser B, they are a bit like a virtual racist. Especially when done conspicuously, by redirecting people who do not use the über-software to a page that bluntly states they use the wrong system, without even giving them a chance to try it with their ‘inferior’ system.
I had hoped these practices would disappear as the Internet matured, but that may have been a vain hope. Microsoft has finally lost its monopoly on web browsing, but now other companies are gaining user base. While those companies previously made an effort to follow widely adopted standards, because that is by far the best strategy for an underdog, their increasing market share now risks pushing them into the same arrogance MS exhibited with Internet Explorer. And that arrogance again risks spilling over to website designers.
Imagine you have a shop and 100 potential customers. You pick out ten of those customers, based on the fact that they wear shoes of a certain brand, and you put two bodyguards at the entrance with orders to keep those people out of your store at all cost, even though your very best customers might be among those ten. Does that make sense? About as much as taking 10% of your income in paper money and eating it. Or dancing naked in a town square, holding a fresh herring in each hand.
Making your site cross-browser and cross-platform is not as hard as you may think, and makes much more sense than locking out a group of people because they prefer a different system than you do. Putting a “Designed for / Best viewed with browser B” label on your site is nothing more than saying: I am too lazy or incompetent to make my site work for everyone. Or: I believe we're still in the year 1997.
It is OK to include an extra gimmick on your site that will only work in a certain browser, but it is not OK to make this gimmick the core part of your site.
When frames were first introduced, back in the days when animals could still speak, they were cool. They actually were a nice alternative to designing websites with tables, which kinda sucked. However, it immediately became apparent that frames had their own degree of annoyance. If you bookmarked a page, you actually bookmarked the frameset, in other words the main page, again and again. Searching was confusing, because you never knew which frame you were searching in. And printing… please don't remind me of that. Moreover, if you entered such a site through a search engine, you mostly had no means at all to navigate the site, because the navigation frame was missing (see above).
Nowadays, one can use CSS to get a frames-like layout without frames or table tags. With JavaScript, PHP or web design tools, you can synchronize navigation blocks across webpages. This gets rid of all the problems above. Frames are prehistoric; please do not make your site look prehistoric.
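For instance, a minimal sketch of a frame-like layout in pure CSS: a navigation column that stays put while the content scrolls next to it (the class names are arbitrary):

    .nav {
      position: fixed;
      top: 0;
      left: 0;
      bottom: 0;
      width: 12em;
      overflow-y: auto;
    }
    .content {
      margin-left: 13em;
    }

Each page simply includes the same navigation block, so bookmarking, searching and printing all behave the way the visitor expects.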