(Or, more accurately, why the DOCTYPE is no more broken than any other potential switching mechanism.)
In a recent article, “Beyond DOCTYPE: Web Standards, Forward Compatibility, and IE8”, Aaron Gustafson states that “the DOCTYPE [is] unsustainable as a switch for standards mode.”
His argument is based on the problem that many developers and authoring tools now use correct DOCTYPEs despite the fact that they are not actually writing standards-based, valid code. You therefore cannot assume that a valid DOCTYPE indicates the presence of the type of HTML (HyperText Markup Language) code it claims.
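For context, this is what the switch in question actually looks like: a DOCTYPE such as the HTML 4.01 Strict declaration below triggers standards mode in current browsers, whether or not the markup that follows it actually validates.

```html
<!-- HTML 4.01 Strict DOCTYPE: its mere presence at the top of a
     document switches browsers into standards mode -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
```

Omit it (or mangle it) and browsers fall back to quirks mode; nothing checks that the document behind it actually conforms.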
Yes. This is true.
However, he then continues to state that a reasonable solution for this issue is to create yet another standards-based rendering switch. How is this logical?
Let’s review: the current DOCTYPE switching mechanism is broken because developers and authoring tools don’t use it correctly. The solution? Create a new switch which… can also be misused just as easily.
…we’re really only left with one option for guaranteeing a site we build today will look as good and work as well in five years as it does today: define a list of browser versions that the site was built and tested on, and then require that browser makers implement a way to use legacy rendering and scripting engines to display the site as it was intended—well into the future. (Aaron Gustafson)
That’s a great idea, except for the minor flaw that there’s absolutely nothing stopping developers from misusing it in exactly the same way they have misused the DOCTYPE. Authoring tools may add an auto-generated list of default browsers, developers may cut and paste from other sites without understanding what they are using (much as some currently do), and new browsers will undoubtedly be developed which either ignore these switches or misinterpret them.
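For readers who haven’t seen the proposal, the version-targeting switch as announced takes roughly this form: a meta element (or an equivalent HTTP header) listing the browser versions a page targets. (The FF and OtherUA tokens below come from the hypothetical extended syntax floated alongside the proposal; only the IE token actually does anything.)

```html
<!-- Version targeting as proposed for IE8: lock rendering to the
     listed engine versions, regardless of what the browser could
     do today -->
<meta http-equiv="X-UA-Compatible" content="IE=8;FF=3;OtherUA=4" />
```

An auto-generated line like this is exactly as easy for a tool to emit blindly as a DOCTYPE is.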
I think that there’s a certain amount of sense in stating the exact state of browsers when your site was launched. I can see distinct value to being able to state that your site was developed and tested on Firefox 3, Internet Explorer 8 and Opera 9.732. I can certainly understand that this can help future browsers understand how to interpret your older code: when Firefox 14 is released, it will (hypothetically) simply incorporate the rendering rules from version 3, apply them, and there you are: a perfectly rendering web site. Complete with all the limitations it had when it was built, and incapable of taking advantage of any superior changes in rendering that a well-authored and standards compliant site perhaps could have benefited from.
I do feel that it is a serious mistake to consider this any kind of long-term solution, however. In reality, it’s just another requirement which can be misused exactly like any other.
The solution (which is not, of course, a popular one) is actually attentive developers who are prepared to make changes to their sites when new browsers are released. Developing to standards is a great way to ensure minimum requirements for redevelopment: why should we add yet another feature to pander to developers who refuse to observe basic minimum standards of coding?
Joe Dolson
If what was valid and what worked were actually 100% the same, I’d be willing to consider a doctrine that restrictive. But in practice, it’s extremely common to need something which is not literally valid in order to provide the best possible experience. It can be an unavoidable condition.
Practically speaking, I don’t think that a comparison between FORTRAN and HTML is really meaningful: HTML is a purely presentational language, so inaccuracies in syntax simply mean that the information doesn’t display correctly. There are no formulas in HTML to fail, or to return incorrect values because of poor syntax.
As much as browser enforcement of validity and correct syntax seems appealing, I embrace the nature of the web: that anybody can publish a web site for themselves without any special knowledge or expertise. Enforcing perfect syntax would destroy that aspect of the web.
I would, however, love to see that as a requirement for professionals in web development…I love the fact that any amateur can publish a web site, but I do think that professionals should be expected to know what the hell they’re doing!
Brian
I would love to have a browser that, when encountering the doctype, would optionally validate the page and refuse to display it if invalid, while offering the option to ignore the validation and display the page anyway.
As a career software developer, I shudder to think what life would have been like if the FORTRAN compiler said in effect “Eh, that syntax is pretty close…let’s give you a number that *might* be what you want”.
I think it’s time for browsers to be cruelly resistant to non-standards-compliant pages. Of course… with the standards being somewhat open to interpretation, that’s an entirely new argument to be had.
Joe Dolson
There’s a right way and a wrong way to tell people that they’re using a lousy browser — and that’s definitely the right way. Thanks!
SneakyWho_am_i
Ignore this rubbish. It will be as reliable as:
– user agent string (anybody remember what these are for??)
– javascript browser detection
– Doctype Declarations
– Conditional Comments
– CSS (Cascading Style Sheets) hacks
All of which exist now and most of which do absolutely nothing for us. The most reliable one is JavaScript, and even then it’s only useful client-side, and only if you REALLY know what you’re doing (most people still seem to think that the availability of document.all is THE requirement for sending certain code).
Better to just accept that your site has an expiry date. This page best viewed in browser x at resolution y.
Some of us are being paid to develop websites. For those people it is not possible to ignore this new lunacy, but for those of us who can ignore it (personal site? blog) it makes total sense to pretend we never heard of this new switch.
The browsers change and the standards don’t. Code for standards, not for individual browsers. It’s more cost effective in the long run I expect. If your user agent can’t display the page properly then you should upgrade to one that can. If this means erosion of some or other product’s market share then so be it.
If you notice that your visitors are using browsers with default 10-year-old-standards-support, then let them know about it! “Hey, there’s a choice to be made, you’re not experiencing well-made web applications to their fullest potential, you might enjoy your experience more with (Firefox|Opera|Konqueror|Safari)”
Don’t try to accommodate them if you’re not going to lose your job over it.
– it’s easier to not
– you can sleep easy at night knowing you’ve done what’s morally right
te
Great article! Thanks a lot! 😀
Phil
This sort of thing, like the DOCTYPE switch in the first instance, is to minimise the possibility of that, rather than wipe it, which I think is reasonable to expect in the (relative) short term.
I only hope this does have the required effects. In the meantime I’ll be ignoring it and developing with the edge keyword or, even better, HTML 5 (or at least HTML 4 with an HTML 5 DOCTYPE).
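For anyone taking the same route, the two opt-outs mentioned here look like this; the edge keyword asks IE to always use its most recent rendering engine rather than a frozen legacy one.

```html
<!-- Option 1: the minimal HTML 5 DOCTYPE -->
<!DOCTYPE html>

<!-- Option 2: explicitly request the newest rendering engine -->
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
```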
Joe Dolson
It’s a very mixed bag; and regardless of the editor, it’s still ALWAYS possible to create bad code. I don’t think that’s an easy thing to measure. It would really require some extensive testing to see what the variety of editors and content management systems will produce when you sit a variety of different inexperienced users in front of them. Big question; hard to answer.
And that’s definitely a point. This may give Microsoft a little breathing room — or it may not. Given that this won’t practically exist until the release of IE (Internet Explorer) 8, and the rate of uptake on new Microsoft browsers…it’s hard to say.
I think that nothing can stop people from “breaking the web.” No matter what tools are provided to try and slow it down, people will always do the unexpected. Something like this may have a small impact, reducing the overall effect of change on the web.
[An interesting aside which I just thought of: one signal I’ll admit I use to judge whether a site is active is whether it works properly in the most modern browsers…that could change radically if sites trigger a particular mode in browsers even if the site has actually been abandoned for a decade!]
I wouldn’t be so sure…the cruft of standard Microsoft coding seems to suggest that they have no problem with that idea. But you could be right…
Phil
Thanks Joe, I certainly appreciate this bouncing of ideas too!
I can’t speak to, and in fact have no real experience of, whatever free WYSIWYG (what you see is what you get) editors produce in the way of code for small sites these days. That said, the fact that Microsoft have put the effort into this “fix” so that sites won’t break again suggests they have done some research into the potential issues that may arise with the new browser.
In closing, you are right: it will be implemented; in fact, chances are it is already in the codebase. I also agree that it won’t work… eventually. I think Microsoft probably already know that too, as no one is foolish enough to believe that they can carry tens of rendering engines in a browser forever. I predict that it is set up to fail with enough time to allow Microsoft to better their standards support, and to allow WYSIWYGs to catch up and implement decent standards support themselves, so that no “breaking of the web” has to happen ever again: the position that other modern browsers already find themselves in.
What do you think?
Joe Dolson
And, to be honest, I wonder how much this will truly affect that group of people.
Let’s think, for a minute, about how complete amateurs are likely to code: on the one hand, there’s the possibility that they’ll attempt CSS-based layouts which are difficult to make compatible with multiple browsers, and which are likely to be susceptible to browser differences.
On the other hand, it may be that they’ll construct their sites using “whatever works”: which is likely to be table-based and fundamentally primitive. Most cheap or older tools work this way, and I doubt this will change any time soon. What do you think is more probable? (And yes, I agree that playing the probability game is really not the point.)
However unfortunate that is, I have to say that one advantage primitive coding offers is that its interpretation hasn’t changed significantly in a long time, and probably won’t.
What about behaviors? OK, this could be a problem: amateur developers are likely to grab whatever they find online which claims to fulfill their need. This code is frequently problematic, and definitely falls into the category of things which could die at any moment.
I read your post on the subject, and I agree with you on the whole: it’s a difficult decision which has a lot of down sides. It does also have advantages (which I’ve also said above).
My single biggest issue with this idea, honestly, is that I simply don’t believe it will work — I’m not even questioning, particularly, whether it “should” or “shouldn’t” be implemented. That decision is made: it will be implemented, whether we like it or not.
And by the way — thanks for having this conversation! This is the kind of comment dialog I really appreciate.
Phil
I tried on my own blog, but Zeldman explains this the best. This topic of debate is not about those who develop to standards, and it’s not really about those who develop poorly, as they will still be paid to go to work and fix whatever has gone wrong. It’s about those who aren’t developers at all, who don’t read A List Apart, who didn’t even know that specs existed for HTML. These are the site owners who won’t have “done the job right” but also won’t have the technical knowledge to know where they went wrong when it breaks. And, I stress it again, the users visiting these potentially thousands of little sites will pay for it with a decreased experience and the feeling that the web is “broken”.
Joe Dolson
I’m not sure I really buy the “user suffers” argument on this issue. It only holds up if you make the assumption that web site owners and developers choose not to fix the problems with their website which show up in a new browser version.
I’ll grant you that many developers/site owners won’t notice browser-dependent problems until they themselves start using that browser, of course.
However, to me this argument is a lot more about the developers and the corporations funding those developers than it is about users: it’s about how much work the developers will have to do to accommodate for browser changes.
Under the existing scenario, if a developer has “done the job right,” they’ll have relatively minimal work following a browser change. If they haven’t, they may have substantial reworking to do. With the proposed situation, neither group of developers will necessarily have any need to do substantial revisions on their sites following in the path of destruction left by a new browser; but compliant developers will have one more facet to account for in their initial development.
I agree, absolutely, that the user needs to come first: and in this scenario, I think the user is impacted by developers choosing not to fix their websites to correctly support new browsers. This change may result in that not being necessary — but it does it by allowing developers to keep developing in the same way they always have: never growing, never changing, never needing to learn anything new. I can’t really bring myself to support that idea.
Phil
It’s not an excuse, unless it’s someone other than the ignorant party who suffers from that ignorance. In this case, ignorance causes broken websites, which in turn cause confusion for users.
It’s all very well to consider this issue from the point of view of the standards-following developer, but Microsoft, thanks to their past mistakes and their continuing reign at the top of the browser pile, have to deal with millions of users.
Joe Dolson
It’s an irony that the responsibility taken on by committing to standards-based web development comes loaded with the obligation that we’ll be the ones keeping up with new developments in standards, new variations on requirements, and new issues with browser compatibility. New rules are written all the time to protect those who are unaware of them.
What happened to ignorance not being an excuse?
I don’t expect life to be fair; but I have no problem resenting the fact that it isn’t. 😉
Mike Cherim
What’s really needed is across-the-board compliance with web standards plain and simple. With that all the rest becomes moot. This step is apparently just one more on the way to the end-goal. I wish we’d just get there for crying out loud.
I will say, if I had my way, the only people who would have to do anything would be those who aren’t living up to standards now. All this annoys me as it is only a step on the way (or should I say in the way).
Just like when IE7 came out and I learned they only addressed some stuff. I get annoyed because I know those who don’t give a shit will carry on, undisturbed. Meanwhile, those playing the game by the current rules have to shoulder the burden. That’s BS!
Dennis at Web Axe
Again, another insightful and refreshing blog post, Joe. I agree that adding another DocType would not correct the problem. Besides developers improving their awareness, maybe coding applications can help interpret what type of X/HTML is being created and then suggest the appropriate DocType. For example, HTML Tidy has a “document looks like…” message.
Joe Dolson
But will it work? My question isn’t whether it’s a good idea to protect developers from their stupidity; it’s whether it’s a good idea to implement another method which will fail to do exactly that.
If this were a fool-proof method which I felt would perform as claimed, creating a simple and effective means to ensure compatibility for all websites regardless of your level of competence, I’d be a lot more excited.
Phil
Because there are a lot more of them than there are of us, and it is the users that pay for it.
This switching isn’t for the benefit of the developers, it is so that sites created by poor developers don’t break for the users when a new browser comes out. Ultimately it is Microsoft’s fault for including such errors in their rendering in the first place, but they have to do something that will keep the majority of websites intact when they bring out their new and improved browser.