Monday 13 February 2012 ◷ 22:44
When your audience is mostly n00b front-end strugglers, and your medium exists because of publications rather than solutions, it's perfectly reasonable to waste everyone's time adding another chapter to the circle jerk that building some Boston Globe website apparently was. Otherwise it makes no sense, for the following reasons:
There is no reliable way to determine the amount of bandwidth a user is willing to spend on a single resource. Screen size may be all we've got to query in some mobile-curious way, but that doesn't mean it's usable or efficient to do so. So is it just a bit of toying around to prepare for the magic moment when there is an API we can use to determine the bandwidth on a device? No, because mobile means there isn't a single bandwidth on a device. Even an API can't predict when conditions suddenly get worse because a device moves from Wifi to Edge. In such a volatile environment it's backward to keep thinking in monolithic bitmapped resources that arrive after a single carefully crafted request. There are wavelets, progressive schemes and vector-based images that are much better at using bandwidth flexibly and efficiently, because they can stream until something determines there is enough resolution, and resume when more is needed. But if you're looking for a markup solution - because that is what your audience craves - you're just moving the same old protocol into another layer, which offers the user no advantage. You're just wasting time trying to get something into a standard it shouldn't be in in the first place, but apparently it's too hard to get designer-slash-developers to ask server specialists to solve their problems properly. That's why they are called HTML monkeys.
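To make that concrete, here is a minimal sketch - plain TypeScript against standard browser APIs, with hypothetical image paths - of the only signal such a client-side approach can actually act on: screen geometry, which says nothing about what the connection can carry right now.

```typescript
// Minimal sketch: the client can query viewport geometry, not bandwidth.
function pickImageVariant(basename: string): string {
  // Screen size is the only "mobile" signal a media query gives us.
  const small = window.matchMedia("(max-width: 480px)").matches;

  // Nothing here tells us what this resource will cost the user; the
  // connection can degrade from Wifi to Edge between request and response.
  return small ? `${basename}-small.jpg` : `${basename}-large.jpg`;
}

// Usage: pick a bitmap variant purely on screen size (the paths are
// hypothetical), which is exactly the guesswork criticised above.
const img = document.createElement("img");
img.src = pickImageVariant("/images/header");
document.body.appendChild(img);
```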
And even if you've gone to such great lengths to avoid sucking redundant data over someone's precious connection, only one click separates that user from another website that didn't bother, which makes the effect of your optimisations look pretty useless. It's like risking your life to remove a single mine from a minefield and then claiming you've improved safety for the user. But that user isn't going near your single safe spot as long as all the other mines are still there. Nice tech demo, but in the real world the whole ecosystem the user operates in has to reach some minimal level to be usable in the first place. And the core of that ecosystem is still the browser.
Browsers should do a lot more than just implement web standards flawlessly. Initiatives like Opera Mini show that users can be given good tools without some standards body first having to write a standard for them, or every webmaster coming up with his own reinvention of what should be browser functionality. But if all HTML monkeys can do is write markup and think in markup, they will never understand how to build a browser and solve the real problems. They already have enough trouble grasping why browsers don't behave the way their markup viewpoint leads them to expect.
Sunday 12 February 2012 ◷ 16:04
For the benefit of the weak of mind it would have been so much easier to let Microsoft win the browser wars ten years ago. That would have given us, for example, a nice vector-based graphics implementation, hardware-accelerated graphics effects and web typography. Now that WebKit seems to have the same advantages Internet Explorer had ten years ago, calling for the death of the W3C is like supporting democracy only to get into power and then start supporting dictatorship. There is of course nothing wrong with dictatorship when you're the dictator or part of its clan. So buy some shares of $AAPL or $GOOG and start building WebKit-only sites: there is nothing wrong with vendor lock-in if you're the vendor.
Standards aren't designed for the current MVP development mentality of the minimum viable developers that most web developers seem to be these days. If your only maintenance issue with web standards is handing an intern the login to your WordPress site, you shouldn't care about web standards and vendor prefixes. And only the developer with a minimum viable mindset would interpret vendor prefixes as being "experimental", and propose promoting them to carry only -alpha- or -beta- meaning. They are just a mechanism for allowing a namespace within the syntax of valid CSS so you don't have to split it at the MIME type. That is nice when you're working on a closed system in which you control both the rendering engine and the code it renders. And of course Apple and Google consider the whole internet their closed system, in which they will soon have that control, with or without the W3C. Combine that with a myriad of n00b web developers who bet on having moved on to something else when their standards-ignorant code hits the fan in a couple of years, and Flash seems like a good idea.
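For what it's worth, the namespace reading of prefixes can be shown in a few lines of script. This is only an illustrative sketch - probing which vendor namespace a given engine exposes a property under - not a recommendation to ship it.

```typescript
// Illustrative sketch: the same feature lives under different vendor
// namespaces, and a script can probe the style object to find out which
// one this particular engine answers to.
function findTransformProperty(el: HTMLElement): string | null {
  const candidates = [
    "transform",       // the eventual unprefixed name
    "webkitTransform", // -webkit- namespace
    "MozTransform",    // -moz- namespace
    "msTransform",     // -ms- namespace
    "OTransform",      // -o- namespace
  ];
  for (const name of candidates) {
    if (name in el.style) {
      return name;
    }
  }
  return null;
}

// Usage: set the feature through whichever namespace is available.
const box = document.createElement("div");
const prop = findTransformProperty(box);
if (prop !== null) {
  (box.style as unknown as Record<string, string>)[prop] = "rotate(45deg)";
}
```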
The real issue is that, with HTML5 being marketed as the combination of new HTML and CSS with JavaScript APIs in a modern browser, it hardly makes sense to define future HTML and CSS standards from the perspective of a platform that lacks JavaScript. If other browser vendors don't start supporting -webkit prefixes, any JavaScript library provider could do it for them by automatically providing a polyfill. If a site doesn't work without JavaScript anyway, the problem that it only uses the -webkit vendor prefixes is academic.
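As a rough sketch of what such a polyfill could look like - same-origin stylesheets only, relying on standard CSSOM APIs and not describing any existing library - it would only have to mirror every -webkit- declaration onto the unprefixed property:

```typescript
// Rough sketch of a prefix polyfill: walk the author's stylesheets and copy
// every -webkit- prefixed declaration to the unprefixed property, so a
// WebKit-only stylesheet still reaches engines without that prefix.
function mirrorWebkitDeclarations(): void {
  for (const sheet of Array.from(document.styleSheets)) {
    let rules: CSSRuleList;
    try {
      rules = sheet.cssRules;
    } catch {
      continue; // cross-origin stylesheet: cssRules access throws, skip it
    }
    for (const rule of Array.from(rules)) {
      if (!(rule instanceof CSSStyleRule)) {
        continue;
      }
      const style = rule.style;
      for (let i = 0; i < style.length; i++) {
        const name = style.item(i);
        if (!name.startsWith("-webkit-")) {
          continue;
        }
        const unprefixed = name.slice("-webkit-".length);
        // Fill the gap only; never overwrite a value the author already set.
        if (style.getPropertyValue(unprefixed) === "") {
          style.setProperty(unprefixed, style.getPropertyValue(name));
        }
      }
    }
  }
}

mirrorWebkitDeclarations();
```

A real library would also have to rewrite prefixed values such as -webkit-linear-gradient(), but that is exactly the kind of plumbing a library provider could automate.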
Saturday 12 November 2011 ◷ 14:38
Looks like my Bricolage install is still running. Better get some rants up here then ;-)
Wednesday 15 December 2010 ◷ 00:06
Spent the week in a snowy Achterhoek.
Saturday 11 December 2010 ◷ 17:57
Motion is a piece of software that does something with a webcam: point a webcam - or several webcams - at some place and it can be configured to detect a certain amount of motion in a particular area of the image and then create a movie or send the image somewhere. All very complicated, so I just installed it, plugged my webcam in and forgot about it. When I returned from a week's holiday, somewhere in some directory it had stored a bunch of images like this: