Too big to know

I read our assigned reading—chapters 4 and 5 of David Weinberger’s Too Big To Know (ISBN 978-0-465-02142-0)—on the train, so I didn’t have any “resources” when I thought something wasn’t right, but I still spotted three obvious problems while reading.

Weinberger argues that “mere diversity of ethnicity is not” relevant (p. 74, second-last line), a claim he bases on Scott Page’s The Difference. While Page is someone I respect a lot, I have to disagree with the categorical claim that “mere diversity of ethnicity is not.” According to Malcolm Gladwell’s Outliers, mere diversity of ethnicity (or rather the history of a person’s ancestors, even if the person’s life circumstances have been completely disconnected from those of their ancestors) can be relevant—for reasons that are not yet understood.

The second problem is that Weinberger quotes Howard Rheingold as saying “Even the mere presence of moderators—even if they never moderate a single posting—is enough to keep out the trolls” (p. 78, second paragraph, last two lines) and takes it at face value. This might have been true in the olden days, but anyone in an open LinkedIn group plagued by a never-ending spam problem can attest that the “mere presence” of moderators is not enough; in fact, even the presence of hard-working moderators who moderate hundreds of articles (as is the case with AIGA’s official group) is not enough to deter trolls.

The third obvious problem is his claim that the finding “of sixty randomly chosen political sites, only 15 percent put in links to sites of their opponents” (p. 82, paragraph 3, lines 4–5) signals a problem. However, anyone who has worked in an organization knows that upper management may simply be apprehensive about linking to anything. The lack of linking is not indicative of a problem unless you consider ignorance of what links mean to be a problem (and I do consider this a problem, especially when many lawyers seem to count among the ignorant ones…).

In any case, I will continue reading after I get the urgent stuff done. Maybe my opinion of it will change, or maybe it will not; for now, I think that while his argument has merit, it also has holes, and, judging from the holes in the two chapters I have read, probably quite a number of them.

Is a system-provided screen reader necessarily stable?

I should not be doing this at such a time in the semester, but I kept the screen reader running for a few hours today, and I found, to my dismay, that the answer is no. A system-provided screen reader is not necessarily stable.

The first program to fall victim to instability was Terminal. It started crashing for no apparent reason, and at one point it repeatedly crashed after less than a couple of minutes of usage. Terminal and VoiceOver do not play well together.

The second program to fall victim to instability was Safari. After a few hours of screen reader usage, Safari started to stop responding to tab switching. Turning VoiceOver off immediately fixed the problem. Turning it on caused the problem to resurface after just a couple of minutes.

If such is the stability of a screen reader built into the OS, I wonder what kind of stability third-party screen readers on other platforms can really achieve.

Random notes related to site specificity and other things

As cited by Vince Dziekan in Virtuality and the Art of Exhibition (p. 42), Nick Kaye defines site-specificity (in Site-Specific Art: performance, place and documentation, Routledge, 2000) as encompassing “a wide range of artistic approaches that ‘articulate exchanges between the work of art and the places in which its meanings are defined’.” The artwork is in some sense inseparable from the site in which it is exhibited. Meaning exists within the interaction between the site and the artwork. The whole is greater than the sum of its parts.

(During the artists’ presentation at Multipli{city}, there was indeed a strong consensus that the exhibited artworks took on separate meanings when they were transplanted to the Graduate Gallery. The graffiti wall became a work with a completely different feel, for example, and the re-created makeshift shack space could only serve as “documentation.” After the panel discussion, the artist talked with other people and agreed that if his installation were transplanted to a small town, for example, it would take on even more wildly different meanings.)

Site specificity is opposed to media specificity (p. 191). In a sense, site specificity is treating the site as a material support. That said, in the digital realm, “media” is “fundamentally” just “data streams” (Cubitt as cited by Dziekan), and perhaps we can talk about “the liminality of borders in the digital age” (Dziekan, p. 144, although not referring to this context). The site is also not just the physical space, as “the artistic investigation of site never operates along physical or spatial lines exclusively but rather operates embedded within an encompassing ‘cultural framework’ defined by art’s supporting institutional complex” (Miwon Kwon, One Place after Another: notes on site-specificity, 1997, p. 88, as cited by Dziekan).

According to Dziekan, modern curatorial practice very much hinges on site specificity (e.g., p. 42). He also mentions other processes in curatorial design, such as choreography (p. 93).

Random questions not mentioned above:

What is a “programme architecture”?

What is a “facture”? “digital facture”?

First impressions of automated checkers for WCAG 2.0 AA

A week ago I finished my assignment on automated checkers for WCAG 2.0 AA. I asked the checker to check the home page of another blog of mine, and it spewed out 223 “potential problems.” (A classmate told me she got more than a thousand.)

I trudged through the list, and at the end what did I find?

Three legitimate concerns.

Yes, three out of 223 were all I could find. That’s an accuracy of just slightly over 1%.

Granted, from an AI point of view I know that we don’t talk about accuracy but rather about recall and precision, and a lot of the bogus warnings do concern deep AI problems, such as how human language can be understood, or impossible-to-solve ones, like guessing the author’s intent. That said, some of those warnings—especially those related to standard third-party JavaScript library API calls, standard icons, or, incredibly, non-breaking spaces—are just ludicrous.
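In precision/recall terms, the numbers from my run work out as follows (a trivial sketch; the figures are just the ones from my own report, and recall cannot be computed at all without knowing how many real problems the checker missed):

```python
# Precision of the checker's report on my page:
# 223 flagged "potential problems", of which 3 were legitimate.
flagged = 223          # warnings the checker emitted
true_positives = 3     # warnings that were legitimate concerns

precision = true_positives / flagged
print(f"precision = {precision:.1%}")  # 1.3%

# Recall would be true_positives / (all real problems on the page),
# which this report alone gives us no way to measure.
```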

I don’t hate the checker or have anything against it per se, but for these checkers to be taken seriously they really have to get better. A precision of roughly 1% is not going to work.


I went to the 4ormat presentation on Thursday. So this will be our online portfolio, and not Behance, Cargo Collective, or Coroflot.

I’m not saying there are any problems with the decision. It’s a totally fine decision, and as the presenter said, they’ve explored all the options and settled on (what they think is) the best. And 4ormat—while we’re students here—does seem very attractive. For one thing, I certainly am going to test how giving it a separate domain will work out.

That said, this still means that OCAD will not have an official presence on Behance.

That is, people will see Art Center, MICA, RISD, SCAD, SVA, and even Academy of Art, but not OCAD.

If I remember correctly, the presenter mentioned that a lot of students don’t have online portfolios. I wonder if that really is the case. Talia has one. Larry also has one. Three is certainly not a representative sample, but for those of us who are already using Behance (or maybe something else), are we really going to give up Behance for two years (or four), use something else, and then when we graduate and lose our free access switch back?

I’m not so sure.

The end of the OCAD network on Facebook

Two days ago, on November 23, I received a mass email from IT Help saying that the email forwarder will be turned off at the end of the semester. Since Facebook has not allowed the creation of new networks (nor, apparently, the updating of existing ones) for quite a while, this means that new students will no longer be able to join the OCAD network on Facebook.

Granted, networks on Facebook have not been doing much lately, but that can be said of virtually everything there. Everything on Facebook—including messaging, SMS support, and even fan pages—is getting less and less useful. So perhaps it will just be a matter of time before all the existing networks die off.

Still, this will be a “milestone event” for OCAD: The end of its official network on Facebook must still mean something.

WCAG first impressions

Since some of us are talking about aChecker, I threw my own site at it and it spewed out a slew of complaints. I didn’t assume my site was flawless (in fact I knew it had many problems), but the number of complaints it threw at me was just too much.

I mean, some of what it spewed back at me was justified. (For example, I didn’t know WCAG requires the lang attribute to be tagged onto the HTML element instead of the BODY element—not that the requirement makes much sense to me.) But some of it was just bogus. Contrast problems for non-textual elements that happen to be text? With the advent of webfonts, textual data can be anything (especially now that people have started talking about using specialized dingbat fonts as a replacement for graphics). You just can’t infer that a piece of textual data will be actual text, especially when the glyphs concerned are obviously symbols.

The problem is that these requirements are divorced from both the context of what the text is and how the text is actually used.
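For context, the contrast test these checkers run is a pure colour calculation, which is exactly why it cannot know whether the glyphs are really text. Here is a minimal Python sketch of it; the constants come from the WCAG 2.0 definitions of relative luminance and contrast ratio, while the function names are my own:

```python
def _linearize(channel):
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.0 formula)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an (R, G, B) colour, 0.0 to 1.0."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of the lighter to the darker luminance, each offset by 0.05."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white: the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

AA demands at least 4.5:1 for normal-size text, and a checker applies this to every text node it finds, whether the glyphs are prose or decorative symbols.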

So I wonder: will the mandatory adoption of WCAG actually produce the opposite effect of what is intended? Will people, out of the requirement (as opposed to the desire) to be compliant with the WCAG, forgo simple text for graphics, throwing the web back to where it was 10 years ago, when heavy graphics ruled? I don’t want to believe this, but if WCAG 2.0 AA is going to become mandatory, I think it is a very real possibility.

Exclusivity, or the unreliability of Wikipedia

I have long claimed that Wikipedia is strongly biased against knowledge that comes either from non-English-language sources or from cultures (and subcultures) whose practices are mostly undocumented. This is cultural imperialism. I have also long claimed that its now-standard requirement for references (ones that are not dictionaries or encyclopaedias) is a hindrance to knowledge dissemination. However, during today’s synchronous seminar a different picture emerged of just how exclusive Wikipedia has become.

It turns out that last year they also did a class project on Wikipedia, and most of what they wrote was deleted. Not edited. Just deleted. On what amounts to bogus grounds.

We are talking about a field that, even though it may still be emerging, already has tons of English-language literature published in English-speaking countries. This is a bias that is not even against non-white, non-English-speaking cultures. It is outright unfathomable.

What kind of attitude, really, is this?

This, of course, is rooted not only in the current fixation on references, but also in the “principle” of “notability,” which I have always found incomprehensible. You can never predict when something will become important. If there were no “notability” requirement, then the second a thing became “notable” (whatever notable means), Wikipedia would be the only encyclopaedia in the world already covering it. That would be an unmatched advantage.

Whoever is in control of Wikipedia certainly has no such vision. They focus only on short-term, measurable success, on duplicating the accomplishments of traditional encyclopaedias. They used to talk about knowledge contribution as their fundamental principle, but that is probably one of the biggest lies that ever came out of an entity that purports to belong to the free culture movement.

Unexpected discovery on Big Welcome Day

I actually talked about this in my study plan: my lack of knowledge about AODA’s effects on non-digital design. So I was pleasantly surprised when I was checking out the stuff I got from RGD Ontario’s booth and found Access Ability: A Practical Handbook on Accessible Graphic Design.

Obviously, three weeks ago I had already found Inclusive Design: A Universal Need in the OCAD library. However, the primary focus of that book was interior design and architecture, fields that I wouldn’t be qualified even to touch.

So I was definitely very happy to have found something from none other than RGD Ontario. For one thing, it means there is something about graphic design to talk about; and for another, it means this something is not some obscure, fringe thing of no consequence—in other words, I am not crazy or wasting my time trying to steer my direction away from digital technologies…
