Too Big To Know, Part 2

I still have about 40 more pages to go before I can mark this book as “read,” but I am becoming less and less convinced of the validity of the author’s ideas.

One serious problem with his arguments is that he seems to have absolutely no idea what axioms are: on page 45 he talks about “axiomatic certainty,” while on page 149 he claims that “making unfalsifiable claims is not science.” The thing is, anyone with a mathematical background knows that axioms are by definition unfalsifiable claims (and must be unfalsifiable in order to do their job), and yet any logical argument must begin with such unfalsifiable claims, or else it’s just “vacuously true.”

Naturally, he then continues to sing the praises of evolution (page 150) as if it involved no unfalsifiable claims, which is of course false: if there were no unfalsifiable claims, there would be no axioms and therefore no logic to speak of. Evolution, if it really is science, must contain—and in fact must be based on—unfalsifiable claims.

Both evolution and creationism are based on unfalsifiable claims, and must necessarily be so because the laws of logic demand it. And creationism is in fact very much valid science. If he is not even aware of this, how can he persuade us to believe his other ideas?

Blind people not welcome?

“Crime prevention program in effect: For the safety of our customers and staff, please remove your hat, hood and dark glasses before entering this building. Your cooperation is appreciated.”

There is neither a braille nor an audio version of this notice anywhere.

So, in other words, “blind people and Muslim women are not welcome at our bank.”

Is such the attitude of our society’s institutions? Or are they just so insensitive that they didn’t know blind people exist?

Subtitle editors

We have really been introduced only to CapScribe and Amara, with the latter of which I was already deeply familiar. But is this really the limit of what’s available?

Aegisub

When I was looking for alternatives for our captioning assignment, I chanced upon Aegisub after a long, unsuccessful search for open-source, MacOSX-compatible editors. It is MacOSX compatible. And it is open source. But for a simple school project the interface looked far too complicated; it would, however, probably be really useful if I learned how to use it and a real project came my way.

SubtitleEdit, Jubler, MPC-HC

A few days ago I was checking my site statistics and noticed an interesting backlink leading to this post, which recommended a few subtitle editors I didn’t know about: SubtitleEdit, Jubler, and MPC-HC.

Since I don’t have Windows, the only one I can check out is Jubler. I’ll do that some time.

InqScribe

The other day I landed on this post, where a poster recommended InqScribe. It supports both Windows and MacOSX, but it is a commercial piece of software.

So there really are quite a few options other than just CapScribe and Amara. Maybe some time I’ll try them all and report back on what I find.

What is the rationale behind timeouts?

I logged in to my course account yesterday, left it open, and today I found the login screen sitting where my studio course’s syllabus should have been.

Why is a timeout even necessary? To force students to take breaks? These annoying timeouts kick in even if you have saved the pages onto your hard disk.
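For context, such a timeout is usually nothing more than a check against a last-activity timestamp; a minimal sketch in Python (all names and the 30-minute limit here are hypothetical, not taken from the actual course system):

```python
import time

SESSION_TIMEOUT = 30 * 60  # hypothetical 30-minute idle limit, in seconds

sessions = {}  # session id -> timestamp of last activity


def touch(session_id):
    """Record activity for a session (e.g., on every page request)."""
    sessions[session_id] = time.time()


def is_expired(session_id):
    """A session expires once it has been idle longer than the limit."""
    last_seen = sessions.get(session_id)
    if last_seen is None:
        return True
    return time.time() - last_seen > SESSION_TIMEOUT
```

Once `is_expired` returns true, the server simply shows the login screen instead of the requested page—which is exactly what greeted me in place of my syllabus.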

Why is this happening in this program in particular? Isn’t our program the unlikeliest place on earth for this to happen? I am sure this amounts to an act of exclusion, at least against blind people, people with learning difficulties, and people with motor impairments. Did the programmer even try out that green dot computer interface thing? Are they telling us to block the timeout code as if it were a virus?

Why are all web sites getting less and less usable? What is it, really, that we—the design, IT, and engineering professions on the whole—are striving towards? Isn’t our goal usability and not uselessness?

Too big to know

I read our assigned reading—chapters 4 and 5 of David Weinberger’s Too Big To Know (ISBN 978-0-465-02142-0)—on the train, so I didn’t have any “resources” when I thought something wasn’t right, but I still spotted three obvious problems while reading.

Weinberger argues that “mere diversity of ethnicity is not” relevant (p. 74, second-last line), a claim he bases on Scott Page’s The Difference. While Page is someone I respect a lot, I have to disagree with the categorical claim. According to Malcolm Gladwell’s Outliers, mere diversity of ethnicity (or rather the history of a person’s ancestors, even if the person’s life circumstances have been completely disconnected from those of their ancestors) can be relevant—for reasons that are not yet understood.

The second problem is that Weinberger quoted Howard Rheingold as saying “Even the mere presence of moderators—even if they never moderate a single posting—is enough to keep out the trolls” (p. 78, second paragraph, last two lines) and took it at face value. This might have been true in the olden days, but anyone who is in an open group on LinkedIn plagued by a never-ending spam problem can attest that the “mere presence” of moderators is not enough; in fact, even the presence of hard-working moderators who moderate hundreds of articles (as is the case with AIGA’s official group) is not enough to deter trolls.

The third obvious problem is his statement that “of sixty randomly chosen political sites, only 15 percent put in links to sites of their opponents” (p. 82, paragraph 3, lines 4–5), which he takes as a sign of a problem. However, anyone who has worked in an organization knows that upper management may simply be apprehensive about linking to anything. The lack of linking is not indicative of a problem unless you consider ignorance of what links mean to be a problem (though I do consider this a problem, especially when many lawyers seem to count among the ignorant ones…).

In any case, I will continue reading after I get the urgent stuff done. Maybe my opinion of it will change, or maybe it will not; as of right now, I think that while his argument has merit, it also has holes, and, judging from the holes in the two chapters I have read, probably quite a number of them.

Is a system-provided screen reader necessarily stable?

I should not be doing this at such a time in the semester, but I kept the screen reader running for a few hours today, and I found, to my dismay, that the answer is no. A system-provided screen reader is not necessarily stable.

The first program to fall victim to instability was Terminal. It started crashing for no apparent reason, and at one point it repeatedly crashed after less than a couple of minutes of usage. Terminal and VoiceOver do not play well together.

The second program to fall victim to instability was Safari. After a few hours of screen reader usage, Safari started to stop responding to tab switching. Turning VoiceOver off immediately fixed the problem. Turning it on caused the problem to resurface after just a couple of minutes.

If such is the stability of a screen reader built into the OS, I wonder what kind of stability third-party screen readers on other platforms can really achieve.

Random notes related to site specificity and other things

As cited by Vince Dziekan in Virtuality and the Art of Exhibition (p. 42), Nick Kaye defines (in Site-Specific Art: performance, place and documentation, Routledge, 2000) site-specificity as encompassing “a wide range of artistic approaches that ‘articulate exchanges between the work of art and the places in which its meanings are defined.’” The artwork is in some sense inseparable from the site in which it is exhibited. Meaning exists within the interaction between the site and the artwork. The whole is greater than the sum of its parts.

(During the artists’ presentation at Multipli{city}, there was indeed a strong consensus that the exhibited artworks took on separate meanings when they were transplanted to the Graduate Gallery. The graffiti wall, for example, became a work with a completely different feel, and the re-created makeshift shack space could only serve as “documentation.” After the panel discussion the artist talked with other people and agreed that if transplanted to a small town, for example, his installation would take on even more wildly different meanings.)

Site specificity is opposed to media specificity (p. 191). In a sense, site specificity is treating the site as a material support. That said, in the digital realm, “media” is “fundamentally” just “data streams” (Cubitt as cited by Dziekan) and perhaps we can talk about “the liminality of borders in the digital age” (Dziekan, p. 144, although not referring to this context). The site is also not just the physical space, as “the artistic investigation of site never operates along physical or spatial lines exclusively but rather operates embedded within an encompassing ‘cultural framework’ defined by art’s supporting institutional complex” (Miwon Kwon, One Place after Another: notes on site-specificity, 1997, p. 88, as cited by Dziekan).

According to Dziekan, modern curatorial practice very much hinges on site specificity (e.g., p. 42). He also mentioned other processes in curatorial design, such as choreography (p. 93).

Random questions not mentioned above:

What is a “programme architecture”?

What is a “facture”? “digital facture”?

First impressions of automated checkers for WCAG 2.0 AA

A week ago I finished my assignment on automated checkers for WCAG 2.0 AA. I asked it to check the home page of another blog of mine and it spewed out 223 “potential problems.” (A classmate told me she got more than a thousand.)

I trudged through the list, and at the end what did I find?

Three legitimate concerns.

Yes, three out of 223 were all I could find. That’s an accuracy of just slightly over 1%.

Granted, from an AI point of view I know that we don’t talk about accuracy but rather about recall and precision, and a lot of the bogus warnings do concern deep AI problems such as understanding human language, or impossible-to-solve ones like guessing the author’s intent. That said, some of those warnings—especially those related to standard third-party Javascript library API calls, standard icons, or, incredibly, non-breaking spaces—are just absurd.
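In precision/recall terms, the figure I am complaining about is really the checker’s precision; a quick sanity check of the arithmetic, treating my three legitimate concerns as the true positives among the 223 flagged items:

```python
# Precision = true positives / everything the checker flagged.
true_positives = 3   # legitimate concerns found by hand
flagged = 223        # "potential problems" reported by the checker

precision = true_positives / flagged
print(f"precision: {precision:.1%}")  # prints "precision: 1.3%"
```

(Recall can’t be computed from these numbers alone, since we don’t know how many real problems the checker missed.)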

I don’t hate the checker or have anything against it per se, but for these checkers to be taken seriously they really have to get better. An accuracy of 1% is not going to work.
