Subtitle editors

We have really been introduced only to CapScribe and Amara, the latter of which I was already deeply familiar with. But is this really the limit of what’s available?


When I was looking for alternatives for our captioning assignment, I found Aegisub by chance, after a long, unsuccessful search for open-source, Mac OS X–compatible editors. It is Mac OS X compatible, and it is open source. But for a simple school project the interface looked far too complicated; it will probably prove really useful, however, if I learn how to use it and a real project comes my way.

SubtitleEdit, Jubler, MPC-HC

A few days ago I was checking my site statistics and noticed an interesting backlink to a post that recommended a few subtitle editors I didn’t know about: SubtitleEdit, Jubler, and MPC-HC.

Since I don’t have Windows, the only one I can check out is Jubler. I’ll do that sometime.


The other day I landed on this post, where a commenter recommended InqScribe. It supports both Windows and Mac OS X, but it is a commercial piece of software.

So there seem to be quite a lot of options other than just CapScribe and Amara. Maybe some time I’ll try them all and report back on what I find.

What is the rationale behind timeouts?

I logged in to my course account yesterday and left it open; today I found the login screen sitting where my studio course’s syllabus should have been.

Why is a timeout even necessary? To force students to take breaks? These annoying timeouts fire even on pages you have saved onto your hard disk.
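Timeouts like this are typically driven by an idle timer that ships inside the page’s own JavaScript, which is why they fire even on pages saved to disk. A minimal sketch of the idea (the function names and the 30-minute limit are my assumptions, not the actual course-site code):

```javascript
// Hypothetical sketch of a client-side idle timeout; the limit is assumed.
const IDLE_LIMIT_MS = 30 * 60 * 1000; // 30 minutes

// Pure expiry test: has the user been idle for at least the limit?
function isIdleExpired(lastActivityMs, nowMs, limitMs = IDLE_LIMIT_MS) {
  return nowMs - lastActivityMs >= limitMs;
}

// In a live page, user activity would reset the clock, e.g.:
//   document.addEventListener("keydown", () => { lastActivity = Date.now(); });
// and an expired timer would bounce the user back to the login screen.
```

Because the script travels with the saved page, opening the local copy restarts the same countdown.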

Why is this happening in this program in particular? Isn’t our program the unlikeliest place on earth for this to happen? I am sure this amounts to an act of exclusion, at least for blind people, people with learning difficulties, and people with motor impairments. Did the programmer even try out that green-dot computer interface thing? Are they telling us to block the timeout code as if it were a virus?

Why are all web sites getting less and less usable? What is it, really, that we—the design, IT, and engineering professions on the whole—are striving towards? Isn’t our goal usability and not uselessness?

Too big to know

I read our assigned reading—chapters 4 and 5 of David Weinberger’s Too Big To Know (ISBN 978-0-465-02142-0)—on the train, so I didn’t have any “resources” when I thought something wasn’t right, but I still spotted three obvious problems while reading.

Weinberger argues that “mere diversity of ethnicity is not” relevant (p. 74, second-last line), a claim he bases on Scott Page’s The Difference. While Page is someone I respect a lot, I have to disagree with the categorical claim that “mere diversity of ethnicity is not.” According to Malcolm Gladwell’s Outliers, mere diversity of ethnicity (or rather the history of a person’s ancestors, even if the person’s life circumstances have been completely disconnected from those of their ancestors) can be relevant—for reasons that are not yet understood.

The second problem is that Weinberger quotes Howard Rheingold as saying “Even the mere presence of moderators—even if they never moderate a single posting—is enough to keep out the trolls” (p. 78, second paragraph, last two lines) and takes it at face value. This might have been true in the olden days, but anyone in an open LinkedIn group plagued by a never-ending spam problem can attest that the “mere presence” of moderators is not enough; in fact, even the presence of hard-working moderators who moderate hundreds of articles (as is the case with AIGA’s official group) is not enough to deter trolls.

The third obvious problem is that he states that “of sixty randomly chosen political sites, only 15 percent put in links to sites of their opponents” (p. 82, paragraph 3, lines 4–5) and takes this as a sign of a problem. However, anyone who has worked in an organization knows that upper management may simply be apprehensive about linking to anything. The lack of linking is not indicative of a problem unless you consider ignorance of what links mean to be a problem (though I do consider that a problem, especially when many lawyers seem to count among the ignorant ones…).

In any case, I will continue reading after I get the urgent stuff done. Maybe my opinion of it will change, or maybe it will not. For now, I think that while his argument has merit, it also has holes; and, judging from what those holes are in the two chapters I have read, probably quite a number of them.

Is a system-provided screen reader necessarily stable?

I should not be doing this at such a time in the semester, but I kept the screen reader running for a few hours today, and I found, to my dismay, that the answer is no. A system-provided screen reader is not necessarily stable.

The first program to fall victim to instability was Terminal. It started crashing for no apparent reason, and at one point it crashed repeatedly after less than a couple of minutes of use. Terminal and VoiceOver do not play well together.

The second program to fall victim to instability was Safari. After a few hours of screen reader usage, Safari stopped responding to tab switching. Turning VoiceOver off immediately fixed the problem; turning it back on caused the problem to resurface within a couple of minutes.

If such is the stability of a screen reader built into the OS, I wonder what kind of stability third-party screen readers on other platforms can really achieve.

Random notes related to site specificity and other things

As cited by Vince Dziekan in Virtuality and the Art of Exhibition (p. 42), Nick Kaye defines (in Site-Specific Art: Performance, Place and Documentation, Routledge, 2000) site-specificity as encompassing “a wide range of artistic approaches that ‘articulate exchanges between the work of art and the places in which its meanings are defined.’” The artwork is in some sense inseparable from the site in which it is exhibited. Meaning exists in the interaction between the site and the artwork. The whole is greater than the sum of its parts.

(During the artists’ presentation at Multipli{city}, there was indeed a strong consensus that the exhibited artworks took on different meanings when they were transplanted to the Graduate Gallery. The graffiti wall became a work with a completely different feel, for example, and the re-created makeshift shack space could only serve as “documentation.” After the panel discussion, the artist talked with other people and agreed that if transplanted to a small town, for example, his installation would take on even more wildly different meanings.)

Site specificity is opposed to media specificity (p. 191); in a sense, site specificity treats the site as a material support. That said, in the digital realm, “media” is “fundamentally” just “data streams” (Cubitt, as cited by Dziekan), and perhaps we can talk about “the liminality of borders in the digital age” (Dziekan, p. 144, although not referring to this context). Nor is the site just the physical space, since “the artistic investigation of site never operates along physical or spatial lines exclusively but rather operates embedded within an encompassing ‘cultural framework’ defined by art’s supporting institutional complex” (Miwon Kwon, One Place after Another: Notes on Site Specificity, 1997, p. 88, as cited by Dziekan).

According to Dziekan, modern curatorial practice very much hinges on site specificity (e.g., p. 42). He also mentioned other processes in curatorial design, such as choreography (p. 93).

Random questions not mentioned above:

What is a “programme architecture”?

What is a “facture”? “digital facture”?

First impressions of automated checkers for WCAG 2.0 AA

A week ago I finished my assignment on automated checkers for WCAG 2.0 AA. I asked one of the checkers to check the home page of another blog of mine, and it spewed out 223 “potential problems.” (A classmate told me she got more than a thousand.)

I trudged through the list, and at the end what did I find?

Three legitimate concerns.

Yes, three out of 223 were all I could find. That’s an accuracy of just slightly over 1%.

Granted, from an AI point of view I know that we don’t talk about accuracy but rather about recall and precision, and a lot of the bogus warnings do concern deep AI problems, such as understanding human language, or impossible-to-solve ones, like guessing the author’s intent. That said, some of those warnings—especially those related to standard third-party Javascript library API calls, standard icons, or, incredibly, non-breaking spaces—are simply absurd.
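In those terms, the 3-out-of-223 figure is really a precision estimate (true positives over everything flagged); recall would require knowing how many real problems the checker missed, which I don’t. The arithmetic, spelled out:

```javascript
// Precision: the share of flagged items that are real problems.
// (Recall would be truePositives / allRealProblems, which is unknowable
// here without a manual audit of the whole page.)
function precision(truePositives, flagged) {
  return truePositives / flagged;
}

const p = precision(3, 223);
console.log(p); // roughly 0.013, i.e. just over 1%
```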

I don’t hate the checker or have anything against it per se, but for these checkers to be taken seriously, they really have to get better. An accuracy of 1% is not going to work.


I went to the 4ormat presentation on Thursday. So this will be our online portfolio, and not Behance, Cargo Collective, or Coroflot.

I’m not saying there are any problems with the decision. It’s a totally fine decision, and as the presenter said, they’ve explored all the options and settled on (what they think is) the best. And 4ormat—while we’re students here—does seem to be very attractive. For one thing, I certainly am going to test how giving it a separate domain will work out.

That said, this still means that OCAD will not have an official presence on Behance.

That is, people will see Art Center, MICA, RISD, SCAD, SVA, and even Academy of Art, but not OCAD.

If I remember correctly, the presenter mentioned that a lot of students don’t have online portfolios. I wonder whether that really is the case. Talia has one. Larry also has one. Three is certainly not a representative sample, but for those of us who are already using Behance (or maybe something else), are we really going to give up Behance for two years (or four), use something else, and then, when we graduate and lose our free access, switch back?

I’m not so sure.

The end of the OCAD network on Facebook

Two days ago, on November 23, I received a mass email from IT Help saying that the email forwarder will be turned off at the end of the semester. Since Facebook has not allowed the creation of new networks (nor, apparently, the updating of existing ones) for quite a while, this means that new students will no longer be able to join the OCAD network on Facebook.

Granted, networks on Facebook have not been doing much lately, but that can be said of virtually everything. Everything on Facebook (including messaging, SMS support, and even fan pages) is getting less and less useful. So perhaps it is just a matter of time before all the existing networks die off.

Still, this will be a “milestone event” for OCAD: The end of its official network on Facebook must still mean something.

WCAG first impressions

Since some of us have been talking about aChecker, I threw my own site at it and it spewed out a slew of complaints. I didn’t assume my site was flawless (in fact I knew it had many problems), but the number of complaints it threw at me was just too much.

I mean, some of what it spewed back at me was justified. (For example, I didn’t know WCAG requires the lang attribute to be tagged onto the HTML element instead of the BODY element—not that the requirement made any sense to me.) But some of it was just bogus. Contrast problems for non-textual elements that happen to be encoded as text? With the advent of webfonts, textual data can be anything (especially now that people have started talking about using specialized dingbat fonts as a replacement for graphics). You just can’t infer that a piece of textual data will be rendered as actual text, especially when the glyphs concerned are obviously symbols.
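For reference, the lang rule wants the declaration on the root element, i.e. `<html lang="en">` rather than `<body lang="en">`. A toy version of that check (my simplification, not aChecker’s actual logic) might look like:

```javascript
// Toy check (not aChecker's real logic): does the markup declare a
// language on the <html> element itself? Returns the language code,
// or null if <html> carries no lang attribute.
function hasHtmlLang(markup) {
  const m = markup.match(/<html\b[^>]*\blang\s*=\s*["']?([\w-]+)/i);
  return m ? m[1] : null;
}
```

A real checker parses the DOM rather than regex-matching strings, but the rule it enforces is this simple.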

The problem is that these requirements are divorced from both the context of what the text is and how the text is actually used.

So I wonder: will the mandatory adoption of WCAG actually produce the opposite of its intended effect? Will people, out of the requirement (as opposed to the desire) to be WCAG-compliant, forego simple text for graphics, throwing the web back to where it was 10 years ago, when heavy graphics ruled? I don’t want to believe this, but if WCAG 2.0 AA is going to become mandatory, I think it is a very real possibility.
