Why is the granularity returned by the Linux driver so low?

So reading a MacBook’s ambient light sensor in Linux is just a matter of reading /sys/class/hwmon/hwmon2/device/light – that seems easy and good, a far cry from having to write a C program.

The problem? Linux’s granularity of the read is way too low.

In MacOSX, the sample program returns numbers in at least the hundreds range. I can see the numbers change if I try very hard to cover where I think the sensors are. I get nonzero readings even in late afternoon when the room is nearly dark. In MacOSX, the light sensors felt super sensitive.

In Linux, the kernel returns an ordered pair like (12, 0) if I turn on my bright spotlights. If you put a piece of tape over the webcam like a lot of people do, you get something more like (9, 0).

First disturbing thought: No matter what the ambient lighting looks like, Linux gives you zero for the right-hand-side sensor.

Even more disturbing: If I turn on just my regular 60W light I get all zeroes. From Linux’s point of view there’s no difference between no ambient light at all and having about 800 lumens dispersed in a small room. Heck, I get zero even during the day, right next to a window (so I live in an apartment – we get crappy lighting in apartments, but still). Compared to MacOSX, in Linux the light sensors seem so insensitive they are practically useless.

If we get zero even during the day, what is the point of reading the light detectors? I can’t even tell night from day, or a darkened lecture hall from a brightly-lit classroom.

Short of reading the kernel source code, is there even a way to figure out why the granularity is so low?

By-invitation-only conferences?

So despite all the discussion of opening this up, it is still “by RSVP only”? How exclusive (and therefore ironic)! I know this has been mentioned before but, I mean, when I (or rather “if I manage to”) graduate I won’t be able to ask people to come to see my project presentation? How depressing! And what has just been objected to is not even some new idea. Why hasn’t this been shot down weeks ago when we first brought it up? And the scoping… our project has never been correctly scoped…

How should we add alternative text for diagrams?

I’ve mentioned this before to people but never wrote it down: How should we even begin to handle graphs and diagrams? What is the “alt text” for a graph, a schematic, a floor plan, an infographic, or a UML diagram?

Just consider this diagram: (A UML diagram used for the discussion) How should we even describe this as “alt text”? (Let’s ignore text-browser users for the moment.) Describing the picture certainly wouldn’t work; what matters is not the visual elements themselves but the relationships between them.

Even worse: Imagine this being exported into PDF (or SVG, or EPS), then embedded into InDesign. Suppose the InDesign file is ultimately going to end up as an accessible PDF. The text in the diagram is going to be a jumbled mess. So what accessibility are we talking about? Are we deluding ourselves?

This has serious implications: Imagine, for example, a piece of online instructional material full of such diagrams. Under the AODA, organizations are supposed to be able to supply this in an “accessible format.” What does it even mean for this to be accessible?

Old media requires lubrication

“Old media requires lubrication.” This was one of the answers to the fake questionnaire I was given at Night Kitchen during last year’s Nuit Blanche. Back then the answer didn’t really make much sense to me, and in fact I thought the answer was bizarre. But of course, I hadn’t been involved in any “old media” creation that would have required lubrication. Imagine how I felt when I had left the installation turned on for the night and then discovered bits of the sprocket wheel on the wooden frame. I was so glad the wheel had not been destroyed. So I guess I can now sympathize with that answer to that fake question: Old media does require lubrication. It probably requires daily lubrication, even. But does that mean our installation, with such a strong electronics component, is still “old media”? So “new media” is virtual only? I don’t know if I can side with this conclusion, yet.

The third point

Yesterday I finally remembered. I was talking to the professor a few days ago and thought there was a third point that I forgot, and indeed there was. So here it is: We keep talking about agile, but agile values “working code.” And one way people doing agile keep their code working is to use TDD (test-driven development) or BDD (behaviour-driven development) techniques. They iterate quickly, but they don’t iterate in a vacuum. Before they code, they write the test first. (Or at least that’s my understanding after taking that Coursera course.) To do agile, we have to first have our success criteria—a fluid set that changes over time, of course—set down. Success criteria, however, are sorely missing in our case: We just don’t have them. We know something is wrong, but we haven’t really defined what we mean by right. So no matter how short our iterations are, we still can’t be doing agile; if there’s a word for what we’re trying to do, it’s probably hacking. And yes, this came out of that project too.
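To make the test-first rhythm concrete, here is a tiny hypothetical sketch. The classify_light function and its criteria are invented for illustration, not anything from our project: the point is that the tests embody the success criteria and get written before the code that satisfies them.

```python
# Test-first in miniature: the success criteria come first, as tests.
# `classify_light` is a hypothetical example function, invented here;
# in real TDD you would write TestClassifyLight before implementing it.
import unittest

def classify_light(reading):
    """Turn a raw ambient-light reading into a coarse label."""
    return "dark" if reading == 0 else "lit"

class TestClassifyLight(unittest.TestCase):
    def test_zero_reads_as_dark(self):
        self.assertEqual(classify_light(0), "dark")

    def test_nonzero_reads_as_lit(self):
        self.assertEqual(classify_light(12), "lit")

if __name__ == "__main__":
    unittest.main(exit=False)
```

The function itself is trivial on purpose; what matters is that “right” is written down before any iteration starts.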

The face2face postcard project

For outliers like us, by the time orientation kicks off in September we will already have finished our first semester (and very likely have it graded), so participating in any sort of orientation activity might feel a little superfluous. However, I think the face2face postcard project (something I only found out about a couple of days ago, and which we are told we are “not required to participate” in) is an interesting enough idea that I will very possibly participate.

Of course, someone whose first semester is already ending is probably going to over-think the problem. First off, I perceive the project as essentially having three dimensions: an exercise in identity design, (ok, this might be stretching things a bit) an experiment in interaction design, and (ok, this might seem totally out there, but remember my first semester is already almost history) a study in inclusive design.

With the program’s focus on digital inclusion, treating a postcard project as an inclusive design problem might seem like a wacky idea. However, the reality is that the world we live in is still a physical one, constructed of physical things. While digital technology can make many things simpler and more flexible, we can never truly, fully dissociate ourselves from the world’s intrinsic physicality. Moreover—borrowing the engineer’s jargon—digital is not “passively safe”: if loss of inclusivity is considered a catastrophic failure, nothing in digital technology by itself keeps us from that failure. And so I genuinely believe that inclusivity cannot hinge on digital technology alone; but then what does it even mean for a piece of 5″×7″ cardboard to be “inclusive”?

In the mind of this naïve first-year student, the single true barrier to accessing this piece of 5″×7″ cardboard—which by the way is required to only have my “self portrait” on it—is sightedness: A self portrait on a piece of paper is completely inaccessible to an unsighted person.

Perhaps we can describe the picture, either in the form of text (which can then be fed either to a screen reader or a Braille display) or in the form of audio. So this brings us back to our focus on digital technology. But how do we link the two pieces together? How do we link the piece of cardboard to a website, without the use of any visual element (such as a QR code or a printed URL)? Perhaps we can add the URL in Braille? But can this even be done? (Yes, with Computer Braille Code and a stylus—but will people be able to actually read the Braille I write?…)

Actually this reveals a bigger problem: Assuming that Braille is indeed practical, and that Braille is the “passively safe” missing link between the primary 2D artefact and the alternative representation, how much Braille should we use? Are there other non-digital options? How much inclusion should we aim for?

What, really, is the role of print in the context of inclusivity?
