I’ve written here about my interest in Amazon’s recently implemented “Popular Highlights” feature, which shows Kindle readers which passages other Kindle readers are highlighting. But Ted Striphas points to a rather worrisome aspect of this technology:
When people read, on a Kindle or elsewhere, there’s context. For example, I may highlight a passage because I find it to be provocative or insightful. By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or…you get the point. When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost. What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.
I’m not against the crowd, and let me add that I’m not even against this type of cultural work per se. I don’t fear the machine. What I do fear, though, is the black box of algorithmic culture. We have virtually no idea of how Amazon’s Popular Highlights algorithm works, let alone who made it. All that information is proprietary, and given Amazon’s penchant for secrecy, the company is unlikely to open up about it anytime soon.
In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy, while dismissing or neglecting others. Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as “the people have spoken.”