The following was originally written for a class blog for Dr. Martin Irvine’s Semiotics and Cognitive Technology course, offered by Georgetown University’s Communication, Culture, and Technology program.
There was a time, so I’m told, when using a computer meant much more than opening a laptop, clicking an icon, and suddenly being whisked away into the magical world of the internet. Yes, children, the elders speak of a time when using a computer meant first programming it: telling the machine what you wanted done before it would do anything at all. The consumers of these products were, of course, more interested in consuming than coding, so an obliging set of “producers” started doing the coding for them. Within a generation that arrangement has become so standard that I would wager fewer than 15% of computer users have ever seen computer code, much less interacted with it.
Considering all of the hype around computers and the internet in the past 20 years, especially the idea that this new technology represents a fundamental shift in human consciousness, it’s terrifying to think how poorly we as a society understand how computers work, and just as importantly, why they work the way they do.
This is especially crucial if we accept how significant an impact computers and the internet are having on ourselves and our society. In terms of storage, or cognitive offloading, computers are easily as important as the written word, if not more so: on a single hard drive we can store as much information as one might find in an entire library, and the drive can also do work on that information, transforming it into something else. Moreover, the impact of the computer on communication may prove as significant a development in human history as that of spoken language, now that we can communicate with millions of people at once in real time. At the individual level, our imperfect understanding of the way particular media affect cognition prevents any easy prediction of how computers will change how we think, but following theories of extended cognition and media ecology, we can at the very least expect significant consequences.
Previous landmark cognitive technologies, such as language and writing, were largely born out of community. Indeed, Wittgenstein argues that language cannot be private: it can only be developed and usefully performed in dialogue, with others. Written language came later and was somewhat more exclusive, often limited to the upper classes. Even then, the portion of a society that was literate (and could therefore have input into the formalization of the rules of literacy) would dramatically outnumber those who are “technologically literate” today.
The “technologically literate” individuals of the past 100 years are the creators of this new “language” of computing, by which I mean not only the authors of the code but the designers of the programs and the engineers of the hardware as well. These individuals are defining the Rules of the Game. It is, as Mahoney suggests, crucially important that we know not only who these individuals are but what values, histories, and agencies they have embedded in the technology we all use. Reading work by seminal computer scientists like Vannevar Bush and Doug Engelbart feels eerie, as if they were predicting the future, when in fact they weren’t just predicting: they were prescribing research paths that were dutifully followed. Their forefathers, Claude Shannon and Alan Turing, like their progeny, Bill Gates and Steve Jobs, have had and will continue to have similarly powerful legacies. I don’t think these men were evil, so I don’t know if they deserve our suspicion, but I argue that at the very least we should be wary of their impacts, because these technologies will literally shape the way we interact with the world for the next 100 years (although obviously, any time a group of affluent white men is poised to have an impact this significant, being outright suspicious wouldn’t be the worst thing).
This examination must be interdisciplinary: historical, sociological, philosophical, psychological. It requires that more individuals from all of these backgrounds critically engage with the technology itself. But most important to the goal of ensuring that the computational technology we use has our communal values and beliefs embedded in it is the spread of technological literacy. This means teaching children programming, certainly, as has already begun in many places, as well as encouraging diversity in programming schools and workplaces. Spreading technological literacy and obtaining the full value of computational technology will also mean encouraging the spread of these skills and this knowledge beyond the domain of engineering. Artists, musicians, historians, social scientists, and teachers could all benefit from basic programming skills, if only because it would improve their understanding of the tools they use on a day-to-day basis.
What we face is hegemony by the computer scientist, a subtle domination of our extended minds via the technology we have extended them into. The hope I see is in new programming languages created by individuals rather than capital-driven companies; open-source software meant to be toyed with, developed by many hands rather than handed down from on high; and user-generated content that defies manipulation or re-creation by marketing departments. We have already committed ourselves to this computing path, but before we go too far, I hope we can take stock of what we value before we lose it.