From motherese to machinese: computers and the new cybersoundscape
By Carlos Alberto Augusto © 1999
Present-day computers will be examined in a not so distant future with the same curiosity we now reserve for coups-de-poing, the stone hand-axes of prehistory. It is not difficult to predict that these machines will become so powerful that new, hitherto unimagined ways of using them will emerge: not simply new possibilities unattainable with today's technology, but a new, never before imagined paradigm. Our interfacing with these new machines will inevitably change. We will no doubt smile when we read about, or actually see, what has been presented to us as the revolutionary interfaces of the future. I believe a truly innovative interface will emerge, based almost completely on sound. New technology... based on sound. We are on the verge of a new oral culture.
Let this not fool you, though. Acoustic ecology enthusiasts may be excited by the prospect. But although the bedrock of the new cyberscape could be acoustically based, we could be facing effects more devastating than those created by noise pollution...
The hazardous soundscape of Cyberia may not even be measurable in decibels and hertz.
1st Movement: Moronic technology.
Ours has often been characterized as the age of the information revolution. That it is a revolution, comparable, say, to the industrial revolution, there seems to be little doubt. But let’s face it: computers are still at an early stage of development and remain less than satisfactory machines. Despite rising “clock speeds”, higher storage capacities, visual pyrotechnics and, above all, despite their growing ubiquity, computers remain slow and limited both in their usage and in the scope of their application domains. Despite what Wired magazine tried for years to persuade us of, and despite the hoopla that surrounds the launch of certain computer-related products, we have not seen a clear and radically new paradigm emerge from the use of computers. We have, in general, accommodated computers into our old categories. Their truly revolutionary character is yet to be explored.
Moreover, the present trend in the employment of this technology has a distinctly more fragmenting than integrating effect. Every day new “products” show up in the market, software, boards, external devices, each of which only seems to add to the confusion. And let’s not forget the cables needed to connect them all...
This situation is undoubtedly bound to change. We can expect ever greater integration of the different aspects of the computer's modus operandi, with less visibility of its infrastructure. As users we focus too much on the technical aspects of computers, which in turn severely affects their performance and the way we use them. Processing, transmission and storage of data are but technical operations. Computers are like the backstage of a theatre where a play is being performed: they should be out of sight, support the performance, and be flexible enough to accommodate new and different projects, instead of endlessly serving the good old play that has been running for thirty years and unacceptably shaping its content.
It doesn't seem unreasonable to expect dramatic changes in the future. It is my firm belief that the more these different aspects of computer operation become integrated and the more ubiquitous they become, the more sound will emerge as the key element in interfacing these technologies with human beings.
Why do I consider sound a superior, more practical means of controlling the computer? Hearing and speaking are mobile; sound is omnidirectional and continuous, and it reaches out to the recipient. And, paraphrasing Plato, hearing and speaking are located where thoughts are born, i.e., in the head.
2nd Movement: Hear my bit.
We communicate with computers through a set of devices invented way back when Aristotle defined the world for us. We type in data with keyboards because typewriters were advanced devices when computers were created. We verify our data visually on displays inspired by 1940s radar monitors. Most people are said to use their computers simply as sophisticated typewriters and calculators (for word processing, accounting, etc.), but the truth is that, in general, the computer paradigm puts a serious limit on the kind and number of activities we can perform with these machines. Simply “wearing” computers will not bring about any significant change. Who needs to dress up as a computer anyway? What we do need is a radically different metaphor.
Enter any office today and one hears a myriad of squeaks, boings and beeps of electronic origin. While you’re listening to the new CD you just bought, a “Squeak!” tells you you’ve got new email, “Bing!” the hard-drive copy has finished, “Eep!” you made a mistake. Using computers today indeed produces a great deal of noise. But people already seem to use sound to look beyond the computer.
If, on the other hand, one takes cellular phones, watches and a wealth of other electronic devices one notices that our world is already very much dependent upon electronically originated sound.
We have witnessed quite a serious change in the visual dependency of the computer interface. In the past six or seven years, speech recognition, sound and speech synthesis technologies have finally become more reliable and less of a marketing gimmick. The visually impaired, for instance, are now able to take full advantage of present-day computer technologies based solely on sound. New software products are being introduced which allow you, for example, to receive an acoustic version of your daily paper (not a radio news bulletin, but an acoustically synthesized version of the written text) over the net during your morning drive to work. This is not futurism. These are products offered today to satisfy today’s needs.
The only real limitation preventing all these new ideas from fully maturing seems to be the flaws of the present infrastructure: processing power, transmission speed and storage capacity. When and if these problems are adequately dealt with, be it with today's silicon/magnetic computer, an optical one, or some more or less quantum solution, we can anticipate dialoguing with the computer as easily as (or perhaps more easily than) we dialogue with each other today.
We see the rise of concepts such as sonification, audification, acoustic “interfaces”, etc. Specialists point out that in many circumstances, instead of looking at data, it would be advantageous to listen to it. The truth is that the full blossoming of these concepts is just a very few MHz away.
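To make "sonification" concrete, here is a minimal sketch of the idea of listening to data instead of looking at it. All the specific choices here (a pentatonic scale, a 220 Hz base pitch) are my own illustrative assumptions, not a standard from the sonification literature:

```python
# Minimal sonification sketch: map a data series onto pitches so that
# a trend can be heard rather than read. Mapping choices (pentatonic
# scale, 220 Hz base) are illustrative assumptions, not a standard.

def sonify(values, base_hz=220.0):
    """Map each value to a frequency: higher value -> higher pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    # Pentatonic degrees (in semitones) keep any contour listenable.
    scale = [0, 2, 4, 7, 9, 12]
    freqs = []
    for v in values:
        degree = scale[round((v - lo) / span * (len(scale) - 1))]
        freqs.append(round(base_hz * 2 ** (degree / 12), 1))
    return freqs

# A rising data series becomes a rising melody:
print(sonify([1, 3, 5, 9]))
```

Fed to a tone generator, such a mapping turns a column of numbers into a melodic contour: a stock price climbing, for instance, is heard as a rising phrase, without a single glance at a monitor.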
3rd Movement: The new orality, or Look, Ma! No hands.
Instead of looking at monitors and writing, in the not so distant future we will listen and speak to computers. Let me speculate a bit on this idea.
The acoustic paradigm will allow us, individually or collectively, to access any person anywhere in the world; to listen to, compose and record our thoughts; to access any text, speech, register or record.
Imagine being able to listen to “Alice in Wonderland” or the news, call dad, compose a message and actually send it, search the literature on nuclear fission and store it for later retrieval when addressing, say, an audience at a seminar. These data will be produced by direct acoustic interaction or artificially synthesized. All this without a single monitor, pen or mouse.
We will need only simple tools: a stereo headphone-and-microphone set. This set will replace both the cell phone and the computer terminal. We will also need acoustic pull-down menus and cursors, which will operate within the stereo acoustic space defined by the headset. All the data in the world, permanently available, anytime, anywhere! Through sound.
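One way to picture an acoustic pull-down menu: give each item its own position across the stereo field of the headset, rendered with constant-power panning, so that the listener hears "open" on the left and "quit" on the right instead of seeing them on a screen. A hypothetical sketch (the function names, menu items and even layout are mine, not drawn from any existing product):

```python
import math

def pan_gains(position):
    """Constant-power stereo panning: position 0.0 = hard left,
    0.5 = centre, 1.0 = hard right. Returns (left, right) gains
    whose squares always sum to 1, so loudness stays constant."""
    angle = position * math.pi / 2
    return (math.cos(angle), math.sin(angle))

def layout_menu(items):
    """Spread menu items evenly across the stereo field, so each
    item is heard at its own position rather than seen at its own
    pixel. A spoken cursor would then sweep across these positions."""
    n = len(items)
    return {item: pan_gains(i / (n - 1)) if n > 1 else pan_gains(0.5)
            for i, item in enumerate(items)}

gains = layout_menu(["open", "compose", "send", "quit"])
left, right = gains["open"]   # hard left of the stereo field
```

The constant-power law is the standard trick for placing a sound in a stereo field without a loudness bump at the centre; here it stands in for whatever spatialization the imagined headset would actually use.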
Moreover, thought, ours and everybody else’s, together with thought processes and their record, will become instantly available, right there close to its source (the head). Plato would love it! And at the same time it will become simultaneously scriptable and recordable. All data will flow permanently, wherever we are and whenever we need it, available to everybody at the same time. In the midst of this constant flow of data we will be able to access it constantly.
Problems will arise, of course. How well do we cope with the idea of speaking to no physical person most of the time, or of being attended by a machine? Answering machines posed problems when they first appeared: people complained that they would not leave a message on a machine. Talking ATMs or gas-station pumps sound stupid. Automatic email replies pose similar problems. But we constantly see people speaking on their cell phones, by themselves, alone, inside their cars or right in the middle of the street. The new interface will provide a lighter way of doing all this.
A simple password-protected phase-cancellation program will spare us the cacophony that would result from all this constantly flowing acoustic data, in addition to preventing us from accessing data directed at someone else. No more beeps and boings from your neighbor. Dealing with the computer through sound will be cleaner and far more convenient from the user's point of view.
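The phase-cancellation principle invoked here is physically simple: a signal added to its sign-inverted copy sums to silence. A toy numeric sketch of just that principle (no real audio I/O, and the framing of one tone as a neighbor's unwanted stream is purely illustrative):

```python
import math

def tone(freq, n=8, rate=8000):
    """A short sampled sine tone at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def cancel(mix, unwanted):
    """Remove an unwanted stream from a mix by adding its
    phase-inverted (sign-flipped) copy, sample by sample."""
    return [m - u for m, u in zip(mix, unwanted)]

wanted = tone(440)
neighbor = tone(1000)                      # someone else's beeps
mix = [w + n for w, n in zip(wanted, neighbor)]

cleaned = cancel(mix, neighbor)            # neighbor's stream vanishes
assert all(abs(c - w) < 1e-12 for c, w in zip(cleaned, wanted))
```

In practice cancellation only works this cleanly when the unwanted signal is known exactly, which is why the essay's imagined filter would need the password: it identifies which streams your headset is entitled to reconstruct, and everything else stays cancelled noise.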
The return of an oral civilization and an acoustically balanced world will await us thanks to the new computer metaphor. We’ll be mostly listening to the world around us.
4th Movement: Cyber-dreams and nightmares.
But wait! Do not recycle all your CRT and your liquid crystal monitors yet!
Will the acoustic metaphor be a dream turned into reality? In a way, yes. But a few disturbing thoughts also come to mind. Firstly, socializing with a machine, constantly speaking to it and receiving its replies (yes, globalization also means global loneliness, and these technologies could well aggravate the problem) may cause unforeseen disturbances. The voice of the machine could replace the real voices of other humans and the sounds of nature. Motherese could be replaced by “machinese”.
Finally, someone might find the microphone-and-headphone set cumbersome and suggest the introduction of small implants inside the head, specially tuned to excite the right centers of the brain, already set up with the right phase-cancellation routines and a password, and perhaps preprogrammed to access predetermined locations and resources according to your position and rank...
Coda: The darker harmonics.
Does the future of the acoustically driven computer look somber? Some science-fiction writers have depicted cities of the future punctuated by flames and fumes, drenched in constant rain, overpopulated and buried amid the most sinister debris. But that may well be a benign view of the future after all. Mike Davis points out in his "Ecology of Fear" that Blade Runner may very well be a benign and romantic vision, not consistent with what Los Angeles really is. The violent and potentially explosive “real” LA is not dark and shrouded in fumes; it is already there, in the outskirts of the city, shining in broad daylight.
Presumably, the acoustic computer is imminent. And the promise of a new orality, of a balanced and controlled acoustic environment, with sound playing a major role as the gauge of the human scale, seems so temptingly plausible.
But what if Big Brother is Homo Acusticus and is listening instead of watching you after all?
Electronic version 2002. Copyright Carlos Alberto Augusto ©2000.