For the past twenty years, I've thought, written, and talked about
the way computers interact with minds, societies, and reality. Because
I've lived in the place and during the era in which Silicon Valley
and cyberculture emerged, I've been able to chronicle the microchip's
transformation of human thought, culture, and governance as a participant
observer. The mind, community, and civilization that have been changing
as I've described them are my own mind, community, and civilization.
As the technologies I've used and studied have grown more powerful,
as my creative and professional work has become more enmeshed in
PCs, online communities, and mobile phones, and as the use of microprocessor-based
devices has changed fundamental aspects of the human world, my own
attitudes about these technosocial changes have undergone an evolution.
My opinions about the potential and danger of the always-on, smartifact-saturated,
hyper-mediated, pervasively surveilled world we're building have
grown darker and more complex over the years.
I first used electronic tools to explore consciousness in the
late 1960s. While I was in graduate school, studying neurophysiology,
I worked with an electrical engineer to build a portable biofeedback
machine. In 1968, brain researcher Joe Kamiya showed that the
brainwaves of Zen monks were characterized by "alpha waves,"
and that people were able to train themselves to produce more
alpha waves by listening to an audible tone linked to a brainwave-measuring
device. At my graduate school, the electroencephalograph (EEG)
was the size of a refrigerator. The engineer I worked with managed
to fit a transistorized version of an EEG machine into a box less
than half the size of a small refrigerator. And then one day in
the early 1970s he fit it all in the palm of his hand by using
a new gizmo called an "operational amplifier" that put
hundreds of transistors into a single chip. I didn't realize at
the time that I was witnessing the launch of Moore's law.
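The principle behind that palm-sized machine can be sketched in a few
lines of code: estimate the energy in the alpha band (roughly 8 to 12
Hz) over a short window of EEG samples, and close the loop with an
audible tone whenever that energy crosses a threshold. Here is a
minimal sketch in Python, using a synthetic signal in place of a real
amplifier; the sampling rate, band edges, and threshold are
illustrative assumptions, not the specifications of the device we
built:

    # Toy alpha-biofeedback loop: synthetic EEG in, "tone on/off" out.
    # All parameters are illustrative assumptions, not historical specs.
    import numpy as np

    FS = 256              # sampling rate, samples per second (assumed)
    ALPHA = (8.0, 12.0)   # alpha band, in Hz

    def alpha_fraction(window):
        """Fraction of the window's power that falls in the alpha band."""
        power = np.abs(np.fft.rfft(window)) ** 2
        freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
        in_band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
        return power[in_band].sum() / max(power.sum(), 1e-12)

    def feedback_loop(samples, threshold=0.3):
        """Slide a one-second window over the signal and 'sound the
        tone' (here, just print) when alpha power exceeds threshold."""
        for start in range(0, len(samples) - FS + 1, FS):
            frac = alpha_fraction(samples[start:start + FS])
            tone = "ON" if frac > threshold else "off"
            print(f"t={start // FS:2d}s  alpha={frac:.2f}  tone={tone}")

    # Synthetic subject: noise plus a 10 Hz rhythm that grows stronger
    # over time, standing in for someone learning to produce more alpha.
    t = np.arange(10 * FS) / FS
    eeg = np.random.randn(len(t)) + (t / 10.0) * 3.0 * np.sin(2 * np.pi * 10 * t)
    feedback_loop(eeg)

The feedback is the whole trick: the tone tells the subject, moment by
moment, whether whatever they are doing internally is working.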
I started writing professionally in 1973, using the kind of portable
mechanical typewriter that writers had used for most of the 20th
century. Buying my first electric typewriter was a big deal. Then
there were correcting typewriters. I could swap out the typewriter's
printing ribbon cartridge for a correcting cartridge, then type
over a mistake and cover it with white ink. When the microprocessor
came along, I read about a company in New Mexico that would send
you a home computer kit. You could make your own personal computer,
enter programs by flipping switches, and read the answers in
blinking lights. When the Apple I came along in 1976, I began to
hear rumors that people were finding ways to use computers to
write on television screens. You could erase, correct, and move
words and paragraphs automatically. The idea that such a thing
was possible set me off on an investigation that never ended.
I found a paper at a "Computer Faire" by a fellow named
Jef Raskin. Raskin (who later initiated the Macintosh project)
was working at Apple Computer. He had indeed programmed his own
writing software for the Apple II, but the Apple II could only
put upper-case letters on a screen because the hardware didn't
support lower case. I suspended my quest for a computerized writing
machine, but I never gave up the idea. Eventually, I discovered
that the device I had been imagining had been in existence for
nearly a decade, less than an hour's drive from my house.
A few years after my encounter with Raskin, I started writing
articles about a place called the Xerox Palo Alto Research Center,
known by those who worked there as PARC -- where the resident
wizards had been using computers to write with since the early
1970s. Their screens displayed not only lowercase letters but also
italics, different fonts in different sizes, and graphics integrated
with the text. Files were stored on hard disks the size of an extra-large
pizza box. Slick-looking documents could be printed on big laser
printers. All of these devices were custom-made at PARC. The first
word processor I used on the Alto, known as "Bravo,"
was the creation of a young programmer named Charles Simonyi,
who has since become a Microsoft billionaire as the architect
of Microsoft's version of Bravo, known as Word.
As soon as I sat down at the Alto, the grandparent of all personal
computers, used the mouse to highlight a paragraph on a high-resolution
black-and-white screen, took the heart-stopping action of deleting
the paragraph, then pointed to another part of the document and
pasted the paragraph back in from the cyber-limbo I had deleted
it to -- my writing life was changed irrevocably. I started driving
for forty minutes every morning in order to write my articles
on an Alto. In 1983, I took out my first bank loan to buy an IBM XT
with 64K RAM and a 5 megabyte hard disk.
While the popular press concentrated on the nascent PC industry
and people like the young Steve Wozniak, Steve Jobs, and Bill
Gates, I became interested in the small band of true believers
who had rebelled against the priesthood of mainframe computers
to create the first personal computers. Some of them, like Bob
Taylor, were still at PARC when I got there. I became fascinated
by, and wanted to tell the stories of, lesser-known but perhaps
more important figures: Taylor, Alan Kay, and their colleagues
at PARC, and Doug Engelbart at Stanford Research Institute,
whose work had created the foundation for personal computing.
Engelbart was explicitly interested in using computers, which
had been used only as scientific calculators or business data
processors before, as media for "augmenting human intellect."
Engelbart first conceived of the idea of using computers to extend
human thought and communication power back in 1950. It took him
until 1963 to find anyone to fund his research because the idea
of using computers as extensions of the mind was so preposterous.
Using a PC enabled me to write in ways and at a pace that had
not been possible for me before. When I started using the first
graphics programs to illustrate my own books, I understood that
this was truly changing the way I worked and created, not just
adding power to old ways of working, the way the electric typewriter
had. The first words of Tools for Thought: The History and Future
of Mind-Amplifiers, published in 1985, initiated my decades-long,
multiple-book chronicle of augmented minds, virtual communities,
smart mobs, and ubiquitous computing:
South of San Francisco and north of Silicon Valley, near the place
where the pines on the horizon give way to the live oaks and radiotelescopes,
an unlikely subculture has been creating a new medium for human
thought. When mass-production models of present prototypes reach
our homes, offices, and schools, our lives are going to change
dramatically.
The first of these mind-amplifying machines will be descendants
of the devices now known as personal computers, but they will
resemble today's information processing technology no more than
a television resembles a fifteenth-century printing press. They
aren't available yet, but they will be here soon. Before today's
first-graders graduate from high school, hundreds of millions
of people around the world will join together to create new kinds
of human communities, making use of a tool that a small number
of thinkers and tinkerers dreamed into being over the past century.
Nobody knows whether this will turn out to be the best or the
worst thing the human race has done for itself, because the outcome
of this empowerment will depend in large part on how we react
to it and what we choose to do with it. The human mind is not
going to be replaced by a machine, at least not in the foreseeable
future, but there is little doubt that the worldwide availability
of fantasy amplifiers, intellectual toolkits, and interactive
electronic communities will change the way people think, learn,
and communicate.
I can't say with certainty whether twenty years have shown that
the changes in our minds and lives made possible by computers
are the best or the worst thing we've done with microchip technologies,
but I can say with certainty that those amplifiers, toolkits, and
communities changed the way I think, learn, and communicate. While
I was working on Tools for Thought, I took another life-changing
step, but this time it wasn't a solo mind amplifier that emerged
when I plugged my computer into my telephone, but the ancestral
culture of today's social cyberspaces. The BBSs of the time drew
me in, but the truly transformative experience was my introduction
to the WELL, one of the first publicly available online communities.
Kevin Kelly talked me into writing the first article that used
the term "virtual community" for Whole Earth Review
in 1987.
In 1990, I wrote Virtual Reality about VR both as a technology
and a cultural metaphor. By that time, I was on my third-generation
Macintosh, and the rate of progress in PC technology made the
notion of immersive virtual worlds in the foreseeable future seem
plausible. Although I did point out in the book that affordable
computing power, graphic displays, and motion-detecting technologies
would take a decade or two to mature, the subculture that avidly
read Mondo 2000 magazine (and would later read Wired) adopted
the idea if not the actual working gear of VR as a symbol of the
era they yearned for, when consciousness-alteration could happen
through computer graphics instead of drugs. One critique of VR as
a cultural metaphor that I raised in the book, and that critics
subsequently extended at length, is the caution one ought to
exercise before leaving reality behind. In retrospect, the Mondo
and VR cults were the first stirrings
of what came to be known as "digital culture" when the
next magazine Kevin Kelly edited, Wired, started publishing in
1993. The critique and discourse about the wisdom of replacing
reality with digital simulacra have been healthy, and at
least some portion of those who consider themselves enthusiasts
for digital culture now take a more critical stance toward the
headlong rush into chip-world.
VR eventually will get to where it was envisioned in 1990, but
I'm not as certain as I used to be that it will be useful to more
than scientists, engineers, and techno-cultists. As a metaphor
for cyberculture, VR is now ancient. Since then, digital culture
has gone through the simulacra, virtual community, digital revolution,
dotcom, and smartmob metaphors and is now entering the first stirrings
of the metaphor that might swallow them all - the ubicomp zeitgeist.
In 1993, I published The Virtual Community. Drawing on my own
experiences since the 1980s with bulletin-board systems, an ancestral
online service called The Source, and the online conferencing
system that became the prototype for the virtual community I wrote
about - The WELL - I ended up surveying a new cultural landscape
that was much larger than the one I knew intimately. I dipped
into the parallel universes of Usenet, Fidonet, chat rooms, Multi-User
Dungeons, listservs, and even new ways of navigating the Internet
like gopherspace and WWW - each a rich social cyberspace in itself.
Again, the environment I wrote about, which was transforming as
I wrote about it, was the same milieu I found myself living and
working in. I wrote, with perhaps a bit too much enthusiast's
uncritical passion:
The virtual village of a few hundred people I stumbled upon in
1985 grew to eight thousand by 1993. It became clear to me during
the first months of that history that I was participating in the
self-design of a new kind of culture. I watched the community's
social contracts stretch and change as the people who discovered
and started building the WELL in its first year or two were joined
by so many others. Norms were established, challenged, changed,
reestablished, rechallenged, in a kind of speeded-up social evolution.
People in virtual communities use words on screens to exchange
pleasantries and argue, engage in intellectual discourse, conduct
commerce, exchange knowledge, share emotional support, make plans,
brainstorm, gossip, feud, fall in love, find friends and lose
them, play games, flirt, create a little high art and a lot of
idle talk. People in virtual communities do just about everything
people do in real life, but we leave our bodies behind. You can't
kiss anybody and nobody can punch you in the nose, but a lot can
happen within those boundaries. To the millions who have been
drawn into it, the richness and vitality of computer-linked cultures
is attractive, even addictive.
In 2000, I noticed the way people on the streets of Tokyo and
Helsinki were using telephones to send short text messages and
connected my casual if ubiquitous observations with the reports
I read of political demonstrations in the Philippines, self-organized
by text messages, that toppled the Joseph Estrada regime. My 2002
book, Smart Mobs, was about mobile communications, pervasive computing,
and collective action. Having lived through the PC and Internet
revolutions, I suspected that the intersection of mobile telephones,
Internet communications, and personal computing was beginning
to enable another new medium that built upon and amplified the
social impacts of chips and nets. I recalled how the key to the
PC was that it could be any kind of symbol machine you wanted
it to be, from spreadsheet to graphics toolkit - a medium that
savvy users could hack to their own advantage, and even use to
create industries. And I couldn't forget that the power of the
Internet as a social, political, economic, cultural medium derives
from the many-to-many capability of the Net - every desktop computer
that went online became a worldwide multimedia printing press,
town hall, broadcasting station, marketplace, social space. I
could see that the new medium had three obvious unique characteristics.
First, building computation and Net access into telephones untethered
computation and Internet capabilities from the desktop and freed
them to colonize all those parts of people's lives that happen
when we aren't sitting at our desks. Second, cell phones have
brought global connectivity to a significant fraction of the world's
population in just a few years: there are some 700 million people
on the Internet now, but 600 million mobile phones were sold in
2004 alone. The third and ultimately most potent potential of
smart mob technologies, I discovered, was that these devices and
the way people use them lower a key threshold for collective action,
enabling people to organize and coordinate social, political, economic,
and cultural actions in the face-to-face world, using the powers
of the online realm. People can organize
to act collectively in markets and mobs, create Wikipedias, Meetups,
and eBays, in ways they couldn't organize before, with people
they couldn't organize before, at scales they couldn't organize
before, in places and at a pace never before possible. I don't
think we've begun to absorb the meaning of the social changes
of the smart mob era, nor has it manifested its most influential
movements, industries, thinking tools or social media.
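One way to see why that lowered threshold matters so much is a toy
cascade model in the spirit of Granovetter's threshold models of
collective behavior; this sketch illustrates the general idea and is
not a model from Smart Mobs. Each person joins an action only after
enough others have visibly joined, and cheap many-to-many coordination
is modeled, crudely, as lowering everyone's threshold by a constant:

    # Toy threshold model of collective action (an illustration only).
    # Each person joins once the number of visible participants reaches
    # their personal threshold.
    import random

    def cascade(thresholds):
        """Return how many people end up participating at equilibrium."""
        joined = 0
        while True:
            now_joined = sum(1 for t in thresholds if t <= joined)
            if now_joined == joined:
                return joined
            joined = now_joined

    random.seed(1)
    population = 1000
    thresholds = [random.randint(1, population) for _ in range(population)]

    for discount in (0, 10, 50, 100):
        lowered = [max(0, t - discount) for t in thresholds]
        print(f"thresholds lowered by {discount:3d}: "
              f"{cascade(lowered):4d} of {population} participate")

With no discount nobody moves first and nothing happens; with even a
modest across-the-board reduction, the earliest joiners trigger the
next tier, who trigger the next, and participation can tip from almost
nobody to almost everybody. That tipping dynamic is, in miniature, the
difference cheap, mobile, many-to-many coordination makes.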
When I wrote Smart Mobs, I came to realize that the devices most
people in the world will carry and wear will not be the only significant
technological change in the built environment and public spaces.
I recall meeting Mark Weiser in 1990. As director of PARC's Computer
Science Laboratory, Weiser was the successor to Bob Taylor, who
had led me to write Tools for Thought. In other words, I had every
reason to take him seriously when he proposed that the more likely
future in the first decades of the 21st century would not be VR, in
which people put themselves into virtual worlds, but the very
reverse, in which tiny chips in everything from pencils to chairs
and walls would literally build computation into the physical fabric
of the world. He called it "Ubiquitous Computing," and
the technical power and ethical dilemmas Weiser raised in 1990
are beginning to manifest in 2004 as microsensor networks, smart
dust, and RFID tags. By the time trillions of chips permeate the
world, the way bar codes did decades ago but far more powerfully,
we'll be living in a very different cognitive and social
environment. What will future generations of toddlers think when
they grow up in a world in which the doorknob knows their name
and the bathroom won't let them drown and mommy and daddy always
know exactly where on earth they are? I referred to the term Mark
Pesce first used, "techno-animism," to try to hint at
the change that is likely when the urban environment is permeated
by invisible, interconnected smartifacts.
Right now, I'm interested in what I called in Smart Mobs "technologies
of cooperation," and the theoretical underpinnings of collective
action. We know far more about how to create technologies for
amplifying collective action than we know about the dynamics that
enable or prevent people from doing things in groups. I remain
concerned about the part of the equation where human thought and
action can make a difference - the ways we choose to use the technologies
capitalism, computer science, and Moore's law have brought us.
If there ever was a time in which it was crucial for non-technologists
to understand the capabilities of emerging media, to debate the
social and political meaning of these new tools, and to explore
ways in which individual and collective action can influence the
way these technologies affect our lives, that time has come.
Writer and thinker Howard Rheingold has been one of the essential
intellectual reference points of digital culture for the last 20 years.
Texts such as "Virtual Reality," which predicted the
rise of the electronic simulacra era, and the seminal "The
Virtual Community," one of the first studies of the
Internet as a new social space, have made him one of the most influential
experts on the social and cultural effects of the technological
explosion. Having been the foremost chronicler of the personal
computing revolution in the mid-eighties and of the Internet
in the mid-nineties, Rheingold thinks that we are now living through
a third wave of change, as the convergence of the Internet
and mobile technology makes it possible for thousands of people
to cooperate and act collectively in real time, with unforeseeable
effects. His latest book is "Smart Mobs" (2002).
Text originally published in ArtFutura's 2004 catalog.