This timeline is provided to help show how the dominant form of communication changes as rapidly as innovators develop new technologies.
A brief historical overview: The printing press was the big innovation in communications until the telegraph was developed. Printing remained the key format for mass messages for years afterward, but the telegraph allowed instant communication over vast distances for the first time in human history. Telegraph usage faded as radio became easy to use and widely popular. While radio was being developed, the telephone quickly became the fastest way to communicate person to person. After television was perfected and content for it was well developed, it became the dominant form of mass-communication technology. The internet came next, and newspapers, radio, telephones and television are being rolled into this far-reaching information medium.
The public internet came along after four decades of television dominance and decades of private internet use and development. It arrived after hundreds of years of inventive thinking and groundbreaking theorizing, and it built on every bit of human intelligence that had come before. The key innovators were dozens of scientists whose work spans decades; the entrepreneurs were thousands of political leaders, policy wonks, technology administrators, government and commercial contractors, and even grassroots organizers.
In the early 1960s, J.C.R. Licklider, Leonard Kleinrock, Donald Davies, Paul Baran, Lawrence Roberts and other research scientists came up with the ideas that allowed them first to dream individually of, and eventually to come together and create, a globally interconnected set of computers through which everyone could quickly and easily access data and programs from any site.
The first group of networked computers communicated with each other in 1969, and ARPANET, the Advanced Research Projects Agency Network, became the start of the internet. Four U.S. universities were connected, forming a research system through which computer scientists began solving problems and building the potential for worldwide, online connectivity. ARPANET had its first public demonstration in 1972, the same year the first e-mail program was written by Ray Tomlinson. By 1973, the majority of internet use was e-mail discussion.
Vint Cerf and Robert Kahn came up with a streamlined networking standard, the Internet Protocol (IP), in the late 1970s. At the time, there were still only 188 host computers on the network, but IP brought new growth over the next few years. In 1984, a domain-name system was created, allowing the organization and classification of the world’s online sites. This address system, with top-level domains such as .com, .org and .edu, is still in use today; more have since been added.
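The hierarchy behind those domain names can be sketched in a few lines of Python (a minimal illustration only; the helper function below is invented for this example and is not part of the historical record):

```python
def domain_labels(name: str) -> list:
    """Split a domain name into its labels, most general first.

    The domain-name system reads right to left: the top-level domain
    (.edu, .org, .com) is written last but sits at the top of the
    hierarchy, with each label to its left one level more specific.
    """
    return list(reversed(name.lower().rstrip(".").split(".")))

# "www.example.edu" sits under the .edu top-level domain,
# inside the "example" zone, as the host "www".
print(domain_labels("www.example.edu"))  # ['edu', 'example', 'www']
```

Reading the labels from the top down mirrors how a name is actually resolved: first the top-level domain, then the registered domain beneath it, then the individual host.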
In 1991, the World Wide Web was developed by Tim Berners-Lee as a way for people to share information. The hypertext format available through his Web made the internet much easier to use because all documents could be seen easily on-screen without downloading. Mosaic, the first widely adopted browser, was introduced by Marc Andreessen in 1993; it enabled more fluid use of images and graphics online and opened up a new world for internet users.
In 1996, there were approximately 45 million people using the Internet. By 1999, the number of worldwide Internet users reached 150 million, and more than half of them were from the United States. In 2000, there were 407 million users worldwide. By 2004, there were between 600 and 800 million users (counting has become more and more inexact as the network has grown, and estimates vary).
The internet is a work in progress. While IP version 6 is now ready for implementation, some scientists, led by internet pioneer David Clark, are working toward a complete reinvention of the worldwide internet, starting from scratch. The project is expected to develop over the next decade.
After Berners-Lee brought his “World Wide Web” to life in 1990, and Andreessen launched Mosaic, the revolutionary browser, in 1993, the Internet had an estimated 16 million users by 1995, and venture capitalists were busy funding hundreds of new Internet-related business concerns. Individuals all over the world are now sharing their interests, hopes and dreams online, and the number of internet users is nearing a billion.
Thanks to the work of thousands of collaborators over the final four decades of the 20th century, today’s Internet is a continually expanding worldwide network of computer networks transporting myriad types of data. In addition to the names above, there were direct contributions from Ivan Sutherland, Robert Taylor, Alex McKenzie, Frank Heart, Jon Postel, Eric Bina, Robert Cailliau, Tom Jennings, Mark Horton, Bill Joy, Douglas Engelbart, Bill Atkinson, Ted Nelson, Linus Torvalds, Richard Stallman and so many others, some of them anonymous hackers or users, that it is impossible to include them all.
Wireless, satellite and broadband communications networks are helping people in even the most remote locations find ways to connect. Overcoming the initial concerns that commercialization would limit creativity or freedom of speech, the Internet has become a crazy-quilt mix of commercial sites, government information, and incredibly interesting pages built by individuals who want to share their insights.
The number of people making Internet pages continues to grow. As of mid-2004, more than 63 million domain names had been registered, approximately one for every 100 people living in the world.
Mondo 2000 editor R.U. Sirius (real name, Ken Goffman), as quoted in a 1992 article in the Bergen (N.J.) Record headlined “Unfolding the Future”:
“Who’s going to control all this technology? The corporations, of course. And will that mean your brain implant is going to come complete with a corporate logo, and 20 percent of the time you’re going to be hearing commercials?”
Peter Huber, a senior fellow at the Manhattan Institute, quoted in a 1992 Forbes article titled “An Ultimate Zip Code”:
“Combine GPS with a simple transmitter and computer … If you want to track migratory birds, prisoners on parole or – what amounts to much the same thing – a teenage daughter in possession of your car keys, you are going to be a customer sooner or later.”
David Porush, a professor at the Rensselaer Polytechnic Institute, in a 1992 speech for the Library and Information Technology Association:
“If cyberspace is utopian it is because it opens the possibility of using the deterministic platform for unpredictable ends … We might even grow a system large and complex and unstable enough to leap across that last possible bifurcation – autopoetically – into that strangest of all possible attractors, the godmind.”
Author and Wired magazine columnist Bruce Sterling, in a 1993 Wired article headlined “War is Virtual Hell”:
“The whole massive, lethal superpower infrastructure comes unfolding out of 21st-century cyberspace like some impossible fluid origami trick. The Reserve guys from the bowling leagues suddenly reveal themselves to be digitally assisted Top Gun veterans from a hundred weekend cyberspace campaigns. And they go to some godforsaken place that doesn’t possess Virtual Reality As A Strategic Asset, and they bracket that army in their rangefinder screens, and then they cut it off, and then they kill it. Blood and burning flesh splashes the far side of the glass. But it can’t get through the screen.”
Futurist Jim Dator, in a speech to the WFSF World Conference in 1993:
“As the electronic revolution merges with the biological evolution, we will have – if we don’t have it already – artificial intelligence, and artificial life, and will be struggling even more than now with issues such as the legal rights of robots, and whether you should allow your son to marry one, and who has custody of the offspring of such a union.”
Futurist Alvin Toffler, in a 1993 Wired article titled “Shock Wave (Anti) Warrior”:
“If we are now in the process of transforming the way we create wealth, from the industrial to the informational … the more knowledge-intensive military action becomes, the more nonlinear it becomes; the more a small input someplace can neutralize an enormous investment. And having the right bit or byte of information at the right place at the right time, in India or in Turkistan or in God knows where, could neutralize an enormous amount of military power somewhere else … Think in terms of families. Think in terms of narco-traffickers. And think in terms of the very, very smart hacker sitting in Tehran.”
John Perry Barlow, internet activist and co-founder of the Electronic Frontier Foundation, in a 1994 essay for Wired magazine titled “The Economy of Ideas”:
“We’re going to have to look at information as though we’d never seen the stuff before … The economy of the future will be based on relationship rather than possession. It will be continuous rather than sequential. And finally, in the years to come, most human exchange will be virtual rather than physical, consisting not of stuff but the stuff of which dreams are made. Our future business will be conducted in a world made more of verbs than nouns.”
Tom Maddox, in a 1994 article for Wilson Quarterly titled “The Cultural Consequences of the Information Superhighway”:
“The sharp-edged technology of the NII can cut a number of ways: It can enlarge the domain of the commodifiers and controllers; it can serve the resistance to these forces; it can saturate us all, controlled and controllers alike, in a virtual alternative to the real world. Meanwhile, most of humanity will live and die deprived of the wonders of the NII, or indeed the joys of adequate nutrition, medical care, and housing. We would do well to regulate our enthusiasms accordingly – that is, to remember where love and mercy have their natural homes, in that same material world. Otherwise we will have built yet another pharaonic monument to wealth, avarice, and indifference. We will have proved the technophobes right. More to the point, we will have collaborated to neglect the suffering of the damned of the earth – our other selves – in order to entertain ourselves.”
Nicholas Negroponte, in a 1995 column for Wired magazine titled “Wearable Computing”:
“How better to receive audio communications than through an earring, or to send spoken messages than through your lapel? Jewelry that is blind, deaf, and dumb just isn’t earning its keep. Let’s give cuff links a job that justifies their name … And a shoe bottom makes much more sense than a laptop – to boot up, you put on your boots. When you come home, before you take off your coat, your shoes can talk to the carpet in preparation for delivery of the day’s personalized news to your glasses.”
Greg Blonder, in a 1995 essay for Wired magazine titled “Faded Genes”:
“In 2088, our branch on the tree of life will come crashing down, ending a very modest (if critically acclaimed) run on planet earth. The culprit? Not global warming. Not atomic war. Not flesh-eating bacteria. Not even too much television. The culprit is the integrated circuit … By 2090, the computer will be twice as smart and twice as insightful as any human being. It will never lose a game of chess, never forget a face, never forget the lessons of history. By 2100, the gap will grow to the point at which homo sapiens, relatively speaking, might make a good pet. Then again, the computers of 2088 might not give us a second thought.”
Hans Moravec, as quoted in a 1995 article in Wired titled “Superhumanism”:
“The robots will re-create us any number of times, whereas the original version of our world exists, at most, only once. Therefore, statistically speaking, it’s much more likely we’re living in a vast simulation than in the original version. To me, the whole concept of reality is rather absurd. But while you’re inside the scenario, you can’t help but play by the rules. So we might as well pretend this is real – even though the chance things are as they seem is essentially negligible.”
The content on this page is an excerpt from Janna Quitney Anderson’s book “Imagining the Internet: Personalities, Predictions, Perspectives,” published by Rowman & Littlefield in 2005.