Nobody can know everything. That seems obvious, but from the Renaissance to the twenty-first century people have struggled with the challenging fact that the sum total of human knowledge is too large, too imperfectly distributed, and too complex for any one person to grasp. This is not to say that folks like Leonardo, Galileo, and Michelangelo didn’t make the attempt. They were, after all, Renaissance men, polymaths, children of the age of Gutenberg and movable type printing.
By the end of the fifteenth century it was clear that printed material would pile up faster than the most educated person could read it. Information overload was upon us. I recently re-read Vannevar Bush's "As We May Think" (The Atlantic, July 1945). Bush had this lament regarding the abundance of information and the inadequate tools for managing and retrieving all that good stuff:
So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before—for we can enormously extend the record; yet even in its present bulk we can hardly consult it…. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within [the walls of the library]; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene.
Selection, in this broad sense, is a stone adze in the hands of a cabinetmaker.
Since 1945 we've been working on replacing that stone adze with tools we can use to carve intention and meaning from the massive blocks of information that have piled up. The highly structured record-keeping systems of the "big iron" mainframe era have given way to the networked knowledge flows of web-based mobile computing. While the information will continue to pile up as long as there are people who read and write, calculate, theorize, and compute, we may be working our way out of the mindset of information overload. We may finally have reached a point where we can manage the massive scale of the crossed streams of data sizzling through the Internet.
Clay Shirky (author of Here Comes Everybody, Penguin Press, 2008) has his own perspective regarding the problem. He says that the “post-Gutenberg economics” of the web have driven down the cost of publishing to a point where there is no need to filter for quality before you publish. This “filter failure” at the front end of the publishing industry results in even more information being added to the growing mountain we must conquer just to stay current with our interests. “The filter for quality,” Shirky says, “is now way downstream from the site of production.” The free e-books at Amazon seem to prove his point. The publishers’ slush piles have been digitized and are waiting for you at Amazon! But the books are free and if you decide to discard them after reading a few pages, well… they’re easy to delete and they don’t take up any room in the landfill.
Shirky is at his best discussing social networks. He talks about privacy issues as outbound filtering problems, and email spam as an inbound filtering problem. Still, it seems to me that he doesn’t quite have a handle on the curation tools that are available, or just why they may be valuable as a way to repair the filter failures. This interview from 2010 with Steve Rosenbaum underscores what I’m talking about. Shirky’s perspective in his conversation with Rosenbaum pivots on the idea that curation is a practice where brute force people-power is substituted for programmed search. He uses Mahalo to underscore that. Mahalo, Jason Calacanis’ effort that Shirky references, was until quite recently a brute-force human research shop with little in the way of automated content generation.
In Here Comes Everybody (page 102) Shirky says, “Every webpage is a latent community. Each page collects the attention of people interested in its contents, and those people might well be interested in conversing with one another too. In almost all cases the community will remain latent, either because the potential ties are too weak, or because the people looking at the page are separated by too wide a gulf of time, and so on.” As far as it goes, this is powerful stuff!
In the movie Ghostbusters, Egon Spengler warns Dr. Peter Venkman against "crossing the streams" of their proton packs. "It would be bad," Spengler says. "Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light." Fortunately the effects of blended feeds and streams on Paper.li create happier outcomes. Material aggregated here shrinks Shirky's "gulf of time" and plays to groups with strong community ties: people with shared interests in the material being aggregated. But the blended stream can also be generated for personal relevance. You can blend information from Facebook, Google+, and Twitter with selected RSS feeds from your favorite blogs and news sources to create a daily paper for your own use and enjoyment.
Evren Kiefer, who publishes both a personal paper and a more tightly focused paper on content strategy, has this to say about personal publishing on Paper.li:
I have a lot of varied interests, this is also why my first paper doesn’t make much sense to anyone but me. When people tell me to be focused… I like to remind myself: “I am no sword. I am no laser. I am a man.”
The web is a workshop and a recreation space, a study hall, a studio, a conference room, a classroom, a library, and a whole lot more. Whatever use you want to make of the media it moves, whatever your interest in the information it contains, however you want to communicate with the others who use it, there are tool sets evolving that will truly separate the people of the twenty-first century from their Neolithic, stone-adze-wielding forebears. An age of networked knowledge is upon us, bringing with it the tools to conquer the mountain.