Wednesday, May 9, 2012

No Comment
We live in a world where our default state is to be permanently connected. It is up to us to make sure that it doesn’t consume us, says columnist Tom Chatfield.

The story of human relations with computers is one of increasing intimacy. Since the very first electronic computers emerged in the 1940s, they have made remarkable progress: from room-sized mechanisms, incomprehensible without an advanced degree, to intuitive handheld devices that function more like extensions of our minds than conventional tools.
As Gordon Moore predicted in his eponymous “law”, computing power has roughly doubled every 18 months since the invention of the integrated circuit in the late 1950s. At the start of the 1970s, a computer chip held a couple of thousand transistors; today, the count is more often in the billions, and it is still rising. It’s becoming increasingly clear, in fact, that many of the most crucial limiting factors in modern computing are no longer related to speed, cost, capacity or connectivity, but rather to us – and the all-too-human limitations of our capacities for attention, engagement and action.
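The compounding behind that claim is easy to sanity-check. The short Python sketch below is my own illustration rather than anything from the column: it assumes a starting count of roughly 2,300 transistors in 1971 (about the level of an early microprocessor) and projects it forward to 2012 under two commonly cited doubling periods, 18 months and two years.

    # Back-of-the-envelope projection of a chip's transistor count under
    # exponential doubling. Illustrative only: the 1971 starting figure
    # (~2,300 transistors) and the doubling periods are assumptions.

    def projected_transistors(start_count, start_year, end_year, doubling_years):
        """Compound a starting transistor count forward by repeated doubling."""
        doublings = (end_year - start_year) / doubling_years
        return start_count * 2 ** doublings

    for doubling_years in (1.5, 2.0):  # 18 months vs two years
        count = projected_transistors(2_300, 1971, 2012, doubling_years)
        print(f"Doubling every {doubling_years} years: ~{count:,.0f} transistors by 2012")

Run as written, the two-year case comes out at roughly three billion transistors, in line with the “counted in the billions” figure above, while the 18-month case compounds to several hundred billion.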
Consider what technology has done to the human experience of time. As an increasing body of research suggests, for the first time in human history we are starting to spend the majority of our waking hours “plugged in” to some form of digital device. American teens now spend more than 10 hours each day consuming media of some kind, when multi-tasking is taken into account – a figure towards which the rest of the world is inexorably creeping.

Good vs bad

Inevitably, the inverse of this is also true. “Unplugged” time – when we are not using or consuming media of some kind – now represents a minority of our waking hours. Thanks to the increasingly intimate role technology plays in our lives, the very definition of our normal state – our default experience of the world and each other – is shifting.

It is this kind of observation that has led Paul Miller of technology blog The Verge to declare that he is “leaving the internet for a year”. “I feel like I’ve only examined the internet up close. It’s been personal and pervasive in my life for over a decade, and I spend on average 12+ hours a day directly at an internet-connected terminal (laptop, iPad, Xbox), not to mention all the ambient internet my smartphone keeps me aware of,” he writes. “Now I want to see the internet at a distance.”

Whether Miller’s experiment tells us anything interesting about life in the connected age is an open question. Gaining perspective is a fine idea in principle. To me, though, it seems dangerously likely to reinforce a false dichotomy: the belief that offline time is inherently “better” than online, and that grappling with modern living means a battle between “good” quality time spent away from technology and “bad” quality time spent using it.

Such a dichotomy helps no-one. And it risks obscuring one increasingly urgent question: not whether there’s some magic formula for balanced modern living, but what it means to make good use of both offline and online time in our lives, treating each as a valuable, distinct resource offering different but equally fertile opportunities for action and interaction.

This last question was a large part of the impetus behind my most recent book, How to Thrive in the Digital Age. Time is one of the book’s central preoccupations – in particular, the attempt to better understand the different kinds of time in our lives associated with technology.

The resources my “plugged in” self is able to call upon are easy enough to enumerate. Linked to the world’s hive mind, I have staggering research and communications capabilities. I can search for information – or ask others, and explore what they have done – in seconds. I can co-ordinate efforts, collaborate and exchange ideas with lightning speed. I can find more information on just a handful of websites than many libraries contained a century ago.
