How Library Stuff Works videos

During my co-op at McMaster, I had the opportunity to make video tutorials for the library website. I created four videos using different media creation and editing tools, mainly Camtasia Studio, PowToon and VideoScribe. While I am technologically savvy, I had no prior experience with these tools and had to learn them from scratch. I was able to pick up the necessary skills quickly and even start experimenting with different video styles. Here are the four videos I made:

Exploring the Library Website

This is a simple screencast with voice-over showing how to navigate the library website. In this first effort, I learned a lot about using Camtasia to record and edit a screencast. I actually found the voice-over the most challenging part: I had to speak at the right speed and volume while staying as clear as possible. It was quite a learning experience!

Scholarly vs Popular Sources

The second video was quite different from the first. While I used Camtasia to edit and do the voice-over, I created the actual video using PowToon, a web application that lets you create animated videos and presentations. I tried to experiment with a more upbeat style here.

Innis Library Virtual Tour

This video was made in close collaboration with the librarian and staff at the Innis business library. One of the Innis staff members, who has experience and training in voice-over work, provided the audio, and I created the video in Camtasia, using pictures of the library that I took myself. With the staff’s help and feedback, I was able to create a video that showcases their library, and it was rewarding to know how much they liked it.

Search vs Research

By the time I was making my fourth video, I wanted to do something different. Most videos in the How Library Stuff Works series were made with Camtasia and PowToon, so I was looking to add more variety. After some research, I decided to try VideoScribe, which lets you create whiteboard-style animation videos. It was easy to use and didn’t take long to learn, and I had a lot of fun being more creative!

I learned a lot from making these four very different videos, and I think it is important for librarians to experiment with different tools to help our patrons learn more effectively. In this vein, I also created some infographics using Piktochart.

I also made an orientation/promotional brochure for the business library using Lucidpress.

Most of these tools are free to use or try and easy to learn. I enjoyed creating the videos and infographics, but I also learned that they are only as useful and effective as your willingness to implement them in your instruction, on your website and in your interactions with patrons and students at the reference desk. One of the things I am continuously interested in exploring is how to promote our libraries and the tools we create so that they actually get used. We shouldn’t just create content for its own sake; we should create content with purpose and with the intention that it be widely used by our patrons.

Reading and writing in the digital age

In my previous post, I summarized some of the major critiques of the Internet as a medium of distraction that undermines our ability to read closely and think deeply. As Carr argues in The Shallows, “[w]hen the Net absorbs a medium, it re-creates that medium in its own image.”[1] When a printed book is transferred to the Internet, its words “become wrapped in all the distractions of the networked computer” with hyperlinks and other interactive features that encourage distracted and fragmented reading.[2] For Carr and other critics like him, there is a strong connection between the attentive reading and deep thinking fostered by the printed book, and they warn that we need to be wary about what the Internet is doing to our brains and, explicitly or implicitly, to our way of life.

In contrast, many proponents of the Internet see this kind of skepticism as neo-Luddism. Shirky, in particular, rejects the notion that deep thinking is tied to the type of reading advocated by Carr, noting that “there are a host of people, from mathematicians to jazz musicians, who practice kinds of deep thought that are perfectly distinguishable from deep reading.”[3] Shirky argues that the concerns raised by Carr focus on “literary reading, as a metonym for a whole way of life.”[4] He believes that such skepticism arises from “cultural anxiety” as our society shifts away from literary/print culture to techno/digital culture.[5] While Shirky’s claim that “no one reads War and Peace [because it’s] too long, and not so interesting” might seem philistine, if not downright obnoxious, to people who love literature (and War and Peace), he is right that the debate about the Internet and the printed book is fundamentally about culture.[6] It is telling that Carr calls a book reader’s brain a “literary brain”[7] as he laments the erosion of “our rich literary tradition” and “the book ethic” by the Internet: “After 550 years, the printing press and its products are being pushed from the center of our intellectual life to its edges.”[8] Similarly, underneath Birkerts’ fear about replacing context with access through “a one-stop outlet” such as Wikipedia and Kindle is his palpable cultural anxiety:

I see in the turning of literal pages—pages bound in literal books—a compelling larger value, and perceive in the move away from the book a move away from a certain kind of cultural understanding, one that I’m not confident that we are replacing, never mind improving upon.[9]

For Birkerts, Carr and many others, printed books are cultural objects inseparable from a literary tradition that values long-form, linear reading and writing and private contemplation. However, Weinberger argues that “we’ve elevated private thought because of the limitations of writing [physical books],” a mostly solo, isolated act.[10] Using Jay Rosen’s site PressThink.org as an example, he notes that Rosen’s blog series on the Internet and journalism constitutes a long-form argument (stemming from deep thinking):

The six posts combined contain 110,000 words, which make the series longer than The Shallows, and about one and a half times as long as this book. Of those 110,000 words, Rosen wrote only 15,000—about twice the length of this chapter. The rest are comments left by readers. Rosen’s posts, however, attract high-quality comments.[11]

In this case, the acts of writing and thinking are public, interactive, collaborative and evolving, and Weinberger argues that this new type of writing and reading offers unique advantages over the traditional reading and writing based on the printed book.

Esposito points out that we have come to associate a well-thought-out argument or story with the length of a physical book because printed books have been our primary intellectual medium for more than 500 years. Thus, “the accident of the convenient size of a single volume has served to create an arbitrary image of an intellectual category.”[12] Calling the printed book “the primal book,” Esposito argues that its most important aspect is “its air of authenticity,” the sense that “such a book is a bit of the inner life of the author brought into the world for all to admire.”[13] However, the Internet disrupts this sense of authenticity because “in electronic form, anything goes” – e-books are not bound by pages, nor is authenticity (and consequently, authority) associated with ink and paper;[14] the very notion of authority/authenticity has become unstable. What is gained in place of uncertain authority/authenticity is the networked world of ideas embodied in what Esposito calls “the processed book” – “the book where the primal utterance of the author gives rise to hyperlinks and paralinks and neural networks and whatever other kinds of connections and cross-connections computer scientists come up with.”[15]

In the context of the processed book, therefore, “the primal book doesn’t disappear; rather, it is stripped of its air of being a vital expression of a human being and is reduced to its text.”[16] Such a non-humanist approach to text would probably appall Carr and Birkerts, who see books as the epitome of human thought arising from our collective intellectual history in a linear, orderly fashion. Carr warns in his book that “[c]hanges in reading style will also bring changes in writing style as authors and their publishers adapt to readers’ new habits and expectations.”[17] For Carr, a printed book is finished and “indelible,” and “the finality of the act of publishing” has fostered great writing by pushing writers to “write with an eye and an ear toward eternity.”[18] For others, however, the impermanence of electronic text and the demythologizing of authenticity/authority offer new possibilities:

Words very well might not only be written to be read but rather to be shared, moved, and manipulated, sometimes by humans, more often by machines, providing us with an extraordinary opportunity to reconsider what writing is and to define new roles for the writer. While traditional notions of writing are primarily focused on “originality” and “creativity,” the digital environment fosters new skill sets that include “manipulation” and “management” of the heaps of already existent and ever-increasing language.[19]

Perhaps it’s true that our brains are changing, but in the end, the debate between the Internet and the printed book is not so much about our changing brains as about our changing culture, in which the traditional intellectual values are losing their hold. Carr believes that “the Net is sapping us of a form of thinking—concentrated, linear, relaxed, reflective, deep—that [he] see[s] as central to human identity and, yes, culture.”[20] Shirky, on the other hand, is optimistic that the “cultural sacrifice in the transformation of the media landscape” would be worth what we gain from the Internet and the networked abundance it offers.[21] Maybe “cultural sacrifice” is inevitable, but technological change doesn’t always have to result in “winners and losers,” as Postman claims in “Informing Ourselves to Death.”[22] People find ways to adapt and negotiate between the old and the new.

Hayles, for example, argues that we need to develop both deep attention and hyper attention in the Information Age. She defines deep attention as characterized by “concentrating on a single object for long periods (say, a novel by Dickens), ignoring outside stimuli while so engaged, preferring a single information stream, and having a high tolerance for long focus times.”[23] Hyper attention, on the other hand, is characterized by “switching focus rapidly among different tasks, preferring multiple information streams, seeking a high level of stimulation, and having a low tolerance for boredom.”[24] Hayles argues that hyper attention is necessary in information-intensive digital environments because it enables the reader to switch between different information streams and quickly grasp the gist of material; without deep attention, however, we wouldn’t be able to solve complex problems or understand challenging literary works.[25] Both modes of attention are acquired skills, and it is quite possible that we are already learning to switch between them depending on what and why we read, in print or otherwise.

Bibliography

Birkerts, Sven. “Resisting the Kindle.” The Atlantic, March 2009. http://www.theatlantic.com/magazine/archive/2009/03/resisting-the-kindle/307345/.

Carr, Nicholas G. The Shallows: What the Internet Is Doing to Our Brains. New York: W.W. Norton, 2010. http://www.torontopubliclibrary.ca/detail.jsp?Entt=RDM2644019&R=2644019.

———. “Why Skepticism Is Good: My Reply to Clay Shirky.” Britannica Blog, 2008. http://blogs.britannica.com/2008/07/why-skepticism-is-good-my-reply-to-clay-shirky/.

Esposito, Joseph. “The Processed Book.” First Monday 8, no. 3 (March 3, 2003). http://ojs-prod-lib.cc.uic.edu/ojs/index.php/fm/article/view/1038.

Goldsmith, Kenneth. Uncreative Writing: Managing Language in the Digital Age. New York: Columbia University Press, 2011.

Hayles, N. Katherine. “How We Read.” In How We Think: Digital Media and Contemporary Technogenesis. Chicago: The University of Chicago Press, 2012. http://raley.english.ucsb.edu/wp-content2/uploads/Hayles-HWT.pdf.

———. “Hyper and Deep Attention: The Generational Divide in Cognitive Modes.” Profession, 2007, 187–199.

Postman, Neil. “Informing Ourselves to Death,” 1990. http://faculty.cbu.ca/rkeshen/other%20courses/222/overheads/Second%20Term/technology/Informing%20Ourselves%20to%20Death.pdf.

Shirky, Clay. “Why Abundance Is Good: A Reply to Nick Carr.” Britannica Blog, 2008. http://blogs.britannica.com/2008/07/why-abundance-is-good-a-reply-to-nick-carr/.

———. “Why Abundance Should Breed Optimism: A Second Reply to Nick Carr.” Britannica Blog, 2008. http://blogs.britannica.com/2008/07/why-abundance-should-breed-optimism-a-second-reply-to-nick-carr/.

Weinberger, David. Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room. New York: Basic Books, 2011.

Notes

[1] Nicholas G. Carr, The Shallows: What the Internet Is Doing to Our Brains (New York: W.W. Norton, 2010), 105.

[2] Ibid., 121.

[3] Clay Shirky, “Why Abundance Is Good: A Reply to Nick Carr,” Britannica Blog, 2008, http://blogs.britannica.com/2008/07/why-abundance-is-good-a-reply-to-nick-carr/.

[4] Ibid.

[5] Clay Shirky, “Why Abundance Should Breed Optimism: A Second Reply to Nick Carr,” Britannica Blog, 2008, http://blogs.britannica.com/2008/07/why-abundance-should-breed-optimism-a-second-reply-to-nick-carr/.

[6] Shirky, “Why Abundance Is Good: A Reply to Nick Carr.”

[7] Carr, The Shallows, 80.

[8] Ibid., 92.

[9] Sven Birkerts, “Resisting the Kindle,” The Atlantic, March 2009, http://www.theatlantic.com/magazine/archive/2009/03/resisting-the-kindle/307345/.

[10] David Weinberger, Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room (New York: Basic Books, 2011).

[11] Ibid.

[12] Joseph Esposito, “The Processed Book,” First Monday 8, no. 3 (March 3, 2003), http://ojs-prod-lib.cc.uic.edu/ojs/index.php/fm/article/view/1038.

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] Ibid.

[17] Carr, The Shallows.

[18] Ibid., 201.

[19] Kenneth Goldsmith, Uncreative Writing: Managing Language in the Digital Age (New York: Columbia University Press, 2011).

[20] Nicholas G. Carr, “Why Skepticism Is Good: My Reply to Clay Shirky,” Britannica Blog, 2008, http://blogs.britannica.com/2008/07/why-skepticism-is-good-my-reply-to-clay-shirky/.

[21] Shirky, “Why Abundance Is Good: A Reply to Nick Carr.”

[22] Neil Postman, “Informing Ourselves to Death,” 1990, http://faculty.cbu.ca/rkeshen/other%20courses/222/overheads/Second%20Term/technology/Informing%20Ourselves%20to%20Death.pdf.

[23] N. Katherine Hayles, “Hyper and Deep Attention: The Generational Divide in Cognitive Modes,” Profession, 2007, 187–199.

[24] Ibid.

[25] N. Katherine Hayles, “How We Read,” in How We Think: Digital Media and Contemporary Technogenesis (Chicago: The University of Chicago Press, 2012), http://raley.english.ucsb.edu/wp-content2/uploads/Hayles-HWT.pdf.

What’s in a book?

In this classic Norwegian comedy sketch, a “medieval helpdesk” worker teaches a monk how to use a revolutionary new technology, the codex book – a form of document featuring leafed pages bound between durable hard covers. What makes the sketch funny to a modern audience is the absurdity of someone not knowing how to flip a page or open a book, actions so familiar to us that they seem instinctual. Yet they were anything but natural when the codex first arrived in the third or fourth century AD, a period in which the scroll was the primary technology for written words. The sketch reminds us that each arriving technology brings a period of adaptation (and often resistance), and what we consider ordinary now was once new and extraordinary.

Information technology has come a long way since the days of scrolls, but every new medium creates not only new habits and reflexes but also new ways of engaging with and thinking about information. The transition from oral tradition to writing made possible the “switch from a predominantly narrative mode of thought to a predominantly analytic or theoretic mode” (quoted in Wright, 2007), while the invention of the printing press changed “our sense of when we are—the ability to see the past spread out before one” (Gleick, 2011). The Internet has certainly changed our lives profoundly, to the extent that we call the present epoch the Digital Age, or the Information Age, with the understanding that the Net is where we get most of our information. It changes how we perceive and use information and contributes to our sense of information overload – the sense of at once having too much and not having enough. It is quickly becoming a “universal medium” permeating every aspect of our lives:

The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV. (Carr, 2008)

The Internet hasn’t just changed the nature of human thought but possibly the very structure of our brains.

In his article “Is Google Making Us Stupid?,” which later grew into the best-selling book The Shallows, Nicholas Carr (2008) describes “an uncomfortable sense that someone, or something, has been tinkering with [his] brain, remapping the neural circuitry, reprogramming the memory.” He blames the Internet for his declining ability to concentrate on a book or a lengthy article, as his mind now “expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.” Citing various studies in The Shallows, Carr (2011) argues that the Web is “rewir[ing] our mental circuits,” remolding them into what he calls “The Juggler’s Brain”:

The Net’s cacophony of stimuli short-circuits both conscious and unconscious thought, preventing our minds from thinking either deeply or creatively. Our brains turn into simple signal-processing units, quickly shepherding information into consciousness and then back out again. (Carr, 2011)

For Carr, hypertext reading, as opposed to the linear reading represented by print books, has diminished our reading comprehension and undermined our capacity for deep attention and contemplation.

The mental functions that are losing the “survival of the busiest” brain cell battle are those that support calm, linear thought—the ones we use in traversing a lengthy narrative or an involved argument, the ones we draw on when we reflect on our experiences or contemplate an outward or inward phenomenon. (Carr, 2011)

By choosing to embrace the Internet, “we have rejected the intellectual tradition of solitary, single-minded concentration, the ethic that the book bestowed on us” (Carr, 2011).

Carr is far from the only person who wants to take back reading from the Web. Birkerts (2010) warns of “the wholesale implementation and steady expansion of the externalized neural network: the digitizing of almost every sphere of human activity.” Comparing contemplative and analytic thought as embodied in the novel and the Internet respectively, Birkerts argues that “information is nothing without its contexts,” and that contemplation, which is “thinking for its own sake,” as opposed to “the kind that would depend on a machine-driven harvesting of facts toward some specified end,” is essential for illumination and insight. The concentration required for deep thinking, however, is becoming harder and harder to come by in the age of the Internet. It has to be consciously fought for against “the encroachment of the buzz,” the distraction and noise in our “over-networked culture” perpetuated by the Web (Ulin, 2009).

Many critiques of the Internet share a common thread: they are all concerned with preserving “the mental space for the book” (Hari, 2011). Here, ‘the book’ represents more than just a physical object; it epitomizes a long-treasured way of reading, thinking, and even living. It is a culture and a history stretching back to the invention of movable type, and the Internet seems to be tearing it down, byte by byte.

Anxiety over changing technology is nothing new. In Phaedrus, Plato warns that writing “will create forgetfulness in the learners’ souls” and gives only a “semblance of truth” compared to speech, “the living word of knowledge which has a soul, and of which written word is properly no more than an image.” Johannes Trithemius rails against printing in favour of hand-copying in In Praise of Scribes, written in 1492, when printing was being rapidly adopted throughout Europe. For him, it is “the scribes who lend power to words and give lasting value to passing things and vitality to the flow of time” while “[t]he printed book is made of paper and, like paper, will quickly disappear.” Of course, paper turned out to outlast the scribes.

In his 1936 essay “The Storyteller,” Walter Benjamin (2006) laments the rise of information at the expense of storytelling, which is steeped in an oral tradition that emphasizes communicable shared experiences. Instead of allowing the story to take hold in memory and transcend the immediacy of the now, information is concerned with “prompt verifiability,” with what is immediately relevant and useful.

Every morning brings us news of the globe, and yet we are poor in noteworthy stories. This is because no event comes to us without being already shot through with explanation. In other words, by now almost nothing that happens benefits storytelling; almost everything benefits information. (Benjamin, 2006)

It is not hard to see echoes of Benjamin’s complaints in many critiques of the Internet today. Ironically, Benjamin is as critical of the novel as of information. For him, the novelist is too isolated and individualistic to impart insight or wisdom to his reader, who is equally isolated in his reading experience. Like those who romanticize the private experience of reading a print book today, Benjamin is nostalgic for the public, shared experience of oral storytelling.

It is perhaps human nature to want to hold on to what is familiar and what we have come to accept as fundamental to our edification, if not our very existence. This pattern, however, does not negate the valid concerns and anxieties about technology, past or present. How many of us can relate to Carr and his fear that the Internet has made us less thoughtful and more prone to distraction? How many of us have spent less time reading books and more and more time on the Web with its inexhaustible stimuli and content? Just like the confused monk who is afraid that he might “lose any of the text” in the Medieval Helpdesk video, we share the feeling that we might be losing something important by completely embracing the Web. That is the nature of technology; nothing is gained without something lost. It is up to each of us as individuals, and to society as a whole, to decide whether what is in danger of being lost is worth keeping, and for many people, what the traditional reading experience represents – something akin to a spiritual experience transcending space and time – is worth holding on to.

In my next post, I will look at the other side of the argument: what we gain reading on the Internet as well as the potential of digital writing. Now I will leave you with this video on the latest technological innovation.

References

Benjamin, W. (2006). The storyteller: Reflections on the works of Nikolai Leskov. In The novel: An anthology of criticism and theory 1900–2000. Malden, MA: Blackwell Publishing.

Birkerts, S. (2010). Reading in a digital age. The American Scholar. Retrieved from https://theamericanscholar.org/reading-in-a-digital-age/#.V3_1nFQrK71

Carr, N. (2008, July/August). Is Google making us stupid? The Atlantic. Retrieved from http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/

Carr, N. (2010). The shallows: What the Internet is doing to our brains. New York: W.W. Norton.

Gleick, J. (2011). The information: A history, a theory, a flood. New York: Knopf Doubleday Publishing Group.

Hari, J. (2011, June 24). How to survive the age of distraction. The Independent. Retrieved from http://www.independent.co.uk/voices/commentators/johann-hari/johann-hari-how-to-survive-the-age-of-distraction-2301851.html

Plato. (n.d.). Phaedrus. The Internet Classics Archive. Retrieved July 18, 2016, from http://classics.mit.edu/Plato/phaedrus.html

Trithemius, J. (1492). From In Praise of Scribes (De Laude Scriptorum). Retrieved from http://williamwolff.org/wp-content/uploads/2009/06/TrithemiusScribes.pdf

Ulin, D. L. (2009, August 9). The lost art of reading. Los Angeles Times. Retrieved from http://www.latimes.com/entertainment/arts/la-ca-reading9-2009aug09-story.html

Wright, A. (2007). Glut: Mastering information through the ages. Ithaca, NY: Cornell University Press.

Swimming in the sea of information

Information Age

[Photo: a quotation by Mark Weiser on display in the Information Age Gallery, London Science Museum]

The above photo was taken at the Information Age Gallery in the London Science Museum during my visit to London, England, in May. As its website describes, the exhibit aims to celebrate “more than 200 years of innovation in information and communication technologies.” I visited the gallery in the hope of gaining some insight for my summer independent study on information overload and distracted reading, and Mark Weiser’s quote struck me as relevant as ever in today’s so-called Information Age or Digital Age, in which the Internet has become an integral part of modern life.

The exhibit was divided into six sections showcasing the evolution of information technologies, from the wired cable to today’s wireless web and mobile phones. Using creative media installations, it told the uplifting story of the major developments in technology networks (literally, as information technology went from earth-bound wires to satellites in space). Two things jumped out at me as I looked through the exhibit: the focus on the term “network” (as underlined in its name, Information Age: Six Networks that Changed Our World) and the unbridled optimism about the ever-expanding and ever more complex information network precipitated by the digital revolution. The optimism is perhaps not surprising considering that the exhibit was partly sponsored by technology firms such as Google, but in reality there has been an ongoing debate about the negative effects of digital technology on many aspects of our lives, ranging from privacy issues and the commodification of information to our changing reading habits (and even brains). However, the exhibit did provide a springboard for me to start thinking about information as a networked system and its ramifications for how we consume, create and perceive information, for better or worse.

Information Overload

Babel (2001) by Cildo Meireles, a tower of radios all playing at once, addresses ideas of information overload and failed or thwarted communication. Taken at the Tate Modern, 2016.

One of the ramifications of a sprawling information network, as embodied in the Web, is the sense of information overload. Often brought up in discourses about the negative side of the Information Age, information overload seems intuitively a very contemporary issue, caused by modern information technology that feeds us more information than we can ever consume. However, the feeling of being overwhelmed with too much information is hardly a new phenomenon. As Ann Blair (2010) notes, “[c]omplaints about information overload, usually couched in terms of the overabundance of books, have a long history,” dating all the way back to the first century AD, when the Roman philosopher Seneca complained that “the abundance of books is distraction.” Others have since blamed ‘book overload’ for everything from our inability to “learn anything from books” to the fall of civilization (Weinberger, 2011).

Alvin Toffler popularized the term “information overload” in his 1970 book Future Shock, which “pointed to research indicating that too much information can hurt our ability to think” (Weinberger, 2011). Other terms followed, like “information glut,” “information anxiety” and “information fatigue” (Gleick, 2011; Weinberger, 2011); they all speak to a feeling of exhaustion, stress and even apathy from too much information, a feeling of drowning in a sea of data:

Deluge became a common metaphor for people describing information surfeit. There is a sensation of drowning: information as a rising, churning flood. Or it calls to mind bombardment, data impinging in a series of blows, from all sides, too fast. (Gleick, 2011)

While information overload is not a new concept or a sensation unique to modern society, it has taken on more urgency in today’s networked information system, in which information is decentralized and interconnected and where “all bits are created equal and information is divorced from meaning” (Gleick, 2011). Rather than a more serious problem, however, Weinberger (2011) describes contemporary information overload as “a different sort of problem,” one that requires a new set of tools and different strategies to help us manage the huge amount of data available on the Web.

Human beings have always come up with practical and innovative ways to tackle the problem of information overload, from developing “biblical concordances and alphabetized subject headings” in the Middle Ages to creating encyclopedias, anthologies and indexes to manage the huge number of printed books (Blair, 2010; Gleick, 2011). In Glut: Mastering Information Through the Ages, Alex Wright (2007) argues that even tribal people in the Ice Age relied on “folk taxonomies, mythological systems, and preliterate symbolism” to navigate tribal information systems based on “symbolic expressions” and oral traditions. As history shows, humans have always tried to reduce information to a manageable size by filtering, summarizing and sorting, or what Weinberger (2011) calls “our basic strategy of knowing-by-reducing.”

The coping strategies in today’s Information Age are not that much different, although the tools required are unique to the current information system, the Internet. Gleick (2011) talks of “filter and search” while Weinberger (2011) focuses on “algorithmic and social” tools in their respective books, but the basic strategy comes down to effective filtering. In his talk “It’s Not Information Overload. It’s Filter Failure,” Clay Shirky (2008) argues that filter failure, or the “inefficiency of information flow,” is caused by the tension between the old information system based on traditional institutions and hierarchies and the new networked information system that requires “new filters” as well as a “mental shift,” a new way of seeing the world. Information overload, then, is caused by our current lack of tools and the right mindset to navigate the new information environment, and we need a new kind of channel to tackle the problem (Shirky, 2008).
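For the technically curious, here is a minimal sketch of what combining “algorithmic and social” filters might look like in practice. It is purely illustrative: the names, data and scoring scheme are all hypothetical, not a description of any real system Shirky or Weinberger discusses.

```python
# A toy "algorithmic and social" filter in the spirit of Shirky's argument:
# the remedy for filter failure is better filters, not less information.
# All names, data, and the scoring scheme below are hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    shared_by: set  # contacts who shared this item

def filter_stream(stream, trusted, interests, top_k=2):
    """Rank items by trusted shares plus a crude keyword match; keep the top k."""
    def score(item):
        social = len(item.shared_by & trusted)  # social filter: shares from trusted contacts
        topical = sum(word in item.title.lower() for word in interests)  # algorithmic filter
        return social + topical
    return [item.title for item in sorted(stream, key=score, reverse=True)[:top_k]]

stream = [
    Item("Information overload in the archives", {"ana", "raj"}),
    Item("Celebrity salad recipes", {"sam"}),
    Item("New filters for networked reading", {"ana", "raj", "lee"}),
]
print(filter_stream(stream, trusted={"ana", "raj", "lee"}, interests={"filters", "information"}))
# -> ['New filters for networked reading', 'Information overload in the archives']
```

Of course, Shirky’s deeper point is that the “mental shift” matters more than any particular ranking function; the sketch only shows that the stream is reduced by ranking, not by shrinking the flood itself.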

Echoing Blair’s (2010) observation that we share our ancestors’ “sense of excess,” Shirky (2008) concludes that “we are to information overload as fish to water; it’s just what we swim in,” so instead of complaining about how much information is out there, we should look for ways to better filter and manage the information network. We need to embrace the social and interconnected nature of a networked information system and see ourselves as “creatures of the information” (Gleick, 2011; Weinberger, 2011).

Both Gleick and Weinberger end their books with the optimism that information overload is only a passing problem, to be solved with better algorithms and social tools, just as we have managed to do in the past. Perhaps they are right, although there are those who share neither their enthusiasm for the latest information technology nor their optimism that humans will only be smarter, and society better, for it. My next post will explore some of the critiques of and anxieties about the Web and how it has changed the way we read and understand information.

References

Blair, A. (2010, November 28). Information overload, the early years. The Boston Globe. Retrieved July 8, 2016, from http://archive.boston.com/bostonglobe/ideas/articles/2010/11/28/information_overload_the_early_years/

Gleick, J. (2011). The information: A history, a theory, a flood. New York: Knopf Doubleday Publishing Group.

Shirky, C. (2008). It’s not information overload. It’s filter failure [Video]. Web 2.0 Expo NY. Retrieved from https://www.youtube.com/watch?v=LabqeJEOQyI

Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books.

Wright, A. (2007). Glut: Mastering information through the ages. Ithaca, NY: Cornell University Press.

A response and reflection about DH

This is my third and last assigned blog post for the Digital Humanities course, and while I hope this will not be my last post on the topic, I do want to use this opportunity to reflect on what I have learned so far about DH and my experience with it. That is why I chose to respond to Catherine Tumber’s blog post “Bulldozing the Humanities” and the article on which her blog post is based – “Technology Is Taking Over English Departments: The False Promise of the Digital Humanities” by Adam Kirsch.

As the titles reveal, neither Tumber nor Kirsch is a fan of DH. Evoking DH’s favourite word, “building,” Tumber compares digital humanists to “postwar master builders” who destroyed the good old “pedestrian-centered street life” with tacky and incongruous parking lots, malls, and widened streets (for cars instead of pedestrians!). She argues that DH commits the equivalent of a “transect violation” (“One thing does not belong with the other without fundamentally altering urbanism’s aesthetic, social, and functional integrity”) by emphasizing technology over meaning and thus “gutting what lies at the heart of the liberal arts: language and the narrative sensibilities that shape meaningful human endeavor.” While Tumber’s argument is mostly based on Kirsch’s longer and perhaps more illuminating article, she brings up an interesting point about the relationship between DH and poststructuralist theory:

If all this talk of “disaggregation,” statistical “distancing,” and “intertextualities” sounds familiar, that’s because in the 1970s and ’80s poststructuralist literary theory did digital humanities’ intellectual spade work. It was but a short technical step from “deconstructing the text” and larding up critical analysis with hyper-abstract semiotic jargon, to reducing art, history, and literature to data. Digital humanists claim to be addressing “the crisis of the humanities,” but they are doing so on the same trajectory—now tarted up with non-verbal tech effluvia—that led to the humanities’ deterioration and public disaffection in the first place.

I found her unease with “abstract semiotic jargon” and “non-verbal tech effluvia” telling because, much like poststructuralist theory, DH is all about looking at the old and the familiar with a new set of eyes, which requires inventing new language, finding new ways, and creating new tools to engage with the material. My personal experience with DH certainly did not start without trepidation and uncertainty, because there is a steep learning curve to understanding DH, partly due to its affinity with technology and partly because, as both Tumber and Kirsch point out, DH does not have a fixed identity. Both of these elements can be alienating, if not also frightening, to novices, especially those steeped in the traditional humanities, where they have been taught to value content above all. I can certainly imagine my pre-DH self reading Kirsch’s article and nodding along with every word. For those of us who have long treasured the power of words and literature to evoke emotion, expand minds and inspire new ideas, it is hard not to agree with Kirsch that humanistic thinking is “a matter of mental experiences, provoked by works of art and history, that expand the range of one’s understanding and sympathy.” To those unfamiliar with DH, Kirsch presents a convincing picture of DH as counter-humanist, if not anti-humanist. Taking big data and ‘distant reading’ to task, Kirsch argues that digital tools are empty vessels without context.

It is striking that digital tools, no matter how powerful, are themselves incapable of generating significant new ideas about the subject matter of humanistic study. They aggregate data, and they reveal patterns in the data, but to know what kinds of questions to ask about the data and its patterns requires a reader who is already well-versed in literature.

I agree with Kirsch that “meta-knowledge—knowledge about the conditions of the data you are manipulating—proves to be crucial for understanding anything a computer tells you. Ask a badly phrased question and you get a meaningless answer.” However, I disagree with his implication that digital humanists don’t possess meta-knowledge or know how to ask important questions. Certainly there is always the danger that a DH tool becomes an end in itself instead of a means to an end, but a tool such as The Real Face of White Australia asks important questions about national identity and access with just simple (yet hacked and manipulated) images of faces. While I love words, it is undeniable that sometimes a picture is worth a thousand words, and I don’t think anyone would accuse the developers of not possessing enough contextual knowledge or not knowing how to ask questions – humanist questions.

In their rebuttal to Kirsch’s article, Jeffrey Schnapp et al. rightly point out that Kirsch’s understanding of what counts as the Humanities is “unduly narrow,” as it focuses only on English Studies. They disagree with Kirsch’s notion that “the Humanities are external to ‘technology’ or that the books that he prizes aren’t themselves technological artifacts.”

The “mental experiences, provoked by works of art and history” that Kirsch celebrates are anything but “mental”; they are anchored in a concrete physical apparatus whose powers and limits are informed by its design, by the practices of reading that it enables, by the sorts of knowledge forms that shape and constrain it, and by the very real institutions—from public libraries and museums to colleges and universities—that enable, teach, and value cultural literacy.

As I’ve learned through the course, the Humanities are just as steeped in the physical as they are in the mental; the medium has always shaped how we perceive the content. DH brings the medium to the forefront and forces us to think about how content changes when the medium changes and what it means to preserve something that is in constant flux. DH teaches us, or at least it has taught me, to question our preconceived and even treasured notions of authorship, content and medium, originals versus copies, etc. As Jeffrey Schnapp et al. put it, digital humanists are “trained humanists who work with, interpret, write about, and teach diverse aspects of the cultural record of humankind—and [they] do so with a wide-range of tools and sources, including ones that are textual, visual, sonic, artifactual, experiential, and digital.”

To Kirsch’s credit, he acutely observes that DH “gains some of its self-confidence from the democratic challenge that it mounts to that old distinction” between thinking/writing and making/building and posits that “if digital humanities is to be a distinctive discipline, it should require distinctive skills.” What he objects to is the necessity of these skills to the Humanities. He asks:

… are they humanistic skills? Was it necessary for a humanist in the past five hundred years to know how to set type and publish a book? Moreover, is it practical for a humanities curriculum that can already stretch for ten years or more, from freshman year to Ph.D., to be expanded to include programming skills? (Not to mention the possibility that the kind of people who are drawn to English and art history may not be interested in, or good at, computer programming.)

While I believe that DH tools and technical skills are essential to DH, I admit I do sometimes find the emphasis on technology and the idea of ‘building/making’ alienating. If I am not technologically inclined, can I still be a digital humanist? If I don’t make or build anything, can I truly understand DH? Part of me wants to say no, I cannot truly understand DH without at least trying or learning to make or build something, because that is a huge part of what makes DH unique and exciting. It aims to bridge the (perhaps artificially constructed) gap between the mental and the physical in the Humanities, or that is how I’ve come to understand it. I also do not completely disagree with Kirsch’s critique of “the high language of inevitability” used by some digital humanists in their zest for ‘revolution,’ not just innovation. As Kirsch puts it, it has “the undertone of menace, the threat of historical illegitimacy and obsolescence. Here is the future, we are made to understand: we can either get on board or stand athwart it and get run over.” I’ve certainly come across this sort of language in the literature about DH and libraries. While I admire the passion imbued in such declarations, I can also see how they might create barriers for people who do not subscribe to similar beliefs.

This post has turned out to be more rambling than intended, but the best response I could make to Tumber’s and Kirsch’s articles was to bring in my own experience with DH. Whatever its flaws, DH has shown me something new and has given me new tools and new perspectives for understanding the world and the human experience, and in the end, that is what the Humanities are about.

To curate or not to curate: digital curation in contemporary society

“Contemporary curating has become an absurdity,” declared David Balzer in The Guardian article “Reading lists, outfits, even salads are curated – it’s absurd.” While the title and opening pronouncement leave no doubt as to the author’s verdict on digital content curation, the article goes on to examine the relationship between “curationism” and the concept of “value performance” by tracing the history of curation from its conception in ancient Rome and its professionalization in major art institutions to its present-day appropriation by pop culture and the internet masses.

While Balzer points out that the act of curating has always been closely connected to the idea of an audience, because the curator “gives value to things but also, crucially, performs value – conscious of onlookers,” he blames social media for the proliferation of present-day curationism that, instead of presenting more distinctive choices, only creates more sameness.

The explosion of social media led to accelerated curatorial ways of thinking. Value had to be performed like never before. Users are hyper-conscious of what they want and choose, performing this for an audience of friends and strangers doing exactly the same thing, often in exactly the same way.

Although social curation sites such as Pinterest and Tumblr often emphasize individual taste and personalization, the content has a tendency to converge around taste preferences, accentuated by the encouragement to “re-pin” or “re-blog” content from other users with similar tastes and worldviews. Personalized recommendation, used by retailers such as Netflix and Amazon to “tell users what they’d like based on what they’ve already liked, or what others with similar taste have liked,” functions in a similar manner, except that in such cases computer algorithms are the curator. For Balzer, the modern-day obsession with curation says more about our insecurity and superficiality than anything else, as he concludes that “[t]o perform value is not necessarily to possess it”; the more we curate, the more hollow the act becomes.
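As an aside for the technically curious, the “others with similar taste” logic that turns algorithms into curators can be illustrated with a toy sketch of user-based collaborative filtering. Everything here – the users, the items, the similarity measure – is hypothetical, and real recommender systems are vastly more elaborate:

```python
# A toy sketch of the "others with similar taste" logic behind algorithmic
# curation (user-based collaborative filtering). Users, items, and the
# similarity measure are hypothetical; real recommenders are far more complex.

likes = {
    "ana": {"Arrival", "Her", "Moon"},
    "raj": {"Arrival", "Moon", "Solaris"},
    "sam": {"Grease", "Mamma Mia"},
}

def jaccard(a, b):
    """Taste overlap between two users: 0 = nothing shared, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, top_n=2):
    """Suggest items liked by similar users that `user` hasn't liked yet."""
    mine = likes[user]
    scores = {}
    for other, theirs in likes.items():
        if other == user:
            continue
        sim = jaccard(mine, theirs)
        if sim == 0:
            continue  # ignore users with no taste overlap
        for item in theirs - mine:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("ana"))  # -> ['Solaris']
```

The sketch makes Balzer’s point concrete: a filter scored this way can only ever surface what similar users already like, which is exactly how algorithmic curation produces more sameness rather than more distinctive choices.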

Balzer provides an interesting perspective on digital curation, but what does curation mean in the context of digital humanities?

In his indictment of curationism, Balzer criticizes contemporary digital curation’s tendency to drown out differences and mistake exclusivity for authenticity, but as Alice Marwick and danah boyd (2011) note in their article on Twitter and the imagined audience, authenticity is a social construct (p. 199). While it might be true that “the ideal audience is often the mirror-image of the user” who “(re)construct[s] their identities in the process of constructing the imagined audience,” thus leading to controlled content and self-censorship, social media and its “networked audience” also create new opportunities for connection and individual expression (Marwick & boyd, 2011, p. 120).

In the case of Pinterest and of celebrity curators such as Gwyneth Paltrow and her blog Goop, consumerism or conspicuous consumption lies at the heart of value performance. In digital humanities and library science, however, digital curation is interlinked with the idea of preservation, and social curation or co-curation projects often aim to “tell stories not being told, or to tell existing stories in a different way” (Fotopoulou & Couldry, 2015, p. 243). Historypin is a good example of what co-curation looks like in the context of digital humanities. By “[e]nabling networks of people to share and explore local history, make new connections and reduce social isolation,” Historypin creates a shared online space of public history, curated and developed by a “global community of people, groups and institutions who gather and share the history of the places that matter to them [emphasis mine]” (Shift, 2014). In this space, the multiplicity of individual and local experiences is not subsumed by a single narrative of history; rather, history is what the different communities interpret it to be. Hooper-Greenhill’s (2000) belief that museums should create exhibits that speak to multiple “interpretative communities” (people who share a similar understanding of objects due to their cultural and personal backgrounds) is realized in a project like Historypin, where virtual objects are selected, arranged and displayed in various contexts. Unlike other image-based, consumption-driven curation websites, where the focus is more on the objects themselves than on communication and connection with the audience (Lui, 2015, p. 129), Historypin is underpinned (pun intended) by a yearning for shared experiences and connection with people and places.

Despite its focus on technology, digital humanities are as much about people as about the tools we use to understand ourselves, and digital curation offers unique opportunities for community engagement and for telling stories in innovative and vastly different ways than was possible before. As Lui (2015) puts it, “curatorial acts enact power” (p. 138). Despite Balzer’s disapproval of crowd-sourced curation, a project like Historypin gives voice to those whose stories might not otherwise have been told, and there is definitely value in that.


References & Further Reading

  • Bechmann, Anja, and Stine Lomborg. 2013. “Mapping Actor Roles in Social Media: Different Perspectives on Value Creation in Theories of User Participation.” New Media & Society 15 (5): 765–81.
  • Boon, Tim. 2011. “Co-Curation and the Public History of Science and Technology.” Curator: The Museum Journal 54 (4): 383–87.
  • Fotopoulou, Aristea, and Nick Couldry. 2015. “Telling the Story of the Stories: Online Content Curation and Digital Engagement.” Information, Communication & Society 18 (2): 235–49. doi:10.1080/1369118X.2014.952317.
  • “Historypin.” 2015. Shift. Accessed June 18, 2015. http://www.shiftdesign.org.uk/products/historypin/.
  • Hooper-Greenhill, Eilean. 2000. Museums and the Interpretation of Visual Culture. Museum Meanings. London; New York: Routledge.
  • Lui, Debora. 2015. “Public Curation and Private Collection: The Production of Knowledge on Pinterest.com.” Critical Studies in Media Communication 32 (2): 128–42. doi:10.1080/15295036.2015.1023329.
  • Marwick, A. E., and d. boyd. 2011. “I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience.” New Media & Society 13 (1): 114–33. doi:10.1177/1461444810365313.
  • Ovadia, Steven. 2013. “Digital Content Curation and Why It Matters to Librarians.” Behavioral & Social Sciences Librarian 32 (1): 58–62. doi:10.1080/01639269.2013.750508.