Wednesday, June 30, 2010
Tools in my digital toolbox
As I reflected recently on the digital tools I use now versus, say, two years ago, it struck me just how fast everything in the toolbox has changed. I know, it's cliche to say everything is changing fast, but everything is changing fast, gal-darnit.
For example, I looked at the computer I migrated from two years ago and was surprised to see how little of the software that feels so familiar and comfortable to me now on my new machine was there at all.
For writing, I only keep Microsoft Office around anymore for emergencies (can I dislike that bloated piece of junk more? Well, there is IE, too, and Vista; I'm sensing a pattern ...). I now typically use OpenOffice but am becoming a bigger and bigger believer in Google Docs and working from the cloud.
Google also has won me over with its browser Chrome. Chrome is so fast and sleek and, well, just amazingly fantastic, I can't imagine going back to Mozilla, and I won't even mention that bloated piece of junk IE. Can the Chrome OS be released soon enough? I would love to get rid of that bloated piece of junk Windows 7 (already unceremoniously dumped that bloated piece of junk Vista).
Anyway, in terms of core tools that I use professionally, Adobe's Creative Suite also has quickly become essential. I bring out Photoshop and InDesign almost daily and used Flash and Dreamweaver to design a couple of my key web sites. The full version of Adobe Acrobat also has become really handy, and I use Premiere for video editing.
None of those tools were on my old machine, except OpenOffice (yet I still was using Microsoft Office at that time, due to compatibility issues with OO that since have been resolved). Two years ago, I wasn't using EndNote, or Twitter, or Skype or iTunes, all of which I use just about daily now. And this doesn't take into account the mobile apps on my Android phone, which is less than two years old, either. Audacity, the open source audio editing program, might be the tool with the most longevity right now for me. I definitely prefer open source software, not just because I am a poor student, but because I think there is something vibrant and special about software developed for the love of the software, not for profit.
Part of all of this, I suppose, is related to the ripening of my dissertation studies, but I also remember just a few months ago exploring both mind mapping and Venn diagram software and determining I would never have a use for either of those. Turns out, the mind mapping software (FreeMind, specifically) just this week proved to be the exact tool I needed to help me organize my dissertation reading list. I was feeling frustrated with the list, and EndNote, as great as that is, just wasn't allowing me to visualize what I needed to do to string a thread through my reading list. So, in a moment of frustration, I thought I would give mind mapping another shot and ended up downloading FreeMind. I watched a couple of short web tutorials on YouTube and then started playing around with it. I don't know if I could even have continued the reading list without it. I feel like telling everyone I know. So I am, at least those few of you who read this blog and actually would continue this far into such a rambling post. Even fewer of you have a reading list to plug into it, but for those of you who do, this is for you!
Through FreeMind, I not only was able to visualize my sourcing tree for the dissertation, but I have been able to score the sources in terms of list value and mark the pieces I still need to read or get copies of. Emboldened, I then thought I would try a quick Venn diagram of my unifying themes for the dissertation, and amazingly enough, that sort of worked, too.
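For anyone curious about what's under the hood, FreeMind saves its maps as plain XML (.mm files), which means a reading list like mine could even be generated programmatically. Here is a minimal Python sketch of that idea; the branch names, scores and file name are placeholders for illustration, not my actual map:

import xml.etree.ElementTree as ET

# Hypothetical reading list: branch -> [(source, list-value score, status)]
sources = {
    "Locative media": [("de Souza e Silva & Sutko (2009)", 5, "read"),
                       ("Potts (2008)", 3, "to read")],
    "Narrative theory": [("Ong (1982)", 4, "need copy")],
}

root = ET.Element("map", version="0.9.0")  # FreeMind's XML root element
top = ET.SubElement(root, "node", TEXT="Dissertation reading list")
for branch, items in sources.items():
    limb = ET.SubElement(top, "node", TEXT=branch)
    for title, score, status in items:
        # Encode the list-value score and reading status in the node text
        ET.SubElement(limb, "node", TEXT="%s [score %d, %s]" % (title, score, status))

ET.ElementTree(root).write("reading_list.mm", encoding="utf-8")

Opening the resulting reading_list.mm in FreeMind should show the same sourcing tree, ready for dragging, folding and rescoring by hand.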
So what's next? What do I want to bring into the toolbox, or pick up afresh? Maybe Final Cut Pro. ... I think Adobe Premiere is OK as a video editor, but I sense there could be something much better. I have heard so many good things about Final Cut Pro (also that it is difficult to learn). I would like to work more on my video editing, which is something that communicators of all types will need to be better at in the future. I enjoy doing that kind of work, too. Is there an open source video editor that actually works well? Maybe I'll look into that first. Otherwise, I'll soon start looking for Final Cut Pro tutorials.
Sunday, June 27, 2010
Putting a theoretical foundation under mobile storytelling
One weakness I have noticed in discourse about mobile devices -- and mobile storytelling in particular -- is a general lack of a specific theoretical foundation from which to build. There are many, many new media theories, and, of course, general communication (or old media) theories, and there are innumerable theories from related fields, such as psychology, sociology, anthropology, etc. But what are the theories that are key to mobile media? That's a difficult question to answer.
The best source I have found so far to start such a discussion is "Digital Cityscapes," edited by Adriana de Souza e Silva and Daniel Sutko (de Souza e Silva, A., & Sutko, D. M. (Eds.). (2009). Digital cityscapes: Merging digital and urban playspaces. New York: Peter Lang.). That collection has six articles in the first section, from a variety of authors, focused primarily on theory. I also have found a smattering of articles that address the core issues of the field to some degree. But, again, the information is scarce. Part of that is just the infancy of the field, but I also think part of that is scholars mostly working on the micro level at this point (myself included), instead of taking the time to step back and look more generally at holistic issues related to the "mobile" life.
So I have begun to work on making broader theoretical connections, at least in terms of mobile storytelling, and soon will start posting about them here as well as linking them to www.mobilestorytelling.net. I might even try to eventually develop those thoughts into a journal article or book chapter. But first, a paper. ...
The initial step in this paper-producing process is to determine what I really want to know about the theoretical connections across the mobile realm. Actually, there is a step before that. I first have to determine what I don't want to know about.
As a social scientist, I am not particularly interested in hardware specifications and manufacturing or model developments, such as the differences between the iPhone 3GS and the iPhone 4. I appreciate those, and I follow them on a consumer level, and I want mobile devices to keep gaining new abilities. But that's not what I want to write about.
Privacy concerns are integrated into user-generated content and mobile storytelling, but I think of those as ancillary to my studies at this time. Even though I am highly interested in location awareness, I am not focusing on achievement games, like Foursquare, or object finding, via geocaching, or similar wayfaring, unless it is related to uncovering an embedded story, or something I think of as the "airrative," or story embedded in the air.
Back to the original issue, I'm not sure what theory or theories can cover all of that and the rest of it, but it's not my intent to find a master theory. At least not yet. I first want to look closely at storytelling with mobile devices, particularly nonfiction storytelling, which I anticipate being the core of my dissertation. So what does that involve?
Thinking of this as a relatively contained academic paper, or article, or chapter, and not as the basis of a lengthy dissertation just on theory, I started to look at all of the various realms this could include, such as cyberspace theory, museum studies, cognitive theory, immersion theory, etc.
I'm not sure where this will lead, but I plan to start by doing a literature review of the key theories in the realms in which I think the overlap is most critical. Those general areas are:
* New Media
* Locative Media
* Narrative Theory
* Interaction Theory
Here is a very quick Venn diagram that shows some points of overlap among those four, particularly in the realms of sharing information (stories) and connecting in social ways:
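For those who think better in lists than in circles, the same overlaps can be roughed out in a few lines of Python by treating each area as a set of themes. The keyword sets below are quick illustrative stand-ins, not a serious coding of the literature:

from itertools import combinations

# Toy keyword sets standing in for the four broad areas
areas = {
    "New Media": {"sharing", "participation", "remix"},
    "Locative Media": {"sharing", "place", "context"},
    "Narrative Theory": {"story", "sharing", "sequence"},
    "Interaction Theory": {"participation", "context", "story"},
}

# Print every pairwise intersection -- the overlap regions of the Venn diagram
for (a, set_a), (b, set_b) in combinations(areas.items(), 2):
    common = set_a & set_b
    if common:
        print(a, "+", b, "->", sorted(common))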
I also am interested in relations to spacetime and game design, but I think I have enough to consider for now. My plan is to take these four broad areas mentioned above and search through them for direct mobile storytelling ties, ones that I think inform the field, as a way to help to develop broader theoretical connections. I'm really not sure how this will turn out until I begin. That's part of the fun. ... So onward!
Wednesday, June 23, 2010
Workspace and its story
It just struck me that my workspace now includes four PCs and an iMac, plus several mobile devices, such as an Android phone and iPods, with cameras and other electronic gear all over. So what is my main monitor sitting on? A printed version of "The Complete Works of William Shakespeare" and a Sunset magazine encyclopedia of Western gardens. I think I need to get outside more this summer.
Tuesday, June 22, 2010
Where do interesting academic paper prospects come from?
Dr. Fred Kemp of Texas Tech University calls such opportunities within the collective mind "disturbed knowledge," as opposed to the "shared knowledge," or the ideas we mostly agree upon.
This disturbed knowledge, according to Kemp, generally originates from one of five sources, or a combination of these:
1. There's a gap in the disciplinary knowledge;
2. Something about the disciplinary knowledge is just wrong;
3. Something about the disciplinary knowledge needs explanation, expansion, or further defense;
4. Some notable person in the field needs a revised or enlightened assessment;
5. The field itself needs a new branch or corollary or peripheral addition.
If your academic paper doesn't offer disturbed knowledge, then it probably is time to question again why you are bothering.
Saturday, June 19, 2010
Digital or media literacy
After looking over several models and definitions of digital/media literacy, including the overly complicated graphic above, it seems clear that the phrases "digital literacy" and "media literacy" have become nearly synonymous. I tend to think of digital literacy as more device oriented, like being able to operate a smart phone, and media literacy as being able to decipher the messages -- textual, audio, video, etc. -- delivered through such devices. But the literature I read recently about the concepts doesn't seem to back such simple delineation (maybe I should make my argument in this matter). In fact, I think the scholarship muddies the pool from many different directions, making any distinctions between the two terms virtually meaningless. So maybe it would be more worthwhile to spend energy envisioning different levels of digital/media literacy, starting with a base level and an advanced level.
At the base level, users would be able to competently operate digital communication devices. That is not just being able to turn a device on, which, of course, is an important first step, but base-level users would be able to carry out all core functions of a device in the ways the device was designed to be used. Those core functions would be defined by the accompanying literature, suggesting the capabilities of the device and providing instructions for carrying out those tasks. A person who can turn on a cell phone and make or answer calls would not necessarily have a base level of literacy, unless that person also, for example, could check voice mail, text message, take a picture with the phone, etc. That's not to say the person must be able to successfully carry out every single task that the device is capable of performing, but the person should be able to perform the core tasks, either the talking points in the marketing or the most substantially addressed functions in the user manual. I do realize that is a slippery definition of a parameter, but case by specific case, I think the line would be relatively easy to find for any particular device, with some limited subjectivity on the exact place to draw it, which would be beside the point anyway.
The advanced skills do not relate to how obscure the function might be but instead to the analysis, synthesis and creativity required to envision and execute the expression (think of the top point of Bloom's taxonomy pyramid). That would include generating new uses for the device that are not explicitly stated in the official accompanying materials. It would include symbol analysis and manipulation with the device, and it would include significant expansion of the capabilities described as uses for the devices. And by devices, I mean digital tools, so a piece of software would be a device, just as a scanner or cell phone would be. In some cases, then, a device will be used within a device, or they will be combined in new ways. I see this all as part of the shroud of technology, in which even the creators of devices can't foresee how they will be used and to what extent. A primary example of that was the initial press conference unveiling the iPod, hosted by Steve Jobs of all people, in which the device was described primarily as a portable hard drive (yet one that also could hold music). Apple, probably the most clairvoyant of mainstream new media companies, also didn't envision the computing appeal of the iPhone (originally rebuffing apps and emphasizing that the iPhone was not intended to be a mini-computer). And so on. The users who took these devices and made them do what they wanted, rather than follow the prescription of the company, should be considered to have advanced digital literacy.
Advanced digital literacy also means having an awareness of what sources of information can be trusted, or how to check sources, before believing what can be seen. A general skepticism would be part of this skill set, yet also with the wherewithal to triangulate sources of information, or dig deeper into the information, to determine who is saying what and for what reason(s), to help gauge credibility and weight. I'm starting to slip into a wide range of descriptors that could be classified as "advanced," so suffice it to circle around and say that, in general, advanced skills involve analysis, synthesis and creativity, while base skills essentially involve following directions and traditional social conventions.
In five to 10 years, advanced users, I think, will need to know one skill above all others: the skill to learn.
If we look back 10 years, there was no hint of Facebook (2004); look back five, no Twitter (2006). Every year, it seems, a new technology appears that significantly shifts the communication/media landscape, or at least shakes it up. So I think it will become increasingly important for people to develop the advanced skill of ever-learning, to be open to learning new things, while those who can't keep up, or give up (I just am not going to learn another new program, or buy another new device!), will be left behind and fall further back each successive year. What might seem rebellious and cool, in a Luddite sort of way, really will become socially self-destructive.
Well, this learn-to-learn philosophy was starting to sound too much like an echo in my mind, so I began looking around at some of the recent books I have read, and found the following in Seymour Papert's "The Children's Machine," which clearly inspired what I wrote just a few sentences ago:
"It's often said that we are entering the information age. This coming period could equally be called the age of learning. The sheer quantity of learning taking place in the world is already many times greater than in the past. ... Today, in industrialized countries, most people are doing jobs that did not exist when they were born. The most important skill determining a person's life pattern has already become the ability to learn new skills, to take in new concepts, to assess new situations, to deal with the unexpected. This will be increasingly true in the future. The competitive ability is the ability to learn."
When I try to imagine the future, those thoughts keep coming to mind, and I suspect that concept will be as clear as anyone can get.
Monday, June 14, 2010
"Writing for Scholarly Publication" by Anne Sigismund Huff
Just finished this book by Huff, professor of strategic management at the University of Colorado at Boulder. Well written, chatty and mind-focusing. I do not intend to summarize the whole work or even promise to mention its most salient or provocative points (you will need to read it to determine those for yourself). But it did provoke some thoughts in me about the composition of academic scholarship, such as these:
* Huff's theme: Scholarly publication is a conversation. She gives great advice to find related articles in the journals of the field and says to imagine your work as a conversation with those pieces and their authors, a discussion around which many people at the party might gather. Most academic articles, frankly, are that boring small talk in which someone drones on (maybe even you), and you eventually want to stab the skewers into your ears. This reminds me of the sick attraction journalists have to the inverted pyramid. Formulaic writing has its place, of course, and every piece can't break the formula every time. Yet creativity within the formula should be possible, or the formula should be abandoned, especially in articles of this length and with this much time and energy put into them. Think about the traditional approach of telling readers what you are going to tell them, telling them and then telling them what you told them. I could understand that approach in some cases, like with elementary school students. But I don't think that's the academic market. Instead, as Huff rightly suggests, get to the point, and then if you feel the desire to circle back around again, at least say something new on the second pass.
* Ideas are cheap (p. 14). Execution of ideas is where the capital is formed. This is becoming a new media creed, and I think this is where the people who argue that technology is making us "dumber" are walking around with bags on their heads. Information sharing has changed so dramatically that it is like learning the world is not flat. It probably always has been this way, that execution of ideas trumped ideas themselves, but now, the access to collective intelligence has destroyed our cognitive measuring tools. We always have measured how smart we are on an individual scale, yet now, tapping the collective effectively and efficiently (think media literacy, or calculators) creates a different sort of intelligence, and it's not the idea generation that is the issue, it's who can do something with the ideas. Hmmm, this is not coming out like I imagined. I'll try again. Ideas and even the first few levels of execution of ideas are so cheap now that they have virtually no cost to the producer (think about the cost of this now rambling blog post). Yet some people are able to turn ideas into something that's clearly worthwhile, and that does have value. Is the intelligence, then, in generating the idea, executing the idea or monetizing the idea? I can see this is getting way too fuzzy and long, so I'll work more on the ideas later. It won't cost me anything.
* Quit thinking about it and write it. And finish it. Again, completion of an idea doesn't cost anything. But, if nothing else, it has high value to me, or at least much higher value than the great American novel in my head, or the half-finished journal articles in my drawers or the letter to the editor that I never sent, etc. And someone else also might find the work valuable.
* Huff said, "keep the pipeline full," with individual articles, co-authored articles, mainstream pieces, niche pieces, efficiently getting work through your system without hitting dry patches. I think that is a highly beneficial strategy. As a staff writer for daily newspapers, I always kept literally hundreds of ideas at hand, maybe a few dozen that I had thought about to some extent and then another dozen at least that were in various stages of development, from background reading to interviews having been done to drafts completed. This kept productivity high for me but never made producing feel like a burden, because I almost always was working on what I wanted to do. If I felt inspired to write, I did. If I didn't feel like writing, I would make an interview call, or read some background, or whatever I felt like doing (or the least painful thing), and because something in my pipeline always was near completion, or ripened to the point of submission, the editors generally didn't hassle me much. That approach also is a great way to avoid "writer's block," since when I have felt blocked as a writer, I never felt blocked as a reader, or as an interviewer. In those ways, something related to publication always has been flowing for me.
* Be interesting (p. 47). Do we really have to tell writers this? If you have read many academic journals or newspapers, you know the answer. This probably should be the first filter applied. If you can't say something interesting, ...
* Make assertions about earlier work that reflect your judgment and agenda. And define key terms and new terms (p. 90). These both are critical writing techniques rarely used to full potential. If you are going to comment on someone else's scholarship, it seems much more interesting to actually comment on it, rather than just note it exists. Sometimes, a list of other scholars doing work on a particular line of inquiry can be enough, I suppose, or maybe it's a tip of the hat. But it might be richer to present an entry point into the exemplar, like a hyperlink made out of words. And terminology is significantly underrated in writing of all sorts. If you don't establish the key terms, master terms, whatever you want to call them, then it will be difficult to follow your lines of thought as you envision them.
Because the writing is so smooth, Huff's book is easy to read quickly. It also is helpful in a variety of ways. If nothing else, it offers clarity. All of those little details, nuances, tangents that actually muddle and distract the writing process, especially early in the iterations, slough off under Huff's straightforward approach with the end in mind.
Friday, June 11, 2010
Walter Ong's Secondary Orality and its relationship to mobile devices
Mobile devices offer us a variety of new abilities that might not be so new but also might be exponentially more powerful in the modern form.
That's a bit confusing, even to me, so I'll back up and try again. For many months, maybe even a year, I have been thinking that there are important foundational connections between mobile devices and oral traditions. Mobile devices, such as the iPhone, can be aware of our location, spatial features around us, the context of the situation, including what has happened to us before and what relationships we have with other people in the area, and so on. Which all sounds really amazing, until you think that any person interacting with another person (or crowd) could very well do the same thing without a machine.
In lecturing, for example, I might know quite a bit about my audience, including names, motivations and even how many times a particular audience member has heard me give this sort of talk before. I can connect socially with these people, be chatty, walk around the auditorium and talk to each individual. But what I can't do, and where the mobile devices show immense potential, is perform that same personalized routine simultaneously for thousands of different people at once, nurturing a collective and open and collaborative environment endlessly at all hours of the day, generously responding to each individual, all while giving the impression that this sort of feedback is authored and tailored just for the single recipient experiencing it.
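To make that concrete, here is a deliberately tiny Python sketch of the idea; every place name and story fragment is invented for illustration. The point is simply that one function can play the attentive storyteller for any number of listeners at once:

# Hypothetical story fragments embedded at particular places
story_nodes = {
    "old mill": "This mill ground flour for the fort downriver.",
    "main bridge": "Three bridges have stood here; two washed away.",
}

def tell(location, history):
    # Pick the fragment embedded at this listener's location, if any
    node = story_nodes.get(location, "You are between story sites.")
    visits = history.count(location)
    if visits:
        # Tailor the telling to what this listener has already experienced
        return "Welcome back (visit %d). %s" % (visits + 1, node)
    return "First time here. %s" % node

# Each listener gets a personalized telling from the same embedded story
print(tell("old mill", ["main bridge", "old mill"]))
print(tell("old mill", []))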
Along those lines, it caught my attention when Walter Ong's Secondary Orality was mentioned briefly, starting on page 7, in:
Baehr, C., & Schaller, B. (2009). Writing for the Internet: A guide to real communication in virtual space. Greenwood Press.
Among the intriguing traits identified by Ong (1982), and capsulized by Baehr and Schaller, is that oral culture speakers often adapted their storytelling in response to audience reaction. Could that be the origin of interactive storytelling? Location, spatial and contextual awareness are critical components in mobile delivery, but they also seem monumentally relevant to oral cultures. Ong's theories, in turn, are definitely now on my reading list. Here are the sources I plan to find and examine:
From the Baehr and Schaller book:
Ong, W. (1967). The presence of the word. Minneapolis: University of Minnesota Press.
Ong, W. (1982). Orality and literacy: The technologizing of the word. New York: Routledge.
And a collection of various articles that make reference to mobile technology and secondary orality and new media, such as:
Potts, J. (2008). Who’s afraid of technological determinism? Another look at medium theory. Fibreculture Journal, 12.
Hartnell-Young, E., & Vetere, F. (2008). A means of personalising learning: Incorporating old and new literacies in the curriculum with mobile phones. Curriculum Journal, 19(4), 283-292.
Joyce, M. (2002). No one tells you this: Secondary orality and hypertextuality. Oral Tradition, 17(2).
Any other suggestions?
When I get through those, I'll report back what I find.
Thursday, June 10, 2010
Does the Internet make you smarter or dumber?
The Washington Post in the past week published a binary pro/con package arguing that the Internet makes us "smarter"/"dumber," starting with Clay Shirky's "smarter" piece, published on June 4. Shirky, an NYU professor, is just about to release a new book called "Cognitive Surplus: Creativity and Generosity in a Connected Age."
Nicholas Carr on the next day, June 5, authored the "dumber" piece. Carr recently released a book called, "The Shallows: What the Internet Is Doing to Our Brains."
Shirky also wrote a provocative book in 2008, called "Here Comes Everybody," that gives many examples of how openness on the Internet is making the world a better informed and mobilized place, maybe not a more capitalistically lucrative place, but a better place nonetheless.
I certainly feel Shirky has a much stronger base for his argument, which he presents solidly in his book, but his essay in this publication comes across as flippant, like the question is too bothersome to even answer.
Carr instead goes straight for the empirical and physiological hammer, saying "a growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers."
His premise -- that we don't spend hour after hour alone with books anymore, which is making us evolve into idiots -- seems somewhat ironic contained in a generalist essay of less than 1,300 words. It also seems questionable at its core, since my understanding of the Internet is that it has inspired a resurgence in reading. All kinds of reading. News media organizations, for example, are attracting millions and millions of readers beyond what they ever were able to reach with print editions. Those organizations just can't make money off of it. So is this a capitalism issue, or a reading issue?
Shirky mentions the typical response by societies to foundation-shaking technologies. The first step is denial, of course, and the insistence that things were always better "in the olden days." Marshall McLuhan, in his short booklet "The Medium is the Massage," has a passage about the pastoral myth generated by railroad expansion, in which the demonization of urban areas conveniently obscured the hardships of homesteading. The only medium I think truly lived up to those fears was television, just because of the way it was used by corporatists to turn people into consumption machines. When I watch public broadcasting, or the less commercialized sporting events, or even some of the benign content on the cooking channel, I can see the neutral skeleton of the machine, which could be used for so much more good. But this is not my rant about television. Back to the Internet, and does it make us smarter?
Shirky and so many others, including Henry Jenkins and Howard Rheingold, have made compelling cases in recent years about the superpowers that the Internet creates within us (and communally), giving us opportunities like never before to expend our cognitive surplus. But one aspect that doesn't seem to get much attention in this debate is the measuring tools of the non-monetary benefits (or costs).
In other words, how are we deciding if we are "smarter" or "dumber"? In what ways, and by whose yardstick?
Carr, for example, writes:
"Only when we pay deep attention to a new piece of information are we able to associate it 'meaningfully and systematically with knowledge already well established in memory,' writes the Nobel Prize-winning neuroscientist Eric Kandel."
I'm not sure how Kandel is measuring this, but, at least from the context of the rest of Carr's piece, I suspect this is another look through the elitist paradigm that pooh-poohs any intellectual gains outside of the privileged class and its narrow measuring tools. People who are not Nobel Prize-winning neuroscientists might not necessarily need deep and meaningful thought about a particular topic to feel like they know enough (and more than they would have without the Internet) to move on to something else.
This overall debate is fraught with complexities that simply can't be addressed in a combined 2,500 words, which makes the format questionable. These pieces could, though, start a much more "meaningful" discussion about what we value in knowledge, how we measure intelligence and how technological determinism plays a part in our evolution as a species, even physiologically, as Carr suggests.
As a new media practitioner and educator, I find the question of whether the Internet is making us "smarter" or "dumber" simplistically stupid, which is maybe why Shirky gives such a half-hearted effort.
Carr's summaries of "empirical data," meanwhile, without transparent access to the methods and results of those studies, again seem ironically shallow.
How about providing hyperlinks to the original studies, so we can judge Carr's conclusions for ourselves? Oh, wait, that would just make us dumber.