Here is a summary of my pilot study findings, comparing three levels of media exposure (a mobile app, a brochure, or wayside signs only) in The Village at Fort Vancouver National Historic Site. The results were not statistically significant, but they show some promising potential.
The desktop presentation:
The raw PPT
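For anyone curious about the mechanics of the comparison, here is a minimal sketch of how a three-group test like this can be run; a one-way ANOVA is just one reasonable choice, and the numbers in the sketch are made-up placeholders so the example runs, not the study's actual data or analysis.

```python
# Hypothetical sketch of the three-condition comparison (mobile app vs. brochure
# vs. wayside signs only). The score lists are made-up placeholder numbers,
# NOT the pilot study's data.
from scipy import stats

app_scores = [4.1, 3.8, 4.5, 3.9, 4.2]        # placeholder visitor outcome scores
brochure_scores = [3.6, 3.9, 3.7, 4.0, 3.5]   # placeholder
signs_scores = [3.4, 3.8, 3.5, 3.6, 3.7]      # placeholder

# One-way ANOVA across the three media-exposure groups.
f_stat, p_value = stats.f_oneway(app_scores, brochure_scores, signs_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# With a small pilot sample, a p-value at or above 0.05 (not statistically
# significant) can coexist with group means that hint at a promising trend.
```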
Monday, December 6, 2010
Sunday, December 5, 2010
A thought about time and information
Had this moment of clarity today: There is no past. There is no future. There is only the present moment and the filtered reflections of bygone symbols mixed with projections of symbols and situations we might yet face.
Saturday, November 27, 2010
Tracing rhetorical "audience" through time
A podcast tracing the idea of rhetorical "audience" back through time, and across fields, including politics and education, and how I value this concept in my teaching.
Audience podcast
Friday, November 19, 2010
Consubstantiality, or finding common ground with words
Burke's concept of consubstantiality, covered in an earlier blog post, is inspiring a podcast from me in response to the darkening binary political environment in America today. Are we Democrats, or Republicans, ... or are we Americans? Even better, are we humans? Or the best: Are we inhabitants of Earth? Each label we apply to ourselves (or others) limits the whole, or screens the whole, as Burke might say, obscuring the Truth. It seems to me that we could divide ourselves in any number of ways that would easily rival or surpass the true differences that separate the big political parties, both of which, at their hearts, are corporatist and militarist.
To Burke, finding common ground, rather than wedging apart (dividing and conquering, I suppose), meant looking for words to end "warfare," literally and figuratively. I interpret this as finding the parts where we agree, focusing on those, and building a sense of togetherness, despite other areas of difference. Is this Pollyanna-ish nonsense that never could work in reality? What if, as an earlier commentator on this blog suggested, both sides don't want to get along, and one wants to wield a stick rather than a carrot? These are issues facing the two big political parties today, particularly the Democrats. President Obama addressed this topic of binary discourse in his press conference after the recent elections, in which Republicans made large gains in Congress. Senate Republican leader Mitch McConnell's reply? The American people want our parties to work together "to put aside the left-wing wish list." That doesn't really sound like working together, now, does it? ... This upcoming podcast will include commentary on Burke's consubstantiality concept and the many similar ideas that have come before it in the history of classical rhetoric, as well as modern examples of powerful people, such as Sen. Jay Rockefeller, suggesting that maybe what the world needs now is a little less partisanship (when haven't we heard that cry?) and more efforts to find common ground among the people of our country, to rebuild trust in the government, which is, by the way, us, not a them.
Thursday, November 18, 2010
Ways to get audio or video from YouTube.com
For various media projects I have been working on lately, I have needed to remix material on YouTube.
For video, I suggest trying YouTubeDownloader, which I think has a pretty good reputation for what it does, grabbing the YouTube file and producing an FLV file for download.
And, for audio, I just ran across an interesting site that seems to simply convert sound from YouTube. The site is called FLV2MP3.com, because YouTube files are stored as FLV files, and the most common audio compression is MP3. To get the MP3, then, you just copy and paste the YouTube URL into the box on FLV2MP3, press the convert button, and the MP3 pops right out. From there, you can embed it, download it, whatever. ...
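For anyone comfortable with the command line, here is a hedged alternative sketch using youtube-dl, a different tool from the ones mentioned above; it assumes youtube-dl is installed and on the PATH (plus ffmpeg for the audio conversion), and the URL is a placeholder.

```python
# Hedged alternative sketch: the command-line tool youtube-dl (not YouTubeDownloader
# or FLV2MP3) can handle both jobs, assuming it is installed and on the PATH.
# The URL below is a placeholder.
import subprocess

url = "https://www.youtube.com/watch?v=PLACEHOLDER"

# Grab the video file itself.
subprocess.run(["youtube-dl", url], check=True)

# Or extract just the audio and convert it to MP3 (requires ffmpeg).
subprocess.run(["youtube-dl", "-x", "--audio-format", "mp3", url], check=True)
```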
Friday, November 12, 2010
Foucault and Archaeology of Knowledge
Michel Foucault envisioned discourse as an artifact that could be dug up and examined, as from a particular period and place, a methodology of sorts that he called "Archaeology of Knowledge." From that, per James Herrick, he could determine what kinds of information could be known -- and said -- as "a matter of the social, historical and political conditions under which, for example, statements come to count as true or false."
Reading that recently inspired me to look again at the 2003 piece "Narrative Archeology" by Jeremy Hight, related to an emerging element of modern discourse: geolocation. Or, in other words, when a piece of discourse becomes directly connected to a place through a mobile device. That technological development seems to strengthen the Foucault metaphor, as Hight writes that "A city is a collection of data and sub-text to be read in the context of ethnography, history, semiotics, architectural patterns and forms, physical form and rhythm, juxtaposition, city planning, land usage shifts and other ways of interpretation and analysis. The city patterns can be equated to the patterns within literature: repetition, sub-text shift, metaphor, cumulative resonances, emergence of layers, decay and growth. A city is constructed in layers: infrastructure, streets, population, buildings. The same is true of the city in time: in shifts in decay and gentrification; in layers of differing architecture in form and layout resonating certain eras and modes in design, material, use of space and theory; in urban planning; in the physical juxtaposition of points and pointers from different times. Context and sub-text can be formulated as much in what is present and in juxtaposition as in what one learns was there and remains in faint traces (old signs barely visible on brick facades from businesses and neighborhood land usage long gone or worn splintering wooden posts jutting up from a railroad infrastructure decades dormant for example) or in what is no longer physically present at all and only is visible in recollection of the past." Digital historical interpretation that brings the past back to the present, flattening spacetime and allowing history to be read fresh, therefore seems to be an emerging extension of Foucault's ideas, one worth juxtaposing with his archaeology.
Friday, November 5, 2010
Rhetoric as an end to warfare
Kenneth Burke considers finding common ground among people -- along the lines of consubstantiality, or identification -- as the only answer to our most pressing problem as humans, which is the alienation, or division, we feel from others.
In "A Rhetoric of Motives," per James Herrick, Burke writes: "If men were not apart from one another, there would be no need for the rhetorician to proclaim their unity. If men were wholly and truly of one substance, absolute communication would be of man's very essence."
As I am still feeling the bruises of yet another civil war-like political season, I wonder if Americans now have passed the point of no return in terms of consubstantiality. I don't feel hopeful at all that we can reach a period again in which we debate political issues together as Americans, trying to create the best country in the world, as opposed to Party A or Party B grasping for power and trying to dictate the ways in which the people in the other party live, which they really don't want to do.
It seems so long ago, in 2000, when a legitimate case could have been made that Republicans and Democrats were pretty much in the same place on many issues, arguing positions at least in the vicinity of each other. Ralph Nader made the case that the two parties were essentially indistinguishable in the ideas they put on the table, which was considered a bad thing. Of course, both parties at the time were concerned with a lot of negative matters, such as maintaining power and the two-party system, and feeding their corporate lamprey, and giving breaks to the rich, and a host of other slimy situations. But today, after about a decade of dramatically divisive rhetoric -- at first meant to separate the parties, but then manipulated as power grabs -- what are we left with in the ruins?
As Burke imagined, warfare! ... The bloody, bitter, hostile, horrible, hate-filled discourse of division. Unfortunately, mud-slinging, hate and character assassination win elections, and as long as they do, I suppose, politicians will go that route (they are, after all, politicians). But what do we as Americans get left with? Does anyone really feel good about the state of America right now? Does anyone feel like we are in this big community together?
Or do we feel divided? West Coasters versus East Coasters? City folk versus country folk? The intellectual elite versus the real people (who, apparently, are the ones you would want to sit down and drink a beer with)? War mongers / pacifists, who need to "man up." Capitalists / Socialists. Etc. Where is this getting us? Maybe instead we should be returning to Burke's suggestion of trying to find common ground, not as a form of pacifying "the enemy," which is us, by the way, but as a form of realizing we are all working toward similar goals of creating a dynamic and fascinating place to live, where we can raise healthy and intelligent and happy children, and pursue what we want, when we want and how we want, and spend our lives enjoying each other, not dreading or hating each other. We don't live in two Americas. We aren't as different as we might feel that we are. We have differences, of course, but what would be the alternative, pure conformity? We all want a great country and great people and happiness. I think everyone should demand a resurgence of a rhetoric of unity from our leaders, not division. And vote out those who just continually tear us apart. That doesn't mean we eliminate debates, or differences of opinion, but we focus on the ground we share. We focus on rhetoric that brings us together. We don't focus on gaining power and leverage to boss others around. We focus on wielding words that unite us, and we return to Burke's noble effort "toward the elimination of warfare."
In "A Rhetoric of Motives," per James Herrick, Burke writes: "If men were not apart from one another, there would be no need for the rhetorician to proclaim their unity. If men were wholly and truly of one substance, absolute communication would be of man's very essence."
As I am still feeling the bruises of yet another civil war-like political season, I wonder if Americans now have passed the point of no return in terms of consubstantiality. I don't feel hopeful at all that we can reach a period again in which we debate political issues together as Americans, trying to create the best country in the world, as opposed to Party A or Party B grasping for power and trying to dictate the ways in which the people in the other party live, which they really don't want to do.
It seems so long ago, in 2000, when a legitimate case could have been made that Republicans and Democrats were pretty much in the same place on many issues, arguing positions at least in the vicinity of each other. Ralph Nader made the case that the two parties were inseparable in ideas on the table, which was considered a bad thing. Of course, both parties at the time were concerned with a lot of negative matter, such as maintaining power and the two-party system, and feeding their corporate lamprey, and giving breaks to the rich, and a host of other slimy situations. But today, after about a decade of dramatically divisive rhetoric -- at first meant to separate the parties, but then manipulated as power grabs -- what are we left with in the ruins?
As Burke imagined, warfare! ... The bloody, bitter, hostile, horrible, hate-filled discourse of division. Unfortunately, mud-slinging, hate and character-assassination wins elections, and as long as it does, I suppose, politicians will go that route (they are, after all, politicians). But what do we as Americans get left with? Does anyone really feel good about the state of America right now? Does anyone feel like we are in this big community together?
Or do we feel divided? West Coasters versus East Coasters? City folk versus country folk? The intellectual elite versus the real people (who, apparently, are the ones you would want to sit down and drink a beer with)? War mongers / pacifists, who need to "man up." Capitalists / Socialists. Etc. Where is this getting us? Maybe instead we should be returning to Burke's suggestion of trying to find common ground, not as a form of pacifying "the enemy," which is us, by the way, but as a form realizing we are all working toward similar goals of creating a dynamic and fascinating place to live, where we can raise healthy and intelligent and happy children, and pursue what we want, when we want and how we want, and spend our lives enjoying each other, not dreading or hating each other. We don't live in two Americas. We aren't as different as we might feel that we are. We have differences, of course, but what would be the alternative, pure conformity? We all want a great country and great people and happiness. I think everyone should demand a resurgence of a rhetoric of unity from our leaders, not division. And vote out those who just continually tear us apart. That doesn't mean we eliminate debates, or differences of opinion, but we focus on the ground we share. We focus on rhetoric that brings us together. We don't focus on gaining power and leverage to boss others around. We focus on wielding words that unite us, and we return to Burke's noble effort "toward the elimination of warfare."
Friday, October 29, 2010
Aristotle as Dumbledore?
I have a huge stack of books around here, begging for my attention, but I couldn't pass by this title on the library shelf the other day: "Harry Potter and Philosophy: If Aristotle Ran Hogwarts."
I have been reading a lot about Aristotle in English 5361, Theories of Invention in Writing, and simultaneously reading the Harry Potter series, so I naturally was curious about what David Baggett and Shawn E. Klein had to say in combining the two. I've just been skimming at this point, but I did spend some time looking over the indexed parts related to Aristotle, which seem primarily related to his moral philosophy. Aristotle judged people on actions, the authors argue, not words (hear that, sophists?!), especially when good deeds are done because they are good and right, not because they bring some sort of reward.
Another interesting point the authors made was that all of the important decisions we encounter in life take place in an emotional context. Aristotle would say, the authors contend, that a reasonable person gives emotions the "appropriate" weight, and to be virtuous, via the Doctrine of the Mean, a response should be neither excessive nor deficient in terms of emotions. Relating that to rhetoric, I think of the balance Aristotle creates in his artistic proofs of ethos, logos and pathos. In Aristotle's view, the perfect rhetorical argument provides an ideal balance of those proofs, addressing the authority of the viewpoint, a logical expression of the information and emotional touchstones.
I hadn't really thought about this before, because emotional rhetorical appeals seem to be the default for many, if not most, people, but what happens to rhetoric without emotions? Is such expression even possible? Building or degrading authority or character -- the ethos -- seems to inherently provoke an emotional response from the audience, such as "that's not fair" or, "yeah, that person is a bum," even if that's not a core part of what's delivered. An argument without logic would provoke an emotional response of, "someone is trying to trick me." I can imagine many arguments without logic. In fact, those seem to come up quite often. And I can imagine many arguments that strip away the ethos, purposively, to get to the "root" of the issue, as in, it doesn't matter who is saying this, it just matters that it is being said. But emotions, and pathos, seem practically unavoidable.
Another interesting point made by this book is that evil can be intelligent, such as Lord Voldemort, but it can't be wise. Dumbledore is quoted as saying that Voldemort's knowledge of magic is "perhaps more extensive than any wizard alive," but Aristotle argues that a person must be knowledgeable and do noble deeds for the sake of being good to truly be wise (and virtuous). Many of the, ahem, highly flawed characters in the books I have read so far -- Gilderoy Lockhart comes to mind -- wield rhetoric in self-serving, sophistic ways. So is Dumbledore, then, really a veiled representation of Aristotle? Hmmm ...
Friday, October 22, 2010
Platonic dialogue in the works
In an effort to create a contemporary (or relatively contemporary) Platonic dialogue, I have been working on piecing together journals and letters related to a Hawaiian pastor's calling to Fort Vancouver in the mid-1800s. This pastor, William Kaulehelehe, ended up being at the center of an international conflict at the fort, as a loyal British subject ousted from his home on the banks of the Columbia River while the U.S. Army tried to bring order to the frontier in the Pacific Northwest. That's a much longer story, but my hope with this part of the dialogue is to present the rhetoric that influenced his decision and how it reflected the attitudes and rhetorical strategies of the period.
I'm using the Twitter format as an inspiration and basically taking the actual historic text and adapting it only slightly to the faux-Twitter format.
First comes the script, a draft of which follows, with the analysis to come:
@RevBeaver: @HudsonsBayCo An ordinary, respectable countryman @FortVancouver, with his wife, might promote good behaviour of Sandwich Islanders
@ChiefFactorJohn (John McLoughlin): Need a trusty educated Hawaiian of good character to read the scriptures and assemble his people for public worship.
@GerritJudd (adviser to the Hawaiian king): @ChiefFactorJohn Wm. R. Kaulehelehe, @WRKaulehelehe!
McLoughlin: Need him to teach, too. And interpret.
Judd: Not as well-qualified as the first person selected but @WRKaulehelehe has good character, is faithful, industrious, and a skillful teacher. High recommendation.
McLoughlin: 10 pounds per annum
Judd: @WRKaulehelehe in regular standing as a member of the church. Wife accompanies him, no doubt will prove herself useful.
McLoughlin: 40 pounds per annum
Judd: @WRKaulehelehe @MaryKaai Go to the Columbia District? 3-4 weeks voyage away. Parish awaits.
Kaulehelehe: Aloha! @KawaiahaoChurch Aloha! @FortVancouver
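As a purely hypothetical side sketch, here is one way these script lines could be stored as structured data for delivery; the field names are placeholders of mine, not an actual Fort Vancouver Mobile format.

```python
# Hypothetical sketch only: one way the faux-Twitter script could be stored as
# structured data for a custom delivery prototype. Field names are placeholders.
import json

script = [
    {"handle": "@RevBeaver",
     "text": "@HudsonsBayCo An ordinary, respectable countryman @FortVancouver, "
             "with his wife, might promote good behaviour of Sandwich Islanders"},
    {"handle": "@ChiefFactorJohn",
     "text": "Need a trusty educated Hawaiian of good character to read the "
             "scriptures and assemble his people for public worship."},
    {"handle": "@GerritJudd",
     "text": "@ChiefFactorJohn Wm. R. Kaulehelehe, @WRKaulehelehe!"},
    # ... the rest of the script lines would follow the same pattern ...
]

# Serialize for whatever front end ends up rendering the conversation.
print(json.dumps(script, indent=2, ensure_ascii=False))
```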
And I'm working on a delivery prototype that will end up looking something like this:
Friday, October 15, 2010
Bringing order through language, a la Pico
Just two paragraphs about a Renaissance rhetorician named Pico in James Herrick's "The History and Theory of Rhetoric," p. 162, made me wonder if I hadn't stumbled across some of the forgotten roots of Ludwig Wittgenstein and Kenneth Burke.
The paragraphs describe Pico as an Italian humanist, with the "conviction that humans employ language to order the world and to work cooperatively within it." Language, in Pico's mind, gives humans the freedom to create their destiny and choose their paths in life, as a unique trait of the species. Our power to choose, and to create civilization, he reasoned, is a direct consequence of our "linguistic capacity" and our abilities to "probe the 'miracles concealed in the recesses of the world, in the depths of nature, and in the storehouses and mysteries of God.'"
There might not be a direct connection, but I sense traces of Wittgenstein's language games (the contextual symbolic manipulation traditions we use to bring order to the world) and Burke's symbolic action (the connecting and disconnecting of symbols as a form of sense making) in those overview statements. I would need to read more by Pico and directly compare and contrast those thoughts to the other two. But that could be an interesting exercise in philosophical genealogy.
The paragraphs describe Pico as an Italian humanist, with the "conviction that humans employ language to order the world and to work cooperatively within it." Language, in Pico's mind, gives humans the freedom to create their destiny and choose their paths in life, as a unique trait of the species. Our power to choose, and to create civilization, he reasoned, is a direct consequence of our "linguistic capacity" and our abilities to "probe the 'miracles concealed in the recesses of the world, in the depths of nature, and in the storehouses and mysteries of God.'"
There might not be a direct connection, but I sense traces of Wittgenstein's language games (the contextual symbolic manipulation traditions we use to bring order to the world) and Burke's symbolic action (the connecting and disconnecting of symbols as a form of sense making) in those overview statements. I would need to read more by Pico and directly compare and contrast those thoughts to the other two. But that could be an interesting exercise in philosophical genealogy.
Saturday, October 9, 2010
Print / The Renaissance, Internet / The Digital Age
I asked a question in class recently about rhetoric in the Renaissance era of European history, in terms of how much the printing press had fueled the massive changes of that time period. That question made me wonder how similar the Internet era of American history is, and in what ways the digital age is akin to the shifting of human culture that happened around the Renaissance. I thought I had read something connected to that somewhere, and today I finally found the piece again, the one that must have been lodged in my brain.
Clay Shirky, a NYU professor, is on my personal list of Top 10 thinkers right now in relation to new media, and I highly recommend his books "Here Comes Everybody" and "Cognitive Surplus." But the following paragraph actually was in a pro-con piece he wrote for the Wall Street Journal, opposite Nicholas Carr, which I recently used as a discussion prompt in one of my Creative Media and Digital Culture courses:
"Print fueled the Protestant Reformation, which did indeed destroy the Church's pan-European hold on intellectual life. What the 16th-century foes of print didn't imagine—couldn't imagine—was what followed: We built new norms around newly abundant and contemporary literature. Novels, newspapers, scientific journals, the separation of fiction and non-fiction, all of these innovations were created during the collapse of the scribal system, and all had the effect of increasing, rather than decreasing, the intellectual range and output of society."
Just that one paragraph raises so many more thoughts and questions for me, such as: Are there parallels between the Church's pan-European hold on intellectual life and the mainstream media's hold on intellectual life in the United States before the Internet? Are the Luddites of this age any different, or are these people who complain about technology just another perpetual human archetype? Because of the historic changes during the Renaissance, can we now, with confidence, predict that new communication forms will increase the intellectual range and output of our society in the long run, despite the many not-so-smart displays that also will come with that growth (people admittedly do a lot of stupid things with new technology today)?
I might be hypersensitive to the technology bashing, but I think that the Internet truly is changing us, and our capabilities, and transforming us -- yes, evolving us -- into a different sort of animal, just as the printing press and printed word did for people half a millennium ago. Do you see parallels as well? Or am I just not thinking deeply enough about this?
Sunday, October 3, 2010
New York Times story on museum apps
Lots of interesting information here about other folks trying to apply mobile technology to "museums."
From Picassos to Sarcophagi, Guided by Phone Apps
Friday, October 1, 2010
Onward to TwHistory!
As I mentioned a few posts ago, I have been working on a sort of TwHistory project, or historical interpretation through Twitter, for the Fort Vancouver National Historic Site as part of the content for the Fort Vancouver Mobile module based on William Kaulehelehe.
One of the TwHistory founders, Tom Caswell, has corresponded with me about this idea and given me some advice. A great resource in getting started with this sort of thing can be found on the TwHistory site here, as a FAQ.
One of the first steps in this process is to create your story's Twitter characters. So I have been chipping away at those characters needed to recreate the conversation, through letters, that brought Kaulehelehe to the fort.
Here is my list so far:
@KanakaWilliam, where the main story will take place
@RevBeaver, a smarmy reverend involved in the story
@ChiefFactorJohn, John McLoughlin, the chief factor of the fort
@GerritJudd, the missionary in Hawaii who recommended Kaulehelehe to the fort
@WRKaulehelehe, William Kaulehelehe, the protagonist
@GHAtkinson, another smarmy reverend involved in the story
@MaryKaai, wife of William Kaulehelehe
@KawaiahaoChurch, the church where William came from
@RevSamuelDamon, yet one more reverend
@HudsonsBayCo, the organization that ran the fort
Once I finish the script, I will plug the lines into Twitter, and voila, the conversation will come to life again, at least in theory. I'll let you know how it goes.
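For what it's worth, here is a minimal, untested sketch of how that plugging-in could work programmatically, assuming the third-party tweepy library, placeholder credentials and a placeholder script file; it's illustrative only, not a settled plan.

```python
# Minimal, untested sketch of posting script lines through Twitter's API,
# assuming the third-party tweepy library. Credentials and the script file name
# are placeholders, not real project values.
import time
import tweepy

CONSUMER_KEY = "PLACEHOLDER"
CONSUMER_SECRET = "PLACEHOLDER"
ACCESS_TOKEN = "PLACEHOLDER"        # token for one character's account
ACCESS_TOKEN_SECRET = "PLACEHOLDER"

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# One line of the script per post, spaced out so the conversation unfolds over time.
with open("kaulehelehe_script.txt") as script:
    for line in script:
        line = line.strip()
        if line:
            api.update_status(status=line)
            time.sleep(60 * 60)  # wait an hour between lines
```

Each character would need its own account and access token, so in practice this loop would run once per character, or the script file would need a column noting which account posts which line.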
Friday, September 24, 2010
Analysis of the Society of Professional Journalists' Code of Ethics
Because my TwHistory idea seems to fit better under the guidelines of the second assignment in Dr. Rich Rice's Engl 5361 class (Theories of Invention in Writing), I'm going to first focus on a rhetorical analysis of the Society of Professional Journalists' Code of Ethics.
Journalism and rhetoric are soulmates, I suppose, in the ways in which we frame our vision of the society we experience through media. News media portray (and magnify) such a tiny fragment of life that the rhetorical emphasis is profound, and I wondered upon what basis we build our discourse. Are we Platonic idealists, or sophist pragmatists?
This code could help to form a better understanding of that position. It is meant to guide journalistic decisions toward a better community of practitioners but also a better society as a whole, a very Athenian ideal.
My analysis will examine the rhetorical choices made in the document itself, looking for direct connections to the classical foundations of rhetoric and to particular rhetors that separate those two primary positions of thought.
It's important to also note that this code is a voluntary commitment for journalists to make. It is not enforced in any way by a central institution, which means its power, fittingly enough, is purely rhetorical. It provides a framework for a messy and complicated job, and the execution of the framework typically involves a dialectic process, since no document ever could possibly cover all of the variations of possible actions a journalist could take. Most ethical discussions in a newsroom are not black and white. They are in essence Platonic dialogues, searching for an agreed-upon truth, in which extensive discussion leads to a moment of enlightenment, decision and action.
My analysis will be offered as a short slideshow video, prompting thought about the division between sophistry and Platonic idealism in the modern world.
Shape-shifting of the mobile phone
The mobile phone is getting physical, or blending into the physical world. ... This TEDTalk looks at what could be next.
Thursday, September 23, 2010
HistoryPin
Fascinating to see something like HistoryPin emerge, especially with a Google partnership and the engine that comes along with it:
Very promising idea. Geolocated augmented reality data with mobile devices has been difficult to get to work properly on a large scale in the past, such as with Wikitude and Layar. I assume the AR overlays are the logical progression of where this service is going long term, although it appears also to be a desktop system as is. But if you test this out, let me know how it works for you and what you think.
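As a hedged aside on the mechanics, here is a minimal sketch of the kind of distance check a mobile app can use to trigger geolocated content like this; the coordinates and radius are made-up values purely for illustration, not anything from HistoryPin or the Fort Vancouver Mobile project.

```python
# Sketch of a geolocation trigger: unlock a content node when the device is
# within a set radius of a pinned historic spot. Coordinates and radius below
# are made-up illustration values, not real project data.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6,371 km

PIN = (45.6254, -122.6615)   # hypothetical pinned location
TRIGGER_RADIUS_M = 50        # unlock content within 50 meters

def should_show_content(device_lat, device_lon):
    return haversine_m(device_lat, device_lon, PIN[0], PIN[1]) <= TRIGGER_RADIUS_M

print(should_show_content(45.6255, -122.6614))  # True: just a few meters from the pin
```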
Friday, September 17, 2010
Trying to tap the idea of a Tweet story
I think it was the John Quincy Adams Twitter feed, created by the Massachusetts Historical Society about a year ago, that first caught my attention. I had been using Twitter as a note-taking/sharing service for several months, but this exposed another interesting application of the service to me: Twitter as a way to tell stories.
Over the next few months, every so often I came across other groups or people trying the platform for storytelling, including the folks at TwHistory, sinking the Titanic again, recounting the 1847 Pioneer Trek of Mormon settlers and recreating Gettysburg. I also since have seen more attempts at this, even from purely fictional (and comedic) directions, such as the recutting of the film "Ferris Bueller's Day Off" as a string of Tweets (they also incorporate FourSquare, but that's another blog post).
Anyway, when Dr. Rich Rice's Theories of Invention in Writing course this semester offered me an assignment involving creating a piece of rhetoric to be analyzed, "such as a scene from a movie," I thought this was just the excuse I needed to try my own version of TwHistory (I really like that term) and to combine it with my work at the Fort Vancouver National Historic Site and the Fort Vancouver Mobile project.
So I have this historical anecdote in hand, dug up by the project's assistant director Jon Nelson. This story, told through a variety of documents generated by different people, is related to the first module of the FVM project, recounting how Hawaiian pastor William Kaulehelehe was contacted and brought to Fort Vancouver in the mid-1800s. My plan is to create Twitter accounts for all of those characters and then tell the story, through their real words, via this modern form, then analyze the rhetoric they used to bring Kaulehelehe from his tropical paradise to this rainy frontier outpost. This all also will be delivered through a node in the Kaulehelehe module of the Fort Vancouver Mobile project, so I might end up creating a Twitter-like service to pull it off, just to give me a bit more control of the output. But we'll see.
To analyze the piece rhetorically, I plan to put together the story and have pop-up bubbles on a Camtasia-like presentation provide commentary on the rhetoric. How does that sound?
Friday, September 10, 2010
Platonic or sophistic?
Reading Plato's disparagement of the sophists recently, including the debate over Absolute Truth versus relative truth, reminded me of the contemporary remnants of a similar polarizing discussion in social science, between the positivist and the naturalistic perspectives. To begin with, I think a middle ground is possible, in which relative truths help us, through dialectic, reach toward greater truths that are closer to the ideal of Truth. I also think positivist and naturalistic approaches work best in tandem, rather than in opposition. But as a pragmatist, I think Absolute Truth is unattainable, and the naturalistic / sophist approach fits much better with navigating reality, particularly when studying the complexities of communication -- mixing humans, messages, channels and context. The sophists clearly needed more training on ethics (or more concern with it), but the core of their beliefs, that each position in an argument can be presented persuasively, could be used as part of the dialectical process rather than considered outside of it.
If we are dealing with humans, I think, truth has to be thought of as relative and negotiated, just because virtually everything we do is mediated or filtered through symbols or compressed and manipulated in some way. The only way to truly recount something in history is through a time machine, and even people who witness the same scene at the same time from very similar perspectives will interpret what happened differently. Multiply that scenario by 7 billion people, and the search for Truth almost seems laughable (sorry, Plato!). But, like every Utopian dream, or ideal, that doesn't mean we shouldn't keep trying to reach for it. Maybe the Truth just hasn't been revealed to us (or maybe it's just me), and it all will be clear one day, when we have evolved our knowledge base broadly enough to really understand things. Until then, I think being highly aware of the screen of symbols and the use of rhetoric, in all of its forms and from different perspectives, brings us closer to an understanding of Truth, in that we live a socially negotiated and mediated existence, and virtually every stimulus that reaches us is interpreted through the paradigm we have built throughout our lives.
Friday, September 3, 2010
Overviews of rhetoric
Recently read two overviews of rhetoric, covering thousands of years in the field:
Herrick, J. (2004). The history and theory of rhetoric. Boston: Allyn and Bacon.
And Bizzell, P., & Herzberg, B. (2001). The rhetorical tradition: Readings from classical times to the present (2nd ed.). Boston: Bedford Books of St. Martin's Press.
Each raised many questions that I assume will be handled later in the books, but here are a few immediate thoughts on what I read:
Wayne Booth, a prominent literary studies critic, is quoted in Herrick as saying that rhetoric holds "entire dominion over all verbal pursuits. Logic, dialectic, grammar, philosophy, history, poetry, all are rhetoric." Then what forms of expression, exactly, aren't rhetoric? I understand a rock isn't rhetoric, but when I start expressing thoughts about the rock in some way, talking about it, photographing it, classifying it, stacking it in a particular way, etc., then that is all rhetoric, right? What about a list of random words, or numbers? Is that rhetorical in some way, because I am expressing the randomness of it all, and that there is such a thing as randomness (because order is a human construction), and giving rhetorical order through such nonorder? If everything we express is rhetoric, that seems sort of limiting to talk about, so I'm looking for ideas about where the line gets drawn, at least from Booth's perspective (and that of others who similarly propose a very large tent for this field).
Herrick also argues that rhetoric is "response-inviting." This seems contrary to much of the political speech I think about, which is intended to be so vague as to be meaningless, or so coded as to mean certain specific things to certain special interest groups, or meant to present a defensible position; I just don't think of rhetoric as always promoting interaction. Propaganda, for example, would have to be rhetoric, and I don't think of it as particularly welcoming to debate. I like the idea of rhetoric and its soulmate, argumentation, inducing more speech, but to imply that it "is" response-inviting seems a bit broad to me. Am I wrong?
On a related subject, in Herrick's section about rhetoric as community building, I wondered if the wedge tactics employed so skillfully by both major political parties today aren't the dysfunctional side of this coin that will lead to our country's, and our community's, ruin. We keep splitting ourselves up into binary issues, focusing incessantly on how we are different more than we are alike, and that might work for politicians trying to degrade the ethos of their opponents, but I think it is clearly tearing us down as a nation. If we are always voting for the least worst option, then we are never voting for the best option, and I think the worst of rhetoric -- neo-sophists? -- is at the heart of that characterization.
Bizzell and Herzberg, by the way, offer the all-important "canons" of rhetoric as:
1. Invention
2. Arrangement
3. Style
4. Memory
5. Delivery
Which made me think there must be a better arrangement of that, at least in terms of an anagram. So here are some options, courtesy of this Internet anagram maker:
These are the MAIDS of rhetoric, keeping everything tidy.
MAD IS you who forgets the canons of rhetoric!
DAM IS the word I say when I remember the canons, like, "Dam, I can remember those canons!"
AS DIM as I might be, I can remember the canons.
And so on ...
In terms of acronyms, by the way, I noticed someone in the MOO used ELP for Aristotle's three forms of persuasive appeal: Ethos, Logos, Pathos. So if I ever need ELP remembering that, ...
One last side note. Herrick states that rhetor should be pronounced RAY-tor. This is the first time I have heard it described that way, and every time I have heard someone pronounce it, they have said rhet-OR, or RHET-or, but never RAY-tor. If rhetoric is pronounced rhet..., then why would it be RAY, or should it be RAY-tor-ic? Help! I don't want to be the one dumb guy at a conference who keeps mispronouncing a core term of the field.
Thursday, August 26, 2010
Assignment ideas for English 5361
Here are a few proposals:
1. An iterative progression, from Assignment No. 1 to No. 3, building a multimodal presentation that maybe starts with textual discourse and then expands into other media, such as video or audio (podcast).
2. A formal debate of classical perspectives. A scenario is introduced in the MOO, and students are given a specific theoretical lens through which to examine and analyze the situation. Maybe pit two contrasting schools of thought, having half the students argue one side and half the other.
3. A profile of a person in the field and a presentation of that person's primary contribution. Maybe pick a "Top 10" rhetoricians list, Aristotle, Foucault, Burke, etc., and have each of us pick one and give a summary or introductory presentation on that person and the primary contributions to the field.
4. Creating a case study of contemporary expression and looking at it through various classical perspectives. So we could create a new piece of rhetoric, in whatever form (in my case, say, a snippet of historical interpretation delivered through a mobile device) and then examine the skeleton of the piece from the inside, through the perspective of either a specific rhetorical paradigm or a cluster/school of thought.
5. Or how about something that extends rhetorical theory into visual rhetoric, like maybe a thorough examination of an image using the classical perspectives on rhetoric as a foundation of the analysis.
Friday, August 13, 2010
Mobile Apps: Shifting Dynamics of a Digital World
Some interesting perspectives shared during this panel on mobile apps at the Commonwealth Club:
Monday, July 26, 2010
What is "mobile"?
This term, I think, needs better defining before we can really study the technology in meaningful ways:
Untitled from Brett Oppegaard on Vimeo.
Saturday, July 24, 2010
Future readings
I'm near the end of Dr. Craig Baehr's 5365 course at Texas Tech, Studies in Composition: Internet Writing, and I want to note some of the many interesting topics raised in that class that deserve further review:
* Othermindedness, hypertext and networked writing forms, prompted by Michael Joyce. Joyce is a creative and provocative author (and, in essence, I think, a futurist) who can see the potential for new writing forms emerging in our networked technological environment. If, as I believe, the human experience is a narrative experience, Joyce pictures that sort of paradigm as a free-flowing and abstract series of connections that each of us uniquely can make, ever so easily, informed by a growing number of channels of increasingly interconnected media. Joyce's disorienting style, mirroring the twists and turns of hypertext, could very well be the standard writing style of the future, but it certainly is difficult to go from where we are now, with traditional linear stories, to Joyce's visions, even with a high interest level in exploring that ground. My sense is that some parameters are healthy for a story. I would even argue those parameters are essential for sense-making. Maybe that is the author's role in the future, just to set up boundaries, and characters, and an environment within which to operate? But a completely open-ended story, one that feels like surfing the Internet, doesn't seem to me to make enough connections to the author, or authors, to qualify. I prefer stories less restricted than, say, Choose Your Own Adventure books, which typically offered just a couple of choices per juncture. But a juncture that offers unlimited choices also is problematic, I think, in terms of engaging with a story. How many choices should be available? What scope should a story have, and what scope can it have? It is impossible to generalize, but my future readings will be looking for more examinations of these story scope and open structure questions.
* New media theories. Dr. Baehr and Dr. Bob Schaller (of Stephen F. Austin State University) recently released a book called "Writing for the Internet," and the second chapter struck me as a provocative overview of the struggle academics are having with the idea of "new media." What is new media? I think I can identify media, as something mediated, but what is new? That seems to be a much trickier question. And what makes "new media" different from "old media"? There are all sorts of attractive entry points in this discussion, including critical theory (which "seeks change in the dominant social order," Littlejohn and Foss, 2008) and the scholars the authors note are considered the "Mount Rushmore" of the field, at least from a mass communication viewpoint: Harold Innis, Walter Ong, Neil Postman and Marshall McLuhan. I have read a lot of McLuhan and Postman but only a few articles or chapters by Ong and Innis. Ong, in particular, intrigues me with his concept of secondary orality, because I have been looking for a way to connect orality with mobile devices. If you have any other suggestions of places to look, please comment and let me know. Technological determinism, or the idea that technology inherently shapes our culture and society, also is a profound path to follow.
* The Sociosemantic Web by Peter Morville. While I have been aware of meta-tagging for a long time, I didn't think there was much importance to it, until I considered the ramifications outlined by Morville. The concept reminds me quite a bit of Clay Shirky's description of Wikipedia, and how the collective work of millions, mostly little by little, can create an invaluable resource for all of humankind. Tags, placed on data by anyone, can help us all tap into the collective wisdom of the world, and build toward the fabled semantic web. These self-regulated folksonomies do a job that no company could ever afford, to make sense of and give order to everything on the Internet. This is particularly helpful in the less capitalistic areas of information, places where people are enjoying knowledge just for the simple sake of knowing things about what they are interested in, or toppling the information monolith. More power to that!
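To make the folksonomy idea a little more concrete, here is a tiny, purely illustrative Python sketch (the users, items and tags are all invented) of how individual, free-form tagging can aggregate, little by little, into a rough bottom-up classification:

```python
# Illustrative only: a toy folksonomy, where users apply free-form tags
# to items and the aggregate counts become an emergent classification.
from collections import Counter, defaultdict

# user -> {item: [tags]}   (all names invented for the example)
tagging = {
    "user_a": {"fort_vancouver_photo": ["history", "nps", "fur-trade"]},
    "user_b": {"fort_vancouver_photo": ["history", "hudsons-bay"],
               "mobile_app_demo": ["mobile", "storytelling"]},
    "user_c": {"mobile_app_demo": ["mobile", "gps", "storytelling"]},
}

# Aggregate: item -> Counter of tags, the folksonomy itself.
folksonomy = defaultdict(Counter)
for user, items in tagging.items():
    for item, tags in items.items():
        folksonomy[item].update(tags)

for item, counts in folksonomy.items():
    # The most common tags act as a crowd-sourced label for the item.
    print(item, counts.most_common(2))
```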
Friday, July 16, 2010
Expectations of the visual media today
It wasn't that long ago, really, that websites were primarily text. I remember thinking we were doing something quite innovative in the mid-1990s, when a photographer and I put together a primarily visual section of a news story online, essentially a slide show, with maybe 20 images (a typical news story in print might have three to five accompanying images, at most). Such an effort today (at least the posting online part) could be done by a child, almost effortlessly. I also recall thinking espn.com was doing something quite innovative a couple of years ago, when it started streaming video content on its home page. Now, when I want to watch a highlight of something I'm interested in viewing, I certainly don't hang around for 22 minutes on SportsCenter, waiting for a five-second clip. I go to the espn.com website, find what I want, and leave in less than a minute, usually. That approach has a certain efficiency that I love but also a direct path that I loathe, aware that while taking that line I will never stumble upon something else interesting along the way, something that I might not even know I want to know. In a supermarket metaphor, if I could get my carrots without walking past the cookies, I might be better off, but I might not, and vice versa. That walking of the path -- at least in news, not commercialism -- helps keep your mind open to the world of knowledge bubbling up around you. You never know when some little piece can trigger an important thought (or at least something you think could be important), and the partisanship chasm that has developed between the two major political parties in America almost certainly has been nurtured by neither side ever really having to listen to the other anymore. But that's another topic. Visual media on the web today must deliver what I want, when I want it, with increasing quality. A grainy YouTube video was fine two or three years ago, but not anymore. I'm disappointed with anything but HD. I also want basic controls, being able to stop the stream, reverse, skip to the next segment, etc. I don't necessarily want to insert my commentary into whatever I see, like what Viddler (viddler.com) offers, although I do appreciate that option. For now, I'm content just with high quality video and basic remote control options. I do like to have embedding code available, in case I want to share, and I suppose the day will come when I do want to jump into the stream and add tags and commentary to everything I watch. I'm just not there yet. But the web is already, and it will never go backward in terms of its visual emphasis. I can't imagine anymore, for example, the allure of an all-text website (except, of course, my text-heavy www.mobilestorytelling.net, which has only two images at this point, a banner and a photo of the iPhone; that, by the way, is more of a time issue than a design decision). If you can show me any that purposely are going back to more text, please share.
Sunday, July 11, 2010
Wednesday, July 7, 2010
How am I a reflection of the digital tools I use?
In creating the list of digital tools I use right now, several thoughts about identity came to mind. I seem to be in transition from commercial software to a fully open source existence, emanating from the cloud.
I recently dropped several Microsoft products (or only use them in case of emergencies) in favor of OpenOffice and the Google toolkit, especially the Chrome browser and Google Docs. I also rely on FreeMind, Audacity, Skype, FileZilla, iTunes, etc., all of which I think are important programs for me to have right now, and all of which are free (most of them open source). The lone commercial holdouts for me are Adobe's Creative Suite (especially InDesign and Photoshop) and EndNote (although I was on the fence in the beginning between EndNote and Zotero; if I had to do it over again, I might have gone with Zotero, but I think it's too late in the dissertation process now to switch). Quality level is the primary reason I stay with those programs. As soon as I have an open source option of similar quality to CS4, I assume I would switch.
Cost is a factor, too, since CS4, for example, is about $700, and that's with a student discount. I think EndNote was about $150, again, with a discount. I know these companies need to make money, but, as a consumer at least, I have a hard time justifying that much cost for basic digital products (with no packaging or delivery costs). I prefer the "freemium" model, in which I get to use basic services for free, but if I am using the software for particularly complicated or proprietary actions, the ones that give the software its business niche, then I don't mind paying for it. What I don't like paying for is the basic functions, like saving a photo without a watermark. I wouldn't even mind the commercial model of this system, I suppose, if the versioning weren't such a rip-off.
As an example, I bought CS4 about a year ago, and now Adobe wants me to pay full price to upgrade to CS5, for just a few new features. I don't think that's fair. Honda doesn't come back to me a year or two after I buy a new car and say, "You know, we've added a lot of features to later models, and your car just won't be compatible with the roads in the near future." Instead, as long as I can find gas and spare parts, that Honda should work just fine for the rest of my life. Why can't my computers and software work that way, particularly when I buy top-end products?
Microsoft has pulled this versioning trick on me too many times to count (pushing me through all of the versions of Windows), building obsolescence into new versions of software and hardware, forcing customers, like me, to either upgrade or lose the functionality that we already had bought (Chrome OS, where are you?). I find the business model offensive. So, in reflection, I think my tool choice demonstrates that I am fed up with the heavy-handed capitalistic money grab of the system.
The sneaky commercialization of "free" software also is concerning. I was listening to Pandora yesterday, and, for the first time for me, an audible advertisement played. It was a shock to hear that ad for McDonald's. I found it so abrasive that I immediately turned Pandora off and plan to uninstall it soon, if that policy doesn't change quickly.
Pandora just plays music in a genre, based on similar musicians, like an Internet radio station. It really doesn't do anything so special, except play without advertisements (and, at times, introduce me to a new artist, just like a radio station). So now that it is playing with advertisements, I find very little use for it, and will turn back to public radio stations (without advertisements), or other commercial-free Internet radio, or play from my CD collection, which I have digitized and imported into iTunes (yes, saving the hard copies, with respect for copyright concerns). If nothing else, I think, my choices for digital tools show I am moving away from corporate control systems and further embracing the wonders of the altruistic collective (for as long as that lasts).
Wednesday, June 30, 2010
Tools in my digital toolbox (today)
As I reflected recently on the digital tools I use now, versus, say two years ago, it struck me just how fast all of those things in the toolbox have changed. I know, it's cliche to say everything is changing fast, but everything is changing fast, gal-darnit.
For example, I looked at the computer I migrated from two years ago and was surprised to see very little of the same software that feels so familiar and comfortable to me now on my new machine.
For writing, I only keep Microsoft Office around anymore for emergencies (can I dislike that bloated piece of junk more? Well, there is IE, too, and Vista; I'm sensing a pattern ...). I now typically use OpenOffice but am becoming a bigger and bigger believer in Google Docs and working from the cloud.
Google also has won me over with its browser Chrome. Chrome is so fast and sleek and, well, just amazingly fantastic, I can't imagine going back to Mozilla, and I won't even mention that bloated piece of junk IE. Can the Chrome OS get released soon enough? I would love to get rid of that bloated piece of junk Windows 7 (already unceremoniously dumped that bloated piece of junk Vista).
Anyway, in terms of core tools that I use professionally, Adobe's Creative Suite also quickly has become essential. I bring out Photoshop and InDesign almost daily and used Flash and Dreamweaver to design a couple of my key web sites. The full version of Adobe Acrobat comes in handy often, and I use Premiere for video editing.
None of those tools were on my old machine, except OpenOffice (yet I still was using Microsoft Office at that time, due to compatibility issues with OO that since have been resolved). Two years ago, I wasn't using EndNote, or Twitter, or Skype or iTunes, all of which I use just about daily now. And this doesn't take into account the mobile apps on my Android phone, which is itself less than two years old. Audacity, the open source audio editing program, might be the tool with the most longevity for me right now. I definitely prefer open source software. Not just because I am a poor student. But I think there is something vibrant and special about software developed for the love of the software, not for profits.
Part of all of this, I suppose, is related to the ripening of my dissertation studies, but I also remember just a few months ago exploring both mind mapping and Venn diagram software and determining I would never have a use for either of those. Turns out, the mind mapping software (FreeMind, specifically) just this week proved to be the exact tool I needed to help me organize my dissertation reading list. I was feeling frustrated with the list, and EndNote, as great as that is, just wasn't allowing me to visualize what I needed to do to string a thread through my reading list. So, in a moment of frustration, I thought I would give mind mapping another shot and ended up downloading FreeMind. I watched a couple of short web tutorials on YouTube and then started playing around with it. I don't know if I could have even continued the reading list without it. I feel like telling everyone I know. So I am, at least to those few who read this blog and actually would continue this far into such a rambling post, and to the even fewer of you who have a reading list to plug into it. For those of you who do, this is for you!
Through FreeMind, I not only was able to visualize my sourcing tree for the dissertation, but I also have been able to score the sources in terms of their value to the list and mark the pieces I still need to read or get copies of. Emboldened, I then thought I would try a quick Venn diagram of my unifying themes for the dissertation, and amazingly enough, that sort of worked, too.
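For anyone who wants to script that kind of list, here is a minimal sketch, assuming FreeMind's .mm files are plain XML made of nested node elements with TEXT attributes (that matches the files I have opened, but treat the format details as an assumption); the reading list entries below are just placeholders:

```python
# A minimal sketch: write a tiny reading list as a FreeMind-style .mm file.
# Format assumption: a <map> root containing nested <node TEXT="..."> elements.
import xml.etree.ElementTree as ET

reading_list = {
    "Locative Media": ["Digital Cityscapes (2009)"],
    "Orality": ["Ong, Orality and Literacy (1982)"],
}

root = ET.Element("map", version="1.0.1")
center = ET.SubElement(root, "node", TEXT="Dissertation reading list")
for branch, sources in reading_list.items():
    branch_node = ET.SubElement(center, "node", TEXT=branch)
    for source in sources:
        # Each source becomes a leaf; a score could be appended to TEXT.
        ET.SubElement(branch_node, "node", TEXT=source)

ET.ElementTree(root).write("reading_list.mm", encoding="utf-8")
```

Opening the resulting reading_list.mm in FreeMind should show the branches as a collapsible tree, again assuming the format holds.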
So what's next? What do I want to bring into the toolbox, or pick up afresh? Maybe Final Cut Pro. ... I think Adobe Premiere is OK as a video editor. But I sense there could be something much better. I have heard so many good things about Final Cut Pro (also that it is difficult to learn). I would like to work more on my video editing, which is something that communicators of all types will need to be better at in the future. I enjoy doing that kind of work, too. Is there an open source video editor that actually works well? Maybe I'll look into that first. Otherwise, I'll soon start looking for Final Cut Pro tutorials.
Sunday, June 27, 2010
Putting a theoretical foundation under mobile storytelling
One weakness I have noticed in discourse about mobile devices -- and mobile storytelling in particular -- is a general lack of a specific theoretical foundation from which to build. There are many, many new media theories, and, of course, general communication (or old media) theories, and there are innumerable theories from related fields, such as psychology, sociology, anthropology, etc. But what are the theories that are key to mobile media? That's a difficult question to answer.
The best source I have found so far to start such a discussion is "Digital Cityscapes," edited by Adriana de Souza e Silva and Daniel Sutko (de Souza e Silva, A., & Sutko, D. (Eds.). (2009). Digital cityscapes: Merging digital and urban playspaces. Peter Lang.). That collection has six articles in the first section, from a variety of authors, focused primarily on theory. I also have found a smattering of articles that address the core issues of the field to some degree. But, again, the information is scarce. Part of that is just the infancy of the field, but I also think part of that is scholars mostly working on the micro level at this point (myself included), instead of taking the time to step back and look more generally at holistic issues related to the "mobile" life.
So I have begun to work on making broader theoretical connections, at least in terms of mobile storytelling, and soon will start posting about them here as well as linking them to www.mobilestorytelling.net. I might even try to eventually develop those thoughts into a book article or chapter. But first, a paper. ...
The initial step in this paper-producing process is to determine what I really want to know about the theoretical connections across the mobile realm. Actually, there is a step before that. I first have to determine what I don't want to know about.
As a social scientist, I am not particularly interested in hardware specifications and manufacturing or model developments, such as the differences between the iPhone 3GS and the iPhone 4. I appreciate those, and I follow them on a consumer level, and I want mobile devices to keep gaining new abilities. But that's not what I want to write about.
Privacy concerns are integrated into user-generated content and mobile storytelling, but I think of those as ancillary to my studies at this time. Even though I am highly interested in location awareness, I am not focusing on achievement games, like FourSquare, or object location finding, via geocaching, or similar wayfaring, unless it is related to uncovering an embedded story, or something I think of as the "airrative," or story embedded in the air.
Back to the original issue, I'm not sure what theory or theories can cover all of that and the rest of it, but it's not my intent to find a master theory. At least not yet. I first want to look closely at storytelling with mobile devices, particularly nonfiction storytelling, which I anticipate being the core of my dissertation. So what does that involve?
Thinking of this as a relatively contained academic paper, or article, or chapter, and not as the basis of a lengthy dissertation just on theory, I started to look at all of the various realms this could include, such as cyberspace theory, museum studies, cognitive theory, immersion theory, etc.
I'm not sure where this will lead, but I plan to start by doing a literature review of the key theories in the realms in which I think the overlap is most critical. Those general areas are:
* New Media
* Locative Media
* Narrative Theory
* Interaction Theory
Here is a very quick Venn diagram that shows some points of overlap among those four, particularly in the realms of sharing information (stories) and connecting in social ways:
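As a rough, text-only companion to that diagram, here is an illustrative Python sketch; the keyword lists are placeholders I made up, not a coding of the literature, and the point is only that simple set intersections can rough out the same kind of Venn view:

```python
# Illustrative only: toy keyword sets for the four areas, then their
# pairwise overlaps, which is all a Venn diagram really shows.
areas = {
    "New Media": {"interactivity", "convergence", "participation"},
    "Locative Media": {"location", "context", "participation"},
    "Narrative Theory": {"story", "author", "audience"},
    "Interaction Theory": {"interactivity", "audience", "context"},
}

names = list(areas)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        overlap = areas[first] & areas[second]
        if overlap:
            print(f"{first} / {second}: {sorted(overlap)}")
```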
I also am interested in relations to spacetime and game design, but I think I have enough to consider for now. My plan is to take these four broad areas mentioned above and search through them for direct mobile storytelling ties, ones that I think inform the field, as a way to help to develop broader theoretical connections. I'm really not sure how this will turn out, until I begin. That's part of the fun. ... So onward!
Wednesday, June 23, 2010
Workspace and its story
Just struck me that my workspace now includes four PCs and an iMac, plus several mobile devices, such as an Android phone and iPods, with cameras and other electronic gear all over. So what is my main monitor sitting on? A printed version of "The Complete Works of William Shakespeare" and a Sunset magazine encyclopedia of Western gardens. I think I need to get outside more this summer.
Tuesday, June 22, 2010
Where do interesting academic paper prospects come from?
Dr. Fred Kemp of Texas Tech University calls such opportunities within the collective mind "disturbed knowledge," as opposed to the "shared knowledge," or the ideas we mostly agree upon.
This disturbed knowledge, according to Kemp, generally originates from one of five sources, or a combination of these:
1. There's a gap in the disciplinary knowledge;
2. Something about the disciplinary knowledge is just wrong;
3. Something about the disciplinary knowledge needs explanation, expansion, or further defense;
4. Some notable person in the field needs a revised or enlightened assessment;
5. The field itself needs a new branch or corollary or peripheral addition.
If your academic paper doesn't offer disturbed knowledge, then it probably is time to question again why you are bothering.
Saturday, June 19, 2010
Digital or media literacy
After looking over several models and definitions of digital/media literacy, including the overly complicated graphic above, I think it is clear that the phrases "digital literacy" and "media literacy" have become nearly synonymous. I tend to think of digital literacy as more device oriented, like being able to operate a smart phone, and media literacy as being able to decipher the messages -- textual, audio, video, etc. -- delivered through such devices. But the literature I read recently about the concepts doesn't seem to back such a simple delineation (maybe I should make my argument in this matter). In fact, I think the scholarship muddies the pool from many different directions, making any distinctions between the two terms virtually meaningless. So maybe it would be more worthwhile to spend energy envisioning different levels of digital/media literacy, starting with a base level and an advanced level.
At the base level, users would be able to competently operate digital communication devices. That is not just being able to turn a device on, which, of course, is an important first step, but base level users would be able to carry out all core functions of a device in the ways the device was designed to be used. Those core functions would be defined by the accompanying literature, suggesting the capabilities of the device and providing instructions for carrying out those tasks. A person who can turn on a cell phone and call/answer calls would not necessarily have a base level of literacy, unless that person also, for example, could check voice mail, text message, take a picture with the phone, etc. That's not to say the person must be able to successfully carry out every single task that the device is capable of performing, but the person should be able to perform the core tasks, either the talking points in the marketing or the most substantially addressed functions in the user manual. I do realize that is a slippery definition of a parameter, but case by specific case, I think the line would be relatively easy to find for any particular device, with some limited subjectivity on the exact place to draw it, which would be beside the point anyway.
The advanced skills do not relate to how obscure the function might be but instead to the analysis, synthesis and creativity required to envision and execute the expression (think of the top point of Bloom's taxonomy pyramid). That would include generating new uses for the device that are not explicitly stated in the official accompanying materials. It would include symbol analysis and manipulation with the device, and it would include significant expansion of the capabilities described as uses for the devices. And by devices, I mean digital tools, so a piece of software would be a device, just as a scanner or cell phone would be. In some cases, then, a device will be used within a device, or they would be combined in new ways. I see this all as part of the shroud of technology, in which even the creators of devices can't foresee how they will be used and to what extent. A primary example of that was the initial press conference unveiling the iPod, hosted by Steve Jobs of all people, in which the device was described primarily as a portable hard drive (yet one that also could hold music). Apple, probably the most clairvoyant of mainstream new media companies, also didn't envision the computing appeal of the iPhone (originally rebuffing apps and emphasizing that the iPhone was not intended to be a mini-computer). And so on. The users who took these devices and made them do what they wanted, rather than follow the prescription of the company, should be considered as having advanced digital literacy. Advanced digital literacy also means having an awareness of what sources of information can be trusted, or how to check sources, before believing what can be seen. A general skepticism would be part of this skill set, yet also with the wherewithal to triangulate sources of information, or dig deeper into the information, to determine who is saying what and for what reason(s), to help gauge the credibility and weight. I'm starting to slip into a wide range of descriptors that could be classified as "advanced," so suffice it to circle back around and say that, in general, advanced skills involve analysis, synthesis and creativity, while base skills essentially involve following directions and traditional social conventions.
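To pin that boundary down for myself, here is a toy sketch of the base/advanced rule; the device, its "core functions" and the labels are all hypothetical, and it obviously cannot capture the analysis/synthesis/creativity side, only the line the manual draws:

```python
# Hypothetical sketch: base literacy = the documented core tasks;
# "advanced" here is just shorthand for going beyond the manual.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    core_functions: set  # the tasks the manual/marketing actually describes

def literacy_level(demonstrated: set, device: Device) -> str:
    if not device.core_functions <= demonstrated:
        return "below base"
    beyond_manual = demonstrated - device.core_functions
    return "advanced" if beyond_manual else "base"

phone = Device("cell phone", {"call", "voice mail", "text", "photo"})
print(literacy_level({"call", "voice mail", "text", "photo"}, phone))
print(literacy_level({"call", "voice mail", "text", "photo", "remix video"}, phone))
```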
In five to 10 years, advanced users, I think, will need to know one skill above all others: the skill to learn.
If we look backward 10 years, there would be no hint of Facebook (2004), or five years, no Twitter (2006). Every year, it seems, a new technology appears that significantly shifts the communication/media landscape, or at least shakes it up. So I think it will become increasingly important for people to develop the advanced skill of ever-learning, to be open to learning new things, while those who can't keep up, or give up ("I just am not going to learn another new program, or buy another new device!"), will be left behind and fall further back each successive year. What might seem rebellious and cool, in a Luddite sort of way, really will become self-destructive socially.
Well, this learn-to-learn philosophy was starting to sound too much like an echo in my mind, so I began looking around at some of the recent books I have read, and found the following in Seymour Papert's "The Children's Machine," which clearly inspired what I wrote just a few sentences ago:
"It's often said that we are entering the information age. This coming period could equally be called the age of learning. The sheer quantity of learning taking place in the world is already many times greater than in the past. ... Today, in industrialized countries, most people are doing jobs that did not exist when they were born. The most important skill determining a person's life pattern has already become the ability to learn new skills, to take in new concepts, to assess new situations, to deal with the unexpected. This will be increasingly true in the future. The competitive ability is the ability to learn."
When I try to imagine the future, those thoughts keep coming to mind, and I suspect that concept will be as clear as anyone can get.
Monday, June 14, 2010
"Writing for Scholarly Publication" by Anne Sigismund Huff
Just finished this book by Huff, professor of strategic management at the University of Colorado - Boulder. Well written, chatty and mind focusing. I do not intend to summarize the whole work or even promise to mention its most salient or provocative points (you will need to read it to determine those for yourself). But it did provoke some thoughts in me about the composition of academic scholarship, such as these:
* Huff's theme: Scholarly publication is a conversation. She gives great advice to find related articles in the journals of the field and says to imagine your work as a conversation with those pieces and their authors, a discussion around which many people at the party might gather. Most academic articles, frankly, are those boring small talks, in which someone drones (maybe even you), and you eventually want to stab the skewers into your ears. This reminds me of the sick attraction journalists have to the inverted pyramid. Formulaic writing has its place, of course, and every piece can't break the formula every time. Yet creativity within the formula should be possible, or the formula should be abandoned, especially in articles of this length and with this much time and energy put into them. Think about the traditional approach of telling readers what you are going to tell them, telling them, and then telling them what you told them. I could understand that approach in some cases, like with elementary school students. But I don't think that's the academic market. Instead, as Huff rightly suggests, get to the point, and then, if you feel the desire to circle back around again, at least say something new in the second pass.
* Ideas are cheap (p. 14). Execution of ideas is where the capital is formed. This is becoming a new media creed, and I think this is where the people who argue that technology is making us "dumber" are walking around with bags on their heads. Information sharing has changed so dramatically that it is like learning the world is not flat. It probably always has been this way, that execution of ideas trumped ideas themselves, but now, the access to collective intelligence has destroyed our cognitive measuring tools. We always have measured how smart we are on an individual scale, yet now, tapping the collective effectively and efficiently (think media literacy, or calculators) creates a different sort of intelligence, and it's not the idea generation that is the issue, it's who can do something with the ideas. Hmmm, this is not coming out like I imagined. I'll try again. Ideas and even the first few levels of execution of ideas are so cheap now that they have virtually no cost to the producer (think about the cost of this now rambling blog post). Yet some people are able to turn ideas into something that's clearly worthwhile, and that does have value. Is the intelligence, then, in generating the idea, executing the idea or monetizing the idea? I can see this is getting way too fuzzy and long, so I'll work more on the ideas later. It won't cost me anything.
* Quit thinking about it and write it. And finish it. Again, completion of an idea doesn't cost anything. But, if nothing else, it has high value to me, or at least much higher value than the great American novel in my head, or the half-finished journal articles in my drawers or the letter to the editor that I never sent, etc. And someone else also might find the work valuable.
* Huff said, "keep the pipeline full," with individual articles, co-authored articles, mainstream pieces, niche pieces, efficiently getting work through your system without hitting dry patches. I think that is a highly beneficial strategy. As a staff writer for daily newspapers, I always kept literally hundreds of ideas at hand, maybe a few dozen that I had thought about to some extent and then another dozen at least that were in various stages of development, from background reading to interviews having been done to drafts completed. This kept productivity high for me but never made producing feel like a burden, because I almost always was working on what I wanted to do. If I felt inspired to write, I did. If I didn't feel like writing, I would make an interview call, or read some background, or whatever I felt like doing (or the least painful thing), and because something in my pipeline always was near completion, or ripened to the point of submission, the editors generally didn't hassle me much. That approach also is a great way to avoid "writer's block," since when I have felt blocked as a writer, I never felt blocked as a reader, or as an interviewer. In those ways, something related to publication always has been flowing for me.
* Be interesting (p. 47). Do we really have to tell writers this? If you have read many academic journals or newspapers, you know the answer. This probably should be the first filter applied. If you can't say something interesting, ...
* Make assertions about earlier work that reflect your judgment and agenda. And define key terms and new terms (p. 90). These both are critical writing techniques rarely used to full potential. If you are going to comment on someone else's scholarship, it seems much more interesting to actually comment on it, rather than just note it exists. Sometimes, a list of other scholars doing work on a particular line of inquiry can be enough, I suppose, or maybe it's a tip of the hat. But it might be richer to present an entry point into the exemplar, like a hyperlink made out of words. And terminology is significantly underrated in writing of all sorts. If you don't establish the key terms, master terms, whatever you want to call them, then it will be difficult for readers to follow your lines of thought as you envision them.
Because the writing is so smooth, Huff's book is easy to read quickly. It also is helpful in a variety of ways. If nothing else, it offers clarity. All of those little details, nuances, tangents that actually muddle and distract the writing process, especially early in the iterations, slough off under Huff's straightforward approach with the end in mind.
Friday, June 11, 2010
Walter Ong's Secondary Orality and its relationship to mobile devices
Mobile devices offer us a variety of new abilities that might not be so new but also might be exponentially more powerful in the modern form.
That's a bit confusing, even to me, so I'll back up and try again. For many months, maybe even a year, I have been thinking that there are important foundational connections between mobile devices and oral traditions. Mobile devices, such as the iPhone, can be aware of our location, spatial features around us, the context of the situation, including what has happened to us before and what relationships we have with other people in the area, and so on. Which all sounds really amazing, until you think that any person interacting with another person (or crowd) could very well do the same thing without a machine.
In lecturing, for example, I might know quite a bit about my audience, including names, motivations and even how many times a particular audience member has heard me give this sort of talk before. I can connect socially with these people, be chatty, walk around the auditorium and talk to each individual. But what I can't do, and where the mobile devices show immense potential, is perform that same personalized routine simultaneously for thousands of different people at once, nurturing a collective and open and collaborative environment endlessly at all hours of the day, generously responding to each individual, all while giving the impression that this sort of feedback is authored and tailored just for the single recipient experiencing it.
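If I were to sketch that machine advantage in code (purely my own illustration, not anything from an actual app, with made-up coordinates, radii and story text), it might look like a loop that does what a lecturer does by reading the room, except for every visitor at once:

import math

# Hypothetical story nodes keyed to spots in a historic site
# (coordinates, radii and text are placeholders, not real site data).
STORY_NODES = [
    {"id": "village_house", "lat": 45.6254, "lon": -122.6615, "radius_m": 30,
     "first_visit": "An introduction to the Village and its residents.",
     "return_visit": "A deeper story about daily life in this household."},
    {"id": "river_path", "lat": 45.6240, "lon": -122.6650, "radius_m": 40,
     "first_visit": "How the river shaped work at the fort.",
     "return_visit": "Accounts of the brigades arriving by water."},
]

def distance_m(lat1, lon1, lat2, lon2):
    # Approximate great-circle distance in meters (haversine formula).
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_segment(lat, lon, visit_history):
    # Choose a segment the way a speaker might: based on where the visitor
    # is standing and whether they have heard this part before.
    for node in STORY_NODES:
        if distance_m(lat, lon, node["lat"], node["lon"]) <= node["radius_m"]:
            seen = node["id"] in visit_history
            visit_history.add(node["id"])
            return node["return_visit"] if seen else node["first_visit"]
    return None  # nothing nearby; say nothing rather than something generic

history = set()
print(select_segment(45.6254, -122.6616, history))  # first visit to this spot
print(select_segment(45.6254, -122.6616, history))  # second visit, new content

The point of the sketch is the shape of the logic, not the numbers: the device can do cheaply, simultaneously and at all hours what a speaker does by reading the room.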
Along those lines, it caught my attention when Walter Ong's secondary orality was mentioned briefly, starting on page 7, in:
Baehr, C., & Schaller, B. (2009). Writing for the internet: A guide to real communication in virtual space. Greenwood Press.
Among the intriguing traits identified by Ong (1982), and capsulized by Baehr and Schaller, is that oral culture speakers often adapted their storytelling in response to audience reaction. Could that be the origins of interactive storytelling? Location, spatial and contextual awareness are critical components in mobile delivery, but they also seem monumentally relevant to oral cultures. In turn, Ong's theories are definitely now on my reading list. Here are the sources I plan to find and examine:
From the Baehr and Schaller book:
Ong, W. (1967). The presence of the word. Minneapolis: University of Minnesota Press.
Ong, W. (1982). Orality and literacy: The technologizing of the word. New York: Routledge.
And a collection of various articles that make reference to mobile technology and secondary orality and new media, such as:
Potts, J. (2008). Who’s afraid of technological determinism? Another look at medium theory. Fibreculture Journal, 12.
Hartnell-Young, E., & Vetere, F. (2008). A means of personalising learning: Incorporating old and new literacies in the curriculum with mobile phones. Curriculum Journal, 19(4), 283-292.
Joyce, M. (2002). No one tells you this: Secondary orality and hypertextuality. Oral Tradition, 17(2).
Any other suggestions?
When I get through those, I'll report back what I find.
Thursday, June 10, 2010
Does the Internet make you smarter or dumber?
The Washington Post this past week published a binary pro/con pairing, arguing that the Internet makes us "smarter" or "dumber," starting with Clay Shirky's "smarter" piece, published on June 4. Shirky, an NYU prof, is just about to release a new book called "Cognitive Surplus: Creativity and Generosity in a Connected Age."
Nicholas Carr on the next day, June 5, authored the "dumber" piece. Carr recently released a book called, "The Shallows: What the Internet Is Doing to Our Brains."
Shirky also wrote a provocative book, 2008's "Here Comes Everybody," that gives many examples of how openness on the Internet is making the world a better-informed and more mobilized place, maybe not a more capitalistically lucrative place, but a better place nonetheless.
I certainly feel Shirky has a much stronger base for his argument, which he presents solidly in his book, but his essay in this publication comes across as flippant, like the question is too bothersome to even answer.
Carr instead goes straight for the empirical and physiological hammer, saying "a growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers."
His premise -- that we don't spend hour after hour alone with books anymore, which is making us evolve into idiots -- seems somewhat ironic coming in a generalist essay of fewer than 1,300 words. It also seems questionable at its core, since my understanding of the Internet is that it has inspired a resurgence in reading. All kinds of reading. News media organizations, for example, are attracting millions and millions of readers beyond what they ever were able to reach with print editions. Those organizations just can't make money off of it. So is this a capitalism issue, or a reading issue?
Shirky mentions the typical response by societies to foundation-shaking technologies. The first step is denial, of course, and the insistence that things were always better "in the olden days." Marshall McLuhan, in his short booklet "The Medium is the Massage," has a passage about the pastoral myth generated by railroad expansion, in which the demonization of urban areas conveniently obscured the hardships of homesteading. The only medium I think truly lived up to those fears was television, just because of the way it was used by corporatists to turn people into consumption machines. When I watch public broadcasting, or the less commercialized sporting events, or even some of the benign content on the cooking channel, I can see the neutral skeleton of the machine, which could be used for so much more good. But this is not my rant about television. Back to the Internet, and whether it makes us smarter.
Shirky and so many others, including Henry Jenkins and Howard Rheingold, have made compelling cases in recent years about the superpowers that the Internet creates within us (and communally), giving us opportunities like never before to expend our cognitive surplus. But one aspect that doesn't seem to get much attention in this debate is how we measure the non-monetary benefits (or costs).
In other words, how are we deciding if we are "smarter" or "dumber"? In what ways, and by whose yardstick?
Carr, for example, writes:
"Only when we pay deep attention to a new piece of information are we able to associate it 'meaningfully and systematically with knowledge already well established in memory,' writes the Nobel Prize-winning neuroscientist Eric Kandel."
I'm not sure how Kandel is measuring this, but, at least from the context of the rest of Carr's piece, I suspect this is another look through the elitist paradigm that pooh-poohs any intellectual gains outside of the privileged class and its narrow measuring tools. People who are not Nobel Prize-winning neuroscientists might not necessarily need deep and meaningful thought about a particular topic to feel like they know enough (and more than they would have without the Internet) to move on to something else.
This overall debate is fraught with complexities that simply can't be addressed in a combined 2,500 words, which makes the format questionable. These pieces could, though, start a much more "meaningful" discussion about what we value in knowledge, how we measure intelligence and how technological determinism plays a part in our evolution as a species, even physiologically, as Carr suggests.
As a new media practitioner and educator, I find the question of whether the Internet is making us "smarter" or "dumber" simplistic to the point of being stupid, which is maybe why Shirky gives such a half-hearted effort.
Carr's summaries of "empirical data," meanwhile, without transparent access to the methods and results of those studies, again seem ironically shallow.
How about providing hyperlinks to the original studies, so we can judge Carr's conclusions for ourselves? Oh, wait, that would just make us dumber.
Sunday, March 28, 2010
Fort Vancouver Mobile content moving to a new forum
Because the Fort Vancouver Mobile project is starting to gain steam, I have created separate blogs just for that.
The first one is a behind-the-scenes look at the project, primarily for the production team and other mobile technology researchers.
It can be found here: FortVancouverMobileSubRosa.blogspot.com, or the simpler URL: FortVancouverMobile.net.
Sub rosa, by the way, was an ancient sign of secrecy, in which a rose was hung above the doorway (translation = "under the rose"), pledging participants of a meeting to keep the contents confidential. I, of course, am taking the polar opposite approach, opening up the process to everyone who is interested, but I like the symbolism of the label anyway.
Wednesday, March 10, 2010
Long-term funding for the Fort Vancouver Mobile project
I'm trying to put together an organizational chart that keeps track of the funding options available for the Fort Vancouver Mobile project. I'm starting here with a Gantt chart, using Open Project, but I just don't think this helps much, even if it were filled out in more depth. While the overlapping periods of involvement are interesting, I really would like a tool that charts deadlines, application periods, resources needed (such as letters of commitment, matching funds, resumes from participants, etc.) and the other aspects of this work, including when single sourcing could come into play (and when I already have completed an application for this group in previous years). I also would like to see in visual form the overlapping periods of grant preparation, to try to avoid those as much as possible, or at least to be prepared for the extra work.
Any suggestions on alternatives to the Gantt for this, preferably with open source software, or at least through a common program that I already have, such as Access or Excel?
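In the meantime, here is the kind of low-tech alternative I have been imagining, sketched in Python with invented grant data (the first two deadlines come from the project timeline posted below; the third entry is made up): a flat table of opportunities that gets sorted by deadline and flagged wherever preparation windows overlap.

import csv
from datetime import date, timedelta
from io import StringIO

# Hypothetical grant list; a real version would live in a CSV or Excel sheet,
# with extra columns for letters of commitment, matching funds, resumes, etc.
SAMPLE = """name,deadline,prep_weeks
Digital Humanities Start-Up Grant,2010-03-23,6
NEH Media Makers,2010-08-18,8
Regional heritage foundation (made up),2010-05-01,4
"""

def load_grants(text):
    grants = []
    for row in csv.DictReader(StringIO(text)):
        deadline = date.fromisoformat(row["deadline"])
        start = deadline - timedelta(weeks=int(row["prep_weeks"]))
        grants.append({"name": row["name"], "deadline": deadline, "start": start})
    return sorted(grants, key=lambda g: g["deadline"])

def overlapping(grants):
    # Pairs of applications whose preparation windows overlap, so the extra
    # workload can be anticipated or the applications staggered.
    pairs = []
    for i, a in enumerate(grants):
        for b in grants[i + 1:]:
            if a["start"] <= b["deadline"] and b["start"] <= a["deadline"]:
                pairs.append((a["name"], b["name"]))
    return pairs

grants = load_grants(SAMPLE)
for g in grants:
    print(g["deadline"], g["name"], "-- start preparation by", g["start"])
print("Overlapping prep periods:", overlapping(grants))

Nothing about that requires a Gantt view; the same columns in Excel, with conditional formatting on the dates, would answer the "what collides next month" question just as well.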
Friday, March 5, 2010
Google SketchUp
Another example of Google's free technology creating interesting digital information:
Athens, Greece, in 3D
Wednesday, March 3, 2010
Fort Vancouver Mobile: Starting a discussion on narrative threads
Here is a memo I shared with the Fort Vancouver Mobile team recently to get them to start thinking about specific stories we could tell with this new technology.
Do you think this document sufficiently inspires thought about the potential stories?
What more would you want to know before engaging in a discussion about this matter?
Tuesday, February 23, 2010
Fort Vancouver Mobile: Orientation PPT
Here is the PPT I will use on Feb. 24 to orient our founding partners on the Fort Vancouver Mobile project:
Fort Vancouver Mobile intro
Saturday, February 20, 2010
Fort Vancouver Mobile: RQ (2nd draft)
Here's another attempt at my dissertation research question (with guidance from, among others, Dr. Rich Rice):
What constitutes the best practices of mixed reality design when blending mobile interactive narrative with location-based historical interpretation? In what ways do these practices impact users?
What do you think? ... Still seems a bit wordy to me, with too many adjectives and clauses. Might also still be too broad. I would like to make it more concise and direct. Will keep working on it.
Monday, February 15, 2010
Draft of grant language
To create a strong grant proposal, particularly on the federal level, I need to highly refine the first couple of paragraphs to meet the needs of the funder. Here's a rough draft for the Fort Vancouver Mobile project:
Mobile technology is changing the ways in which we access and expect information. But how is this changing us in the process? And what are we losing along the way?
The Fort Vancouver Mobile research project not only is examining new approaches and documenting best practices in the study of a variety of mobile-oriented humanities but also is developing innovative uses of this emerging technology for public programming and education, mixing traditional and new media to create a mixed reality that promises to engage people in our shared history through immersive and interactive environments. That will include making accessible a variety of digital resources and assets.
Sunday, February 7, 2010
Fort Vancouver Mobile: Project timeline
As the Fort Vancouver Mobile project now emerges from incubation, it's time to start putting some specific goals and a timeline in place. Here are the plans right now for 2010, but these mostly are rough dates, which likely will be adjusted as we go:
Feb. 24 -- Initial meeting to bring together all of the interested participants in one room. This critical session will provide an overview of the project, start making connections among the partners, lay out the resources available, start the brainstorming of specific content production, set goals and make plans for what's ahead.
March 14 -- Choose the initial historic storyline in The Village, upon which to base the first round of mobile interactive narrative tests. Begin in-depth research on that storyline and organizing the script, storyboard, production and distribution models.
March 15 -- Dissertation research question formalized.
March 23 -- Digital Humanities Start-Up Grant application due.
May 1 -- Dissertation preproposal due.
June 1 -- Complete the script and storyboards for first test project, with production and distribution models in place.
June 19-20 -- Brigade Encampment, special event at the fort. Complete the third and final segment of the user survey. Use that information to tailor and adjust the story in progress.
July 1 -- Gather feedback on script and storyboards. Refine, refine, refine.
July 17-18 -- Soldier's Bivouac, special event at the fort.
Aug. 1-23 -- Major content creation period, plus editing, implementation and testing. At the end of this period, there will be something significant to use as a proof of concept.
Aug. 18 -- America's Historical and Cultural Organizations / Media Makers grants, through the National Endowment for the Humanities, are due.
Aug. 23 -- WSU Vancouver digital storytelling class, focused on the development of the Fort Vancouver Mobile project, begins.
Sept. 18 -- Campfires and Candlelight Tour, special event at the fort, the most attended annual event, with a crowd of about 5,000 expected. Beta test project, if ready.
Oct. 9 and Oct. 23 -- Tales of the Engagé, the first major fort event based in The Village area; our goal is to at least have something really solid and ready to continue beta testing during this event, preferably testing on Oct. 9, then retesting on Oct. 23.
Dec. 11 -- Christmas at the Fort, special event.
Dec. 13 -- WSU Vancouver digital storytelling class ends.
Dec. 15 -- Doctoral coursework complete.
Thursday, February 4, 2010
Fort Vancouver Mobile: The research question
It's time to get serious about not just a research question, but the research question, the one that will shape my work on the Fort Vancouver Mobile project and be the engine of my dissertation. Since I like to work transparently, I thought I might as well post the drafts and progression of that question here, for feedback (and in case it might help someone else develop ideas of their own).
I have to start somewhere, so here are my initial thoughts on the question.
This project, at least at this point, is expected to be an experimental study intended to validate the hypothesis that mobile devices offer unprecedented potential for delivering immersive and interactive narratives. That, of course, is too broad, even for a dissertation, and it doesn't really say anything. "Potential" takes me nowhere, and what I really want to study is the creation and audience response to mobile content, compared to other kinds of media.
Just producing mobile content right now, in a form that is usable and coherent and accessible, could be a complex and exasperating undertaking. There is no industry-standard platform to deliver mobile interactive narratives. There aren't even many good options in that regard.
And then the questions start emerging about a genre of place, where setting takes on a level of importance potentially equivalent to character and plot, or maybe even more important than those pillars of storytelling, because place / space is what distinguishes mobile as new and different from any other medium. It seems like the most important questions about the field will be focused on place / space in a mixed-reality environment, with one foot in the real world and one in the digital universe. These devices now allow awareness not only of location but also of spatial relationships to other things and context of all sorts, from user profiles to environmental conditions. How is that different from anything else humans have experienced before? The potential paths of discovery appear endless.
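To make that less abstract, here is a rough, purely hypothetical sketch (every field and branch name invented for the example) of how those layers of awareness might be gathered into one context object that a story engine consults before deciding what to show:

from dataclasses import dataclass, field

@dataclass
class Context:
    # One foot in the real world, one in the digital: a snapshot of the
    # signals a mobile story engine could draw on (all fields hypothetical).
    location: str                                  # named area of the site
    nearby: list = field(default_factory=list)     # spatial relationships
    weather: str = "clear"                         # environmental conditions
    prior_visits: int = 0                          # user profile / history
    companions: int = 0                            # other people in the area

def choose_branch(ctx: Context) -> str:
    # Pick a narrative branch from context rather than from a fixed menu.
    if ctx.companions >= 3:
        return "group role-play segment"           # collective experience
    if ctx.weather == "rain":
        return "sheltered archive story"           # setting steers the plot
    if ctx.prior_visits > 0 and "blacksmith_shop" in ctx.nearby:
        return "returning-visitor craft story"
    return "orientation walk"

print(choose_branch(Context("The Village", nearby=["blacksmith_shop"], prior_visits=2)))
print(choose_branch(Context("The Village", weather="rain")))

In a sketch like this, setting really does function as a peer of character and plot: change the place, the weather or the company, and the story itself changes.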
To create a descriptive and analytical study of this sort invariably will mean also leaving a lot of work for later. That is a liberating perspective, I think, in that I don't have to answer every question I can imagine. I just need to start by answering one good one.
I originally was inspired by this field not because I love talking on my cellular telephone. In fact, that is my least favorite ability of the mobile communication device. Instead, I find it endlessly fascinating that virtually all of the information of the world can be delivered to me wherever I am, illuminating whatever intrigues me at that second, especially in relation to something I am experiencing firsthand in real space. From my extensive study of narrative, and my belief that it is the core perspective through which humans view the world (with each person seeing the next event as the next chapter in a life story), I also am curious about what the combination of that information delivery style with such omnipresent data could lead to, in terms of immersion in knowledge and generation of wisdom. Delving into such territory alone, though, like sitting in the back of the library reading book after book from the shelves, doesn't seem nearly as interesting as an interactive environment could be. Each additional person brought into the story could dramatically affect the dynamic of the content, the interplay, the experience itself. How could collective intelligence in such a situation accomplish amazing feats that we could never even approach individually?
More pragmatically, I wonder what this environment will look like, be like. A game? That thought keeps coming back to me. If it's not a game, at least in the broad sense, with familial relations to other games, as Wittgenstein envisioned that term, then what motivates users?
It seems to me that the concept of edutainment starts to enter the picture here. Before I continue this thought, a bit of background. The Fort Vancouver Mobile project does have certain parameters in place to make it attractive to a variety of partners, including the Fort Vancouver National Historic Site. In that regard, the content to be created and studied needs to be based on either the historical importance of the site, which was the end of the Oregon Trail in the early days, or the regional and national significance of the area related to the reign of the Hudson's Bay Company and the initial U.S. Army presence in the Northwest. So while some people in this field are looking purely at fiction within mobile games, and creating interesting projects like that, my intent is to work as much as possible in the realm of nonfiction, with some creative flexibility inherent in historical reconstructions. Back to the edutainment concept, I envision people coming to the Fort Vancouver site wanting to learn more about the place. That information can be delivered in many ways, from brochures to ranger lectures to living history presentations. A mobile content experience could be another option, and, in that respect, I want to find out how such material can be delivered most effectively and powerfully while also learning about how it is received, hopefully identifying best practices for increasing motivational interests. In other words, how can interactive mobile content be developed best, enriching the visitor experience in unique ways, while encouraging further involvement in this type of content, compelling deeper and deeper exploration of the material?
Those are some of my initial thoughts. That covers a lot of ground, not all of which I will be able to study. And here is a first draft of the research question, a starting point in such an investigation:
Does the intersection of place and space, as accessed through mobile interactive narrative, increase a user's interest, engagement and motivation toward related knowledge about a subject?
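Answering a question phrased that way eventually means comparing measures of interest, engagement and motivation across groups exposed to different media. A minimal, purely hypothetical sketch of that comparison (the ratings below are invented for illustration, not data from any study) might be:

from statistics import mean, stdev
from math import sqrt

# Invented 1-5 engagement ratings for two hypothetical exposure conditions.
mobile_narrative = [4, 5, 4, 3, 5, 4, 4, 5]
brochure_only = [3, 3, 4, 2, 3, 4, 3, 3]

def welch_t(a, b):
    # Welch's t statistic for two independent samples with unequal variances.
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

print("mobile mean:", round(mean(mobile_narrative), 2),
      "brochure mean:", round(mean(brochure_only), 2))
print("Welch t:", round(welch_t(mobile_narrative, brochure_only), 2))
# A real analysis would add degrees of freedom and a p-value (or an effect
# size), plus separate scales for interest, engagement and motivation.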