Blog Post #7: Proposal for Final Project

For my final project I will be writing a strategy guide for the Latin American YouTube channel of Riot Games, Inc. I have as a reference the strategy guide I wrote for the Latin American Facebook page; however, that guide is much shorter and very streamlined (it was intended for a non-social-media-professional, non-marketing/publishing audience). I want to take this opportunity to delve into community cultivation and open sourcing fan content for channel use. I personally make substantial use of community content in practice, but would like to explain it in depth for those who are unfamiliar with open source systems or the power of creating a network of contributors. Cost is not a limiting factor for this company, so we have options that many do not in terms of financially supporting quality creators, and I want to really dig into the options and rewards of not creating all content in house. We already offer all of our in-house graphic content on our website for others to download and use, so I feel there is a basis for reciprocity here.

Given these interests, I would love resource recommendations on building open source systems and on how best to create a long-term reward structure to honor community content. We already see a lot of interest from the community in creating content whether we reward it or not, but I think that with cultivation we will be able to push for a higher quality bar and help support artists who could otherwise not make a living on their craft.

Additionally, I would be interested in any sources that talk about YouTube production, creator networks, and “nerd management,” because a large component of this guide will be laying out how to select and hire a team to create content and manage the channel, and how to manage that team for best success. [For example, the main social media team at Riot is deliberately managed separately from the general publishing arm of the company because of the necessarily fast pace of content consumption and the adaptability required to publish relevant posts on a medium such as Facebook. A manager with the mindset of a producer or project manager was thought to be potentially problematic or stifling to this process, because the plan-ahead-and-stand-firm attitude required to produce large-scale projects would hamper the needed flexibility. As mentioned today in class, the Agile method is more effective for the way social media is done at Riot, because being current is such a critical component of our style of relationship building.]

I feel that I have a solid enough analytical background to cover goal setting, testing, and success measurement from my previous knowledge base, though these are also topics I will support with class resources. I also feel that the class readings have provided solid resources for addressing network building and the role and usefulness of networks, which should at least figure as an introductory theme in my analysis.


Blog Post #6: Politics

This section of readings looked at online privacy (or lack of it) as well as at forms of censorship. I took three core things away from these texts regarding the creation of accountability for online invasions of privacy:

1. Assange and his colleagues and co-leakers can crusade for a secret-free world all they want, but it will never be enough to create real accountability — for that, laws need to change; the big boys in government need to step up to the plate and create checks and balances, like we did for those pesky absolute monarchs in the 1700s.

2. Accountability had better come fast and bring transparency with it, or the code of self-censorship so prevalent in China will become so deeply ingrained in global internet usage that real freedom of speech will cease to exist (Parker; MacKinnon, p. 80).

3. Creating accountability in the United States for large-scale corporate and government invasion of privacy seems nearly impossible given the shocking indifference of the public and the blatant disregard of entities like Facebook and the NSA for already existing oversight bodies (such as Congress) (MacKinnon).

Perhaps the most provocative of the readings was the one authored by Bruce Sterling on the three giants, Assange, Stallman, and Snowden, and the world that created the need for such bizarre heroes: “And, yeah, by the way, Microsoft, Apple, Cisco, Google et al, they are all the blood brothers of Huawei in China — because they are intelligence assets posing as commercial operations. They are surveillance marketers. They give you free stuff in order to spy on you and pass that info along the value chain. Personal computers can have users, but social media has livestock.” Wow, Sterling. Keep it coming. And he does, with a writing style that reads like blunt force trauma but leaves you with goosebumps. What does it mean to be livestock, shamelessly lied to at every turn, and how do we remedy it? Lanier writes, “We sanction secretive spheres in order to have our civilian sphere. We furthermore structure democracy so that the secretive spheres are contained and accountable to the civilian sphere, though that’s not easy. There is certainly an ever-present danger of betrayal. Too much power can accrue to those we have sanctioned to hold confidences, and thus we find that keeping a democracy alive is hard, imperfect, and infuriating work.” Democracy is Lanier’s solution, but I find myself wondering if having just enough democracy is what allows the NSA & co. to do what they do. In other words, whether we are placated by the freedoms we do have into shrugging and looking the other way.

MacKinnon writes, “China’s parliament now has an ‘e-parliament’ website, on which it invites members of the public to make policy suggestions. Outsiders…would be very surprised to learn about some of the suggestions people are allowed to post” (43). Yet MacKinnon never gives us any sense that these suggestions are used in any way. “The government…points to such websites to demonstrate that it is listening to public opinion and taking it seriously.” As with my earlier skepticism about romanticizing the revolutionary power of social networks given the lack of concrete results, this to me stinks of being nothing more than a pressure valve. If we FEEL that we have the right to participate, we are much easier livestock to placate. As Sterling writes, “Computers were invented as crypto-ware and spy-ware and control-ware. That’s what Alan Turing was all about. That’s where computing came from, that’s the scene’s original sin, and also its poisoned apple.” The digital age came about because of a system invented for spying, and it is now run by companies with no accountability for rights and responsibilities, because they have users rather than citizens. “[T]he political discourse in the United States and in many other democracies now depends increasingly on privately owned and operated digital intermediaries. Whether unpopular, controversial, and contested speech has the right to exist on these platforms is left up to unelected corporate executives, who are under no legal obligation to justify their decisions” (MacKinnon 86). So what does this mean for the future?

I have no answer to that. I like Lanier’s idea that democracy is a work in progress, and MacKinnon’s push for greater regulation of private digital intermediaries. But I think that if no way is found, in this country at least, to shake off our collective apathy, we will not see a lot of real change.

Blog Post #5: Persuasion/GOTV

What struck me most about this block’s readings was the exploration of the role of data and analytics in contemporary politics. In his article on email fundraising, Leichtman writes, “I can’t stress enough that messages should be tailored as much to the individual as possible. If you want an example look towards Amazon’s email program and what happens after you browse items, you often get a follow up email with those items a few days after…Past behavior is a good indicator of future behavior, and you have data you can leverage to make your asks personal and effective.” While his advice to micro-target is somewhat oversimplified (it requires a strong and talented team of analysts to gather the kind of data required and put it to use), I still felt the importance of his statement. I have seen firsthand the difference between a product created from data and one created by instinct.
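Leichtman’s Amazon example boils down to a simple rule: look up each person’s past behavior and adjust the message accordingly. Here is a minimal sketch of that logic in Python; the names, amounts, and thresholds are all invented for illustration, and this is not any campaign’s actual tooling.

```python
# Toy sketch of past-behavior targeting: tailor the ask in a fundraising
# email to each recipient's prior giving. Everything here is hypothetical.
donors = [
    {"email": "a@example.com", "last_gift": 0},
    {"email": "b@example.com", "last_gift": 25},
    {"email": "c@example.com", "last_gift": 250},
]

def tailored_ask(last_gift):
    """Pick a suggested donation amount from past behavior."""
    if last_gift == 0:
        return 5                    # low-friction first ask for non-donors
    return round(last_gift * 1.5)   # anchor the ask above their last gift

for d in donors:
    print(d["email"], "->", tailored_ask(d["last_gift"]))
```

The point is not the arithmetic but the shape of the workflow: past behavior in, personalized message out, which is exactly the Amazon follow-up pattern Leichtman describes.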

In Brian Christian’s piece on the role of A/B testing in the Obama campaign, he writes, “Most shocking of all to Obama’s team was just how poorly their instincts served them during the test. Almost unanimously, staffers expected that a video of Obama speaking at a rally would handily outperform any still photo. But in fact the video fared 30.3 percent worse than even the turquoise image. Had the team listened to instinct—if it had kept “Sign Up” as the button text and swapped out the photo for the video—the sign-up rate would have slipped to 70 percent of the baseline. (“Assumptions tend to be wrong,” as Siroker succinctly puts it.)” Instinct is not enough, especially when dealing with a huge population. Nor is data enough without analysis. He continues, “without the rigorous data collection and controls of A/B testing, the team might not even have known why their numbers had fallen, chalking it up perhaps to some decline in enthusiasm for the candidate rather than to the inferior site revamp. Instead, when the rate jumped to 140 percent of baseline, the team knew exactly what, and whom, to thank.” In other words, data collection paired with testing and analytics removes the burden of guesswork from a process that involves making thousands of decisions, any of which could matter for better or worse. This rings very true to me, as I have seen it repeatedly in my own work: products created on the basis of previous testing and data collection, and released with measurable performance indicators, not only perform better but can feed their own data back into the loop, if the KPIs are good (as discussed here).
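The kind of comparison Christian describes can be reduced to a standard two-proportion z-test: did variant B’s sign-up rate beat variant A’s by more than chance would allow? This is a toy sketch with invented numbers; the campaign’s actual figures and internal tooling are not public at this level of detail.

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: compare sign-up rates of variants A and B.
    Returns (rate_a, rate_b, z, two-sided p-value). Illustrative only."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical traffic split: 10,000 visitors per variant
p_a, p_b, z, p = ab_test(826, 10_000, 1_160, 10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4g}")
```

A test like this is what lets a team say “the revamp caused the drop” rather than “maybe enthusiasm declined,” which is exactly the attribution problem Christian highlights.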

This brings us to my favorite piece: “A More Perfect Union,” by Sasha Issenberg. The piece rhapsodizes not over democratic wins or losses, but over the ability to predict them. While a bit sensationalist (“The campaign didn’t just know who you were; it knew exactly how it could turn you into the type of person it wanted you to be”), Issenberg illustrates how data and analytics offer a level of finesse and granularity in persuasion tactics that was previously unavailable except in local politics or small groups. Moreover, the piece stresses the impact of not just data but also testing, and the difference between ‘data-driven decisions’ and a real feedback loop.

I would argue that, given the shift toward finesse, Obama did run a persuasion campaign, if not in 2008 then certainly in 2012.

Issenberg writes, ‘Obama’s media-buying strategy proved particularly hard to decipher. In early September, as part of his standard review, Lundry noticed that the week after the Democratic convention, Obama had aired 68 ads in Dothan, Alabama, a town near the Florida border…. Even though the area was known to savvy ad buyers as one of the places where a media market crosses state lines, Dothan TV stations reached only about 9,000 Florida voters, and around 7,000 of them had voted for John McCain in 2008. “This is a hard-core Republican media market,” Lundry says. “It’s incredibly tiny. But they were advertising there.” Romney’s advisors might have formed a theory about the broader media environment, but whatever was sending Obama hunting for a small pocket of votes was beyond their measurement. “We could tell,” says McGoldrick, “that there was something in the algorithms that was telling them what to run.”’ Because of the algorithms and the data, persuasion was no longer a blunt instrument to be deployed on national channels but a scalpel to be sent in to deal with specific pockets of voters classed as persuadable and useful.


Blog Post #4: Media in the Digital Age

Despite the issues I have with Shirky, who (in my opinion) tends to rhapsodize about the progress of the internet shockwave in such enraptured detail that he forgets to look at the end result, his work is impossible for me to put down. His two pieces in our unit on the media captured my attention because they articulated something that I felt when first reading the New York Times Innovation Report but never found a clear way to express. That is, while the report was brutally honest about articulating the problems, it was underwhelming for me in its proposed solutions.

In his blog, Shirky writes, “When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en bloc. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.” Thus while it is admirable in some ways that the NY Times even got a committee together to write such a report, I (like Shirky) am unimpressed with the conclusions. Call me a romantic, but the idea of slash and burn makes much more sense to me in this context. As Shirky puts it, “Society doesn’t need newspapers. What we need is journalism… When we shift our attention from ‘save newspapers’ to ‘save society’, the imperative changes from ‘preserve the current institutions’ to ‘do whatever works.’ And what works today isn’t the same as what used to work.”

Again, however, in Shirky’s blog I find myself frustrated with his lack of commitment to outcomes (his talk of “Even the revolutionaries can’t predict what will happen” does not placate me). Luckily for me and my thirst for specificity, the collaboration between Anderson, Bell, and Shirky on Post-Industrial Journalism goes the distance (or at least a few more steps down the road). The report asserts, “The journalist has not been replaced but displaced, moved higher up the editorial chain from the production of initial observations to a role that emphasizes verification and interpretation, bringing sense to the streams of text, audio, photos and video produced by the public” (22). That is, the journalist must go from being the boots on the ground to the curator. Winer, another blogger from our readings, seems to agree. He quotes Bill Keller, who writes, “newspapers have at least two important assets that none of the digital newcomers even pretend to match. One is that we deploy worldwide a corps of trained, skilled reporters to witness events and help our readers understand them. This work is expensive, laborious, sometimes unpopular, and occasionally perilous.” Winer argues that this ‘expensive, laborious, unpopular, perilous work’ is already being done free of charge by people who were already present to witness events and who now, because of the internet, can share that information without the expense of plane tickets, hotel bills, and overtime pay.

What does this mean? Is the dreaded Buzzfeed listicle a viable form of journalism, or should this raise the same concerns as the idea that curation alone can cure the filter bubble? I’m not sure I have an answer (I’m waffling as badly as Shirky).

What I do agree with is that the industry cannot be remade by ‘a commitment to working with the business side of the company.’ While resources exist to help viable businesses become more tech savvy, such as the Mary Meeker Report, the media industry cannot be considered a viable business. Given the restructuring of advertising and the unbundling of news, print journalism is not an industry living in the black. Riot Games can read the report, see the emphasis on mobile, and start putting more contrasting colors into its marketing material so it shows up more clearly on a phone screen. The New York Times has much larger problems.

News differs sharply from the Wikipedia model because the timing of information matters: a stub that takes months to be fleshed out will be useless by the time it reaches relative completion. However, if journalism is as necessary to the masses as we all believe, then someone will come up with a solution.

Blog Post #3: Wikipedia Review

Wikipedia tag: TheNikolaia

“Transistor is a science fiction action role playing video game developed and published by Supergiant Games, and the studio’s first title since the release of their critically acclaimed Bastion. The game was released on May 20, 2014 for Windows and PlayStation 4, with other platforms possible in the future.”

So starts the concise Wikipedia article on Transistor, which has six short sections, including references and links. I chose the article out of curiosity, because it is a classic example of an area where the long tail should apply. The game is small, gorgeous, and niche. It had essentially a single artist who designed the entire thing, and it takes some creative risks with combat style. The game is also, by general video game standards, much less complex than the average role-playing PC/console game, and the studio is relatively new, having released only one previous game. It got rave reviews from across the gaming world on release and garnered far more attention than the typical indie game. I expected this to lead to a small but dedicated population pushing to create an in-depth Wikipedia page on this gem of a game: a classic long-tail case of a small audience with dedicated fans.

However, the article is lacking in many ways. It uses terminology that is BOTH undefined and unreferenced, such as “isometric point of view” (basically a view from above that is slightly angled, so as not to be looking straight down; it allows for a better view of the art and a cleaner perspective on the environment). It also fails to use any references or links for gameplay and plot. While the information is correct and obviously comes from knowledgeable sources, there are many high-quality playthrough videos available on YouTube, and other summaries, that could be referenced to give readers source material. Additionally, traditional video game summary pages often have more sectioning and subdivision. In the case of Transistor it might have been useful to include some basic categories besides ‘Plot,’ like ‘Characters’ and ‘Setting.’ Some game pages go as far as to individually summarize the cinematic cut scenes that move the plot along, though this usually happens with particular studios like Blizzard Entertainment, which is famous for its amazing cinematics.

I was also extremely disappointed with the gameplay section. There is a LOT of information available on gameplay, with references to draw on even in basic reviews like the one cited above. The studio’s own website also has a lot of information about the game that could have supplemented every section of the article, yet it is not referenced once and is only included en passant as an external link.

There was also, for me, a section entirely left out: the art. It is a stylized game done in 3D with stunning colors and an unusual cityscape. The main artist, Jen Zee, is both something of a genius and at the center of a lot of recent hype around women in game art (female artists, not female characters) on forums like Polycount (an online community for game artists, especially 3D artists, that works as part showcase, part community, part open-source help forum) because of her extreme talent. Again, there is a lot of information about the people behind Transistor available on the Supergiant Games Team page, and not just the rhapsodizing of the art community to reference. Given the vast amount of internet attention paid to the art, it is a glaring hole in the article.

Additionally, the music section is lacking references, details, and links. Given its centrality in both the plot and the critical response to the game, this is also an oversight.

Despite all of these flaws, the article reads well and gives enough information to cover the casual reader’s interests. While I am surprised that the article is this shallow despite the excitement the game generated, perhaps this is due to the sort of cultural ADD of the gamer scene and the internet in general, as discussed by Sifry in The Big Disconnect. Transistor has not been around long enough, nor is the world in which it is set complex enough, to generate the kind of cult following that games like Bioshock have, despite the urban dystopian science fiction thematic similarities. Perhaps as the dust settles more of the long tail effect will take hold, and the current article will gain a little more depth and a little more quality.


A quick note to start: due to limited space, I have chosen one of the three topics to engage with more deeply.

As a student of religion and the child of psychologists, I find internet ethics fascinating, and the filter bubble and the ethical questions surrounding it are a case in point. On the one hand, people clearly expect or want certain things from the internet by this point: interconnectedness, fast and equal access, the ability to practice free speech, and customizability (via algorithms), among others. On the other hand, there is a sense that maybe people also need some things above and beyond what they want, or that the internet delivers a fast food diet when people really need to broaden their horizons. In other words, the filter bubble will keep humanity ignorant, or possibly make it more so.

On the one hand, it is very easy to see the filter bubble at work: on Facebook, it takes as few as two clicks on one friend’s content to start seeing more of their posts show up regularly. It only takes a day to convert your newsfeed to bait articles (see above link). Our online lives are designed to respond to our every click, our every search, our every view. The speed at which a very narrow internet identity forms is frightening, given that exposure to diversity is crucial to developing tolerance.

On the other hand, one could argue that little about human behavior on the internet is new; it is just easier to see the patterns because of the sheer scale and visibility. Shirky argues that technology fundamentally changes human behavior once it has been around long enough to be invisible. I feel mixed about his assertion, especially in the context of social networks. I suppose the way to put it would be that HOW people do things might change, but WHAT they do and WHY they do it does not seem to. Before the filter bubble was in my Facebook feed it was on my radio: I grew up listening to NPR like any good liberal white kid with graduate-degree-sporting parents. Can we really argue that the car on the way home from school playing NPR is not its own filter bubble? Can we argue that people don’t choose neighborhoods or schools or churches or towns because they ‘feel at home’ there? Can we even argue that exposure to new ideas or facts that contradict prejudices makes a difference?

According to this article, among others, exposure to facts does not necessarily cure ignorance: “It’s one of the great assumptions underlying modern democracy that an informed citizenry is preferable to an uninformed one… If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight. In the end, truth will out. Won’t it? Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.”

Thus would eliminating the filter bubble or introducing curation really make any difference whatsoever? Yet there is a strong correlation between press freedom and democracy, and as Shirky so eloquently argues, social media is very much its own kind of press.

I find myself unable to come down on one side or the other of whether the filter bubble is something to be deeply concerned about. In online articles it is a topic fraught with underlying (often unstated) beliefs and ethical urgency, and I am fascinated enough by it to hope that it deepens into a conversation with more nuance and perhaps a little more realism (or what I, probably misconstruing my own opinions for fact, think of as realism) about human tendencies.


In the second chapter of Shirky’s text, a three-rung ladder of group undertaking is introduced; Shirky calls the rungs Sharing, Cooperation, and Collective Action. Chapters 4, 5, and 6 of the text mirror this three-part breakdown thematically in many ways:

Chapter 4 builds on Chapter 3 to round out Shirky’s theory of publishing and communication (sharing) in the digital age. Chapter 5 discusses, using the example of Wikipedia, a form of cooperation that makes use of the long tail to gather many small contributions into an ongoing, continuously improving online knowledge pool. Chapter 6 dives into the new possibilities of collective action given the ease of organizing beyond the boundaries imposed by geography and institutions and the limits imposed by the previous effort required to share information.

There are a few points I want to raise about these chapters. While it is true that Shirky’s text is intentionally painted in broad strokes (who could write about all the bizarre corners of the internet in one cohesive book?), it is also true that even his broad strokes are somewhat dated. For example, Shirky makes only a passing reference to Facebook and doesn’t mention YouTube at all, two of the giants in today’s internet experience. Facebook in particular has pushed the sharing of content to an extreme of ease that goes beyond even email, because ‘sharing’ a piece of content on a friend’s wall can end up reaching the newsfeeds of thousands of unintended viewers. For this exact reason, Facebook gives ‘shares’ a higher weight in the algorithm it uses to calculate engagement in the Insights panel offered to businesses: shares drive virality at a significantly higher rate than likes or comments.
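The weighting idea can be illustrated with a toy engagement score. The weights below are invented for illustration only; Facebook does not publish its actual Insights formula, so treat this as a sketch of the concept, not the real algorithm.

```python
# Hypothetical weighted engagement score: shares count for more than
# comments, which count for more than likes. Weights are made up.
WEIGHTS = {"like": 1, "comment": 2, "share": 4}

def engagement_score(likes, comments, shares, reach):
    """Weighted interactions per person reached (illustrative only)."""
    raw = (likes * WEIGHTS["like"]
           + comments * WEIGHTS["comment"]
           + shares * WEIGHTS["share"])
    return raw / reach if reach else 0.0

# Two posts with 100 interactions each, but different share counts:
print(engagement_score(likes=90, comments=5, shares=5, reach=1000))
print(engagement_score(likes=50, comments=5, shares=45, reach=1000))
```

Under any weighting of this shape, the share-heavy post scores higher even with identical total interactions, which is the practical meaning of “shares drive virality at a significantly higher rate.”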

As someone who works in social media marketing with Facebook as a primary tool, I found the shallowness of Shirky’s discussion somewhat problematic. Additionally, his concept of user-generated content fails to take into account the phenomenon of YouTube channels like this one, in which Bunny, a camo-wearing Texan, reviews products from infomercials inside her home. Just watch this video for a little taste; it is Shirky’s ‘drivel’ at full volume. And it has over four million views. The simple dismissal of user-generated content as ‘not created for general consumption’ doesn’t really cover the scenario of homegirl Bunny and the enormous viewership of her wubble bubble video. Her content is clearly not created for her friends and family, but for a giant audience who loves something about Bunny enough to watch 15 minutes and 18 seconds of content about the wubble bubble.

I was also intrigued by Shirky’s discussion of Wikipedia, which I found comprehensive and interesting. Taking into consideration the web 2.0 readings, as well as the internet’s elimination of shelf space as a constraint (as discussed in our long tail readings), Wikipedia offers a perfect example of Shirky’s argument that things work best when they become so entrenched as to be invisible. For a print-based encyclopedia, no ‘shelf space,’ in the form of page space, would be given to something like Shirky’s asphalt stub. It would be ridiculous to take up space in a set of volumes designed to be informative and full of knowledge with something as shallow as a stub. Even for an internet encyclopedia based on a methodology derived from print editing, the stub seems absurd. Only in the context of a native web application like Wikipedia could such antiquated concerns simply not apply. O’Reilly writes, “Google, by contrast [with Netscape], began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage.” This seems to me to sum up perfectly the space inhabited by Wikipedia.

To conclude, I will flag one final issue: in Chapter 6 Shirky sets out to discuss collective action. He lauds the ability of the internet to create un-dispersible groups forming to act. Yet he never really discusses the role of the internet ON collaborative action itself, only the act of forming for the sake of action. Perhaps I am setting the criteria for ‘action’ too high; conversation is a form of action that social media and internet applications drive at extreme volume. Perhaps also his case study is at fault, as there are other cases where more action was taken. Perhaps the next few chapters will dig deeper.