What did people measure for the QS assignment?

For our observations, we sent out a survey asking what students collected for the QS (Quantified Self) assignment and compiled the responses into several visualizations:

[Six charts summarizing the survey responses]

Finally, the link to our raw data
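For anyone curious how charts like these might be put together from that raw data, here is a minimal sketch, assuming the responses sit in a CSV with one row per student and a column (here called “measured”, a hypothetical name) recording what each person tracked:

```python
# Minimal sketch: tallying survey responses into a bar chart.
# Assumes a CSV with one row per respondent and a "measured" column
# describing what each student tracked; file and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.read_csv("qs_survey.csv")
counts = responses["measured"].value_counts()

counts.plot(kind="bar")
plt.ylabel("Number of students")
plt.title("What did people measure for the QS assignment?")
plt.tight_layout()
plt.savefig("qs_measurements.png")
```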

 

– Julia, Niall and Charles

Like This Post

The articles we read for today focused on the lack of an overarching consensus structure in today’s digital realms, and on the network structure’s effects on society. “Data Occupations” compared the Quantified Self movement to Occupy Wall Street: in both, each individual gathering their own data or taking their own stance in a common space adds up to a larger group effect, one organized from the bottom up rather than the top down. There are no decisions made about who can say what, in what way, or when, and no hierarchical structures planned out for content to be created within.

Ulises Ali Mejias, in “Computers as Socializing Tools,” echoed this thought in his comments about the system of “tagging” in modern social media. Each person creates their own content and tags it in a manner by which it can be found, but there is no hierarchical structure of agreed-upon tags and tagging conventions from which a user chooses. He writes, “… the digital network does not facilitate all kinds of social behaviors equally, it merely conserves or solidifies those behaviors that can be observed, measured, and quantified.” In this way, users’ behaviors adapt to the format of content sharing, conforming our actions to be computer-readable.

We are encouraged into this format because “The economics of the network are such that a node’s existence depends on its ability to obtain attention from others, to allow its movements to be monitored and its history to be known.” We desire the economic currency of likes, shares, and followers to validate our own existence as a hub node in the network. We try to maximize our economic value by gaming the system, posting content at peak times or tailoring it to target audiences that will guarantee our online actions garner attention. He writes that we need no censorship online because we censor our own content in order to highlight what we want to be seen and to maximize the presence of our digital personas.

Mejias also presents algorithms as allegories for social acts – to friend, to like, to follow. Is our digital society shaping the way our actual society functions? Have the definitions of friend, like, and follow changed because of their digital values? Is this necessarily a bad thing?

– Julia, Niall and Charles

Subjective Maps

Today’s readings and discussion made us think about how information is presented. The readings emphasize the surface-level nature of PowerPoint and how mapmakers must decide which geographic details are important enough to include in any given map. Group 2 gave a good summary of how what is deemed important becomes a “white lie,” and how maps tell incomplete truths in order to achieve their specific purposes. The example of different maps of Davidson College – one geared toward drivers coming into town, one for pedestrians walking around campus, and the least realistic but most visually appealing one – shows how maps created for different purposes tell different stories.

In class discussion, we took this idea further and looked at how the presentation of information can not only tell different stories but also make arguments. Staying with maps, we looked at how different countries (mainly India and China) “fudge” their borders according to which land each claims as part of its country. We then looked at a gerrymandering map to see the effect that redistricting can have on election outcomes. In the first case, governments present national boundaries differently in order to stake land claims, so that their citizens think the land is simply theirs and not disputed. In the second, an argument was made for redistricting reform, particularly for fighting gerrymandering.

In response to the question that Group 2 poses – “Does a universally good map exist?” – we don’t think so. We saw in class that maps are tailored to specific purposes, since it is impossible to include all the information about a location (say, a city) on a two-dimensional, finite sheet of paper or screen. However, there are pros and cons to different types of maps: a more detailed one holds more information but takes longer to understand. Any lies present should be justified as improving the effectiveness of that map’s intentions (for example, the readability of highway maps for commuters). On that note, our preferred map of Davidson would be close to the second map but less detailed, as that would be the most useful for us as students. We mainly walk on this campus, so the pedestrian map appeals to us, but we would prefer one that’s easier to understand at a glance.

The above question also touches on a main theme of this course. Is there an unbiased way to present data? When telling white lies, are data presenters guided mainly by their own internal biases or by the needs of their audience?

An interactive map of Davidson College
Charles, Julia, Niall

Observing Stress

As observers last week, we sent out a survey asking about stress level and workload, as well as how typical that week’s workload was. The results were intuitive but interesting, and are presented below in the form of various graphs.

Raw data here.

[Graphs: key; stress by hours; stress by number of activities; stress by typicality; typicality by hours; typicality by number of activities; self-quantification vs. stress]
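For a rough sense of how graphs like these could be regenerated from the raw data linked above, here is a minimal sketch of one of them (stress by hours); the CSV file name and column names are assumptions rather than our actual spreadsheet headers:

```python
# Rough sketch: plotting self-reported stress against hours of work.
# The CSV file name and column names ("hours", "stress") are assumptions,
# not the actual headers from our survey spreadsheet.
import pandas as pd
import matplotlib.pyplot as plt

survey = pd.read_csv("stress_survey.csv")

plt.scatter(survey["hours"], survey["stress"])
plt.xlabel("Hours of work reported")
plt.ylabel("Self-reported stress level")
plt.title("Stress by hours")
plt.tight_layout()
plt.savefig("stress-by-hours.png")
```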

By Julia, Charles, and Niall.

Who Knew Cat Videos Could Mean so Much

In a world where we are constantly being surveilled – to the point where we don’t even notice it – being reminded of this fact is a jolt of unwanted awareness. In his data project “I Know Where Your Cat Lives,” Owen Mundy uses the location stamps embedded in shared photos of our cats to evoke the uneasy feeling we get when we remember we are being watched. While the surveillance of us tends to take the form of tracking our data – our searches, views, likes, and clicks – the idea of direct video surveillance isn’t forgotten.
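To give a sense of the mechanism Mundy’s project relies on, here is a minimal sketch of how a geotag could be read out of a photo’s embedded EXIF metadata with Pillow; it illustrates the general idea of location stamps on shared photos, not his actual pipeline:

```python
# Illustrative sketch only: reading a GPS geotag from a photo's EXIF metadata
# with Pillow. This shows the general mechanism behind location stamps,
# not Owen Mundy's actual data pipeline.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_photo(path):
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the EXIF GPSInfo tag
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    def to_decimal(dms, ref):
        # dms is (degrees, minutes, seconds) stored as rational numbers
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    lat = to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

# Example: print(gps_from_photo("cat.jpg"))
```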


In the act of surveillance there exists an inherent power struggle between those being watched and those doing the watching. The numerous cat videos uploaded to the internet are a reminder of this power dynamic between the viewer and the actor (or in this case, the cats). Radha O’Meara claims that these cat videos are so appealing because they offer “…viewers two key pleasures: to imagine the possibility of freedom from surveillance, and to experience the power of administering surveillance as unproblematic.” Because the cats do not react to being viewed or filmed by their humans, they are seen as unselfconscious animals, living their lives without regard for their audience. Because of this unperturbed quality, we do not feel guilty about surveilling them, and we get the pleasure of watching a subject who seems unaware of their status as actor.

However, surveillance often moves beyond the unselfconscious subject and enters the realm of self-conscious human actors acting upon each other. For instance, when one person takes embarrassing Snapchat videos of another, that person gains power over their peer, as they now have video evidence that lasts beyond the moment. Better Snapchat than saved to iCloud, the peer might think, waiting for the 24-hour story to expire.

This surveillance can also help us shift the power dynamic in a seemingly powerless situation. For instance, someone being pulled over or confronted by an officer can start recording the interaction on their phone as evidence of the event. This gives them a feeling of power: their voice will not be lost, and their word will be backed up with ‘hard’ data. But how much power does being a recorder ultimately give? How much does the opinion of those who view the recording matter?

– Julia, Niall and Charles

 

 

TMI: Why We Don’t Focus on the Biases in Data Media

Group Two posed the question, “Can we be given access to too much information?” As we discussed in class, even “too much information” may not be enough. As in the archives of Thomas Jefferson’s writings, there exist ghosts in the data: knowledge and narratives that can only be detected by searching for their absence.

We briefly discussed the issue of quality versus quantity of data; while a data set can be so large as to seem exhaustive, it may still contain biases and underlying assumptions embedded in the method of data collection. Additionally, data still needs to be interpreted in order to be visualized for a human audience. As we discovered in class on Friday, and through Lauren Klein’s analysis of The Papers of Thomas Jefferson, interpreting and visualizing data requires excluding some things, highlighting others, and placing it all within a narrative that the interpreter constructs, knowingly or unknowingly, based on their own agenda and biases. One must uncover the biases in the datasets, and those of the interpreters, to get to the “truthiness” of the evidence contained within.

As mentioned in Group Two’s post, the media sometimes blurs the truth. While the media does sometimes create inaccurate information intentionally, we should also consider the possibility that there is simply too much information, and that outlets must select what to present and discuss. This abundance can accidentally produce misunderstanding or vagueness, because at some point there is too much information to be conveniently processed and presented.

However, the media doesn’t expect us to linger long on any one piece of inaccurate news; as Nicholas Carr notes in his article “Is Google Making Us Stupid?”, the internet is shaping our attention spans toward quick thoughts and continual movement from link to link. “The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well.” Carr highlights how it is in a company’s own interest to keep consumers moving along the internet in order to collect information about their preferences and serve personalized (and even predictive) advertisements.

It is under this big-data-fueled mode of web browsing that we, as consumers of the internet, fail to pay much attention to the quality or quantity of data packaged and presented to us, or taken from us. We must ask, then: is this invasion of data privacy a cost we are willing to pay for free internet browsing, better-tailored ads, and a streamlined, capitalistic internet experience?

– Julia, Niall, and Charles