Questions for Data [...]

Audrey Watters poses a series of questions to consider when thinking about data:

Is this meaningful data? Are “test scores” or “grades” meaningful units of measurement, for example? What can we truly know based on this data? Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate? What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “learning”. How much is science, and how much is marketing?

Whose data is this? Who owns it? Who controls it? Who gets to see it? Is this data shared or sold? Is there informed consent? Are people being compelled to surrender their data? Are people being profiled based on this data? Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals? Who designs the algorithms? What sorts of biases do these algorithms encode?

How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others? Who decides what behaviors constitute “a good student” or “a good teacher” or “a good education”? (Source)

Continuing this conversation, Jim Groom suggests that the key question is:

The real kicker is, how do we get anyone to not only acknowledge this process of extraction and monetization (because I think folks have), but to actually feel empowered enough to even care? (Source)

Speaking about assemblages, Ian Guest posits that:

When data is viewed in different ways, with different machines, different knowledge may be produced. (Source)

Benjamin Doxtdater makes the link between power and data:

The operation of power continues to evolve when Fitbits and Facebook track our data points, much like a schoolmaster tracks our attendance and grades. (Source)

Kin Lane offers a cautionary tale about privacy and security violations via APIs, suggesting:

Make sure we are asking the hard questions about the security and privacy of data and content we are running through machine learning APIs. Make sure we are thinking deeply about what data and content sets we are running through the machine learning APIs, and reducing any unnecessary exposure of personal data, content, and media. (Source)

Emily Talmage questions the intent behind the platform economy and the desire for correlations that detach values from the human face:

For whatever reason – maybe because they are too far away from actual children – investors and their policy-makers don’t seem to see the wickedness of reducing a human child in all his wonder and complexity to a matrix of skills, each rated 1, 2, 3 or 4. (Source)
