Another important point: more broadly considered, we make choices, we have responsibilities, we have values. All three have multiple dimensions that we (should) consider in our actions.
True, and important. The argument to pursue (as with the breast pump) is maybe the question of how institutions and elite men find, use and abuse their power?
What is made invisible, what is veiled, and what we choose not to see are what undermine claims to objectivity. The last point, which involves (self-)reflection, is equally important.
invisible like the royal “we” for the queen? or institutionalised as, for example, a corporation?
Valuable insight, yet the _misinformed_ policy that arises from a lack of data, or from uncritically considered data, is, from experience, potentially far worse
there are reasons to doubt this is true: https://www.kdnuggets.com/2014/05/target-predict-teen-pregnancy-inside-story.html
Works by Indigenous Data Sovereignty movements could be useful here too. But maybe you have included cites about this in another chapter?
https://fnigc.ca/sites/default/files/docs/indigenous_data_sovereignty_toward_an_agenda_11_2016.pdf
https://nnigovernance.arizona.edu/good-data-practices-indigenous-data-sovereignty?fbclid=IwAR3FVotCxC34JUQPnlTKbNxYcaaFkt5KKD_l0S1ZG9XW31G07alD5O5ubAQ
The work of Helena Suarez Val can be useful here too. She analyses the tracking of feminicides in Uruguay and around Latin America in general. https://warwick.ac.uk/fac/cross_fac/cim/people/helena-suarez-val/, https://sites.google.com/view/feminicidiouruguay
to be fair, the image itself didn’t encode sexism into the field - the image’s choice and persistence was a manifestation of the sexism of those involved.
perhaps i missed a clear definition of how you’re using the word “bias.” This sentence implies that it can be fixed “before” but not after. Depending on the use it either cannot be fixed at all (because it’s a philosophical problem) or - as I would argue with biased algs - it can be fixed after the fact with a complete technical restructuring or reevaluation.
there’s some sexism coded here - the user doesn’t seem to want to be compared to a woman either. Worth calling out the gender implications of this face match (even in a footnote)
technically, no codebase is bug-free, though I do understand the point. Perhaps something like “her code was written to the library’s specifications, yet…”
also LGBTQIA2S+ folks
we don’t yet know what “reflexive design” would entail. I’d love to see a clearer - and more pointed - word here. The Allegheny algorithm is racist.
I would encourage including a section at the end of this chapter (and every chapter!) that bullet points some concrete ways of enacting the chapter title “bringing back the bodies”. Such a list would not need to claim to be comprehensive but could leave the reader with tangible starting points to move from theory to better data science practices!
Want to leave this comment somewhere: I would love to see more of Mimi and Mother Cyborg’s “A People’s Guide to AI” incorporated! They discuss this so well! https://www.alliedmedia.org/peoples-ai
“that we give it”? I think current wording reinforces a false and fear-based narrative that current AI systems literally search the web for anything they can find about you, and while that could become more common in the future the public deserves to know the current reality, un-exaggerated.
One of the more formidable, but elusive elements of your premise is conditioning recipients to more astutely filter and evaluate information and its sources so that tightly focused data is put to better use. It is part of the vast “rebalancing” of power, privilege, systems, structures and other factors you so doggedly pursue. You deal with many critical concepts in this book that have broad implications - which can be both a blessing and a curse. Off to a great start.
I am not sure I understand this table or the division between real and imagined objectivity. Also as someone who writes about feminist ethics I am troubled to see that a preoccupation with ethics is considered un-real and uncontextual or individual-focused. I generally don’t get this table, sorry I won’t join in the enthusiasm of other commentators!
Thanks for voicing your concerns, Aristea. You’re right that feminist ethics is an important conversation, and we’ll be sure to acknowledge that in our expansion of this table as we revise.
Could expand on what Benjamin adds to the debate about imaginaries of big data?
It is not entirely clear to me how/why you get back to the argument about the necessity of data feminism - and whether the book will recount instances of data feminism at a later stage. Also, I would like to see some further clarification as to how the examples provided above are about bodies rather than well, structural inequality.
I find the jump from maternal health to women who are dying quite abrupt - surely there are many nuances and aspects in maternal health and illness.
These are great examples. I wonder if it would be useful to signal the US focus?
Great examples.
Readers learning about future chapters usually get at least a paragraph per chapter as a preview of the work. I realize that you want to challenge academic prose style (and its long-windedness), but you might want to show how your themes represent both related and contrasting strains of thought.
Todd Presner’s early work on aerial perspective and GIS might be useful here, since he places it in a European intellectual heritage.
Would love to include Jacque’s work here!
Jacque Wernimont’s work on life counts and death counts seems obviously important for this chapter, which joins them together in the concept of “maternal mortality”
Because there are a few projects about missing or murdered indigenous women, it might be useful to call out how different actors respond to a perceived lack in data reporting.
Because you are calling out another case of metadata activism with #MeToo (as opposed to the dataviz activism that seems to be the assumed subject of this book), it looks like you need to say something about “raw data” (and its mythologies) sooner in the text.
Like others, I am not sure if the rhetorical address is calling too much attention to itself. Maybe you want to think about emphasizing questions about your potential audience (data scientists AND feminists) more in your introduction.
I wonder if more could be said about the difference between the mothering stats and the sports stats, since the baby book stats are obviously feminized and sports stats are obviously masculinized, as a way to make connections in this chapter. Perhaps look at Jill Walker Rettberg on mother apps in _Seeing Ourselves Through Technology_?
Good point! We’ll see if we can work in an additional analysis along these lines.
maybe “model outputs”? These are different kinds of bias, though: interpersonal bias, sampling bias, and biased estimation (which may or may not matter, but there’s also unequal distribution of errors, which can be called a bias but doesn’t have a formal label). You somewhat go through those in the remainder of the paragraph, since the lack of clarity and multiplicity is your entire point, but also add in institutional bias, which is missing in this sentence. Maybe make this multiplicity more explicit?
See also Keiran Healy, “The Performativity of Networks.” Network models create the reality they purport to describe. I try to do an empirical version of this critique in my own work, https://www.mominmalik.com/malik_chapter2.pdf
See also “Shirley” from “Shirley cards”: https://www.cjc-online.ca/index.php/journal/article/view/2196, which I’ve linked to facial recognition in presentations but I’m sure somebody has done systematically. See also Vox’s video on it, https://www.youtube.com/watch?v=d16LNHIEJzs.
Great point! There’s also a long history here, I think it’s Elizabeth Yale who works on automation being intimately connected to desire to replace lower classes who could rise up since the middle ages, and fears of the lower class being projected onto automation taking over.
While you can’t include my hearsay in the book, I can confirm this is a problem with all the attempted technical solutions to fairness. They all preserve fairness as an aspect of the data as it is; none of them has the ability to incorporate historical inequity and injustice (understandable, since it would have to be quantified to include in a model, but it reveals the limitations of technical fixes).
Ben has a paper with Lily Hu about this as well, I believe, who also has fantastic work on this topic. She has a paper, I don’t think out quite yet, critiquing the counterfactual framework for fairness (i.e., “if this person were white, would they have received the same score?” as though we could “toggle” race as separate from everything else).
This could be a few things, but I think “mathematical formalism” or “modeling the world” would be better in this cell than algorithms
Great suggestions!
Maybe use the example of the reaction to Sonia Sotomayor’s “wise Latina” comment?
I like this suggestion!
Did that come from Haraway or Thomas Nagel? It sounds like you’re attributing it to Haraway, and if it did indeed come from Nagel and Haraway is critiquing it or using it as a critique, this phrasing is misleading. Maybe even “Haraway critiques this so-called ‘view from nowhere.’”
Couple of things here.
First, pet peeve, I don’t like how people in machine learning casually conflate “algorithm” and “model.” It’s common usage to say “train an algorithm,” but I would say that it’s an algorithm that trains a model. Anyway, until I publish on this you can only go with common usage, but this is an issue for you below when you say “algorithmic model.” That brings in confusion. Are these algorithms? Models? Data? Are there algorithms that are not (statistical) models? (Yes, the vast majority of algorithms.) Are there models that are not algorithms? (Yes, models can be abstracted away from implementing algorithms, although to be used they need some implementation.) Are there data that are not models? (Debatable, but all data is theory-laden, as in the “experimenter’s regress.” Harry Collins, 1981, “Son of Seven Sexes: The Social Destruction of a Physical Phenomenon.”) Having some breakdown/explanation of terms might help.
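To make the distinction I’m drawing concrete, here’s a minimal sketch (the function, data, and numbers are all mine, invented for illustration, not from the manuscript): the *algorithm* is the training procedure; the *model* is the artifact it produces.

```python
# An *algorithm* (here, gradient descent) is a procedure;
# a *model* (here, the learned weight and bias) is its output.

def train(xs, ys, lr=0.05, steps=2000):
    """Gradient-descent ALGORITHM: fits a 1-D linear MODEL y = w*x + b."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of the mean squared error
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b  # the model: two numbers, separable from the procedure

# The same model could have been produced by a different algorithm
# (e.g., the closed-form least-squares solution), which is exactly
# why conflating "algorithm" and "model" muddies the discussion.
w, b = train([0, 1, 2, 3], [1, 3, 5, 7])  # data generated from y = 2x + 1
print(w, b)
```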
Second, the anecdote is compelling, but how many people were similarly targeted who didn’t turn out to be pregnant? How many people were predicted to be pregnant, but who had purchased pregnancy kits 9 months prior? (Which is a strong signal) Out of people *without* such strong, obvious signals, how many identifications were accurate?
Third, language. “Infer” is a technical term in statistics, and refers to “statistical significance.” Specifically, statistics posits a hypothetical underlying “truth” that produces data with some noise, designs functions (“estimators”) that can take in data and reverse-engineer properties of that underlying “truth”, which is called “estimation”. Inference is about whether the signal we get from data is strong enough to make conclusions about that underlying “truth.”
”Inference” is sometimes used in machine learning in a more colloquial way, and used to describe exactly the tasks that are statistical “estimation.”
I would recommend: “detect” pregnant customers. Later down: “when analyzed together, can detect whether or not a customer is pregnant, and if so, give a prediction of when they are due to give birth.“ Instead of “pregnancy prediction algorithm”, perhaps “pregnancy model”.
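A toy illustration of the estimation/inference split I described above (data and the lotion framing are invented for illustration, not drawn from the Target story itself):

```python
import math
import statistics

# Toy data: did a sample of customers buy more unscented lotion than a
# baseline of 2.0 units? (all numbers invented for illustration)
sample = [2.4, 3.1, 2.8, 2.2, 3.5, 2.9, 3.0, 2.6]
baseline = 2.0

# ESTIMATION: an estimator (here, the sample mean) reverse-engineers a
# property of the hypothetical underlying "truth".
estimate = statistics.mean(sample)

# INFERENCE: is the signal strong enough to conclude that the underlying
# truth differs from the baseline? One-sample t-statistic:
s = statistics.stdev(sample)
n = len(sample)
t = (estimate - baseline) / (s / math.sqrt(n))

# |t| > ~2.36 (critical value for 7 degrees of freedom at alpha = 0.05)
# is the conventional "statistically significant" threshold here.
print(f"estimate={estimate:.2f}, t={t:.2f}, significant={abs(t) > 2.36}")
```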
I suggest explicitly giving examples of listening to and reading narratives, forming coalitions, and consuming theory produced by marginalized people is an alternative way to have reliable, “true” knowledge about experiences outside of our own that does not reduce to needing data.
Same point; this is a great descriptive point, but what is the prescription? Accept this state of affairs, and gather “counter-data”? (To take a term from Morgan Currie, Britt S. Paris, Irene Pasquetto and Jennifer Pierre, “The conundrum of police officer-involved homicides: Counter-data in Los Angeles County”, Big Data & Society, https://dx.doi.org/10.1177/2053951716663566) I would say the strategy should be a mix. For police homicides, one would be to believe black people when they say the police are a force of terrorism in their communities, and have been since police forces first formed. Maybe forming coalitions will require “proving” racism to non-black people with data, but as data feminists we should recognize that this may be necessary and yet still wrong.
Chemist turned philosopher of science Michael Polanyi is quite influential in history of science and in STS. He wrote this in _The Tacit Dimension_ (1966):
“The declared aim of modern science is to establish a strictly detached, objective knowledge. Any falling short of this ideal is accepted only as a temporary imperfection, which we must aim at eliminating. But suppose that tacit thought forms an indispensable part of all knowledge, then the ideal of eliminating all personal elements of knowledge would, in effect, aim at the destruction of all knowledge. The ideal of exact science would turn out to be fundamentally misleading and possibly a source of devastating fallacies.”
This remains an important critique of the view that science should (or even can) be “objective.” Lorraine Daston and Peter Galison, on the other hand, have a whole book about how objectivity came to mean what it does.
Recalling the Toni Morrison quote, and “your demand for statistical proof is racist” in my comment for the Introduction: the black women who reached out to Williams know what happens to them. But Williams still had to cite a statistic to be credible. We can acknowledge that this is the state of things, but still critique it. Maybe data is power, and we should try to share this power, but also, maybe data should not be power. This also applies to something you write further down, “In the present moment, in which the most powerful form of evidence is data--a fact we may find troubling, but is increasingly true”, but must it be true?
Good point. This is an issue we’re hoping to address with more nuance in the revision.
Is this the right word?
agreed. Does ‘it’ refer to data or the “god”?
+1. Extracting market value from bodies?
This is another permutation of the strange metaphor. Are the bodies extracted or is the data extracted? I’m not sure that either one is right…
I wonder if there is a missed opportunity here to address all the metaphors around data and how they mislead us from learning about where data come from?
We thought we addressed this here and in the intro, but perhaps we’ll need to make it more explicit, since it sounds like it’s not coming through to all readers.
Mixed metaphors here. I don’t specifically object to the use of the term “bodies,” which I know has a history in gender and race studies, but let’s talk about the specifics of how bodies are datafied, rather than using the terminology of industry (i.e. mining, extraction and so forth).
Datafication is also a term that has been developed in response to industry. We hope that our implicit critique of these terms—and datafication, too, which we discuss in the intro— comes through.
Perhaps I’m overthinking this sentence, but it seems to imply that bodies inherently hold information that is waiting to be extracted. This obscures processes of datafication and how they torque bodies to conform to preexisting categories. See Sorting Things Out for more on these processes.
Agree about the relevance of Bowker and Star’s book, which I think is incredibly relevant to modeling and which I don’t see connected to it often enough, although I’m not sure if this is the right place to bring it in.
I wouldn’t make this argument as yet because the example is mainly about race.
Hm. I do see this example as one of class as well, so it seems like we’ll need to clarify that a bit more.
Would it be possible to compare with or indicate what happens elsewhere in the world (non-US)?
I am wondering about the links provided in the script - I am sure you have thought about this carefully - how is this going to work with the printed copy? Are these going to appear as footnotes?
The links will appear in the ebook version but not in the print version. We will have more substantial footnotes in the final version, though.
I might consider removing this sentence. It’s likely that they did have to agree at some point. It would have been in a totally terrible manner, without truly informed consent - but they probably agreed somewhere. And leaving this loophole weakens the entire argument. For example, in the EU now people definitely have to agree - but the first two points are still valid in the EU. I’m not saying that this third sentence is invalid, but I think it weakens the argument.
I could use some more elaboration on this. Christine Borgman argues that all “alleged evidence” is data. How are you characterizing data here: as a specific kind of evidence?
To further Shannon’s point, it is not so much that people and bodies are missing. We just don’t explicitly acknowledge their importance to data practices: what kinds of bodies are creating contemporary data systems and how are those systems handling —categorizing, classifying, and otherwise “torquing” (see Star and Bowker) bodies?
Thanks, This seems to be a sticking point for a lot of people. We’ll need to be more explicit about how we are using the term “bodies” in a conceptual sense.
I think it might be worth considering, as well, how “challenging” data can help change the world. Data will always be collected unevenly, with more data being created and processed by institutions in power. Can you also give readers the conceptual tools to question the dominance of data collection over other ways of knowing?
to me this chapter is primarily about making the case that data feminism is not some kind of abstract theory, but that it is practically concerned with power and privilege impacting the bodily experience of many people in the physical/social/cultural world - so as concrete as it can be, in my view. while i found the emphasis on the term ‘bodies’ a bit problematic at times, i can see the value of making this point early on in the book to establish that data and algorithms do not belong to some kind of disembodied domain, but that the politics of our data (analysis) have concrete consequences. in this vein, i would recommend sharpening the chapter a bit more on this theme. maybe Haraway’s god trick and Chun’s work on homophily could be mentioned in a later chapter? (though they really need to be in the book)
maybe it’s a personal bias, but i am not a big fan of extended footnotes that serve as a secondary argument, when it’s often actually more important. to me these references and their contributions could/should be woven into the main thread.
to me this paragraph does not flow too well after the intricate recognition of intersecting differentials… maybe close previous paragraph with acknowledgement that these differentials also pervade data practices?
to me this reads as if this subsumption was deliberate. my impression - in good faith - is that Burke’s marginalization was exclusively structural in that white feminists have a privileged, more powerful platform
While I take your point, I’m not sure that Burke’s marginalization (or any) could be said to be “exclusively structural.” Certainly the nineteenth century (which I’ve studied in detail) has many examples of white women explicitly excluding black women from their organizing projects. I’m not sure of the specific actors involved here, but I’d be hesitant not to suggest a possibility that personal politics were also at play.
i am not sure whether your argument really needs this/such interjection. i assume that you might have many readers who are familiar with data practices that involve the quantification of bodies: from historic examples in phrenology and criminology to more contemporary cases in medicine, quantified self, surveillance, etc…
i found this recent study that looked at types of complications as well as the factor of race - may be worth including: https://www.hcup-us.ahrq.gov/reports/statbriefs/sb243-Severe-Maternal-Morbidity-Delivery-Trends-Disparities.jsp
👏🏾👏🏾👏🏾
Aren’t you saying earlier in this chapter though that ‘objectivity’ is impossible? As in - isn’t it, ‘if we truly care about accuracy’ or something else, then ‘we must pay close attention…’ ?
Agreed. I think you should say, what is objectivity supposed to achieve? And then use those target values, rather than objectivity itself. I’m sure there’s tons of literature about this, I only know critiques of objectivity as a coherent goal in science.
As per my comment above - I feel like this is a little overly-simplified when it comes to the complexity of the problems described here.
Agreed. I wrote a paper on this precise topic and technology that makes clear the problem is the technology, not its representativeness https://ironholds.org/resources/papers/agr_paper.pdf
This feels like a slightly over-simplified two-step solution - to my understanding, even if you solve both of these issues, one huge problem (which some think of as a good thing!) when it comes to getting training data for facial recognition systems is that there’s not enough high-quality data of non-Caucasian faces to use for training data. This historical issue can’t be solved with just more diverse system designers – it’s a bigger systemic issue that is more about coverage/spread of digital technologies IMHO.
Agreed. Moreover, I’m not sure it is useful to think about this as a global problem. Facial recognition algorithms, like all algorithms, are built and used in contexts that matter. In other words, they are deemed to fail or succeed in highly localized ways that change over time. Buolamwini’s story, set at an elite technical university in the United States during a period of high (public) racial tension, seems like good evidence of that. Can we see this as a story about the particular ways in which algorithms are enrolled in the way we think about our bodies, rather than just a problem of objective correctness on a global scale?
Hypothetically – would it have been any better if Target had carried out focus groups/done co-design with teenagers + parents, but still ultimately with the same ‘originating charge’? (ie. isn’t the problem here that they started with an inappropriate design question, rather than that they weren’t collaborative at the design table?)
This seems like a very capitalist approach to understanding the value of data! I think it’s worth mentioning this understanding of it - but worth also critiquing it a little?
I was surprised not to see what seems like a natural extension: talking about how exploitative extraction of nonrenewable resources has caused a global catastrophe.
What do you mean by data ‘at the highest levels’? If it’s storing huge amounts of data, it might be worth stating that explicitly (though it’s worth noting that the financial cost of data storage continues to drop) - or is it collecting data from vast portions of the population?
in case you want any - more examples/case studies here http://civicus.org/thedatashift/learning-zone-2/case-studies/
a new approach to “working with data” ?
i feel like data scientists who have trained in statistical/scientific methods would be (i hope!) among the first to recognise statistical biases – whereas people who know less about data, might say things like this but perhaps more about ‘data’ rather than ‘data science’. Is calling it ‘data science’ here potentially putting non-data-scientists off from understanding that these points apply more broadly, too?
A tangential comment on this to say that data scientists may, in fact, not recognize or maybe may not care about bias. I have been impressed with a vast literature of statisticians being thoughtful and reflective about insurmountable, fundamental limitations in statistics (although not to the point of being critical, or losing faith in statistics). Machine learning, from which data science comes out more immediately (from what I observe), is re-discovering some of these critiques, but slowly. In machine learning (and even modern statistics), we are told, “classical statistics cared about unbiased estimators. But we now recognize that sometimes, biased estimators can predict better.” The “bias” here is a technical word, and means “inaccuracy” more than anything like institutional or interpersonal bias (although inaccuracies have implications for bias when it comes to demographic study). The point being, being “unbiased” is already dismissed in technical terms towards an instrumentalist goal (I can give literature that expands on this, this is a pretty important point but understudied and under-acknowledged), and I feel like this affects how people think about other sources of bias as well.
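To make the “biased estimators can predict better” point concrete, here is a small simulation (all numbers invented for illustration): a shrinkage estimator that deliberately pulls the sample mean toward zero is biased in the technical sense, yet beats the unbiased sample mean on average squared error.

```python
import random
import statistics

# Simulation: a "biased" shrinkage estimator of a mean can beat the
# unbiased sample mean on mean squared error. All numbers invented.
random.seed(0)
true_mean, noise_sd, n, trials = 0.5, 2.0, 5, 20_000
shrink = 0.5  # pull the sample mean halfway toward zero -> biased

err_unbiased, err_shrunk = [], []
for _ in range(trials):
    sample = [random.gauss(true_mean, noise_sd) for _ in range(n)]
    xbar = statistics.mean(sample)  # unbiased estimator
    shrunk = shrink * xbar          # biased, but lower variance
    err_unbiased.append((xbar - true_mean) ** 2)
    err_shrunk.append((shrunk - true_mean) ** 2)

mse_u = statistics.mean(err_unbiased)
mse_s = statistics.mean(err_shrunk)
print(f"MSE unbiased={mse_u:.3f}, MSE shrunk={mse_s:.3f}")
```

The technical “bias” traded away here is exactly the inaccuracy sense I mean above, which is a very different thing from institutional or interpersonal bias.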
I think there is something to be said about the inequities in data literacy as well. People are constantly pressured to accept jargon-filled terms and conditions related to data privacy and tech use. Most people are actively giving away lots of data without any understanding of the implications—but also without much of a real choice, in order to use most ubiquitous technologies.
Yes- I was thinking of this example earlier. I’m glad you use it in this book!
A related issue here seems to be increased surveillance and lack of data privacy/protection which puts marginalized groups at risk.
I like how you already illustrated this point with Christine Darden’s story in the introduction.
The time gap between when Burke originally coined the #metoo phrase and when it was commandeered for wider use and popularized by white feminists is significant here. There’s increased name recognition of her contributions, but the ten year gap between the founding of her movement and nonprofit and the launch of #metoo is worthy of note. https://www.nytimes.com/2017/10/20/us/me-too-movement-tarana-burke.html
Glad to know about this phrase. We discuss another project of Nafus’s, the Atlas of Caregiving, in the Labor chapter.
I found one article that addresses a very similar concern in terms of “undone science,” which might be referenced here, though it deals not with racial, gender, or feminist issues but with general inequality issues.
Nafus, Dawn. "Exploration or Algorithm? The Undone Science Before the Algorithms." Cultural Anthropology 33, no. 3 (2018): 368–374. https://doi.org/10.14506/ca33.3.03
“In these projects, there was a good deal of undone science that got done precisely because the goal was not specifically to end in algorithm design. Clues about the sources of stress or illness were surfaced. In science and technology studies, the concept of undone science points to the choices made about which research questions are asked and which go underinvestigated, such as the many unasked questions in environmental health (Frickel et al. 2010). Even if the new knowledge we were creating was social and cultural, not necessarily scientific, this notion encourages us to think about how critique can take the form of knowledge production that opens up or elaborates a particular line of inquiry, rather than simply identifying problems with current technical systems. Both these projects pointed to undone science that needed doing. They surfaced alternative lines of inquiry by appropriating datasets that were originally designed to fit very different categories and by giving the subjects of that data the opportunity to reframe and reconsider its meaning.”
Gregory Piatetsky is skeptical of this claim: https://www.kdnuggets.com/2014/05/target-predict-teen-pregnancy-inside-story.html
Femicides are specifically killings of women or girls because they are female/on account of their gender.
In contrast, ‘gender-related killings’ is a much broader category that could include someone who was killed because they were a man, or transgender or non-binary.
Oops! Thanks for catching that!
In this screenshot, the Facebook chat bar in the bottom-right corner obscures some of the text of Serena’s message.
This list is fascinating! And I hope that the screen capture date is noted in the image caption. As the strikethrough at the first item shows, this list is not fixed, but keeps being updated.
As with the comment on “you” above, I think “we” as a group needs to be used more specifically. Whose work is “our work”? I don’t think it means the two authors’ work. Then does it mean society at large, including data scientists? And the usage of “one ‘we’ call data feminism” adds more confusion in identifying who “we” are.
Good point. We need to clarify both the times when we use “we” to refer to Lauren and Catherine, and also when we use “we” to refer to the field of data science, which we’re hoping to carry along with us in this journey.
I think the authors need to make explicit what they have in mind when saying “data science.” It is a somewhat vague term, so depending on the definition, it can refer to a broad, encompassing enterprise or to a specific set of fields. I think the authors might mean the former. Without a proper definition, it risks reifying and objectifying “the” data science. And in the Introduction I found the authors already gave an example of data visualizations, which “the data science underlies.”
Agree. See my comment on the Introduction about defining data science.
The sentence structure of “Because (…), but (…)” is confusing. I think authors might mean “Even though (…), (…).”
I find it difficult to imagine what you are trying to describe. How can data science rely upon bodies (or body data) “to make decisions about data”?
It could be helpful to name a few concepts related to biases in data and research.
I think it would help make your point more precise if some stereotypical examples, ones which render data science seemingly irrelevant to bodies, were presented after this sentence.
Is there any bibliographic information for this (perhaps a book)?
+1
Also the footnote mentioning “Now let’s multiply” needs bibliographic info.
A nice counterpoint to work in: https://www.smithsonianmag.com/history/remembering-howard-university-librarian-who-decolonized-way-books-were-catalogued-180970890/
Perhaps acknowledge Genevieve Yue, too, who published a concurrent piece on the “China Girl": https://www.academia.edu/15365886/China_Girls_on_the_Margins_of_Film
Yes!
Some readers might imagine exceptions, like weather data: a weather map registers environmental forces, not people! But even weather data is, of course, harvested via instruments *designed by people*, modeled using software *designed by people*, impacted by climatic forces transformed by humankind.
Good point. We should elaborate this a bit more.
Maybe, to complete the shipping map example, you could close by telling us what bodies we *could* be seeing if this map were rendered at different scales: dockworkers, pilots, ship officers and engineers, truck drivers, gas station attendants, diner waitstaff, etc.
Yes! We actually do this in the “Show Your Work” chapter, but I think you’re right about the point you make elsewhere that we should not repeat examples in the book, even if we elaborate them later.
Great, punchy sentence :)
This is a big concern in mapping, too. When does rendering something visible also render it vulnerable? This Twitter thread offers a great list of examples: https://twitter.com/shannonmattern/status/1052731087317815296 Given that cartographers and their partners in environmental resource management, endangered species advocacy, archaeology, etc. have *long* been thinking about the “paradox of exposure,” perhaps their work is worth an endnote?
For sure. We intend to amp up our references to critical cartography, and our references in general, in the final version.
The phrasing here is a bit awkward. Perhaps instead: “…within a year of death, we’d need a researcher who was already interested in racial disparities in healthcare, who could then combine those data with data collected on race, to reveal the “three times more likely” stat that Williams cited in her Facebook post.” ?
It also seems that, everywhere, maps of feminicides tend to come from private citizens. There is an issue with the collection of data inside justice departments, and with patriarchal visions of what a feminicide is.
Really glad to see that you’re incorporating examples from the art and design worlds!
We love Mimi’s work, and it was a major source of inspiration for us.
Again, I would say that there are notable exceptions, including data-driven medicine and anything that employs biometrics (e.g., customs and immigration). Datafied bodies seem to be front and center in these fields — in practice, in public perception and media representations of that practice, etc.
Could you expand on your reasoning behind data-driven medicine being an exception?
I’d say precision medicine and other biometric applications are key exceptions here.
Thanks for calling this out. Our point here wasn’t that data science doesn’t operate on human bodies, or affect them, but rather that data and data science don’t always consider the whole person (and context) behind the data — even in applications like health (where medical data derives mostly from white men) and certainly in biometrics. A helpful clarification to make as we elaborate what we mean by “bodies” in this chapter.
I found this very powerful.
I’d love to see an example a couple of orders of magnitude higher here. I think this under-represents the financial investment in data collection and analysis by companies like Facebook. And thinking about this example in particular, one might think that only a fraction of the costs of the data center are related to collection and analysis.
Good point. Now I’m wondering if the stats about the “big five” being worth more than the (pre-Brexit) UK GDP made it into this draft anywhere.
Missing a period. (Also, I have another organization for your list 😉)
In addition to considering naming more organizations, it might be relevant to mention the civic tech field and groups like Code for America.
The chapter as a whole is really rich, and the language refreshingly accessible. As you revise, there are a few fundamental elements that should be better articulated: handling the heterogeneity of examples; the stakes of “bringing back bodies;” and the simultaneity of the promise and peril of visibility.
First, the heterogeneity. There is a disconcerting flattening, in the chapter, between African American women’s life-threatening experiences during childbirth and Asian American Twitter gripes about getting lousy matches on a game. Not all racisms are equally deadly. This isn’t to suggest that you should cut any of the examples, but you should treat the differences among them with more care. Especially since you specifically credit Black feminism as an inspiration, it’s worth thinking through how anti-Black racism in particular is foundational to the United States, and that centring Black women has a distinctive value for intersectional feminisms.
Second, the bodies. As you surely already know, bodies are not simply givens about which data can be straight-forwardly extracted. For example, for Haraway, the location that a view from somewhere comes from is not just physical but also social and historical. Knowledge-makers learn how to see with the assistance of technologies ranging from microscopes to taxonomical categories, and so situated knowledges are products of minds as well as eyeballs. If you are committed to the idea that bodies are themselves at stake, you need to make clear what paying attention to bodies gives you that, say, listening to Black women doesn’t.
Third, the promise and perils of data. The beginning of the chapter seems to operate on the assumption that more data is a good thing, but later in the chapter, we learn that being overexposed to surveillance is also a problem. How might you hold that tension throughout the chapter, rather than treating the elements in turn?
Thanks so much for these broad comments, Anne, which I’m just seeing now. I appreciate each of the issues you raise, and they’re ones we’ll take to heart (and mind and typing hands) as we revise.
As of now, the chapter doesn’t really come full circle. Might either de-emphasize Serena Williams’ birth story at the opening or come back to it at the end?
Agree.
:) Here’s Ruha! Might still cite her other work above, but glad to see it here.
This wording leaves it ambiguous whether Haraway herself argues that the view from nowhere is always a view from somewhere or whether that’s your addition - reword to make clear that it’s the former.
Noted.
I would caution against naturalizing this assumption. It is perfectly possible for someone of one race to resemble someone of another race. The decision to sort faces into races, in art as in life, is social and historical, not simply physical.
Good point. We should say something more like: “But *some* Asian users of the app…”
Would recommend wording with more care - you are painting with a pretty broad brush here.
Yes. Point taken.
I don’t understand why this is bodies at the table rather than people. To cite an aphorism that Ruha Benjamin is fond of quoting, if you aren’t at the table, you are on the table. (Speaking of which, #citeblackwomen, including Ruha Benjamin if you haven’t already. Including the question she highlights: "Why am I in such demand as a research subject, when no one wants me as a patient?”)
I think I have a similar question. While I do realize that struggles over inequality and exertions of power do often play out on individual, physical bodies, I also wonder if bodies are the right “unit” for this chapter. The term “bodies” seems to suggest that non-compliant data subjects are most powerfully represented as their phenotypical, empirically observable selves. What about subjectivities, interiorities, personal histories? What alternative term might capture the totality of the human subject — encompassing both the body and all the ineffable stuff that shapes it?
Or, maybe you simply need to provide a capacious definition of “bodies” early in the chapter, to explain that you’re not talking only about the corporeal.
Do we know this? Did they boycott Target or something?
Why people and their bodies? Is there a separation meant to be implied?
Agreed. This chapter is terrific. But I find the use of bodies rather than people confusing throughout.
Might nuance this with reference to Steve Epstein’s inclusion and difference paradigm
The expression "not all bodies are represented in those decisions" seems unqualified. Rather, it makes sense that not *all* seven billion bodies can be represented under the time and budgetary constraints of data practices. I think “not all kinds of bodies are represented” or “a homogeneous kind of body is represented” might sound more realistic.
The second person is tricky. I would never say this. Who, exactly, is your imagined “you”?
Good point. We’ve been discussing this throughout the writing process, and will likely add something about our use of the second person in the intro.
I’d welcome other thoughts about this issue from others as well. On the one hand, I like the informality of the direct address. On the other hand, it can’t help but imply certain things about our imagined audience.
This is surely a fundamental issue discussed throughout the book as a whole, but I’ll flag here that I am sceptical about the implied cause-and-effect. We have lots of data about racial health disparities — how can we justify hope that more data would help to ameliorate them?
Agreed. I think it’s important to acknowledge here, especially in the intro, and throughout the book, that data practices are only one element in an assemblage of services, infrastructures, etc., that, via their *own* intersectional interactions, have the power to exacerbate or mitigate inequities. Perhaps you could frame data practices as worthy of singling out here because they “index” these other sectors?
This is an awesome phrasing that might merit a closer reading.
Well put
Agreed!
What is the basis for describing the announcement as accidental? The fact that she took it down quickly might suggest a change of heart about disclosure rather than accidental disclosure, no? Distracting and unnecessary.
I read an interview where I thought she said it was an accident, but I will check and confirm.
Even though you cite Wendy Chun and Jacob Gaboury, I think you can briefly synthesize their conclusions as a means of further underscoring how history, culture and context play into the argument you are making. Unpack this a little more - it will give the questions you pose in the next two paragraphs a little more punch.
Thank you Carol — excellent suggestion.
You probably have already read Sara Banet-Weiser’s work on popular feminism and popular misogyny (her book is entitled “Empowered”). Her take on feminism’s commitment to visibility in the public sphere and the repercussions of that commitment in an online environment gave me pause. I’d love to hear what you two think.
I really love this table, too!
I also found this really helpful. Great addition.
On a structural note, I think having the chapter outlines/guides in the introductory chapter would be useful. Not only would it provide your readers with a map to the rest of the text, but it would also provide you, the authors, with the opportunity to dig more deeply into “data feminism,” your central concept.
Thanks Sarah - this is a great point. This chapter and the introduction were previously combined which I believe is how the chapter outlines ended up here.
Yes!
Indeed. A political-economic consideration can help deepen this anchoring.