
Privacy rights and ‘naked’ statistical evidence

Philosophical Studies

A Correction to this article was published on 17 April 2021


Abstract

Do privacy rights restrict what it is permissible to infer about others on the basis of statistical evidence? This paper answers affirmatively by defending the following symmetry: there is not necessarily a morally relevant difference between directly appropriating people’s private information—say, by using an X-ray device on their private safes—and using predictive technologies to infer the same content, at least in cases where the evidence has a roughly similar probative value. This conclusion is of theoretical interest because the current literature lacks a comprehensive justification of the thought that statistical inferences can violate privacy rights. It is also of practical interest given the need for moral assessment of emerging predictive algorithms.



Notes

  1. By “not perfectly reliable” I mean to suggest that there is at least a risk that B uses the device and nevertheless acquires a false belief about the contents of A’s briefcase. For vividness, this could happen in a case where A anticipates B’s act and covertly uses a jamming device to inject a false image into B’s X-ray device. I thank an anonymous reviewer for asking me to clarify this.

  2. Some endorse a view according to which there is a fundamental difference between belief and credence. On such views, belief states are typically understood as categorical attitudes, while credence is an attitude concerning a degree of confidence in a proposition (Jackson 2019). When relevant, I shall foreground the distinction. One way to interpret the conclusion of this paper is that we have reason to be concerned about what one might call ‘credal privacy’.

  3. For discussions of similar examples, see Thomson (1975), Scanlon (1975) and Marmor (2015). Thomson writes, for example, ‘If we use an X-ray device to look at a man in order to get personal information about him, then we violate his right to privacy’ (1975: 307).

  4. Some commentators argue that privacy is inherently conventionally defined (Nissenbaum 2010; Scanlon 1975). While I shall not discuss such views presently, I want to highlight that nothing I shall say is in tension with such views. Plausibly, those endorsing such conventional views would not claim that what should count as a privacy rights violation is only a matter of social convention. There would also have to be some ‘objective’ features of acts (e.g., that they constitute a setback to significant interests) that would make it reasonable to recognize such acts as wrongful privacy violations. Since it is only these ‘recognition-independent’ features I shall discuss presently, my argument is compatible with the further requirement that privacy violations are recognition-dependent in some senses.

  5. Moss (2018) describes the inference type in Statistical that I am interested in as a ‘statistical syllogism’. Bolinger (2021) calls it an ‘actuarial inference’.

  6. Kappel (2013), Blaauw (2013), Matheson (2007), and Fallis (2013) all discuss the concept of privacy from an epistemic perspective, but they do not discuss the moral question I raise presently.

  7. Another way is to follow Marmor (2015: 6) and Thomson (1975) and grant that judgements that there is a violation of privacy seem to involve a so-called ‘proprietary intuition’: “when we focus on what is wrong about the way in which some fact came to be known, we can normally explain it as a violation of one’s proprietary rights: somebody used something that is yours without your permission”. Applied to the cases under consideration, one might feel that there is a clear sense in which B ‘uses’ something that belongs to A—for instance by perceiving the contents of the briefcase, something we might assume belongs to A—whereas it is much less clear in what sense B objectionably uses something that ‘belongs to A’ by making a statistical inference based on A’s publicly available identifiers.

  8. Notice that the view that privacy violations require the prospect of obtaining information does not explain why it feels awkward, if it does, to claim that further inferences from permissibly obtained information can violate privacy. Clearly, statistical inferences (as with all other inferences) can be informative in the sense of increasing one’s information. I thank an anonymous reviewer for raising this concern.

  9. One might immediately object that we often lack the mental control to intentionally avoid making such inferences. But it seems too strong to claim that we never have the requisite control, and I suspect many would still intuitively find such mental predictions permissible even if we assume that people always had the requisite control (cp. Rumbold and Wilson 2019).

  10. Cp. Rumbold and Wilson (2019: 13): ‘As the world knows, Holmes [Sherlock, ed.] is a master of both observation and deduction and, during their conversation, he is able to deduce by mentally interrogating a series of stories Annabel tells him that she suffers from the rare genetic condition that she has tried so desperately to keep private. What kind of duties might Holmes be under at this point? On our model, […] he infringes (possibly even violates) Annabel’s right to privacy insofar as he makes any effort to deduce the nature of Annabel’s condition in the first place.’

  11. Some scholars can be taken to reject the symmetry thesis for this reason; see for instance Ryberg (2017), Manson and O’Neill (2012), Skopek (2020).

  12. For endorsement of views in the vicinity of the symmetry thesis, see also Loi and Christen (2020), Hartzog (2015), Pan (2016), Zhu (2014) and Wachter and Mittelstadt (2018), although it is not always clear from these references whether the claim is that we should be morally and/or legally concerned about information obtained from inference or the claim that such inferences violate privacy rights. What sets my analysis apart from these articles is that they gloss over the epistemic differences between ‘statistical’ and ‘individualized’ evidence highlighted above, whereas I consider these differences directly.

  13. ‘We believe that this description of the scope of the right to privacy is the right one, following necessarily from a set of intuitive judgements about when someone can be said to have waived their right’ (Rumbold and Wilson 2019: 16). Alternatively, their argument can be read as stating the counter-intuitiveness of holding that there is not a privacy violation in their target cases, or the counter-intuitiveness of saying that a right was waived in their target cases. As I suggest below, however, we can do better than appealing to intuitions (although in my opinion intuitions are also important, and so my work and Rumbold and Wilson’s should largely be regarded as complementary).

  14. I thank an anonymous reviewer for helping me in clarifying the contribution of the paper.

  15. Rumbold and Wilson (2019: 5) concede that their claim is largely independent of any substantive account of the right to privacy.

  16. In the literature, the term ‘privacy’ (and a corresponding idea of privacy rights as well as its natural negation, ‘public’) has been conceived in different—and sometimes competing—ways, typically either as a form of control, non-access or non-accessibility. For helpful overviews see Lippert-Rasmussen (2017), Koops et al. (2017) and Matheson (2007). I want to steer clear of most of these debates apart from assuming that part of the purpose of privacy rights is determining when, and in what form, duty-bearers are permitted to acquire information about putative right-holders: the problem which Individualized/Statistical arguably animates.

  17. Taken from Ross (2020: 2), cp. (p. 1), ‘There seems to be a difference between evidence presented in a statistical form (e.g., “her work patterns suggest an 80% chance that she was in the building”) versus non-statistical evidence (e.g., eye-witness testimony; CCTV recordings; confessions). The proof paradox begins from the thought that deciding a legal case on the basis of statistical evidence alone can seem problematic.’ See also Cohen (1977), Di Bello (2019), Enoch, Spectre, and Fisher (2012), Moss (2018), Redmayne (2008), Thomson (1986) and Tribe (1971).

  18. Some remain unconvinced that these disparate puzzles call for a unified explanation (see for instance Backes 2020). For present purposes, I can remain agnostic on this broader question as I am just interested in assessing what, if anything, these literatures have to offer the analysis of privacy rights.

  19. It is worth noting that we are here concerned with systematic differences between individualized and statistical evidence. Both kinds of evidence might, contingently, be flawed in many ways. I shall largely abstract away from such contingent epistemic flaws in order to focus clearly on the problem at hand.

  20. However—and this might be striking in and of itself—it seems that the intuitive insufficiency of statistical evidence pulls us in the direction of judging Statistical morally unobjectionable. This makes this pattern of responses different from those typically elicited in the analogous legal cases. In the legal cases, it is the reliance on statistical evidence that seems to be what makes such cases morally objectionable. This asymmetry should not discourage us, however, as we might assume that the moral badness of privacy violations has something to do with objectionably acquiring evidence. On this view, we should expect that the feeling that statistical evidence is somehow epistemically deficient would pull us in the direction of being less concerned with Statistical-type cases, as opposed to Individualized-type cases.

  21. This qualification is needed because we can imagine some cases where the X-ray fails to track the truth (cp. fn. 2). It is worth stressing that none of the accounts I review are uncontested, but this does not matter too much for present purposes as what we need, primarily, is one or a few fairly plausible accounts of the distinction between statistical and individualized evidence in light of which to assess the symmetry thesis. See Pardo (2018) and Smith (2018) for critical discussion of the sensitivity-based solution.

  22. Although this probably depends, in part, on what precisely explains the composition of the statistical evidence used for the inference (cp. Gardiner 2019).

  23. As sensitivity and safety were originally put forth as necessary conditions for having knowledge, this raises the natural question of whether the epistemically relevant distinction between Statistical and Individualized (for the purposes of assessing the symmetry thesis) is that only evidence of the latter type can support knowledge (and, perhaps, violations of privacy require the possibility of obtaining evidence supporting knowledge). One piece of evidence for this is that scholars often speak of privacy as a matter of knowledge (see Marmor 2015 for an example of this, and Kappel 2013 and Blaauw 2013 for disagreement). While intriguing, I do not want to take a stand on this question, since what constitutes ‘knowledge’ is a controversial issue in and of itself. To my mind, this claim would also gloss over some important distinctions, for instance in light of the fact that some have argued that we can know probabilistic contents, and statistical evidence seems like an admissible kind of evidence to establish such forms of knowledge (Moss 2018). We have to keep separate i) the question of whether privacy violations require the prospect of obtaining knowledge and ii) the question of whether privacy violations only range over acts that can bring about evidence justifying full belief (or also range over acts that can bring about evidence justifying some credences).

  24. For similar-spirited remarks about the ‘legal value’ of non-statistical evidence see Enoch, Spectre, and Fisher (2012, 2015).

  25. Parent (1983) defends some such view, which is almost unanimously endorsed (although see Marmor 2015 for some reservations).

  26. Another way of fleshing out this privacy concern is in terms of worries about domination and will dependence, which would not necessarily accentuate risk (and likelihood) but instead will dependence as a form of unfreedom. See Schmidt (2018) for a general discussion of domination and will dependence.

  27. To an intolerant person with the aim of harassing certain types of people (e.g. homosexuals), gaining evidence that B is a homosexual provides him with a derivative reason for harassing B. This reveals that any particular application of the subsequent harm account is in need of an account (or assumption) of people’s motives to be complete.

  28. Indeed, we know from studies on discrimination that people are often perfectly willing to rely on even poor reference group proxies to act in ways that render other people worse off (Lippert-Rasmussen 2014). A distinct question concerns when it is permissible to act based on such proxies (Bolinger 2019b; Schoeman 1987).

  29. See Owens (2012: 63) for discussion of this claim in general: ‘We all have an interest in how others regard us and so can be wronged by a bad attitude. At least we have such an interest when the other is a friend, an acquaintance, a colleague, when they are part of our social space. Our life goes worse if such people think badly of us.’

  30. Although Buchak claims that statistical evidence can justify having credences, to her, credences (unlike beliefs) cannot play the correct functional role in our practices of holding others morally responsible (cp. Strawson 1962).

  31. Perhaps this argument raises the further question of whether moral rights are also meant to protect us from some reasonable mistakes. See Gaukroger (2020) for the view that privacy rights protect us against other people’s mistakes. See Bolinger (2020) for a general view on rights according to which the toleration for mistakes depends on considerations about creating a fair distribution of risk.

  32. Feinberg writes about the ideal of autonomy: ‘The kernel of the idea of autonomy is the right to make choices and decisions—what to put into my body, what contacts with my body to permit, where and how to move my body through public space, how to use my chattels and physical property, what personal information to disclose to others, what information to conceal, and more’ (1986: 54).

  33. As far as I can tell, no one has plausibly demonstrated a link between a loss of privacy and a loss of mental faculties, so I shall not discuss this, but see Reiman (1976).

  34. There are different ways to flesh out the details here. Some argue that privacy and shame are intimately connected, and Bathroom 1 might be objectionable because it involves or could cause shame (Velleman 2001). We could also make sense of the badness of ‘naked exposure’ in light of claims about offense and emotional/psychical distress (Feinberg 1985).

  35. Some say that a picture is worth a thousand words. So we can assume that it is a rather large front page written in a small font.

  36. ‘Deepfake’ is an emerging machine learning technology for human image (and video) synthesis. It can be used to combine and superimpose existing images and videos. See Chesney and Citron (2019) and Rini (2019).

References

  • Backes, M. (2020). Epistemology and the law: Why there is no epistemic mileage in legal cases. Philosophical Studies, 177, 2759–2778.

  • Basu, R. (2019). The wrongs of racist beliefs. Philosophical Studies, 176, 2497–2515.

  • Blaauw, M. (2013). The epistemic account of privacy. Episteme, 10(2), 167–177.

  • Bolinger, R. J. (2019a). Moral risk and communicating consent. Philosophy and Public Affairs, 47, 179–207.

  • Bolinger, R. J. (2019b). Demographic statistics in defensive decisions. Synthese.

  • Bolinger, R. J. (2020). The moral grounds of reasonably mistaken self-defense. Philosophy and Phenomenological Research, 1–17.

  • Bolinger, R. J. (2021). Explaining justificatory asymmetries between statistical and individualized evidence. In Z. Hoskins & J. Robson (Eds.), Truth and trials: Dilemmas at the intersection of epistemology and philosophy of law. London: Routledge (forthcoming).

  • Buchak, L. (2014). Belief, credence, and norms. Philosophical Studies, 169(2), 1–27.

  • Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1820.

  • Christman, J. (2018). Autonomy in moral and political philosophy. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (Spring 2018 ed.). https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/.

  • Cohen, J. L. (1977). The probable and the provable. Oxford University Press.

  • Di Bello, M. (2019). Trial by statistics: Is a high probability of guilt enough to convict? Mind, 128, 1045–1084.

  • Enoch, D., Spectre, L., & Fisher, T. (2012). Statistical evidence, sensitivity, and the legal value of knowledge. Philosophy and Public Affairs, 40(3), 197–224.

  • Enoch, D., Spectre, L., & Fisher, T. (2015). Sense and ‘sensitivity’: Epistemic and instrumental approaches to statistical evidence. Stanford Law Review, 67, 557–611.

  • Fallis, D. (2013). Privacy and lack of knowledge. Episteme, 10(2), 153–166.

  • Feinberg, J. (1985). Offense to others: The moral limits of the criminal law (Vol. 2). Oxford University Press.

  • Feinberg, J. (1986). Harm to self: The moral limits of the criminal law (Vol. 3). Oxford University Press.

  • Gardiner, G. (2019). The reasonable and the relevant: Legal standards of proof. Philosophy & Public Affairs, 47, 288–318.

  • Gaukroger, C. (2020). Privacy and the importance of ‘getting away with it’. Journal of Moral Philosophy, 17(4), 416–439.

  • Hartzog, W., et al. (2015). Inefficiently automated law enforcement. Michigan State Law Review, 1763–1796.

  • Hawthorne, J. (2005). Knowledge and lotteries. Clarendon Press.

  • Jackson, E. G. (2019). Belief and credence: Why the attitude-type matters. Philosophical Studies, 176, 2477–2496.

  • Johnson, J. L. (1989). Privacy and the judgment of others. The Journal of Value Inquiry, 23, 157–168.

  • Kappel, K. (2013). Epistemological dimensions of informational privacy. Episteme, 10(2), 179–192.

  • Koops, B.-J., et al. (2017). A typology of privacy. University of Pennsylvania Journal of International Law, 38(2), 483–575.

  • Lippert-Rasmussen, K. (2014). Born free and equal? A philosophical inquiry into the nature of discrimination. Oxford University Press.

  • Lippert-Rasmussen, K. (2017). Brain privacy, intimacy, and authenticity: Why a complete lack of the former might undermine neither of the latter! Res Publica, 23, 227–244.

  • Littlejohn, C. (2018). Truth, knowledge, and the standard of proof in criminal law. Synthese, 1–34.

  • Loi, M., & Christen, M. (2020). Two concepts of group privacy. Philosophy and Technology, 33, 207–224.

  • Manson, N. C., & O’Neill, O. (2012). Rethinking informed consent in bioethics. Cambridge University Press.

  • Marmor, A. (2015). What is the right to privacy? Philosophy & Public Affairs, 43(1), 1–26.

  • Matheson, D. (2007). Unknowableness and informational privacy. Journal of Philosophical Research, 32, 251–267.

  • Moss, S. (2018). Probabilistic knowledge. Oxford University Press.

  • Nagel, T. (1998). Concealment and exposure. Philosophy & Public Affairs, 27(1), 3–30.

  • Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

  • Nozick, R. (1981). Philosophical explanations. Harvard University Press.

  • Owens, D. (2012). Shaping the normative landscape. Oxford University Press.

  • Pan, S. B. (2016). Get to know me: Protecting privacy and autonomy under big data’s penetrating gaze. Harvard Journal of Law and Technology, 30(1), 239–261.

  • Pardo, M. S. (2018). Safety vs. sensitivity: Possible worlds and the law of evidence. Legal Theory, 24(1), 50–75.

  • Parent, W. A. (1983). Privacy, morality, and the law. Philosophy & Public Affairs, 12(4), 269–288.

  • Rachels, J. (1975). Why privacy is important. Philosophy & Public Affairs, 4(4), 323–333.

  • Raz, J. (1986). The morality of freedom. Clarendon Press.

  • Redmayne, M. (2008). Exploring the proof paradoxes. Legal Theory, 14(4), 281–309.

  • Reiman, J. H. (1976). Privacy, intimacy, and personhood. Philosophy & Public Affairs, 6(1), 26–44.

  • Rini, R. (2019). Deepfakes and the epistemic backstop. Philosopher’s Imprint, 20(24), 1–16.

  • Ross, L. D. (2020). Recent work on the proof paradox. Philosophy Compass.

  • Rumbold, B., & Wilson, J. (2019). Privacy rights and public information. Journal of Political Philosophy, 27, 3–25.

  • Ryberg, J. (2017). Neuroscience, mind reading and mental privacy. Res Publica, 23, 197–211.

  • Scanlon, T. M. (1975). Thomson on privacy. Philosophy & Public Affairs, 4(4), 315–322.

  • Schmidt, A. T. (2018). Domination without inequality? Mutual domination, republicanism, and gun control. Philosophy & Public Affairs, 46(2), 175–206.

  • Schoeman, F. D. (1987). Statistical vs. direct evidence. Noûs, 21(2), 179–198.

  • Skopek, J. (2020). Untangling privacy: Losses versus violations. Iowa Law Review, 105(5), 2169–2231.

  • Smith, M. (2018). When does evidence suffice for conviction? Mind, 127, 1193–1218.

  • Sosa, E. (1999). How must knowledge be modally related to what is known? Philosophical Topics, 26(1 & 2), 373–384.

  • Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1–25.

  • Taylor, J. S. (2002). Privacy and autonomy: A reappraisal. Southern Journal of Philosophy, 40(4), 587–604.

  • Thomson, J. J. (1975). The right to privacy. Philosophy & Public Affairs, 4(4), 295–314.

  • Thomson, J. J. (1986). Liability and individualized evidence. Law and Contemporary Problems, 49(3), 199–219.

  • Tribe, L. (1971). Trial by mathematics: Precision and ritual in the legal process. Harvard Law Review, 84(6), 1329–1393.

  • Velleman, J. D. (2001). The genesis of shame. Philosophy & Public Affairs, 30(1), 27–52.

  • Wachter, S., & Mittelstadt, B. (2018). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2).

  • Zhu, B. (2014). A traditional tort for a modern threat: Applying intrusion upon seclusion to dataveillance observations. NYU Law Review, 2381–2387.


Author information


Corresponding author

Correspondence to Lauritz Aastrup Munch.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Is it phenomenal quality that explains a moral difference between Individualized and Statistical?


It is possible that my strategy of fleshing out the distinction between Statistical and Individualized in epistemic terms runs the risk of overlooking other important moral features that systematically serve to distinguish them. I discuss one salient possibility here, which I shall argue is largely orthogonal to the question at hand. Consider to this end a view expressed by Thomas Nagel:

‘Naked exposure itself, whether or not it arouses disapproval, is disqualifying. The boundary between what we reveal and what we do not, and some control over that boundary, are among the most important attributes of our humanity’ (1998: 4).

Nagel does not offer a reason to support his claim, rather taking it as a datum, but it might be a promising idea in the present context that there is something especially morally objectionable about naked (or ‘direct’) exposure. Neither does Nagel, I should emphasize, say much about how exactly we should understand this notion, so the following will be an interpretation of how some such argument might proceed. One idea would be to claim that naked exposure should be spelled out along the lines of a certain phenomenal quality. This, it seems, could be one property setting Individualized and Statistical apart. Consider a fresh pair of cases:

Bathroom 1. A is in his bathroom. B watches him covertly through a camera installed in the bathroom, although the picture is slightly blurry because of steam from A’s showering.

Bathroom 2. A is in his bathroom. B has taken notice of mundane features of A, and using a sophisticated algorithm, he can covertly predict exactly what A is doing in the bathroom.

The probative value of the resulting inference in both cases is approximately 0.85.

We could (plausibly, to my mind) claim that only the former case involves ‘disqualifying naked exposure’ and that this makes for a moral difference compared to the latter. One might deny this comparative judgement, but I shall assume it arguendo.Footnote 34

Even if we grant that this idea tracks something of moral significance, I want to deny that it should lead us to doubt the symmetry thesis for the following reason: The idea of ‘naked exposure’ (under this interpretation, at least) does not map neatly onto the Individualized/Statistical distinction. Rather, it seems to map onto facts about the way in which some evidence is represented (I call this ‘mode of representation’). To see this, consider:

Newspaper 1. A magazine publishes a front page photograph of Mr. Famous sitting on the toilet without Mr. Famous’ consent.

Newspaper 2. A magazine publishes a front page. The front page consists of a wall of text carefully describing what it looks like when Mr. Famous sits on the toilet (the journalist had the picture in his possession and describes carefully in objective prose what he sees).Footnote 35 This is done without Mr. Famous’ consent.

The Newspaper cases should mirror the difference in mode of representation that is involved in the Bathroom cases to the effect that Newspaper 1 is more ‘disqualifying’ than Newspaper 2. Again, just as with the former set of cases, there is, perhaps, sufficient uncontrolled nakedness in both cases to render them both objectionable. The point that I want to make, however, is that the moral difference does not correspond to any meaningful distinction between individual and statistical evidence. At least in terms of the evidential material involved, both cases have the same evidential basis, only differing in the way in which the evidence is represented (picture or text). The example suggests that statistical evidence is not necessary for the presence of ‘disqualifying nakedness’. Here is a case that illustrates that the presence of statistical evidence is not sufficient for the presence of ‘disqualifying nakedness’ either:

Deepfake algorithm.Footnote 36 B feeds a deepfake statistical algorithm with the same mundane information used in Bathroom 2. It does not output text or a probability score that B reads to subsequently form a belief about A. Instead, it outputs a fake video, virtually indistinguishable from a genuine one, of what A is doing in the bathroom, synthesized from a large array of permissibly acquired video material.

If the concern is ‘disqualifying nakedness’, I find it hard to believe that Deepfake algorithm should be less worrisome than Bathroom 1. If this is right, it follows that statistical evidence is not a sufficient condition for ‘disqualifying naked exposure’. It follows that we are dealing with orthogonal properties, and the type of evidence (statistical vs. individualized) has no necessary bearing on the badness of the mode of representation involved.

I conclude that an argument from phenomenal differences, while maybe plausible on its own terms, does not speak directly to the question at hand.


About this article


Cite this article

Munch, L.A. Privacy rights and ‘naked’ statistical evidence. Philos Stud 178, 3777–3795 (2021). https://doi.org/10.1007/s11098-021-01625-0

