KDD (Knowledge Discovery in Databases) confronts us with phenomena that can intuitively be grasped as highly problematic, but are nevertheless difficult to understand and articulate. Many of these problems have to do with what I call the ``deindividualization of the person'': a tendency to judge and treat persons on the basis of group characteristics instead of on their own individual characteristics and merits. This tendency will be one of the consequences of the production and use of group profiles with the help of KDD. Current privacy law and regulations, as well as current ethical theory concerning privacy, start from too narrow a definition of ``personal data'' to capture these problems. In this paper, I introduce the notion of ``categorical privacy'' as a starting point for a possible remedy for the failures of the current conceptions of privacy. I discuss some ways in which the problems relating to group profiles definitely cannot be solved, and I suggest a possible way out of these problems. Finally, I suggest that it may take us a step forward if we begin to question the predominance of privacy norms in the social debate on information technologies and if we are prepared to introduce normative principles other than privacy rules for the assessment of new information technologies. If we do not succeed in articulating the problems relating to KDD clearly, one day we may find ourselves in a situation where KDD appears to have undermined the methodic and normative individualism which pervades the mainstream of morality and moral theory.
Questions regarding the moral responsibility of Internet access and service providers relating to information on the Internet call for a reassessment of the ways in which we think about attributing blame, guilt, and duties of reparation and compensation. They invite us to borrow something similar to the idea of strict liability from the legal sphere and to introduce it into morality and ethical theory. Taking such a category in the distribution of responsibilities outside the domain of law and introducing it into ethics, however, is a difficult thing. Doing so seems to conflict with some broadly shared and deeply felt intuitions regarding the individuality of responsibility and the relationship between responsibility and guilt. These convictions coincide with some basic ideas in Kantian moral theory and the ascriptive theory based on these ideas. Nevertheless, the problems to which the proposed liabilities/responsibilities relate are so serious that they do not seem to leave room for aloofness. At least, they encourage us to reconsider the idea of strict liability carefully and to assess its merits.
We present a developmental perspective regarding the difference in perceptions toward privacy between young and old. Here, we introduce the notion of privacy conceptions, that is, the specific ideas that individuals have regarding what privacy actually is. The differences in privacy concerns often found between young and old are postulated as the result of the differences found in their privacy conceptions, which are subsequently linked to their developmental life stages. The data presented have been obtained through a questionnaire distributed among adolescents, young adults, and adults and provide support for this developmental perspective. This study is one of the first to include adolescents when investigating the privacy concerns among young and old. The results show that the privacy conceptions held by adolescents indeed differ from those held by young adults and adults, in keeping with the expectations as seen from a developmental perspective. In addition, the areas in which the differences in privacy conceptions are found also reflect the strongest relationship with concerns. As such, these findings present an alternative perspective to the commonly held notion that young people are less concerned about privacy.
In this contribution, we identify and clarify some distinctions we believe are useful in establishing the reliability of information on the Internet. We begin by examining some of the salient features of information that go into the determination of reliability. In so doing, we argue that we need to distinguish content and pedigree criteria of reliability, and that we need to separate issues of the reliability of information from issues of the accessibility and the usability of information. We then turn to an analysis of some common failures to recognize reliability or unreliability.
This article discusses some rather formal characteristics of possible obligations to enhance. Obligations to enhance can exist in the absence of good moral reasons. If obligation and duty, however, are considered synonyms, the enhancement involved must be morally desirable in some respect. Since enhancers and the enhanced can, but need not, coincide, advertency is appropriate regarding the question of who exactly is addressed by an obligation or a duty to enhance: the person on whom the enhancing treatment is performed, or the controller or the operator of the enhancement. The position of the operator, especially, is easily overlooked. The exact functionality of the specific enhancement is all-important, not only for the acceptability of a specific form of enhancement, but also for its chances of becoming a duty or morally obligatory. Finally and most importantly, however, since obligations can exist without good moral reasons, there can be obligations to enhance that are not morally right, let alone desirable.
One of the most significant aspects of the Internet, in comparison with other sources of information such as libraries, books, journals, television, and radio, is that it makes expert knowledge much more accessible to non-experts than the traditional sources do. This phenomenon has often been applauded for its democratizing effects. Unfortunately, there is also a disadvantage. Expert information that was originally intended for a specific group of people, and not in any way processed or adapted to make it fit for a broader audience, can easily be misunderstood and misinterpreted by non-experts and, when used as a basis for decisions, lead to unhappy consequences. Can these risks be diminished without limiting the informational freedoms of the information providers and without imposing paternalistic measures regarding the receivers of the information?
Big data and machine learning techniques are reshaping the way in which food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices require a discussion of the advantages and disadvantages of these advances. In particular, the low level of trust in the EU food safety risk assessment framework highlighted in 2019 by an EU-funded survey could be exacerbated by novel methods of analysis. The variety of processed data raises unique questions regarding the interplay of multiple regulatory systems alongside food safety legislation. Provisions aiming to preserve the confidentiality of data and protect personal information are juxtaposed with norms prescribing the public disclosure of scientific information. This research is intended to provide guidance on data governance and data ownership issues that unfold from the ongoing transformation of the technical and legal domains of food safety risk assessment. Following the reconstruction of technological advances in data collection and analysis and the description of recent amendments to food safety legislation, emerging concerns are discussed in light of the individual, collective, and social implications of the deployment of cutting-edge big data collection and analysis techniques. Then, a set of principle-based recommendations is proposed by adapting high-level principles enshrined in institutional documents about artificial intelligence to the realm of food safety risk assessment. The proposed set of recommendations adopts Safety, Accountability, Fairness, Explainability, and Transparency as core principles (SAFETY), whereas privacy and data protection are used as a meta-principle.
Informed consent bears significant relevance as a legal basis for the processing of personal data and health data in current privacy, data protection, and confidentiality legislation. The consent requirements find their basis in an ideal of personal autonomy. Yet, with the recent advent of the global pandemic and the increased use of eHealth applications in its wake, a more differentiated perspective on this normative approach might soon gain momentum. This paper discusses the compatibility of a moral duty to share data for the sake of the improvement of healthcare, research, and public health with autonomy in the field of data protection, privacy, and medical confidentiality. It explores several ethical-theoretical justifications for a duty of data sharing, and then reflects on how existing privacy, data protection, and confidentiality legislation could obstruct such a duty. Consent, as currently defined in the General Data Protection Regulation – a key legislative framework providing rules on the processing of personal data and data concerning health – and in the recommendation of the Council of Europe on the protection of health-related data – explored here as soft law – turns out not to be indispensable from various ethical perspectives, while the requirement of consent in the General Data Protection Regulation and the recommendation nonetheless curtails the full potential of a duty to share medical data. Other legal grounds, as possible alternatives to consent, also seem to constitute an impediment.