Forbidden knowledge in machine learning: reflections on the limits of research and publication

AI and Society:1-15 (forthcoming)


Certain research strands can yield "forbidden knowledge". This term refers to knowledge that is considered too sensitive, dangerous or taboo to be produced or shared. Discourses about such publication restrictions are already well established in scientific fields like IT security, synthetic biology or nuclear physics research. This paper makes the case for transferring this discourse to machine learning research. Some machine learning applications can easily be misused and cause harmful consequences, for instance in generative video or text synthesis, personality analysis, behavior manipulation, software vulnerability detection and the like. Until now, the machine learning research community has embraced the idea of open access. However, this stands in tension with precautionary efforts to prevent the malicious use of machine learning applications. Information about or from such applications may, if improperly disclosed, cause harm to people, organizations or whole societies. Hence, the goal of this work is to outline deliberations on how to deal with questions concerning the dissemination of such information. It proposes a tentative ethical framework for the machine learning community on how to deal with forbidden knowledge and dual-use applications.
