The issue of damaged relationships, and of how to repair them, has become especially salient in recent years, with frequent reports of organizations damaging relationships with various stakeholders. Many studies have investigated how individuals react to damaged relationships after perceiving injustice or receiving offense in organizations. Part of this research has focused on revenge and other negative responses. However, individuals can react in ways other than revenge, choosing instead to repair relationships through reconciliation. Recently, the effectiveness of reconciliation in repairing damaged relationships in organizations has been linked to restorative justice. In this article we are interested in understanding how restorative justice, inspired by principles of compassion or mercy, can be effective in repairing damaged or broken relationships in organizations. To this end, we look for a convincing philosophical foundation for restorative justice, proposing Levinas’ ethics as a way to justify it.
The logic of gift and gratuitousness in business activity raised by the encyclical Caritas in Veritate calls for a deeper critical evaluation of the category of relation. The logic of gift in business includes two aspects. The first is to treat the logic of gift as a new conceptual lens through which to view business relationships beyond contractual logic. In this view, it is crucial to see the circulation of goods as instrumental to the development of relationships. The second aspect is to qualify the relationships established through the gift, and to reflect on the motivation behind gift-giving, which has an ethical content. We give because we have received, and through gift-giving we develop relationships that have a high ‘bonding value’. Analysing the logic of gift in business management may allow us to understand the ambiguity of gift-giving in organizations. Looking at the relationships between organizations and employees, and between organizations and customers, we can discover why the logic of gift is often misunderstood or abused in its application, and how it should be applied to be more consistent with the message of Caritas in Veritate.
Today firms, especially small ones, face new needs: they operate in a context characterized by a high content of information technology. This paper analyses some aspects of the use of particular kinds of resources, such as knowledge and organizational culture. In the new economy, it is necessary to add another attribute to the four that Barney (1991) identified as making resources sources of sustainable competitive advantage: freedom, essentially understood as the freedom to reach and use resources. Rather than merely accompanying the four established attributes, this attribute can in many cases be considered a pre-condition for them. The theoretical part is completed by reference to a particular organizational model based on freedom, namely the open source model; we try to show that freedom is not an abstract concept in business.
Brain Computer Interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account the technological imminence and pervasiveness of BCI technologies. By focussing on imminent technological developments, ethical reflection is informatively grounded in realistic protocols of brain-to-computer communication. In particular, it is argued that human-machine adaptation and shared control distinctively shape autonomy and responsibility issues in current BCI interaction environments. Novel personhood issues are identified and analyzed as well. These notably concern (i) the “sub-personal” use of human beings in BCI-enabled cooperative problem solving, and (ii) the pro-active protection of personal identity which BCI rehabilitation therapies may afford, in the light of so-called motor theories of thinking, for the benefit of patients affected by severe motor disabilities.
Can an event’s blameworthiness distort whether people see it as intentional? In controversial recent studies, people judged a behavior’s negative side effect intentional even though the agent allegedly had no desire for it to occur. Such a judgment contradicts the standard assumption that desire is a necessary condition of intentionality, and it raises concerns about assessments of intentionality in legal settings. Six studies examined whether blameworthy events distort intentionality judgments. Studies 1 through 4 show that, counter to recent claims, intentionality judgments are systematically guided by variations in the agent’s desire, for moral and nonmoral actions alike. Studies 5 and 6 show that a behavior’s negative side effects are rarely seen as intentional once people are allowed to choose from multiple descriptions of the behavior. Specifically, people distinguish between “knowingly” and “intentionally” bringing about a side effect, even for immoral actions. These studies suggest that intentionality judgments are unaffected by a behavior’s blameworthiness.
Moral judgments about an agent's behavior are enmeshed with inferences about the agent's mind. Folk psychology—the system that enables such inferences—therefore lies at the heart of moral judgment. We examine three related folk-psychological concepts that together shape people's judgments of blame: intentionality, choice, and free will. We discuss people's understanding and use of these concepts, address recent findings that challenge the autonomous role of these concepts in moral judgment, and conclude that choice is the fundamental concept of the three, defining the core of folk psychology in moral judgment.
Extant models of moral judgment assume that an action’s intentionality precedes assignments of blame. Knobe (2003b) challenged this fundamental order and proposed instead that the badness or blameworthiness of an action directs (and thus unduly biases) people’s intentionality judgments. His and other researchers’ studies suggested that blameworthy actions are considered intentional even when the agent lacks skill (e.g., killing somebody with a lucky shot) whereas equivalent neutral actions are not (e.g., luckily hitting a bull’s-eye). The present five studies offer an alternative account of these provocative findings. We suggest that people see the morally significant action examined in previous studies (killing) as accomplished by a basic action (pressing the trigger) for which an unskilled agent still has sufficient skill. Studies 1 through 3 show that when this basic action is performed unskillfully or is absent, people are far less likely to view the killing as intentional, demonstrating that intentionality judgments, even about immoral actions, are guided by skill information. Studies 4 and 5 further show that a neutral action such as hitting the bull’s-eye is more difficult than killing and that difficult actions are less often judged intentional. When difficulty is held constant, people’s intentionality judgments are fully responsive to skill information regardless of moral valence. The present studies thus speak against the hypothesis of a moral evaluation bias in intentionality judgments and instead document people’s sensitivity to subtle features of human action.
Cybernetics promoted machine-supported investigations of adaptive sensorimotor behaviours observed in biological systems. This methodological approach receives renewed attention in contemporary robotics, cognitive ethology, and the cognitive neurosciences. Its distinctive features concern machine experiments, and their role in testing behavioural models and explanations flowing from them. Cybernetic explanations of behavioural events, regularities, and capacities rely on multiply realizable mechanism schemata, and strike a sensible balance between causal and unifying constraints. The multiple realizability of cybernetic mechanism schemata paves the way to principled comparisons between biological systems and machines. Various methodological issues involved in the transition from mechanism schemata to their machine instantiations are addressed here, by reference to a simple sensorimotor coordination task. These concern the proper treatment of ceteris paribus clauses in experimental settings, the significance of running experiments with correct but incomplete machine instantiations of mechanism schemata, and the advantage of operating with real machines, as opposed to simulated ones, immersed in real environments.
The paper distinguishes two accounts of legal normativity. One-source accounts claim there is only one source for legal normativity, which is ultimately linguistic. Two-source accounts claim legal normativity is both linguistic and non-linguistic. Two-source accounts claim we need to go beyond language and beyond propositions taken as linguistic entities, even though such propositions are one-source accounts’ main conceptual tool. Both accounts construe propositions as linguistic. There is, nevertheless, a documented analytic tradition, starting with G.E. Moore, that construes propositions as non-linguistic entities. Today, the unity of the proposition and structured propositions are highly debated topics in metaphysics. How do such debates fit into the one-source vs. two-source picture of legal normativity? Why has analytic legal philosophy failed to consider such an option concerning propositions? This paper thus reconstructs the argumentative dynamics between one-source and two-source accounts, presents the less considered philosophical view of propositions as non-linguistic entities, and discusses how to include or dismiss such a view in the one-source/two-source debate on legal normativity.
The paper sets up a small “philosophical lab” for thought experiments, using Digital Universes as its main tool. Digital Universes allow us to examine how mereology affects the debate on Ferraris’ New Realism and shed new light on the whole notion of Realism. The semi-formal framework provides a convenient way to model the varieties of realism that are important for the program of New Realism; we then draw the natural consequences of this approach for the ontology of our world, arguing that the same considerations that apply to Digital Universes would hold for chess, institutions, and social objects as well. Once a particular version of mereology is chosen, there are unavoidable consequences for the very underlying structure of social ontology. We then propose a new New Realism to tackle social objects: social objects turn out to be nothing more than mereological sums, picked out by some description.
Moral judgment – even the type discussed by Knobe – necessarily relies on substantial information about an agent's mental states, especially regarding beliefs and attitudes. Moreover, the effects described by Knobe can be attributed to norm violations in general, rather than moral concerns in particular. Consequently, Knobe's account overstates the influence of moral judgment on assessments of mental states and causality.
My Ph.D. thesis Impossibilità nel diritto [Impossibility in the Legal Domain] is devoted to the systematic analysis of what are called, at least prima facie, legal impossibilities. My dissertation defines and isolates an area of study - impossibility in the law - that has never been put together organically. In my work I present some case studies of normative impossibilities and discuss them from a philosophical point of view: impossible laws, impossible norms in a prescriptive theory of norms (ch. 2), conflicting norms and legal gaps (metanormative impossibility - ch. 3), impossible obligations (ch. 4), impossible crimes (ch. 5), impossible legal proofs (ch. 6). I organize my research along the distinction - introduced in ch. 1 - between impossibility of norms (i.e. impossible norms and impossible normative acts) and impossibility from norms (i.e. impossibility due to a norm or a set of norms); the distinction between the impossibility of a norm conceived as a single entity and the impossibility of a norm conceived as part of a legal system; and the distinction between two uses of impossibility in general, as impossibility can be both the object of a modal qualification and a modality itself. I propose four new contributions to the study of impossibility in the legal domain (ch. 7). Firstly, I reconstruct two different functions of impossibility in the legal domain (exculpatory and invalidating); secondly, I put forward a triadic model for describing impossibility in the legal domain (in which, roughly, a set of sources of impossibilities is qualified by a function for the assumption of impossibility in the actual and concrete legal system); thirdly, I define and investigate the relationships of creation, assumption, and presupposition between impossibility and a legal system; fourthly, I critically list and review all the different kinds of things that are called impossibilities inside a legal system, showing how the use of the concept of impossibility is sometimes not carefully justified. As an appendix (ch. 8), I outline a logic for impossibilities in the legal domain that makes it possible to investigate the phenomena discussed in the work by breaking down the equivalence between being impossible (in the legal domain) and being logically contradictory.
Model checking, a prominent formal method used to predict and explain the behaviour of software and hardware systems, is examined on the basis of reflective work in the philosophy of science concerning the ontology of scientific theories and model-based reasoning. The empirical theories of computational systems that model checking techniques enable one to build are identified, in the light of the semantic conception of scientific theories, with families of models that are interconnected by simulation relations. And the mappings between these scientific theories and computational systems in their scope are analyzed in terms of suitable specializations of the notions of model of experiment and model of data. Furthermore, the extensively mechanized character of model-based reasoning in model checking is highlighted by a comparison with proof procedures adopted by other formal methods in computer science. Finally, potential epistemic benefits flowing from the application of model checking in other areas of scientific inquiry are emphasized in the context of computer simulation studies of biological information processing.
Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, reasoning, and planning - can enter cognitive interactions that human beings have not experienced with any other non-human system.
Robots are being extensively used for the purpose of discovering and testing empirical hypotheses about biological sensorimotor mechanisms. We examine here methodological problems that have to be addressed in order to design and perform “good” experiments with these machine models. These problems notably concern the mapping of biological mechanism descriptions into robotic mechanism descriptions; the distinction between theoretically unconstrained “implementation details” and robotic features that carry a modeling weight; the role of preliminary calibration experiments; the monitoring of experimental environments for disturbing factors that affect both modeling features and theoretically unconstrained implementation details of robots. Various assumptions that are gradually introduced in the process of setting up and performing these robotic experiments become integral parts of the background hypotheses that are needed to bring experimental observations to bear on biological mechanism descriptions.
This paper addresses the methodological problem of analysing what it is to explain observed behaviours of engineered computing systems (BECS), focusing on the crucial role that abstraction and idealization play in explanations of both correct and incorrect BECS. First, it is argued that an understanding of explanatory requests about observed miscomputations crucially involves reference to the rich background afforded by hierarchies of functional specifications. Second, many explanations concerning incorrect BECS are found to abstract away from descriptions of physical components and processes of computing systems that one finds below the logic circuit and gate layer of functional specification hierarchies. Third, model-based explanations of both correct and incorrect BECS that are provided in the framework of formal verification methods often involve idealizations. Moreover, a distinction between restrictive and permissive idealizations is introduced and their roles in BECS explanations are analysed.
The ethical monitoring of brain-machine interfaces (BMIs) is discussed in connection with the potential impact of BMIs on distinguishing traits of persons, changes of personal identity, and threats to personal autonomy. It is pointed out that philosophical analyses of personhood are conducive to isolating an initial thematic framework for this ethical monitoring problem, but a contextual refinement of this initial framework depends on applied ethics analyses of current BMI models and empirical case-studies. The personal autonomy-monitoring problem is approached by identifying various ways in which the inclusion of a robotic controller in the motor pathway of an output BMI may limit or jeopardize personal autonomy.
In this paper, we investigate the ‘ought implies can’ (OIC) thesis, focusing on explanations and interpretations of OIC, with a view to clarifying its uses and relevance to legal philosophy. We first review various issues concerning the semantics and pragmatics of OIC; then we consider how OIC may be incorporated into Hartian and Kelsenian theories of the law. Along the way we also propose a taxonomy of OIC-related claims.
Psychological attitudes towards service and personal robots are selectively examined from the vantage point of psychoanalysis. Significant case studies include the uncanny valley effect, brain-actuated robots evoking magic mental powers, parental attitudes towards robotic children, idealizations of robotic soldiers, persecutory fantasies involving robotic components and systems. Freudian theories of narcissism, animism, infantile complexes, ego ideal, and ideal ego are brought to bear on the interpretation of these various items. The horizons of Human-Robot Interaction are found to afford new and fertile grounds for psychoanalytic theorizing beyond strictly therapeutic contexts.
The article presents another of those ingenious minds, rebellious against the yoke of religion, typical of the Italian Renaissance. Converted to Calvinism and therefore condemned to death by the Inquisition, Guglielmo Grataroli (1516-1568) became a defender of heterodox doctrine. His translation of a report on the Waldensian massacre in Calabria became part of the history of Protestant martyrs. He was the author of numerous treatises on various subjects, for which he drew widely on the works of Giovanni Michele Alberto da Carrara, Antoine Mizauld, and Gerolamo Cardano. The perfect correspondence of the topics discussed makes it probable that Giordano Bruno knew his writings. In particular, the De mutatione temporum, eiusque signis perpetuis may have inspired the De’ segni de’ tempi, a lost work by Bruno. This allows us to conjecture the content of that work with greater reliability.
This volume is an introduction to the philosophy of William of Ockham. After a brief account of his life and works, it presents his philosophical ideas under the headings of logic, epistemology, metaphysics, rational theology, philosophy of nature, psychology, ethics, and politics. The work concludes with a bibliography on Ockham and Ockhamism from 1950 to 1970, supplementing that of V. Heynck from 1919 to 1949. This is a well-informed presentation of Ockham’s philosophical ideas. Covering the whole range of his philosophy, it does not treat any part of it in depth; but, as Fr. Bettoni says in his introduction, it offers a new global perspective on Ockham’s thought. There are frequent citations from Ockham’s writings in the notes, indicating the author’s close reading of Ockham’s works. The author is well aware that Ockham was "a philosopher who never ceases to be a theologian". But he does not convey to his reader the fact that Ockham’s interests, like those of Aquinas and Scotus, were primarily theological and not philosophical; that Ockham developed a logic and philosophy in support of a theology. The title of the work "Guglielmo di Ockham" suggests that the Ockham presented in it—Ockham the philosopher, not Ockham the theologian—was the "essential" Ockham.—A. A. M.
The early examples of self-directing robots attracted the interest of both scientific and military communities. Biologists regarded these devices as material models of animal tropisms. Engineers envisaged the possibility of turning self-directing robots into new “intelligent” torpedoes during World War I. Starting from World War II, more extensive interactions developed between theoretical inquiry and applied military research on the subject of adaptive and intelligent machinery. Pioneers of Cybernetics were involved in the development of goal-seeking warfare devices. But collaboration occasionally turned into open dissent. Founder of Cybernetics Norbert Wiener, in the aftermath of World War II, argued against military applications of learning machines, by drawing on epistemological appraisals of machine learning techniques. This connection between philosophy of science and techno-ethics is both strengthened and extended here. It is strengthened by an epistemological analysis of contemporary machine learning from examples; it is extended by a reflection on ceteris paribus conditions for models of adaptive behaviours.
What follows is a brief commentary on Dan Sperber's plenary lecture at ECAP7, "The deconstruction of social unreality". Sperber's main criticism of Searle's social ontology is that Searle attributes a causal role to mere Cambridge properties. Sperber then argues that declarations do not create institutional facts causally, criticizes the Searlean theory of recognition/acceptance, and puts forward his own thesis using the concept of cognitive causal chains.