Faculty plagiarism and fraud are widely documented occurrences, but little analysis of them has been conducted. This article addresses the question of why faculty plagiarism and fraud occur and suggests approaches for developing an environment in which faculty misconduct is socially inappropriate. The authors review relevant literature, primarily on business ethics and student cheating, and develop action steps that could be applied to higher education. Based upon research in these areas, the authors posit some actions that would be appropriate in higher education and suggest topics for continued study.
Nietzsche, in his work On the Genealogy of Morals, argues that human cognition is analogous in certain significant respects to the perspectival nature of optical vision. Because of this analogy, his account of human cognition is often referred to as perspectivism. Brian Leiter argues that Nietzsche’s use of this optical perspective metaphor undermines interpretations that take perspectivism to have radically skeptical implications. In this paper, I examine Leiter’s argument and show that the considerations he raises based on the optical perspective metaphor are insufficient to undermine the claim that perspectivism entails radical skepticism.
We model the forgetting of propositional variables in a modal logical context where agents become ignorant and are aware of each other's or their own resulting ignorance. The resulting logic is sound and complete. It can be compared to variable-forgetting as abstraction from information, wherein agents become unaware of certain variables: by employing elementary results for bisimulation, it follows that beliefs not involving the forgotten atom(s) remain true.
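The modal construction is beyond a short sketch, but its propositional core, variable forgetting, can be illustrated. The following minimal Python sketch (illustrative names, not taken from the paper) uses the standard definition forget(φ, p) = φ[p:=⊤] ∨ φ[p:=⊥] and checks the key property mentioned above: formulas not involving the forgotten atom keep exactly the same models.

```python
from itertools import product

# Toy propositional analogue of variable forgetting (an illustration,
# not the paper's modal-logic construction): forget(phi, p) holds at a
# valuation v iff phi holds at v with p set to True or to False.

def models(phi, atoms):
    """Set of valuations (as frozensets of true atoms) satisfying phi."""
    result = set()
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if phi(v):
            result.add(frozenset(a for a in atoms if v[a]))
    return result

def forget(phi, p):
    """Forget atom p: phi[p:=True] or phi[p:=False]."""
    return lambda v: phi({**v, p: True}) or phi({**v, p: False})

atoms = ["p", "q"]
phi = lambda v: v["p"] and v["q"]   # p AND q
psi = lambda v: v["q"]              # q (does not mention p)

# After forgetting p, "p and q" weakens to "q" ...
assert models(forget(phi, "p"), atoms) == models(psi, atoms)
# ... while a formula not involving p is unchanged.
assert models(forget(psi, "p"), atoms) == models(psi, atoms)
```

This is only the static, single-agent analogue; the paper's contribution concerns agents' awareness of the resulting ignorance, which the sketch does not model.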
A Study of the History and Philosophy of Category Theory, Jean-Pierre Marquis: "... to say that objects are dispensable in geometry. What is claimed is that the specific nature of the objects used is irrelevant. To use the terminology already ..."
The aim of this paper is to put into context the historical, foundational and philosophical significance of category theory. We use our historical investigation to inform the various category-theoretic foundational debates and to point to some common elements found among those who advocate adopting a foundational stance. We then use these elements to argue for the philosophical position that category theory provides a framework for an algebraic in re interpretation of mathematical structuralism. In each context, what we aim to show is that, whatever the significance of category theory, it need not rely upon any set-theoretic underpinning.
In this paper, the problem of purifying an assumption-based theory KB, i.e., identifying the right extension of KB using knowledge-gathering actions (tests), is addressed. Assumptions are just normal defaults without prerequisite. Each assumption represents all the information conveyed by an agent, and every agent is associated with a (possibly empty) set of tests. Through the execution of tests, the epistemic status of assumptions can change from "plausible" to "certainly true", "certainly false" or "irrelevant", and the KB must be revised so as to incorporate such a change. Because removing all the extensions of an assumption-based theory except one both enables a larger set of plausible pieces of information to be identified and renders inference computationally easier, we are specifically interested in finding sets of tests that allow a KB to be purified (whatever their outcomes). We address this problem especially from the point of view of computational complexity.
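The extension-pruning idea can be made concrete with a deliberately simplified toy model (hypothetical, not the paper's formal framework): treat assumptions as atoms, take extensions to be maximal consistent subsets of the assumptions, and let a test fix one assumption's status, discarding the extensions that disagree with the outcome. Purification then means that, whatever the outcomes, a single extension survives.

```python
from itertools import combinations

# Toy sketch (assumed setup, for illustration only): the background
# theory is encoded as a consistency check, here saying that
# assumptions "a" and "b" conflict.

def consistent(subset):
    return not ({"a", "b"} <= subset)

def extensions(assumptions):
    """Maximal consistent subsets of the assumptions."""
    exts = []
    for r in range(len(assumptions), -1, -1):
        for subset in combinations(sorted(assumptions), r):
            s = set(subset)
            if consistent(s) and not any(s < e for e in exts):
                exts.append(s)
    return exts

kb = {"a", "b", "c"}
print(extensions(kb))  # two extensions: {a, c} and {b, c}

# A test on "a" purifies the KB: for either outcome,
# exactly one extension survives.
for outcome in (True, False):
    surviving = [e for e in extensions(kb) if ("a" in e) == outcome]
    print(outcome, surviving)
```

The paper's complexity question, roughly, is how hard it is to find such a purifying set of tests in general; the brute-force enumeration above is of course exponential.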
In this paper, I introduce and examine the notion of “mathematical engineering” and its impact on mathematical change. Mathematical engineering is an important part of contemporary mathematics, and it roughly consists of the “construction” and development of various machines, probes and instruments used in numerous mathematical fields. As an example of such constructions, I briefly present the basic steps and properties of homology theory. I then try to show that this aspect of contemporary mathematics has important consequences for our conception of mathematical knowledge, in particular mathematical growth.
In this paper, we try to establish that some mathematical theories, like K-theory, homology, cohomology, homotopy theories, spectral sequences, modern Galois theory (in its various applications), representation theory and character theory, etc., should be thought of as (abstract) machines in the same way that there are (concrete) machines in the natural sciences. If this is correct, then many epistemological and ontological issues in the philosophy of mathematics are seen in a different light. We concentrate on one problem which immediately follows the recognition of the particular status of these theories: the demarcation problem between ‘natural kinds’ and ‘artefacts’.
The aim of this paper is to clarify the role of category theory in the foundations of mathematics. There is a good deal of confusion surrounding this issue. A standard philosophical strategy in the face of a situation of this kind is to draw various distinctions and in this way show that the confusion rests on divergent conceptions of what the foundations of mathematics ought to be. This is the strategy adopted in the present paper. It is divided into five sections. We first show that, already in the set-theoretical framework, there are different dimensions to the expression “foundations of”. We then explore these dimensions more thoroughly. After a very short discussion of the links between these dimensions, we move to some of the arguments presented for and against category theory in the foundational landscape. We end on a more speculative note by examining the relationships between category theory and set theory.
Recent discussions of the doctrine of double effect have contained improved versions of the doctrine not subject to some of the difficulties of earlier versions. There is no longer one doctrine of double effect. This essay evaluates four versions of the doctrine: two formulations of the traditional Catholic doctrine, Joseph Boyle's revision of that doctrine, and Warren Quinn's version of the doctrine. I conclude that all of these versions are flawed. Keywords: double effect, intention, Joseph Boyle, medical ethics, Warren Quinn
Approximations form an essential part of scientific activity, and they come in different forms: conceptual approximations (simplifications in models), mathematical approximations of various types (e.g., linear equations instead of non-linear ones, computational approximations), experimental approximations due to limitations of the instruments, and so on. In this paper, we will consider one type of approximation, namely the numerical approximations involved in the comparison of two results, be they experimental or theoretical. Our goal is to lay down the conceptual and formal foundations of a local theory of partial truth. This is done by introducing and exploring the concept of truth space.
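The kind of comparison at issue can be illustrated with a minimal Python sketch. It does not reproduce the paper's truth-space machinery; it only shows, under assumed names and conventions, how agreement between two numerical results with stated uncertainties is naturally a graded rather than all-or-nothing matter.

```python
import math

# Hypothetical sketch: score the agreement of two results by their
# discrepancy measured in combined standard uncertainties, then
# threshold it to get a (coarse) verdict of approximate agreement.

def agreement(x1, u1, x2, u2):
    """Discrepancy between two results, in combined uncertainties."""
    return abs(x1 - x2) / math.hypot(u1, u2)

def compatible(x1, u1, x2, u2, k=2.0):
    """Results count as approximately in agreement within k sigma."""
    return agreement(x1, u1, x2, u2) <= k

# Two measurements of the same quantity:
print(agreement(9.81, 0.03, 9.79, 0.04))   # 0.4 sigma
print(compatible(9.81, 0.03, 9.79, 0.04))  # compatible
print(compatible(9.81, 0.03, 9.50, 0.04))  # not compatible
```

A "local theory of partial truth" in the paper's sense would replace the crude threshold `k` with a structured space of such graded verdicts; the sketch only motivates why a binary true/false comparison is too coarse.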
The surgical treatment of breast cancer has changed in recent years. Analysis of the research that led to these changes yields apparently good arguments for all of the following: (1) The research yielded very great benefits for women. (2) There was no other way of obtaining these benefits. (3) This research violated the fundamental rights of the women who were research subjects. This poses a problem for ethics at many levels.
Conversion of slowly accruing conventionally randomized studies to a prerandomized design has apparently been successful in increasing accrual enough that some of these studies can be completed. Ellenberg (1984) has pointed out some of the ethical dangers of prerandomization. This paper argues that prerandomization must be either unsuccessful or unethical: either conversion to prerandomization will result in no significant increase in the rate of completion of the study, or a significant increase in accrual rate will be achieved at the price of an inadequate attempt to obtain informed consent, of the deceit of patients, or of violations of patient autonomy. The argument of the paper can be sketched as follows. For any given randomized study, either patients prefer one treatment arm to the other or they do not. On the one hand, if they do, then conventional randomization fails; but prerandomization, if done ethically, will fail also. Hence, if prerandomization succeeds in this sort of case, then the trial has been conducted unethically. On the other hand, if patients do not prefer one arm to the other, then prerandomization will succeed, but so will conventional randomization. Hence, prerandomization is either unnecessary or unethical. Ellenberg's concerns count as good moral reasons for not prerandomizing if prerandomization is unnecessary. It follows that prerandomization is always wrong. Keywords: prerandomized clinical trials, ethics, informed consent