Submission of study protocols to research ethics committees constitutes one of the earliest stages at which planned trials are documented in detail. Previous studies have investigated the amendments requested from researchers by RECs, but the types of issues raised during REC review have not been compared across sponsor types. The objective of this study was to identify recurring shortcomings in protocols of drug trials based on REC comments and to assess whether these were more common among industry-sponsored or non-industry trials.
The Technology Assessment (TA) Program established in 2003 as part of the Dutch R&D consortium NanoNed is interesting for what it did, but also as an indication that there are changes in how new science and technology are pursued: the nanotechnologists felt it necessary to spend part of their funding on social aspects of nanotechnology. We retrace the history of the TA program, and present the innovative work that was done on Constructive TA of emerging nanotechnology developments and on aspects of embedding of nanotechnology in society. One achievement is the provision of tools and approaches to help make the co-evolution of technology and society more reflexive. We briefly look forward by outlining its successor program, TA NanoNextNL, in place since 2011.
In many Western science systems, funding structures increasingly stimulate academic research to contribute to practical applications, but at the same time the rise of bibliometric performance assessments has strengthened the pressure on academics to conduct excellent basic research that can be published in scholarly literature. We analyze the interplay between these two developments in a set of three case studies of fields of chemistry in the Netherlands. First, we describe how the conditions under which academic chemists work have changed since 1975. Second, we investigate whether practical applications have become a source of credibility for individual researchers. Indeed, this turns out to be the case in catalysis, where connecting with industrial applications helps in many steps of the credibility cycle. Practical applications yield much less credibility in environmental chemistry, where application-oriented research agendas help to acquire funding, but not to publish prestigious papers or to earn peer recognition. In biochemistry practical applications hardly help in gaining credibility, as this field is still strongly oriented toward fundamental questions. The differences between the fields can be explained by the presence or absence of powerful upstream end-users, who can afford to invest in academic research with promising long-term benefits.
Increased scrutiny of corporate legitimacy has sparked an interest in "historic corporate social responsibility", or the mechanism through which firms take responsibility for past misdeeds. Extant theory on historic CSR implicitly treats corporate engagement with historical criticism as intentional and dichotomous, with firms choosing either a limited or a high engagement strategy. However, this conceptualization is puzzling because a firm's engagement with historic claims involves organizational practices that managers do not necessarily control; hence, it might materialize differently than anticipated. Furthermore, multiple motivations could jointly affect managers' approach to organizational history, especially when dealing with conflicting stakeholder demands, rendering it difficult to historicize consistently. Examining the relationship between the legitimacy of critical historic claims, corporate engagement with these claims, and corporate legitimacy, the present paper performs a historical case study of the Hudson's Bay Company's long-term use of history in stakeholder relations. The data suggest that under conflicting internal and external pressures, the HBC's engagement with historical criticism became "sedimented" over time, involving both open and stakeholder-inclusive practices of "history-as-sensemaking" and instrumental "history-as-rhetoric". Enriching understanding of corporate-stakeholder interaction about the past, this finding may help such interaction generate social value and corporate legitimacy.
The politics of innovation involves displacements between various interrelated settings ranging from the context of design to the context of use. This variety of settings and their particular qualities raise questions about the democratic implications of displacements, which have been addressed within science and technology studies for decades from different perspectives and along various theoretical strands. This article distinguishes five different traditions of conceptualizing the relation between technological innovation and democracy: an intentionalist, a proceduralist, an actor-network, an interpretivist, and a performative perspective. They differ in their concepts of "technology," "politics," and "democracy"; they imply different roles for the analyst and they suggest or urge different political means. It is suggested that spelling out the differences and similarities between the five perspectives creates the possibility of overcoming the limitations of any particular perspective on technology and democracy.
Recently, Cognition and Emotion published an article demonstrating that age cues affect the speed and accuracy of emotion recognition. The authors claimed that the observed effect of target age on emotion recognition is better explained by evaluative than stereotype associations. Although we agree with their conclusion, we believe that with the research method the authors employed, it was impossible to detect a stereotype effect to begin with. In the current research, we successfully replicate previous findings. Furthermore, by changing the comparative context, Study 2 provides a first test of age stereotypes affecting emotion recognition. We discuss recommendations for future studies in the domain of social categorisation and emotion recognition.
In the past decades, computers have become more and more involved in society through the rise of ubiquitous systems, increasing the number of interactions between humans and IT systems. At the same time, the technology itself is getting more complex, enabling devices to act in ways that previously only humans could, based on developments in the fields of both robotics and artificial intelligence. This results in a situation in which many autonomous, intelligent and context-aware systems are involved in decisions that affect their environment. These relations between people, machines, and decisions can take many different forms, but thus far, a systematic account of machine-assisted moral decisions is lacking. This paper investigates the concept of machine-assisted moral decisions from the perspective of technological mediation. It is argued that modern machines not only have morality in the sense of mediating the actions of humans, but that, by making their own decisions within their relations with humans, they mediate morality itself. A classification is proposed to differentiate between four different types of moral relations. The moral aspects within the decisions these systems make are combined into three dimensions that describe the distinct characteristics of different types of moral mediation by machines. Based on this classification, specific guidelines for moral behavior can be provided for these systems.
The classical dichotomous framework has shaped the Western conceptualization of emotions and is still alive in our common imagery, permeating our assumptions about the polarity between emotion and reason. My main purpose is therefore to suggest that another framework can be defended. To that end, I will first analyse the basis of this logic and offer a critique of its main principles and consequences. Then, as a way of surpassing the old dichotomous model, I will argue in favour of the links between reason and emotion, taking into account the new evidence found over the last decades. To conclude, I will point out some of the benefits of replacing a dualistic framework with an interactive one.
In the course of evolution, the human body has developed a variety of temporalities. These impose certain limits on the body. Recent developments in chronobiology refute the assumption that these limits are inherent to the body. Using the concept of 'proper time' ('eigentijd'), I argue that these limits are determined by the interaction between times proper to the body and times foreign to it.
Resolving conflicts between different measurements of a property of a physical system may be a key step in a discovery process. With the emergence of large-scale databases and knowledge bases with property measurements, computer support for the task of conflict resolution has become highly desirable. We will describe a method for model-based conflict resolution and the accompanying computer tool KIMA, which have been applied in a case study in materials science. In order to be a useful aid to scientists, the tool needs to be integrated with other tools in a computer-supported discovery environment. We will give an outline of such a computer-supported discovery environment and argue that its use might lead to new ways of doing science, so-called computer regimes.
This article provides current Schwartz Values Survey data from samples of business managers and professionals across 50 societies that are culturally and socioeconomically diverse. We report the society scores for SVS values dimensions for both individual- and societal-level analyses. At the individual level, we report on the ten circumplex values sub-dimensions and two sets of values dimensions. At the societal level, we report on the values dimensions of embeddedness, hierarchy, mastery, affective autonomy, intellectual autonomy, egalitarianism, and harmony. For each society, we report the Cronbach's α statistics for each values dimension scale to assess their internal consistency, as well as interrater agreement analyses to assess the acceptability of using aggregated individual-level values scores to represent country scores.
This paper examines the vital role played by electron microscopy toward the modern definition of viruses, as formulated in the late 1950s. Before the 1930s viruses could neither be visualized by available technologies nor grown in artificial media. As such they were usually identified by their ability to cause diseases in their hosts and defined in such negative terms as "ultramicroscopic" or invisible infectious agents that could not be cultivated outside living cells. The invention of the electron microscope, with magnification and resolution powers several orders of magnitude better than those of optical instruments, opened up possibilities for biological applications. The hitherto invisible viruses lent themselves especially well to investigation with this new instrument. We first offer a historical consideration of the development of the instrument and, more significantly, advances in techniques for preparing and observing specimens that turned the electron microscope into a routine biological tool. We then describe the ways in which the electron microscopic images, or micrographs, functioned as forms of new knowledge about viruses and resulted in a paradigm shift in the very definition of these entities. Micrographs were not mere illustrations since they did the work for the electron microscopists. Drawing extensively on primary publications, we adduce the role of the new instrument in understanding the so-called eclipse phase in virus multiplication and the unexpected spinoffs of data from electron microscopy in naming and classifying viruses. Thus, we show that electron microscopy functioned not only to provide evidence, but also arguments in facilitating a reordering of the world that it brought into the visual realm.
The idea of progress was lent much importance by Collingwood, but it is difficult to elucidate his views on the idea. Considering his views of other related concepts (change, development, and process) aids the understanding of his idea of progress. Collingwood's treatment of the concept of historical progress shows a lack of consistency, when he denies on the one hand that ways of life can be grasped, while on the other he believes that historical periods may be understood. Collingwood denies the possibility that historical periods can be compared, for each period is characterized and judged in terms of its own problems and the solutions it finds for them. It is possible to distinguish four different positions in Collingwood's attitude to the concept of progress: a) It is dependent on a point of view; b) It is meaningless when used in the realms of art, happiness, and morality; c) It is meaningful when applied to the identity of a certain problem; d) It is necessary in solving practical and theoretical problems.
It is commonly thought that before the introduction of quantum mechanics, determinism was a straightforward consequence of the laws of mechanics. However, around the nineteenth century, many physicists, for various reasons, did not regard determinism as a provable feature of physics. This is not to say that physicists in this period were not committed to determinism; there were some physicists who argued for fundamental indeterminism, but most were committed to determinism in some sense. However, for them, determinism was often not a provable feature of physical theory, but rather an a priori principle or a methodological presupposition. Determinism was strongly connected with principles of causality and continuity and the principle of sufficient reason; this thesis examines the relevance of these principles in the history of physics. Moreover, the history of determinism in this period shows that there were essential changes in the relation between mathematics and physics: whereas in the eighteenth century, there were metaphysical arguments which lent support to differential calculus, by the early twentieth century the development of rigorous foundations of differential calculus led to concerns about its applicability in physics. The thesis consists of six papers. In the first paper, "On the origins and foundations of Laplacian determinism", I argue that Laplace, who is usually pointed out as the first major proponent of scientific determinism, did not derive his statement of determinism directly from the laws of mechanics; rather, his determinism has a background in eighteenth century Leibnizian metaphysics, and is ultimately based on the law of continuity and the principle of sufficient reason. These principles also provided a basis for the idea that one can find laws of nature in the form of differential equations which uniquely determine natural processes.
In "The Norton dome and the nineteenth century foundations of determinism", I argue that an example of indeterminism in classical physics which has attracted attention in philosophy of physics in recent years, namely the Norton come, was already discussed during the nineteenth century. However, the significance which this type of indeterminism had back then is very different from the significance which the Norton dome currently has in philosophy of physics. This is explained by the fact that determinism was conceived of in an essentially different way: in particular, the nineteenth century authors who wrote about this type of indeterminism regarded determinism as an a priori principle rather than as a property of the equations of physics. In "Vital instability: life and free will in physics and physiology, 1860-1880", I show how Maxwell, Cournot, Stewart and Boussinesq used the possibility of unstable or indeterministic mechanical systems to argue that the will or a vital principle can intervene in organic processes without violating the laws of physics, so that a strictly dualist account of life and the mind is possible. Moreover, I show that their ideas can be understood as a reaction to the law of conservation of energy and to the way it was used in physiology to exclude vital and mental causes. In "The nineteenth century conflict between mechanism and irreversibility", I show that in the late nineteenth century, there was a widespread conflict between the aim of reducing physical processes to mechanics and the recognition that certain processes are irreversible. Whereas the so-called reversibility objection is known as an objection that was made to the kinetic theory of gases, it in fact appeared in a wide range of arguments, and was susceptible to very different interpretations. 
It was only when the project of reducing all of physics to mechanics lost favor, in the late nineteenth century, that the reversibility objection came to be used as an argument against mechanism and against the kinetic theory of gases. In "Continuity in nature and in mathematics: Boltzmann and Poincaré", I show that the development of rigorous foundations of differential calculus in the nineteenth century led to concerns about its applicability in physics: through this development, differential calculus was made independent of empirical and intuitive notions of continuity and was instead based on mathematical continuity conditions, and for Boltzmann and Poincaré, the applicability of differential calculus in physics depended on whether these continuity conditions could be given a foundation in intuition or experience. In the final paper, "Determinism around 1900", I briefly discuss the implications of the developments described in the previous two papers for the history of determinism in physics, through a discussion of determinism in Mach, Poincaré and Boltzmann. I show that none of them regards determinism as a property of the laws of mechanics; rather, for them, determinism is a precondition for science, which can be verified to the extent that science is successful.
Categorization is probably one of the most central areas in the study of cognition, language and information. However, there is a serious gap running through the semantic treatments of categories and concepts. On one side we find the 'classical', formal approach, based on logical considerations, that has lent itself well to computational applications. In this approach, concepts are defined in terms of necessary and sufficient conditions. On the other side is an informal approach to categorization that is usually motivated by the results of psychological experiments and that has not found its way into technologies on a large scale. Concepts here are based on prototypes, stereotypical attributes and family resemblances, which have become the hallmark of cognitive semantics. Obviously, it is important to bridge this gap, for theoretical and practical reasons.
Wilson & Sober's discussion of group selection is marred by the absence of plausible examples of human group-level behavioral adaptation. The trait of authoritarianism is one possible example of such an adaptation. It reduces within-group variance in reproductive success, manifests itself more strongly in response to group-level threat, and is found in a variety of cultures.
Technical hitches mar van Gelder's proposed map of the conceptual landscape, particularly with respect to descriptive levels and the trio of instantiation, realisation, and implementation. However, for all the formal quibbles, van Gelder is onto something important; the tension he notes between computationalism and a dynamical alternative threatens to transform the way we conduct cognitive science research.
This paper revisits a well-known rebuttal of Peter van Inwagen's consequence argument. This CS-rebuttal, as I shall call it, focuses on the counterfactual structure of alternative possibilities. It shows that the ability to do otherwise is such that if the agent had exercised it, the distant past and/or the laws of nature would have been different. On the counterfactual scenario, there is, therefore, no need for the agent to exercise an ability to change the past or the laws of nature. I first present van Inwagen's original version of the consequence argument. After exposing some difficulties with Lewis' famous version of the CS-rebuttal, I proceed by explaining and defending an older and, in my view, superior version. I subsequently discuss a traditional incompatibilist rejoinder, which insists that the past and the laws of nature are fixed. Although this rejoinder delivers a valid argument against the existence of alternative possibilities, it relies on premises the compatibilist explicitly rejects. The outcome of the debate is therefore properly characterized as a genuine dialectical stalemate between compatibilists and incompatibilists. In the final sections of the paper, I demonstrate that attempts by Fischer, Holliday and Fischer and Pendergraft to move beyond the stalemate in favor of the incompatibilist position all fail. I thereby show that the debate is marred by a misunderstanding of the semantics underlying the backtracking conditionals sometimes associated with the compatibilist position. In view of my arguments, the dialectical stalemate between compatibilists and incompatibilists regarding the counterfactual structure of the ability to do otherwise remains fully intact.
In response to prominent criticisms of virtue ethical accounts of right action, Daniel Russell has argued that these criticisms are misguided insofar as they rest on an incorrect understanding of what virtue ethicists mean by 'right action', drawing on Rosalind Hursthouse's influential account of the term. Liezl van Zyl has explored, though not fully endorsed, a similar approach. The response holds that virtue ethicists do not embrace a strong connection between (i) right action and (ii) what any given agent ought to do in a given set of circumstances. Rather, 'right action' is a matter of action assessment, and indicates that a given action is morally excellent and praiseworthy. More generally, the proposed account of rightness emphasizes both (i) an agent's past and how she came to be in certain circumstances - is it a result of her own vice or wrong actions? - and (ii) the agent's own future happiness and well-being - will an action be so terrible that her life is marred and ruined? The narrative structure of an agent's life thus plays a significant role in determining whether an action is right. This revisionary conception of right action is the focus of the current chapter.
Scientific representation: A long journey from pragmatics to pragmatics. Journal article in Metascience, DOI 10.1007/s11016-010-9465-5 (Online ISSN 1467-9981, Print ISSN 0815-0796). Authors: James Ladyman (Department of Philosophy, University of Bristol, UK), Otávio Bueno (Department of Philosophy, University of Miami, USA), Mauricio Suárez (Department of Logic and Philosophy of Science, Complutense University of Madrid, Spain), and Bas C. van Fraassen (Philosophy Department, San Francisco State University, USA).
In 1893, the "physicist-engineer" André Blondel invented the bifilar oscillograph, which made it possible to visualize varying voltages and currents. With this powerful means of investigation, he first undertook the study of the phenomena of the electric arc, then used for coastal and urban lighting, and subsequently of the singing arc employed as a radio-wave transmitter in wireless telegraphy. In 1905, he brought to light a new type of non-sinusoidal oscillation within the singing arc. Twenty years later, Balthasar Van der Pol would recognize that these were in fact relaxation oscillations. To explain this phenomenon, Blondel made use of a representation in the phase plane and showed that its evolution takes the form of small cycles. During the First World War, the triode gradually replaced the singing arc in transmission systems. After the war, Blondel transposed most of his results to the triode by analogy with the singing arc. In April 1919, he published a long memoir in which he introduced the terminology "self-sustained oscillations" and proposed to illustrate this concept with the example of Tantalus's cup, later taken up by Van der Pol and by Philippe Le Corbeiller. He then provided a definition of a self-sustained system quite close to those subsequently given by Aleksandr Andronov and Van der Pol. To study the stability of the oscillations sustained by the triode and by the singing arc, he this time used a representation in the complex plane and expressed the amplitude in polar coordinates. He then explained the sustaining of the oscillations by the existence of cycles exhibiting almost all the characteristics of Poincaré's limit cycles. Finally, in November 1919, Blondel derived, a year before Van der Pol, the equation for the oscillations of a triode.
In March 1926, Blondel established the differential equation characterizing the oscillations of the singing arc, identical in every respect to the one obtained concurrently by Van der Pol for the triode. Thus, throughout his career, Blondel made a fundamental and relatively little-known contribution to the elaboration of the theory of nonlinear oscillations. The purpose of this article is therefore to analyze his principal works in this domain and to assess their importance, and indeed their influence, by placing them in the perspective of the development of this theory.
In 1997 an international conference on Aristotle and modern science took place in Thessaloniki. Aristotle's view of nature—his criticism of the atomists, on the one hand, and modern science, on the other—seem to be widely opposed, but in recent years science has changed so much that scientists resort to certain basic notions of Aristotle's natural philosophy to underpin their theories and make material nature more intelligible. In the first paper Hilary Putnam argues against Victor Gaston that Aristotle's theory of cognition is a "direct realism" and not, as many say, a theory based on representation. Perception and thinking are in direct contact with things and their properties. In a charming comparison Bas C. van Fraassen argues that both tragedy and science are subspecies of representation. As in poetry, in science the inexplicable is kept off stage. John P. Anton is confident that the revival of Aristotle's model of science can provide a solution to the question of the unity of the various sciences. He levels a stinging attack at Putnam's interpretation of Aristotle's theory of cognition. Lambros Couloubaritsis voices amazement that in Physics IV Aristotle says nothing about the creative capacity of time, but believes that the notion of "appropriate time" will bring this out. James R. Brown argues that the main stream of science stemming from the seventeenth century is a fusion of the Platonic and mechanic traditions, but that in recent years Aristotle has made an impressive comeback. He examines to what extent the notion of potentiality may be in agreement with and help explain certain physical facts perceived by common sense observation, although it does no justice to quantum "bizarreness". He sees better help in the Platonic account of formal causality. Speaking about levels of reality Basarab Nicolescu believes that the universe is self-creating, showing an open structure. A flow of information traverses the various levels of reality.
The notion of potency, we are told by Ephtichios Bitsakis, exercises quite some attraction on scientists. Indeed, Aristotle is a precursor of scientific realism, but his theories are marred by many inconsistencies: the Prime Mover, final causality, and entelechy contradict his dynamic view of nature and should be abandoned. In the transformation of massive particles into nonmassive ones the actual mass becomes potential. Thomas M. Olshewsky points out that Aristotle has a differentiated notion of prime matter and rejects absolute prime matter. Jagdish Hattiangadi suggests giving up the idea of substance. Demetra Sfendoni-Mentzou also tackles "the always actual question" of what matter is for Aristotle. Nowadays the idea of stable particles has disappeared and we have to deal with what is potentially real.
S. A. Lloyd - Book Review - Journal of the History of Philosophy 40.3: 397-398. David van Mill. Liberty, Rationality, and Agency in Hobbes's Leviathan. Albany: The State University of New York Press, 2001. Pp. xii + 253. Cloth, $59.50. Paper, $19.95. David van Mill's provocative book is an ambitious and thoughtful argument by an author well-versed in Hobbes's writings and the secondary literature on them, intended to revolutionize our understanding of Hobbes's theory of rational agency. It is a shame that the text is marred by such egregious copy-editing that its unrelenting misprints, misquotes, inaccurate references, and grammatical infelicities so distract from van Mill's...
Plurality of concepts of identity, foundational identity, and dualism. This paper defends three claims. One: the Dutch word for 'identity' is used to express very different concepts, such as the concept of 'character', 'self-image', 'social identity', 'narrative identity', and 'identity through time'. Two: each of these concepts is applicable to human persons, but the concept of 'identity through time' is, in a crucial respect, more fundamental than the others. Three: because the fundamental concept of identity applies to human persons, dualism is to be preferred over physicalism.
Psillos has recently argued that van Fraassen's arguments against abduction fail. Moreover, he claimed that, if successful, these arguments would equally undermine van Fraassen's own constructive empiricism, for, Psillos thinks, it is only by appeal to abduction that constructive empiricism can be saved from issuing in a bald scepticism. We show that Psillos' criticisms are misguided, and that they are mostly based on misinterpretations of van Fraassen's arguments. Furthermore, we argue that Psillos' arguments for his claim that constructive empiricism itself needs abduction point to his failure to recognize the importance of van Fraassen's broader epistemology for constructive empiricism. Towards the end of our paper we discuss the suspected relationship between constructive empiricism and scepticism in the light of this broader epistemology, and from a somewhat more general perspective.
Max van Manen offers an extensive exploration of phenomenological traditions and methods for the human sciences. It is his first comprehensive statement of phenomenological thought and research in over a decade. Phenomenology of practice refers to the meaning and practice of phenomenology in professional contexts such as psychology, education, and health care, as well as to the practice of phenomenological methods in contexts of everyday living. Van Manen presents a detailed description of key phenomenological ideas as they have evolved over the past century; he then thoughtfully works through the methodological issues of phenomenological reflection, empirical methods, and writing that a phenomenology of practice offers to the researcher. Van Manen's comprehensive work will be of great interest to all concerned with the interrelationship between being and acting in human sciences research and in everyday life.
In this paper, the author defends Peter van Inwagen's modal skepticism. Van Inwagen accepts that we have much basic, everyday modal knowledge, but denies that we have the capacity to justify philosophically interesting modal claims that are far removed from this basic knowledge. The author also defends the argument by means of which van Inwagen supports his modal skepticism, offering a rebuttal to an objection along the lines of that proposed by Geirrson. Van Inwagen argues that Stephen Yablo's recent and influential account of the relationship between conceivability and possibility supports his skeptical claims. The author's defence involves a creative interpretation and development of Yablo's account, which results in a recursive account of modal epistemology, what the author calls the "safe explanation" theory of modal epistemology.