The urgent drive for vaccine development in the midst of the COVID-19 pandemic has prompted public and private organisations to invest heavily in the research and development of a COVID-19 vaccine. Organisations globally have affirmed a commitment to fair global access, but the means by which a successful vaccine can be mass-produced and equitably distributed remains notably unanswered. Barriers for low-income countries include the inability to afford vaccines as well as inadequate resources to vaccinate, barriers that are exacerbated during a pandemic. Fair distribution of a pandemic vaccine is unlikely without a solid ethical framework for allocation. This piece analyses four allocation paradigms: ability to develop or purchase; reciprocity; ability to implement; and distributive justice, and synthesises their ethical considerations to develop an allocation model fit for the COVID-19 pandemic.
In his classic book “The Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. However, additional problematic assumptions are required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts; the probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it, but he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
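Schematically, the representation theorem described above takes the following standard form (our paraphrase, not a quotation from Savage):

```latex
% Savage-style representation theorem (schematic statement).
% If the preference relation $\succsim$ on acts satisfies the postulates,
% then there exist a finitely additive probability measure $P$ on events
% and a utility function $u$ on consequences such that, for all acts $f, g$:
f \succsim g
\quad\Longleftrightarrow\quad
\int_{S} u\bigl(f(s)\bigr)\,dP(s) \;\ge\; \int_{S} u\bigl(g(s)\bigr)\,dP(s),
% with $P$ unique and $u$ unique up to positive affine transformation.
```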
There is a long-standing disagreement in the philosophy of probability and Bayesian decision theory about whether an agent can hold a meaningful credence about an upcoming action, while she deliberates about what to do. Can she believe that it is, say, 70% probable that she will do A, while she chooses whether to do A? No, say some philosophers, for Deliberation Crowds Out Prediction (DCOP), but others disagree. In this paper, we propose a valid core for DCOP, and identify terminological causes for some of the apparent disputes.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
In countries such as China, where Confucianism is the backbone of the national culture, high-social-status entrepreneurs are inclined to engage in corporate social responsibility (CSR) activities, owing to perceived high pressure from stakeholders and a high perceived ability to do CSR. Based on a large-scale survey of private enterprises in China, our paper finds that Chinese entrepreneurs at private firms who have high social status are prone to engage in social responsibility efforts. In addition, high-social-status Chinese entrepreneurs are even more likely to engage in social responsibility efforts as they become more politically connected and as their region becomes more market-oriented. These findings extend the upper echelons perspective on CSR into the Chinese context by shedding light on the antecedents of CSR from a new perspective and clarifying the boundary conditions of the social status–CSR link from an institutional perspective.
Achieving the global benefits of artificial intelligence will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and the more practical challenges of coordinating across different locations. This paper focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: cooperation does not require achieving agreement on principles and standards for all areas of AI; and it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding, and clarifying where different forms of agreement will be both necessary and possible. We make a number of recommendations for practical steps and initiatives, including translation and multilingual publication of key documents, researcher exchange programmes, and the development of research agendas on cross-cultural topics.
Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as they initially seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett's work on backward causation and Lewis's on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis's own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
This paper offers a fine-grained analysis of different versions of the well-known sure-thing principle. We show that Savage's formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than what was originally intended.
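For reference, a standard formal rendering of P2 runs as follows (our notation, not a quotation from Savage; events are subsets of the state space $S$):

```latex
% Savage's P2 (formal sure-thing principle): preference between two acts
% that agree outside an event $E$ depends only on their values on $E$.
% For all acts $f, g, f', g'$ and every event $E \subseteq S$:
\text{if}\quad f|_E = f'|_E,\quad g|_E = g'|_E,\quad
f|_{E^c} = g|_{E^c},\quad f'|_{E^c} = g'|_{E^c},
\quad\text{then}\quad f \succsim g \iff f' \succsim g'.
```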
The event-triggered consensus control problem for leader-following multiagent systems subject to external disturbances is investigated using output feedback. In particular, a novel distributed event-triggered protocol is proposed that adopts dynamic observers to estimate the internal state information from the measurable output signal. It is shown that under the developed observer-based event-triggered protocol, multiple agents reach consensus with the desired disturbance-attenuation ability while exhibiting no Zeno behavior. Finally, a simulation is presented to verify the obtained results.
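To illustrate the event-triggered idea (though not the paper's observer-based output-feedback design, which also handles disturbances and continuous-time Zeno exclusion), here is a minimal discrete-time sketch: single-integrator followers rebroadcast their state only when it drifts beyond a threshold `sigma` from the last broadcast. The graph, gains, threshold, and leader value are all illustrative choices.

```python
# Minimal event-triggered leader-following consensus sketch (illustrative only).
import numpy as np

def simulate(steps=3000, dt=0.01, sigma=0.02, seed=0):
    rng = np.random.default_rng(seed)
    leader = 1.0                                   # static leader state
    A = np.array([[0, 1, 0, 1],                    # ring graph of 4 followers
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    b = np.array([1.0, 0.0, 0.0, 0.0])             # only follower 0 sees the leader
    x = rng.uniform(-2.0, 2.0, size=4)             # follower states
    x_hat = x.copy()                               # last-broadcast states
    events = 0
    for _ in range(steps):
        # control uses only broadcast information x_hat, never the true states
        u = -(A * (x_hat[:, None] - x_hat[None, :])).sum(axis=1) \
            - b * (x_hat - leader)
        x = x + dt * u
        trigger = np.abs(x - x_hat) >= sigma       # event condition per agent
        x_hat[trigger] = x[trigger]                # broadcast only on trigger
        events += int(trigger.sum())
    return x, events

x_final, n_events = simulate()
```

With this trigger rule the followers settle near the leader's value while broadcasting far less often than a periodic scheme would (which here would cost 4 agents × 3000 steps = 12,000 broadcasts).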
Grounded in Bandura’s social cognitive theory of moral thought and action, we develop a conceptual model linking supervisors’ perceptions of organizational injustice to abusive supervision, with moral disengagement mechanisms acting as the underlying process. Specifically, we elaborate why and how supervisors’ experiences of each type of injustice would trigger their adoption of distinctive moral disengagement mechanisms, which in turn lead to their abusive supervisory conduct. The present conceptual model sheds new light on the link between organizational injustice and abusive supervision from a moral perspective. In addition, it provides important theoretical and managerial implications for our current understanding of why and how abusive supervision happens.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known, highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including in simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption; we need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
Recently, infrared human action recognition has attracted increasing attention because it has many advantages over visible light, being robust to illumination changes and shadows. However, infrared action data remain limited, which degrades the performance of infrared action recognition. Motivated by the idea of transfer learning, an infrared human action recognition framework using auxiliary data from the visible-light domain is proposed to address the problem of limited infrared action data. In the proposed framework, we first construct a novel Cross-Dataset Feature Alignment and Generalization framework to map the infrared data and visible-light data into a common feature space, where Kernel Manifold Alignment and a dual aligned-to-generalized encoders model are employed to represent the features. Then, a support vector machine is trained using both the infrared and visible-light data, and can classify features derived from infrared data. The proposed method is evaluated on InfAR, a publicly available infrared human action dataset. To build up the auxiliary data, we set up a novel visible-light action dataset, XD145. Experimental results show that the proposed method achieves state-of-the-art performance compared with several transfer learning and domain adaptation methods.
This paper proposes an innovative ducted-fan aerial manipulator, particularly suitable for tasks in confined environments where traditional multirotors and helicopters would be inaccessible. The dynamic model of the aerial manipulator is established through comprehensive mechanism modeling and parametric frequency-domain identification. On this basis, a composite controller for the aerial platform is proposed: a basic static robust controller is designed via H-infinity synthesis to achieve baseline performance, and an adaptive auxiliary loop is designed to estimate and compensate for the effect exerted on the vehicle by the manipulator. Computer simulation analyses show good stability of the aerial vehicle under manipulator motion and good tracking performance of the manipulator end effector, which verify the feasibility of the proposed aerial manipulator design and the effectiveness of the proposed controller, indicating that the system can meet the requirements of high-precision operation tasks.
Time theft is a costly burden on organizations. However, there is limited knowledge about why time theft occurs. To advance this line of research, this conceptual paper examines the association between organizational injustice and time theft from identity, moral, and equity perspectives. This paper proposes that organizational injustice triggers time theft through decreased organizational identification. It also proposes that moral disengagement and equity sensitivity moderate this process, such that organizational identification is less likely to mediate among employees with high moral disengagement and more likely to mediate among employees who are equity 'sensitives' and 'entitleds'.
With the rapid development of the mobile Internet, social networks have become an important platform for users to receive, release, and disseminate information. In order to get more valuable information and implement effective supervision of public opinion, it is necessary to study the public opinions, sentiment tendencies, and the evolution of hot events in the social networks of a smart city. In view of social networks’ characteristics, such as short texts, rich topics, diverse sentiments, and timeliness, this paper conducts text modeling with word co-occurrence based on the topic model. Moreover, sentiment computing and a time factor are incorporated to construct a dynamic topic-sentiment mixture model (TSTS). Then, four hot events were randomly selected from microblogs as datasets to evaluate the TSTS model in terms of topic feature extraction, sentiment analysis, and change over time. The results show that the TSTS model is better than traditional models at topic extraction and sentiment analysis. Meanwhile, by fitting the time curve of hot events, the change rules of comments in the social network are obtained.
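The topic-modeling component underlying such mixture models can be illustrated with a plain LDA fit on a toy microblog-style corpus. This is only a stand-in for the topic part of the pipeline: the paper's TSTS model additionally folds in word co-occurrence handling for short texts, sentiment computing, and the time factor, all omitted here.

```python
# Minimal topic-model sketch (LDA only; sentiment and time factors omitted).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "traffic jam downtown heavy traffic",     # toy "hot event" posts
    "downtown traffic accident jam",
    "concert tickets music festival",
    "music festival concert stage",
]
X = CountVectorizer().fit_transform(docs)     # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)                 # per-document topic mixtures, rows sum to 1
```

A dynamic topic-sentiment model would replace the static document-topic mixtures with distributions that also condition on a sentiment label and a time stamp.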
There are two accounts of how readers of unspaced writing systems know where to move their eyes: saccades are directed toward default targets; or saccade lengths are adjusted dynamically, as a function of ongoing parafoveal processing. This article reports an eye-movement experiment supporting the latter hypothesis by demonstrating that the slope of the relationship between the saccade launch site on word N and the subsequent fixation landing site on word N + 1 is > 1, suggesting that saccades are lengthened from launch sites that afford more parafoveal processing. This conclusion is then evaluated and confirmed via simulations using implementations of both hypotheses, with a discussion of the implications of these results for our understanding of saccadic targeting during reading and for existing models of eye-movement control.
Urban building layout is an important factor influencing urban ventilation, and the heat-island effect has become an important factor affecting the quality of urban life. Optimizing the urban building layout can help mitigate the heat-island effect. The traditional ventilation-corridor analysis method, based on least-cost path analysis, can only generate a few main ventilation corridors; it is difficult to obtain global ventilation results covering the whole study area with this method. Building on urban morphology and least-cost path analysis, this study proposes a “least cumulative ventilation cost” method for analyzing urban ventilation. Taking downtown Wuhan as the research area, the urban ventilation environment under different wind directions and seasons was analyzed. This method can effectively express the ventilation conditions throughout the whole study area and can simultaneously express the quality of the generated corridors. The results show that Wuhan has three levels of ventilation corridors. Moreover, the ventilation conditions in Wuchang are better than those in Hankou.
The recent rapid development of information technology, such as sensing, communications, and database technology, allows us to use simulation experiments to analyze serious accidents caused by hazardous chemicals. Due to the toxicity and diffusion of hazardous chemicals, these accidents often lead not only to severe consequences and economic losses, but also to traffic jams. Emergency evacuation after hazardous chemical accidents is an effective means to reduce the loss of life and property and to resume the transport network as smoothly and quickly as possible. This paper considers the dynamic changes in the concentration of hazardous chemicals after a leakage and simulates the diffusion process. Based on the characteristics of emergency evacuation after hazardous chemical accidents, we build a mixed-integer programming model and design a heuristic algorithm using network optimization and diffusion simulation. We then verify the validity and feasibility of the algorithm using Jinan, China, as a computational example. Finally, we compare the results from different scenarios to explore the key factors affecting the effectiveness of the evacuation process.
The quality factor Q is an important parameter for measuring the attenuation of seismic waves. Reliable Q estimation and stable inverse Q filtering are expected to improve the resolution of seismic data and recover deep-layer energy. Many methods of estimating Q are based on an individual wavelet; however, it is difficult to extract an individual wavelet precisely from seismic reflection data. To avoid this problem, we have developed a method of estimating Q directly from reflection data. The core of the methodology is selecting the peak-frequency points and linearly fitting their logarithmic spectrum against the time-frequency product; we then calculate Q from the relationship between Q and the optimized slope. First, to get the peak-frequency points at different times, we use the generalized S transform to produce a 2D high-precision time-frequency spectrum. According to the seismic-wave attenuation mechanism, the logarithmic spectrum attenuates linearly with the product of frequency and time. Thus, the second step of the method is transforming the 2D spectrum into 1D by variable substitution. In the transformation, only the peak-frequency points participate in the fitting process, which reduces the impact of interference on the spectrum. Third, we obtain the optimized slope by least-squares fitting. To demonstrate the reliability of our method, we applied it to a constant-Q model and to real data from a work area. For the real data, we calculated the Q curve of a seismic trace near a well and obtained a high-resolution section by applying stable inverse Q filtering. The model and real-data results indicate that our method is effective and reliable for estimating Q.
We developed an integrated method that can better constrain subsalt tomography using geology, thermal history modeling, and rock-physics principles. This method, called rock-physics-guided velocity modeling for migration, uses predicted pore pressure as a guide to improve the quality of the earth model. We first generated a rock-physics model that provided a range of plausible pore pressures lying between hydrostatic and fracture pressure. This range of plausible pore pressures was then converted into a range of plausible depth-varying velocities, as a function of pore pressure, that is consistent with geology and rock physics. Such a range of plausible velocities is called the rock-physics template. This template was then used to flatten the seismic gathers; we call this the pore-pressure scan technique. The outcome of the pore-pressure scan process was an "upper" and "lower" bound of pore pressure for a given earth model. These velocity bounds were then used as constraints on the subsequent tomography, and further iterations were carried out. The integrated method not only flattened the common image point gathers but also limited the velocity field to its physically and geologically plausible range without well control; for example, in the study area it produced a better image and pore-pressure prognosis below salt. We determined that geologic control is essential, and we used it for stratigraphy, structure, unconformities, etc. The method has had several subsalt applications in the Gulf of Mexico and proved that subsalt pore pressure can be reliably predicted.
This short paper has two parts. First, we prove a generalisation of Aumann's surprising impossibility result in the context of rational decision making. In the second part, we discuss the interpretational meaning of some formal setups of epistemic models, by way of presenting an interesting puzzle in epistemic logic. The aim is to highlight certain problematic aspects of these epistemic systems concerning the first-/third-person asymmetry that underlies both parts of the story. This asymmetry, we argue, reveals certain limits of what epistemic models can be.
This study examined the ability to comprehend conventional and non-conventional implicatures, and the effects of proficiency and learning context on the comprehension of implicature in L2 Chinese. Participants were three groups of college students of Chinese: elementary-level foreign language learners, advanced-level foreign language learners, and advanced-level heritage learners. They completed a 36-item computer-delivered listening test measuring their ability to comprehend three types of implicature: conventional indirect refusals, conventional indirect opinions, and non-conventional indirect opinions. Comprehension was analyzed for accuracy and speed. There was a significant effect of implicature type on accuracy, but not on comprehension speed. A significant effect of participant group was observed on accuracy, but the effect on comprehension speed was mixed.
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some of its technical properties in relation to the other axioms of the system.
The notion of comparative probability defined in Bayesian subjectivist theory stems from the intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate which event is more probable, due to, e.g., a lack of robust statistical information. We take these cases to involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
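To fix ideas, determinate comparative probability and one common way of generalising it to the indeterminate case can be rendered as follows (a standard "set of measures" formulation; the paper's own axiomatisation may differ in detail):

```latex
% Determinate case: a comparative relation $\succsim$ on events is
% represented by a single probability measure $P$ when, for all events $E, F$:
E \succsim F \iff P(E) \ge P(F).
% Indeterminate case: comparisons are licensed by a set $\mathcal{P}$ of
% measures, and $E \succsim F$ holds only when all measures agree:
E \succsim F \iff P(E) \ge P(F)\ \text{for all } P \in \mathcal{P},
% leaving $E$ and $F$ incomparable whenever the measures disagree.
```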
It is widely held that the first-order part of Frege's Begriffsschrift is complete. However, there does not seem to have been a formal verification of this received claim. The general concern is that Frege's system is one axiom short in the first-order predicate calculus compared with the now-standard first-order theory, although Frege has one extra inference rule in his system. The question, then, is whether Frege's first-order calculus is still deductively sufficient as far as first-order completeness is concerned. In this short note we confirm that the missing axiom is derivable from his stated axioms and inference rules, and hence that the logical system of the Begriffsschrift is indeed first-order complete.
In the present study, we tested the effectiveness of color coding on the programming learning of students who were learning from video lectures. Effectiveness was measured using multimodal physiological measures, combining eye tracking and electroencephalography (EEG). Using a between-subjects design, 42 university students were randomly assigned to two video lecture conditions. The participants’ eye-tracking and EEG signals were recorded while watching the assigned video, and their learning performance was subsequently assessed. The results showed that the color-coded design was more beneficial than the grayscale design, as indicated by smaller pupil diameters, shorter fixation durations, higher EEG theta- and alpha-band power, lower EEG cognitive load, and better learning performance. The present findings have practical implications for designing slide-based programming video lectures: slides should highlight the format of the program code using color coding.
Objective: This study aimed to explore the relationships among cognitive fusion, experiential avoidance, and obsessive–compulsive symptoms in patients with obsessive–compulsive disorder (OCD). Methods: A total of 118 outpatients and inpatients with OCD and 109 gender- and age-matched healthy participants were assessed using the cognitive fusion questionnaire (CFQ), the acceptance and action questionnaire, 2nd edition (AAQ-II), the Yale–Brown obsessive–compulsive scale, the Hamilton anxiety scale, and the Hamilton depression scale. Results: The levels of cognitive fusion and experiential avoidance in the OCD group were significantly higher than those in the healthy control group. Regression analysis showed that, in predicting the total obsessive–compulsive symptom score, AAQ-II and CFQ entered the equation, explaining 17.1% of the variance. In predicting anxiety, only AAQ-II entered the equation, explaining 13% of the variance. In predicting depression, AAQ-II entered the equation, explaining 17.7% of the variance. Conclusion: Cognitive fusion and experiential avoidance may be important factors in the maintenance of OCD, and experiential avoidance positively predicts the anxiety and depression of OCD patients.
Interpersonal physiological synchrony has consistently been found during collaborative tasks. However, few studies have applied synchrony to predict collaborative learning quality in real classrooms. To explore the relationship between interpersonal physiological synchrony and collaborative learning activities, this study collected electrodermal activity (EDA) and heart rate during naturalistic class sessions and compared physiological synchrony between an independent task and a group-discussion task. The students were recruited from a renowned university in China. Since each student learns differently and not everyone prefers collaborative learning, participants were sorted into collaboration and independent dyads based on their collaborative behaviors before data analysis. The results showed that, during group discussions, high-collaboration dyads produced significantly higher synchrony than low-collaboration dyads. Given the equivalent engagement levels during the independent and collaborative tasks, the difference in physiological synchrony between high- and low-collaboration dyads was attributable to collaboration quality. Building on this result, a classification analysis was conducted, indicating that EDA synchrony can identify different levels of collaboration quality.
A fundamental question in reading research concerns whether attention is allocated strictly serially, supporting lexical processing of one word at a time, or in parallel, supporting concurrent lexical processing of two or more words (Reichle, Liversedge, Pollatsek, & Rayner, 2009). The origins of this debate are reviewed. We then report three simulations addressing this question using artificial reading agents (Liu & Reichle, 2010; Reichle & Laurent, 2006) that learn to dynamically allocate attention to 1–4 words in order to “read” as efficiently as possible. The simulation results indicate that the agents strongly preferred serial word processing, although they occasionally attended to more than one word concurrently. The reason for this preference is discussed, along with implications for the debate about how humans allocate attention during reading.
Scientists normally earn less money than members of many other professions that require a similar amount of training and qualification. The economic theory of marginal utility and cost-benefit analysis can be applied to explain this phenomenon. Although scientists make less money than entertainment stars, scientists do research work out of interest, and in some countries they also enjoy a much higher reputation and social status.
Legal translation (LT) has become a principal means of presenting Chinese laws to the world in the global era, and its study has proved to be of practical significance. Since proper theoretical guidance is key to the quality of legal translation, this paper focuses on Skopos theory and the strategies applied in the practice of LT. A case study of LT examples from the Criminal Law of the P.R.C. is made while briefly reviewing Skopos theory and its principles. Starting with a short discussion of LT, this paper probes the applicability of the three principles of Skopos theory, namely the Skopos rule, the coherence rule, and the fidelity rule, to legal texts, especially to the translation of the Criminal Law of the P.R.C. Based on this study, strategies for LT are proposed, in the hope that they can serve as a useful reference for other legal texts.