11 results found
Disambiguations: Sanjeev Kulkarni [7], Sanjeev R. Kulkarni [4]
  1. Sanjeev R. Kulkarni & Gilbert Harman, Statistical Learning Theory: A Tutorial.
    In this article, we provide a tutorial overview of some aspects of statistical learning theory, which also goes by other names such as statistical pattern recognition, nonparametric classification and estimation, and supervised learning. We focus on the problem of two-class pattern classification for various reasons. This problem is rich enough to capture many of the interesting aspects that are present in the cases of more than two classes and in the problem of estimation, and many of the results can be (...)
  2. Michael K. Miller, Guanchun Wang, Sanjeev R. Kulkarni & Daniel N. Osherson, Wishful Thinking and Social Influence in the 2008 U.S. Presidential Election.
    This paper analyzes individual probabilistic predictions of state outcomes in the 2008 U.S. presidential election. Employing an original survey of more than 19,000 respondents, ours is the first study of electoral forecasting to involve multiple subnational predictions and to incorporate the influence of respondents’ home states. We relate a range of demographic, political, and cognitive variables to individual accuracy and predictions, as well as to how accuracy improved over time. We find strong support for wishful thinking bias in expectations, as (...)
  3. Guanchun Wang, Sanjeev R. Kulkarni & Daniel N. Osherson, Aggregating Large Sets of Probabilistic Forecasts by Weighted Coherent Adjustment.
    Stochastic forecasts in complex environments can benefit from combining the estimates of large groups of forecasters (“judges”). But aggregating multiple opinions faces several challenges. First, human judges are notoriously incoherent when their forecasts involve logically complex events. Second, individual judges may have specialized knowledge, so different judges may produce forecasts for different events. Third, the credibility of individual judges might vary, and one would like to pay greater attention to more trustworthy forecasts. These considerations limit the value of simple aggregation (...)
  4. Guanchun Wang, Sanjeev Kulkarni & Daniel N. Osherson, Improving Aggregated Forecasts of Probability.
    The Coherent Approximation Principle (CAP) is a method for aggregating forecasts of probability from a group of judges by enforcing coherence with minimal adjustment. This paper explores two methods to further improve the forecasting accuracy within the CAP framework and proposes practical algorithms that implement them. These methods allow flexibility to add fixed constraints to the coherentization process and compensate for the psychological bias present in probability estimates from human judges. The algorithms were tested on a data set of nearly (...)
  5. Gilbert Harman & Sanjeev Kulkarni, Statistical Learning Theory as a Framework for the Philosophy of Induction.
    Statistical Learning Theory (e.g., Hastie et al., 2001; Vapnik, 1998, 2000, 2006) is the basic theory behind contemporary machine learning and data mining. We suggest that the theory provides an excellent framework for philosophical thinking about inductive inference.
  6. Sanjeev Kulkarni, Statistical Learning Theory as a Framework for the Philosophy of Induction.
    Statistical Learning Theory (e.g., Hastie et al. 2001; Vapnik 1998, 2000, 2006; Devroye, Györfi, Lugosi 1996) is the basic theory behind contemporary machine learning and pattern recognition. We suggest that the theory provides an excellent framework for the philosophy of induction (see also Harman and Kulkarni 2007).
  7. Gilbert Harman & Sanjeev Kulkarni (2009). Précis of Reliable Reasoning: Induction and Statistical Learning Theory. Abstracta 5 (3):5-9.
  8. Gilbert Harman & Sanjeev Kulkarni (2009). Response to Shaffer, Thagard, Strevens and Hanson. Abstracta 5 (3):47-56.
  9. Joel Predd, Robert Seiringer, Elliott Lieb, Daniel Osherson, H. Vincent Poor & Sanjeev Kulkarni (2009). Probabilistic Coherence and Proper Scoring Rules. IEEE Transactions on Information Theory 55 (10):4786-4792.
    We provide a self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem recapitulates insights achieved by other investigators, and clarifies the connection of coherence and proper scoring rules to Bregman divergence.
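    The theorem can be made concrete with a small example. Below is a minimal Python sketch (an illustration, not code from the paper), using the Brier score, a standard proper scoring rule: an incoherent forecast over the pair {A, not-A}, whose probabilities sum to more than 1, is strictly dominated by its Euclidean projection onto the coherent set, whichever outcome obtains.

      # Minimal sketch (illustrative, not from the paper): an incoherent
      # forecast is dominated under the Brier score, a proper scoring rule.

      def brier(forecast, outcome):
          """Quadratic penalty: sum of squared gaps between forecast and outcome."""
          return sum((f - x) ** 2 for f, x in zip(forecast, outcome))

      incoherent = (0.8, 0.5)    # P(A) + P(not-A) = 1.3: violates coherence
      coherent = (0.65, 0.35)    # Euclidean projection onto the coherent set {(q, 1-q)}

      # The two possible outcomes: A occurs (1, 0), or not-A occurs (0, 1).
      # The coherent forecast scores strictly better in both cases, so the
      # incoherent forecast is dominated, as the theorem predicts.
      for outcome in [(1, 0), (0, 1)]:
          assert brier(coherent, outcome) < brier(incoherent, outcome)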
  10. Gilbert Harman & Sanjeev Kulkarni (2007). Reliable Reasoning: Induction and Statistical Learning Theory. A Bradford Book.
  11. Gilbert Harman & Sanjeev R. Kulkarni (2006). The Problem of Induction. Philosophy and Phenomenological Research 72 (3):559-575.
    The problem of induction is sometimes motivated via a comparison between rules of induction and rules of deduction. Valid deductive rules are necessarily truth preserving, while inductive rules are not.