92 results found
  1. Nick Chater & Christopher D. Manning (2006). Probabilistic Models of Language Processing and Acquisition. Trends in Cognitive Sciences 10 (7): 335-344.
    30 citations
  2. Christopher Manning, Accurate Unlexicalized Parsing.
    …simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is (...)
  3. Christopher D. Manning, An Introduction to Information Retrieval.
    Contents: 1. Boolean retrieval; 2. The term vocabulary and postings lists; 3. Dictionaries and tolerant retrieval; 4. Index construction; 5. Index compression; 6. Scoring, term weighting and the vector space model; 7. Computing scores in a complete search system; 8. Evaluation in information retrieval; 9. Relevance feedback and query expansion; 10. XML retrieval; 11. Probabilistic information retrieval; 12. Language models for information retrieval; 13. Text classification and Naive Bayes (...)
    6 citations
  4. Christopher Manning, Probabilistic Syntax.
    “Everyone knows that language is variable.” This is the bald sentence with which Sapir (1921:147) begins his chapter on language as an historical product. He goes on to emphasize how two speakers’ usage is bound to differ “in choice of words, in sentence structure, in the relative frequency with which particular forms or combinations of words are used”. I should add that much sociolinguistic and historical linguistic research has shown that the same speaker’s usage is also variable (Labov 1966, Kroch (...)
    4 citations
  5. David Hall & Christopher D. Manning, Studying the History of Ideas Using Topic Models.
    How can the development of ideas in a scientific field be studied over time? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006. We induce topic clusters using Latent Dirichlet Allocation, and examine the strength of each topic over time. Our methods find trends in the field including the rise of probabilistic methods starting in 1988, a steady increase in applications, and a sharp decline of research (...)
    2 citations
  6. Dan Klein & Christopher D. Manning, Accurate Unlexicalized Parsing.
    We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is (...)
    2 citations
  7. Dan Klein & Christopher D. Manning, A Generative Constituent-Context Model for Improved Grammar Induction.
    We present a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. Parameter search with EM produces higher quality analyses than previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher F1 of 71% on nontrivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine extensions to the basic (...)
    2 citations
  8. Nick Chater & Christopher D. Manning (2006). Linguistics, Computational Linguistics and Cognitive Science. Trends in Cognitive Sciences 10 (7): 335-344.
  9. Christopher Manning, Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network.
    We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts (...) In a first-order HMM, the current tag t0 is predicted based on the previous tag t−1 (and the current word). The backward interaction between t0 and the next tag t+1 shows up implicitly later, when t+1 is generated in turn. While unidirectional models are therefore able to capture both (...)
  10. Christopher Manning, Extensions to HMM-Based Statistical Word Alignment Models.
    …We present a method for using part-of-speech tag information to improve alignment accuracy (...)
  11. Christopher Manning, Language Varieties.
    Part-of-speech tagging, like any supervised statistical NLP task, is more difficult when test sets are very different from training sets, for example when tagging across genres or language varieties. We examined the problem of POS tagging of different varieties of Mandarin Chinese. An analytic study first showed that unknown words were a major source of difficulty in cross-variety tagging. Unknown words in English tend to be proper nouns. By contrast, we found that Mandarin unknown words were mostly common nouns and (...)
  12. Christopher Manning, Dissociations Between Argument Structure and Grammatical Relations.
    In Pollard and Sag (1987) and Pollard and Sag (1994:Ch. 1–8), the subcategorized arguments of a head are stored on a single ordered list, the subcat list. However, Borsley (1989) argues that there are various deficiencies in this approach, and suggests that the unified list should be split into separate lists for subjects, complements, and specifiers. This proposal has been widely adopted in what is colloquially known as HPSG3 (Pollard and Sag (1994:Ch. 9) and other recent work in HPSG). (...)
    1 citation
  13. Christopher D. Manning, Ergativity: Argument Structure and Grammatical Relations.
    I wish to present a codification of syntactic approaches to dealing with ergative languages and argue for the correctness of one particular approach, which I will call the Inverse Grammatical Relations hypothesis. I presume familiarity with the term 'ergativity', but, briefly, many languages have ergative case marking, such as Burushaski in (1), in contrast to the accusative case marking of Latin in (2). More generally, if we follow Dixon (1979) and use A to mark the agent-like argument of (...)
    1 citation
  14. Christopher Manning, The Infinite Tree.
    …the number of hidden categories is not fixed, but can grow with the amount of training data, as when the number of hidden states is unknown (Beal et al., 2002; Teh et al., 2006) (...)
  15. Christopher Manning, A Conditional Random Field Word Segmenter.
    We present a Chinese word segmentation system submitted to the closed track of Sighan bakeoff 2005. Our segmenter was built using a conditional random field sequence model that provides a framework to use a large number of linguistic features such as character identity, morphological and character reduplication features. Because our morphological features were extracted from the training corpora automatically, our system was not biased toward any particular variety of Mandarin. Thus, our system does not overfit the variety of Mandarin most (...)
  16. David Hall & Christopher D. Manning, Labeled LDA: A Supervised Topic Model for Credit Attribution in Multi-Labeled Corpora.
    A significant portion of the world’s text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one (...)
  17. Christopher D. Manning, An Effective Two-Stage Model for Exploiting Non-Local Dependencies in Named Entity Recognition.
    This paper shows that a simple two-stage approach to handle non-local dependencies in Named Entity Recognition (NER) can outperform existing approaches that handle non-local dependencies, while being much more computationally efficient. NER systems typically use sequence models for tractable inference, but this makes them unable to capture the long distance structure present in text. (...)
  18. Christopher D. Manning, Efficient, Feature-Based, Conditional Random Field Parsing.
    Discriminative feature-based methods are widely used in natural language processing, but sentence parsing is still dominated by generative methods. While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, feature-rich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data. Our efficiency is primarily due to the use of stochastic optimization techniques, as well as parallelization and chart prefiltering. On WSJ15, (...)
  19. Christopher D. Manning, Modeling Semantic Containment and Exclusion in Natural Language Inference.
    We propose an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; (...)
  20. Christopher D. Manning, Part-of-Speech Tagging From 97% to 100%: Is It Time for Some Linguistics?
    I examine what would be necessary to move part-of-speech tagging performance from its current level of about 97.3% token accuracy (56% sentence accuracy) to close to 100% accuracy. I suggest that it must still be possible to greatly increase tagging performance and examine some useful improvements that have recently been made to the Stanford Part-of-Speech Tagger. However, an error analysis of some of the remaining errors suggests that there is limited further mileage to be had either from better machine learning (...)
  21. Christopher D. Manning, Natural Logic for Textual Inference.
    This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds (...)
  22. Christopher D. Manning, A Simple and Effective Hierarchical Phrase Reordering Model.
    adjacent phrases, but they typically lack the ability to perform the kind of long-distance reorderings possible with syntax-based systems. In this paper, we present a novel hierarchical phrase reordering model aimed at improving non-local reorderings, which seamlessly integrates with a standard phrase-based system with little loss of computational efficiency. We show that this model can successfully handle the key examples often used to motivate syntax-based systems, such as the rotation of a prepositional phrase around a noun phrase. We contrast our (...)
  23. Christopher D. Manning, A Phrase-Based Alignment Model for Natural Language Inference.
    The alignment problem—establishing links between corresponding phrases in two related sentences—is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on (...)
  24. Christopher D. Manning, Nested Named Entity Recognition.
    Many named entities contain other named entities inside them. Despite this fact, the field of named entity recognition has almost entirely ignored nested named entity recognition, for technological rather than ideological reasons. In this paper, we present a new technique for recognizing nested named entities, by using a discriminative constituency parser. To train the model, we transform each sentence into a tree, with constituents for each named entity (and no other syntactic structure). We present results on both newspaper (...)
  25. Christopher D. Manning, Clustering the Tagged Web.
    Automatically clustering web pages into semantic groups promises improved search and browsing on the web. In this paper, we demonstrate how user-generated tags from largescale social bookmarking websites such as del.icio.us can be used as a complementary data source to page text and anchor text for improving automatic clustering of web pages. This paper explores the use of tags in 1) K-means clustering in an extended vector space model that includes tags as well as page text and 2) a novel (...)
  26. Christopher Manning, Generating Typed Dependency Parses From Phrase Structure Parses.
    This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download.
  27. Christopher Manning, Max-Margin Parsing.
    With Ben Taskar, Dan Klein & Michael Collins (Computer Science Dept., Stanford University; CS and AI Lab).
  28. Christopher Manning, Learning to Recognize Features of Valid Textual Entailments.
    …separated from evaluating entailment. Current approaches to semantic inference in question answering (...)
  29. Christopher D. Manning, Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors That Increase ASR Error Rates.
    Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for (...)
  30. Christopher D. Manning, Enforcing Transitivity in Coreference Resolution.
    A desirable quality of a coreference resolution system is the ability to handle transitivity constraints, such that even if it places high likelihood on a particular mention being coreferent with each of two other mentions, it will also consider the likelihood of those two mentions being coreferent when making a final assignment. This is exactly the kind of constraint that integer linear programming (ILP) is ideal for, but, surprisingly, previous work applying ILP to coreference resolution has not encoded this type (...)
  31. Christopher Manning, Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger.
    Kristina Toutanova & Christopher D. Manning. Dept. of Computer Science / Depts. of Computer Science and Linguistics, Gates Bldg 4A, 353 Serra Mall, Stanford, CA 94305-9040, USA. kristina@cs.stanford.edu, manning@cs.stanford.edu
  32. Christopher Manning, Parsing and Hypergraphs.
    While symbolic parsers can be viewed as deduction systems, this view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n³) time bound for arbitrary PCFGs, while preserving as much of the flexibility of symbolic chart parsers as allowed by the inherent (...)
  33. Christopher Manning, Finding Contradictions in Text.
    Marie-Catherine de Marneffe, Anna N. Rafferty & Christopher D. Manning. Linguistics Department and Computer Science Department, Stanford University, Stanford, CA 94305. {rafferty,manning}@stanford.edu, mcdm@stanford.edu
  34. Dan Klein & Christopher D. Manning, Distributional Phrase Structure Induction.
    Unsupervised grammar induction systems commonly judge potential constituents on the basis of their effects on the likelihood of the data. Linguistic justifications of constituency, on the other hand, rely on notions such as substitutability and varying external contexts. We describe two systems for distributional grammar induction which operate on such principles, using part-of-speech tags as the contextual features. The advantages and disadvantages of these systems are examined, including precision/recall trade-offs, error analysis, and extensibility.
  35. Philip Beineke & Christopher Manning, An Exploration of Sentiment Summarization.
    The website Rotten Tomatoes, located at www.rottentomatoes.com, is primarily an online repository of movie reviews. For each movie review document, the site provides a link to the full review, along with a brief description of its sentiment. The description consists of a rating (“fresh” or “rotten”) and a short quotation from the review. Other research (Pang, Lee, & Vaithyanathan 2002) has predicted a movie review’s rating from its text. In this paper, we focus on the quotation, which is a main (...)
  36. Christopher Manning, Natural Language Grammar Induction Using a Constituent-Context Model.
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
  37. Christopher Manning, Ofer Dekel & Yoram Singer, Log-Linear Models for Label Ranking.
    In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf (eds), Advances in Neural Information Processing Systems 16 (NIPS 2003). Cambridge, MA: MIT Press, pp. 497-504.
  38. Christopher D. Manning, Disambiguating “DE” for Chinese-English Machine Translation.
    Linking constructions involving 的 (DE) are ubiquitous in Chinese, and can be translated into English in many different ways. This is a major source of machine translation error, even when syntax-sensitive translation models are used. This paper explores how getting more information about the syntactic, semantic, and discourse context of uses of 的 (DE) can facilitate producing an appropriate English translation strategy. We describe a finer-grained classification of 的 (DE) constructions in Chinese NPs, construct a corpus of annotated examples, and (...)
  39. Christopher Manning, NIST Open Machine Translation 2008 Evaluation: Stanford University's System Description.
    Michel Galley, Pi-Chuan Chang, Daniel Cer, Jenny R. Finkel & Christopher D. Manning. Computer Science and Linguistics Departments, Stanford University.
  40. Christopher D. Manning & Bill MacCartney, An Extended Model of Natural Logic.
    We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of (...)
  41. Dan Klein & Christopher D. Manning, Natural Language Grammar Induction Using a Constituent-Context Model.
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
  42. Christopher Manning, An O(n³) Agenda-Based Chart Parser for Arbitrary Probabilistic Context-Free Grammars.
    …the “fundamental rule” in an order-independent manner, such that the same basic algorithm supports top-down and bottom-up parsing, and the parser deals correctly with the difficult cases of left-recursive rules, empty elements, and unary rules, in a natural way. Most PCFG parsing work has used the bottom-up CKY algorithm (Kasami, 1965; Younger, 1967) with Chomsky Normal Form grammars (Baker, 1979; …)
  43. Christopher Manning, Lexical Conceptual Structure and Marathi.
    Jackendoff (1987, 1990) has brought up various problems with the current use of thematic roles (Kiparsky, 1987; Bresnan & Kanerva, 1989 and references cited therein) and suggested a different way of thinking of thematic roles as structural configurations in his semantic Lexical Conceptual Structures (LCSs). Conversely, Joshi (1989) has claimed that Jackendoff’s LCSs alone are insufficient, and that an analysis of certain facts in Marathi additionally requires the existence of a level of predicate-argument structure (PAS). Below we will mention a (...)
  44. Christopher Manning, Learning Alignments and Leveraging Natural Logic.
    Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie-Catherine de Marneffe, Daniel Ramage, Eric Yeh & Christopher D. Manning. Computer Science Department, Stanford University, Stanford, CA 94305.
  45. Christopher Manning, Fast Exact Inference with a Factored Model for Natural Language Parsing.
    We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
  46. David Hall, Christopher D. Manning, Daniel Cer & Chloe Kiddon, Learning Alignments and Leveraging Natural Logic.
    We describe an approach to textual inference that improves alignments at both the typed dependency level and at a deeper semantic level. We present a machine learning approach to alignment scoring, a stochastic search procedure, and a new tool that finds deeper semantic alignments, allowing rapid development of semantic features over the aligned graphs. Further, we describe a complementary semantic component based on natural logic, which shows an added gain of 3.13% accuracy on the RTE3 test set.
  47. Christopher Manning, Presents Embedded Under Pasts.
    In this paper I will discuss a rather recondite phenomenon in the area of sequence of tense (SOT), exhibited by sentences like (1): (1) John said that Mary is pregnant. According to traditional grammar, this is a sentence where sequence of tense has failed to apply (i.e., concord has been broken): standard sequence of tense rules would dictate use of a past tense when embedding an event contemporaneous to the embedding verb under a past tense verb, giving the sentence John (...)
  48. Christopher Manning, Unsupervised Discovery of a Statistical Verb Lexicon.
    …Determining the semantic roles of a verb’s dependents is an important step in natural (...)
  49. Christopher Manning, Computing PageRank Using Power Extrapolation.
    We present a novel technique for speeding up the computation of PageRank, a hyperlink-based estimate of the “importance” of Web pages, based on the ideas presented in [7]. The original PageRank algorithm uses the Power Method to compute successive iterates that converge to the principal eigenvector of the Markov matrix representing the Web link graph. The algorithm presented here, called Power Extrapolation, accelerates the convergence of the Power Method by subtracting off the error along several nonprincipal eigenvectors from the current (...)
  50. Dan Klein & Christopher D. Manning, An O(n³) Agenda-Based Chart Parser for Arbitrary Probabilistic Context-Free Grammars.
    While O(n³) methods for parsing probabilistic context-free grammars (PCFGs) are well known, a tabular parsing framework for arbitrary PCFGs which allows for bottom-up, top-down, and other parsing strategies has not yet been provided. This paper presents such an algorithm, and shows its correctness and advantages over prior work. The paper finishes by bringing out the connections between the algorithm and work on hypergraphs, which permits us to extend the presented Viterbi (best parse) algorithm to an inside (total probability) (...)
Results 1-50 of 92