Results for 'Unsupervised learning'

988 found
  1. Unsupervised learning and grammar induction. Alex Clark & Shalom Lappin - unknown
    In this chapter we consider unsupervised learning from two perspectives. First, we briefly look at its advantages and disadvantages as an engineering technique applied to large corpora in natural language processing. While supervised learning generally achieves greater accuracy with less data, unsupervised learning offers significant savings in the intensive labour required for annotating text. Second, we discuss the possible relevance of unsupervised learning to debates on the cognitive basis of human language acquisition. In (...)
     
  2. On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these (...)
    3 citations
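Clustering, the first of the three canonical problems Watson analyzes, is classically illustrated by k-means. The sketch below is a minimal pure-Python illustration with invented toy data and a deterministic farthest-point initialization; it is not drawn from the article itself:

```python
def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Lloyd's algorithm with deterministic farthest-point seeding."""
    centroids = [points[0]]
    while len(centroids) < k:
        # Next seed: the point farthest from all seeds chosen so far.
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated blobs; no labels are used anywhere (unsupervised).
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(data, k=2)
```

The algorithm recovers the two blobs without ever seeing a label, which is exactly the epistemic situation (no ground truth to validate against) that makes unsupervised learning philosophically interesting.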
  3. Unsupervised learning of visual structure. Shimon Edelman - unknown
    To learn a visual code in an unsupervised manner, one may attempt to capture those features of the stimulus set that would contribute significantly to a statistically efficient representation. Paradoxically, all the candidate features in this approach need to be known before statistics over them can be computed. This paradox may be circumvented by confining the repertoire of candidate features to actual scene fragments, which resemble the “what+where” receptive fields found in the ventral visual stream in primates. We describe (...)
     
    4 citations
  4. Unsupervised learning of complex associations in an animal model. Leyre Castro, Edward A. Wasserman & Marisol Lauffer - 2018 - Cognition 173 (C):28-33.
  5. Unsupervised learning of facial emotion decoding skills. Jan O. Huelle, Benjamin Sack, Katja Broer, Irina Komlewa & Silke Anders - 2014 - Frontiers in Human Neuroscience 8.
  6. Unsupervised learning with global objective functions. Suzanna Becker & R. Zemel - 1995 - In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press. pp. 997--1000.
     
  7. Cognitive Modeling of Anticipation: Unsupervised Learning and Symbolic Modeling of Pilots' Mental Representations. Sebastian Blum, Oliver Klaproth & Nele Russwinkel - 2022 - Topics in Cognitive Science 14 (4):718-738.
    The ability to anticipate team members' actions enables joint action towards a common goal. Task knowledge and mental simulation allow for anticipating other agents' actions and for making inferences about their underlying mental representations. In human–AI teams, providing AI agents with anticipatory mechanisms can facilitate collaboration and successful execution of joint action. This paper presents a computational cognitive model demonstrating mental simulation of operators' mental models of a situation and anticipation of their behavior. The work proposes two successive steps: (1) (...)
  8. Cue integration with categories: Weighting acoustic cues in speech using unsupervised learning and distributional statistics. Joseph C. Toscano & Bob McMurray - 2010 - Cognitive Science 34 (3):434.
    28 citations
  9. Commentary on David Watson, “On the Philosophy of Unsupervised Learning,” Philosophy & Technology. Tom F. Sterkenburg - 2023 - Philosophy and Technology 36 (4):1-5.
    1 citation
  10. Improving wearable-based fall detection with unsupervised learning. Mirko Fáñez, José R. Villar, Enrique de la Cal, Víctor M. González & Javier Sedano - 2022 - Logic Journal of the IGPL 30 (2):314-325.
    Fall detection (FD) is a challenging task that has received the attention of the research community in recent years. This study focuses on FD using data gathered from wearable devices with tri-axial accelerometers, developing a solution centered on elderly people living autonomously. This research includes three ways to improve an FD method: an analysis of the event detection stage, comparing several alternatives; an evaluation of the features to extract for each detected event; and an appraisal of up to 6 different (...)
  11. Exploiting redundancy for flexible behavior: Unsupervised learning in a modular sensorimotor control architecture. Martin V. Butz, Oliver Herbort & Joachim Hoffmann - 2007 - Psychological Review 114 (4):1015-1046.
  12. Modeling language and cognition with deep unsupervised learning: a tutorial overview. Marco Zorzi, Alberto Testolin & Ivilin P. Stoianov - 2013 - Frontiers in Psychology 4.
  13. Hierarchical categorization and the effects of contrast inconsistency in an unsupervised learning task. J. Davies & D. Billman - 1996 - In Garrison W. Cottrell (ed.), Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society. Lawrence Erlbaum. pp. 750.
  14. Teacher and learner: Supervised and unsupervised learning in communities. Michael G. Shafto & Colleen M. Seifert - 2015 - Behavioral and Brain Sciences 38.
  15. When unsupervised training benefits category learning. Franziska Bröker, Bradley C. Love & Peter Dayan - 2022 - Cognition 221 (C):104984.
    1 citation
  16. Unsupervised statistical learning in vision: computational principles, biological evidence. Shimon Edelman - unknown
    Unsupervised statistical learning is the standard setting for the development of the only advanced visual system that is both highly sophisticated and versatile, and extensively studied: that of monkeys and humans. In this extended abstract, we invoke philosophical observations, computational arguments, behavioral data and neurobiological findings to explain why computer vision researchers should care about (1) unsupervised learning, (2) statistical inference, and (3) the visual brain. We then outline a neuromorphic approach to structural primitive learning (...)
  17. Beta-Hebbian Learning to enhance unsupervised exploratory visualizations of Android malware families. Nuño Basurto, Diego García-Prieto, Héctor Quintián, Daniel Urda, José Luis Calvo-Rolle & Emilio Corchado - 2024 - Logic Journal of the IGPL 32 (2):306-320.
    As is well known, mobile phones have become a basic gadget for any individual and usually store sensitive information. This largely motivates the increase in the number of attacks aimed at jeopardizing smartphones, an extreme concern above all on Android OS, which is the most popular platform in the market. Consequently, a strong effort has been devoted to mitigating such incidents in recent years, even though few researchers have addressed the application of visualization techniques to the analysis of (...)
  18. Unsupervised clustering of context data and learning user requirements for a mobile device. John A. Flanagan - 2005 - In B. Kokinov A. Dey (ed.), Modeling and Using Context. Springer. pp. 155--168.
  19. Unsupervised collaborative learning based on Optimal Transport theory. Abdelfettah Touzani, Guénaël Cabanes, Younès Bennani & Fatima-Ezzahraa Ben-Bouazza - 2021 - Journal of Intelligent Systems 30 (1):698-719.
    Collaborative learning has recently achieved very significant results. It still suffers, however, from several issues, including the type of information that needs to be exchanged, the criteria for stopping and how to choose the right collaborators. We aim in this paper to improve the quality of the collaboration and to resolve these issues via a novel approach inspired by Optimal Transport theory. More specifically, the objective function for the exchange of information is based on the Wasserstein distance, with a (...)
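The Wasserstein distance at the heart of Touzani et al.'s objective has a particularly simple form in one dimension: for two equal-size empirical samples, the optimal transport plan matches sorted values pairwise. A toy sketch of that special case (an illustration of the metric itself, not of the authors' collaborative algorithm):

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D empirical distributions.

    In one dimension the optimal transport plan couples the order
    statistics, so W1 is the mean absolute difference of sorted values.
    """
    assert len(xs) == len(ys), "this shortcut assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting every unit of mass by 1 costs exactly 1.0.
print(wasserstein_1d([0.0, 0.0], [1.0, 1.0]))
```

For the general weighted 1-D case, `scipy.stats.wasserstein_distance` implements the same quantity; higher-dimensional transport requires a dedicated solver.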
  20. Unsupervised Efficient Learning and Representation of Language Structure. Shimon Edelman - unknown
    We describe a linguistic pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of corpus data. This is achieved by compactly coding recursively structured constituent patterns, and by placing strings that have an identical backbone and similar context structure into the same equivalence class. The resulting representations constitute an efficient encoding of linguistic knowledge and support systematic generalization to unseen sentences.
    4 citations
  21. Modelling unsupervised online-learning of artificial grammars: Linking implicit and statistical learning. Martin A. Rohrmeier & Ian Cross - 2014 - Consciousness and Cognition 27:155-167.
  22. Supervised, Unsupervised and Reinforcement Learning-Face Recognition Using Null Space-Based Local Discriminant Embedding. Yanmin Niu & Xuchu Wang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4114--245.
     
    Export citation  
     
    Bookmark  
  23. Discriminative Extreme Learning Machine with Cross-Domain Mean Approximation for Unsupervised Domain Adaptation. Shaofei Zang, Xinghai Li, Jianwei Ma, Yongyi Yan, Jinfeng Lv & Yuan Wei - 2022 - Complexity 2022:1-22.
    Extreme Learning Machine (ELM) is widely used in various fields because of its fast training and high accuracy. However, it does not work well for Domain Adaptation, in which there are many annotated data from an auxiliary domain and few or even no annotated data in the target domain. In this paper, we propose a new variant of ELM called Discriminative Extreme Learning Machine with Cross-Domain Mean Approximation for unsupervised domain adaptation. It introduces Cross-Domain Mean Approximation into the hidden layer (...)
  24. Unsupervised law article mining based on deep pre-trained language representation models with application to the Italian civil code. Andrea Tagarelli & Andrea Simeri - 2022 - Artificial Intelligence and Law 30 (3):417-473.
    Modeling law search and retrieval as prediction problems has recently emerged as a predominant approach in law intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes, and specifically trained on the Italian civil code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, (...)
    8 citations
  25. Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media. Geoffrey C. Bowker & Anja Bechmann - 2019 - Big Data and Society 6 (1).
    Artificial Intelligence in the form of different machine learning models is applied to Big Data as a way to turn data into valuable knowledge. The rhetoric is that ensuing predictions work well—with a high degree of autonomy and automation. We argue that we need to analyze the process of applying machine learning in depth and highlight at what point human knowledge production takes place in seemingly autonomous work. This article reintroduces classification theory as an important framework for understanding (...)
    14 citations
  26. Identifying and characterizing scientific authority-related misinformation discourse about hydroxychloroquine on Twitter using unsupervised machine learning. Tim K. Mackey, Jiawei Li & Michael Robert Haupt - 2021 - Big Data and Society 8 (1).
    This study investigates the types of misinformation spread on Twitter that evokes scientific authority or evidence when making false claims about the antimalarial drug hydroxychloroquine as a treatment for COVID-19. Specifically, we examined tweets generated after former U.S. President Donald Trump retweeted misinformation about the drug using an unsupervised machine learning approach called the biterm topic model that is used to cluster tweets into misinformation topics based on textual similarity. The top 10 tweets from each topic cluster were (...)
    2 citations
  27. The framing of initial COVID‐19 communication: Using unsupervised machine learning on press releases. Stella Tomasi, Sushma Kumble, Pratiti Diddi & Neeraj Parolia - 2023 - Business and Society Review 128 (3):515-531.
    The COVID-19 pandemic was a global health crisis that required US residents to understand the phenomenon, interpret the cues, and make sense within their environment. Therefore, how the communication of COVID-19 was framed to stakeholders during the early stages of the pandemic became important to guide them through specific actions in their state and subsequently with the sensemaking process. The present study examines which frames were emphasized in the states' press releases on policies and other COVID information to influence stakeholders (...)
  28. Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation. Geoffrey Hinton, Simon Osindero, Max Welling & Yee-Whye Teh - 2006 - Cognitive Science 30 (4):725-731.
  29. Unsupervised and supervised text similarity systems for automated identification of national implementing measures of European directives. Rohan Nanda, Giovanni Siragusa, Luigi Di Caro, Guido Boella, Lorenzo Grossio, Marco Gerbaudo & Francesco Costamagna - 2019 - Artificial Intelligence and Law 27 (2):199-225.
    The automated identification of national implementations of European directives by text similarity techniques has shown promising preliminary results. Previous works have proposed and utilized unsupervised lexical and semantic similarity techniques based on vector space models, latent semantic analysis and topic models. However, these techniques were evaluated on a small multilingual corpus of directives and NIMs. In this paper, we utilize word and paragraph embedding models learned by shallow neural networks from a multilingual legal corpus of European directives and national (...)
    2 citations
  31. An Improved EMD-Based Dissimilarity Metric for Unsupervised Linear Subspace Learning. Xiangchun Yu, Zhezhou Yu, Wei Pang, Minghao Li & Lei Wu - 2018 - Complexity 2018:1-24.
    2 citations
  32. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is not (...)
  33. Unsupervised context sensitive language acquisition from a large corpus. Shimon Edelman - unknown
    We describe a pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of linguistic structures from a plain natural-language corpus. This paper addresses the issues of learning structured knowledge from a large-scale natural language data set, and of generalization to unseen text. The implemented algorithm represents sentences as paths on a graph whose vertices are words. Significant patterns, determined by recursive context-sensitive statistical inference, form new vertices. Linguistic constructions are represented by trees composed of significant (...)
    4 citations
  34. A simplicity principle in unsupervised human categorization. Emmanuel M. Pothos & Nick Chater - 2002 - Cognitive Science 26 (3):303-343.
    We address the problem of predicting how people will spontaneously divide into groups a set of novel items. This is a process akin to perceptual organization. We therefore employ the simplicity principle from perceptual organization to propose a simplicity model of unconstrained spontaneous grouping. The simplicity model predicts that people would prefer the categories for a set of novel items that provide the simplest encoding of these items. Classification predictions are derived from the model without information either about the number (...)
    29 citations
  35. Learning Representations of Animated Motion Sequences—A Neural Model. Georg Layher, Martin A. Giese & Heiko Neumann - 2014 - Topics in Cognitive Science 6 (1):170-182.
    The detection and categorization of animate motions is a crucial task underlying social interaction and perceptual decision making. Neural representations of perceived animate objects are partially located in the primate cortical region STS, which is a region that receives convergent input from intermediate-level form and motion representations. Populations of STS cells exist which are selectively responsive to specific animated motion sequences, such as walkers. It is still unclear how and to what extent form and motion information contribute to the generation (...)
    1 citation
  36. Unsupervised network traffic anomaly detection with deep autoencoders. Vibekananda Dutta, Marek Pawlicki, Rafał Kozik & Michał Choraś - 2022 - Logic Journal of the IGPL 30 (6):912-925.
    Contemporary Artificial Intelligence methods, especially their subset-deep learning, are finding their way to successful implementations in the detection and classification of intrusions at the network level. This paper presents an intrusion detection mechanism that leverages Deep AutoEncoder and several Deep Decoders for unsupervised classification. This work incorporates multiple network topology setups for comparative studies. The efficiency of the proposed topologies is validated on two established benchmark datasets: UNSW-NB15 and NetML-2020. The results of their analysis are discussed in terms (...)
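The mechanism Dutta et al. leverage, flagging inputs that the trained model reconstructs poorly, can be illustrated without a deep network. The sketch below substitutes a linear one-component "autoencoder" (PCA via power iteration) for their Deep AutoEncoder; the data and the encode/decode pair are invented for illustration, not taken from the paper:

```python
def mean(vals):
    return sum(vals) / len(vals)

def reconstruction_errors(data):
    """Encode 2-D points into a 1-D code (projection onto the leading
    principal direction, found by power iteration) and decode them back;
    anomalies are the points with large reconstruction error."""
    mx, my = mean([x for x, _ in data]), mean([y for _, y in data])
    centered = [(x - mx, y - my) for x, y in data]
    # Entries of the 2x2 covariance matrix.
    cxx = mean([x * x for x, _ in centered])
    cxy = mean([x * y for x, y in centered])
    cyy = mean([y * y for _, y in centered])
    vx, vy = 1.0, 1.0
    for _ in range(100):  # power iteration converges to the top eigenvector
        vx, vy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (vx * vx + vy * vy) ** 0.5
        vx, vy = vx / norm, vy / norm
    errors = []
    for x, y in centered:
        code = x * vx + y * vy            # encode: 1-D latent code
        rx, ry = code * vx, code * vy     # decode: reconstruction
        errors.append((x - rx) ** 2 + (y - ry) ** 2)
    return errors

# "Normal" traffic lies near the line y = x; the final point is an anomaly.
data = [(float(i), i + 0.01 * (i % 3)) for i in range(20)] + [(10.0, -10.0)]
errors = reconstruction_errors(data)
```

The off-manifold point receives a reconstruction error orders of magnitude above the rest; a deep autoencoder applies the same thresholding idea with a learned nonlinear code instead of a principal direction.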
  37. Learning Diphone-Based Segmentation. Robert Daland & Janet B. Pierrehumbert - 2011 - Cognitive Science 35 (1):119-155.
    This paper reconsiders the diphone-based word segmentation model of Cairns, Shillcock, Chater, and Levy (1997) and Hockema (2006), previously thought to be unlearnable. A statistically principled learning model is developed using Bayes’ theorem and reasonable assumptions about infants’ implicit knowledge. The ability to recover phrase-medial word boundaries is tested using phonetic corpora derived from spontaneous interactions with children and adults. The (unsupervised and semi-supervised) learning models are shown to exhibit several crucial properties. First, only a small amount (...)
    9 citations
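The paper's central move, applying Bayes' theorem to estimate the probability that a word boundary falls inside a given diphone, can be sketched with toy counts. The diphones and numbers below are hypothetical, invented purely for illustration; they do not come from the corpora used in the study:

```python
def boundary_posterior(diphone, boundary_counts, internal_counts, p_boundary=0.3):
    """Bayes' theorem:
    P(boundary | diphone) = P(diphone | boundary) * P(boundary) / P(diphone),
    with P(diphone) expanded by the law of total probability.
    (Assumes the diphone was observed at least once.)"""
    p_d_given_b = boundary_counts.get(diphone, 0) / sum(boundary_counts.values())
    p_d_given_i = internal_counts.get(diphone, 0) / sum(internal_counts.values())
    p_d = p_d_given_b * p_boundary + p_d_given_i * (1 - p_boundary)
    return p_d_given_b * p_boundary / p_d

# Toy counts: "td" mostly straddles word boundaries, "tr" is word-internal.
boundary_counts = {"td": 90, "tr": 5}
internal_counts = {"td": 10, "tr": 95}
p_td = boundary_posterior("td", boundary_counts, internal_counts)
p_tr = boundary_posterior("tr", boundary_counts, internal_counts)
```

A diphone whose occurrences mostly straddle boundaries gets a high posterior and is treated as a segmentation point; diphones like "tr" that occur word-internally do not.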
  38. Learning Orthographic Structure With Sequential Generative Neural Networks. Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and (...)
  39. Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar (...)
    5 citations
  40. Incremental learning of gestures for human–robot interaction. Shogo Okada, Yoichi Kobayashi, Satoshi Ishibashi & Toyoaki Nishida - 2010 - AI and Society 25 (2):155-168.
    For a robot to cohabit with people, it should be able to learn people’s nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner. It allows the user to leave the number and types of gestures undefined prior to the learning. The proposed method (HB-SOINN) is based on a self-organizing (...)
  41. Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar (...)
    2 citations
  42. What Machine Learning Can Tell Us About the Role of Language Dominance in the Diagnostic Accuracy of German LITMUS Non-word and Sentence Repetition Tasks. Lina Abed Ibrahim & István Fekete - 2019 - Frontiers in Psychology 9.
    This study investigates the performance of 21 monolingual and 56 bilingual children aged 5;6-9;0 on German-LITMUS-sentence-repetition (SRT; Hamann et al., 2013) and nonword-repetition-tasks (NWRT; Grimm et al., 2014), which were constructed according to the LITMUS-principles (Language Impairment Testing in Multilingual Settings; Armon-Lotem et al., 2015). Both tasks incorporate complex structures shown to be cross-linguistically challenging for children with Specific Language Impairment (SLI) and aim at minimizing bias against bilingual children while still being indicative of the presence of language impairment across (...)
    1 citation
  43. Machine learning and essentialism. Kristina Šekrst & Sandro Skansi - 2022 - Zagadnienia Filozoficzne W Nauce 73:171-196.
    Machine learning and essentialism have been connected in the past by various researchers, in order to state that the main paradigm in machine learning processes is equivalent to choosing the “essential” attributes for the machine to search for. Our goal in this paper is to show that there are connections between machine learning and essentialism, but only for some kinds of machine learning, and often not including deep learning methods. Similarity-based approaches, more connected to the (...)
  44. Humanistic interpretation and machine learning. Juho Pääkkönen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that (...) modeling does not eliminate the researchers’ judgments from the process of producing evidence for social scientific theories. The paper shows this by distinguishing between two prevalent attitudes toward topic modeling, i.e., topic realism and topic instrumentalism. Under neither can modeling provide social scientific evidence without the researchers’ interpretive engagement with the original text materials. Thus the unsupervised text analysis cannot improve the objectivity of interpretation by alleviating the problem of underdetermination in interpretive debate. The paper argues that the sense in which unsupervised methods can improve objectivity is by providing researchers with the resources to justify to others that their interpretations are correct. This kind of objectivity seeks to reduce suspicions in collective debate that interpretations are the products of arbitrary processes influenced by the researchers’ idiosyncratic decisions or starting points. The paper discusses this view in relation to alternative approaches to formalizing interpretation and identifies several limitations on what unsupervised learning can be expected to achieve in terms of supporting interpretive work.
    2 citations
  45. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect (...)
    16 citations
  46. Rich Syntax from a Raw Corpus: Unsupervised Does It. Shimon Edelman - unknown
    We compare our model of unsupervised learning of linguistic structures, ADIOS [1], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive Languages). The representations learned by our algorithm (...)
  47. Beta Hebbian Learning for intrusion detection in networks with MQTT Protocols for IoT devices. Álvaro Michelena, María Teresa García Ordás, José Aveleira-Mata, David Yeregui Marcos del Blanco, Míriam Timiraos Díaz, Francisco Zayas-Gato, Esteban Jove, José-Luis Casteleiro-Roca, Héctor Quintián, Héctor Alaiz-Moretón & José Luis Calvo-Rolle - 2024 - Logic Journal of the IGPL 32 (2):352-365.
    This paper aims to enhance security in IoT device networks through a visual tool that utilizes three projection techniques, including Beta Hebbian Learning (BHL), t-distributed Stochastic Neighbor Embedding (t-SNE) and ISOMAP, in order to facilitate the identification of network attacks by human experts. The research begins with the creation of a testing environment with IoT devices and web clients, simulating attacks over Message Queuing Telemetry Transport (MQTT) and recording all relevant traffic information. The unsupervised algorithms chosen provide (...)
  48. Machine Learning and the Cognitive Basis of Natural Language. Shalom Lappin - unknown
    Machine learning and statistical methods have yielded impressive results in a wide variety of natural language processing tasks. These advances have generally been regarded as engineering achievements. In fact it is possible to argue that the success of machine learning methods is significant for our understanding of the cognitive basis of language acquisition and processing. Recent work in unsupervised grammar induction is particularly relevant to this issue. It suggests that knowledge of language can be achieved through general (...)
     
    Export citation  
     
    Bookmark   2 citations  
  49. Neural Network Machine Translation Method Based on Unsupervised Domain Adaptation. Rui Wang - 2020 - Complexity 2020:1-11.
    Relying on large-scale parallel corpora, neural machine translation has achieved great success in certain language pairs. However, the acquisition of high-quality parallel corpus is one of the main difficulties in machine translation research. In order to solve this problem, this paper proposes unsupervised domain adaptive neural network machine translation. This method can be trained using only two unrelated monolingual corpora and obtain a good translation result. This article first measures the matching degree of translation rules by adding relevant subject (...)
  50. Can machine learning make naturalism about health truly naturalistic? A reflection on a data-driven concept of health. Ariel Guersenzvaig - 2023 - Ethics and Information Technology 26 (1):1-12.
    Through hypothetical scenarios, this paper analyses whether machine learning (ML) could resolve one of the main shortcomings present in Christopher Boorse’s Biostatistical Theory of health (BST). In doing so, it foregrounds the boundaries and challenges of employing ML in formulating a naturalist (i.e., prima facie value-free) definition of health. The paper argues that a sweeping dataist approach cannot fully make the BST truly naturalistic, as prior theories and values persist. It also points out that supervised learning introduces circularity, (...)
1 — 50 / 988