Citations of:

Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models

Proceedings of the 12th Conference on Language Resources and Evaluation (2020)

  1. Sources of Understanding in Supervised Machine Learning Models.Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In recent decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Despite their complexity, these models have shown outstanding performance on a range of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether such non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)