Certified Logic-Based Explainable AI – The Case of Monotonic Classifiers

  • Conference paper
Tests and Proofs (TAP 2023)

Abstract

The continued advances in artificial intelligence (AI), including those in machine learning (ML), raise concerns regarding the deployment of such systems in high-risk and safety-critical domains. Motivated by these concerns, there have been calls for the verification of AI systems, including of the explanations they produce. However, the tools used to verify AI systems are themselves complex, and so error-prone. This paper describes an initial effort towards certifying logic-based explainability algorithms, focusing on monotonic classifiers. Concretely, the paper first uses the Coq proof assistant to prove the correctness of recently proposed algorithms for explaining monotonic classifiers. The paper then proves that the algorithms devised for monotonic classifiers also apply to the larger family of stable classifiers. Finally, certified code, extracted from the proofs of correctness, is used to compute explanations that are guaranteed to be correct. The experimental results included in the paper demonstrate the scalability of the proposed approach for certifying explanations.
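As a concrete illustration of what is being certified, the following is a minimal Python sketch of the explanation algorithm for monotonic classifiers from [18], the algorithm whose Coq correctness proof the paper develops. It is an illustration only, not the paper's certified extracted code; the identifiers find_axp, kappa, dom_min and dom_max are invented for the example.

def find_axp(kappa, v, dom_min, dom_max):
    """Compute one abductive explanation (AXp) for instance v.

    Illustrative sketch, assuming kappa is monotonic: x <= x' pointwise
    implies kappa(x) <= kappa(x') in a totally ordered set of classes.
    dom_min/dom_max hold each feature's domain bounds. Returns the set
    of feature indices that must stay fixed to their values in v for
    the prediction kappa(v) to be guaranteed.
    """
    c = kappa(v)
    lo, hi = list(v), list(v)     # corners of the current feature box
    fixed = set(range(len(v)))
    for i in range(len(v)):
        # Tentatively free feature i over its whole domain.
        lo[i], hi[i] = dom_min[i], dom_max[i]
        # By monotonicity, kappa(lo) <= kappa(x) <= kappa(hi) for every
        # x in the box, so two classifier queries decide invariance.
        if kappa(lo) == c and kappa(hi) == c:
            fixed.discard(i)      # prediction cannot change: leave i free
        else:
            lo[i] = hi[i] = v[i]  # restore: i stays in the explanation
    return fixed

For example, with the toy monotonic classifier kappa = lambda x: int(x[0] + x[1] >= 3) over the box [0, 5] × [0, 5], the call find_axp(kappa, [5, 0], [0, 0], [5, 5]) returns {0}: fixing the first feature to 5 already entails class 1, regardless of the second feature. Because each feature is tested against the features already freed, the returned set is subset-minimal.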


Notes

  1. The paper adopts the classification of monotonic classifiers proposed in earlier work [5].

  2. PI-explanations can be formulated as a problem of logic-based abduction, and so are also referred to as abductive explanations (AXp) [13]; a formal definition is recalled after these notes. More recently, AXp’s have been studied from a knowledge compilation perspective [1].

  3. https://coq.inria.fr.

  4. https://github.com/thierry-martinez/pyml.

  5. https://xgboost.readthedocs.io/en/stable/tutorials/monotonic.html.

  6. https://github.com/AishwaryaSivaraman/COMET.

  7. https://www.tensorflow.org/lattice/overview.

  8. https://github.com/gnobitab/CertifiedMonotonicNetwork.

  9. https://www.kaggle.com/datasets/elikplim/car-evaluation-data-set.

  10. https://www.kaggle.com/datasets/andrewmvd/heart-failure-clinical-data.

  11. https://www.kaggle.com/datasets/barkhaverma/placement-data-full-class.

  12. https://www.kaggle.com/datasets/rounakbanik/pokemon.

  13. https://xgboost.readthedocs.io/en/stable/.
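For completeness, the abductive explanations of note 2 admit a concise formal statement, following [13] (the notation below, with feature space \mathbb{F} and classifier \kappa, is assumed for this summary). Given an instance \mathbf{v} with prediction c = \kappa(\mathbf{v}), a set X of features is a weak AXp if fixing the features in X to their values in \mathbf{v} entails the prediction:

\forall (\mathbf{x} \in \mathbb{F}).\; \Bigl[ \bigwedge_{i \in X} (x_i = v_i) \Bigr] \rightarrow (\kappa(\mathbf{x}) = c)

An AXp is a subset-minimal weak AXp. The sketch given after the abstract computes one such subset-minimal set in the special case of a monotonic \kappa.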

References

  1. Audemard, G., Koriche, F., Marquis, P.: On tractable XAI queries based on compiled representations. In: KR, pp. 838–849 (2020)

  2. Biere, A., Heule, M., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability - Second Edition, Frontiers in Artificial Intelligence and Applications, vol. 336. IOS Press (2021). https://doi.org/10.3233/FAIA336

  3. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), pp. 785–794. ACM, New York (2016). https://doi.org/10.1145/2939672.2939785

4. Cruz-Filipe, L., Marques-Silva, J., Schneider-Kamp, P.: Formally verifying the solution to the Boolean Pythagorean triples problem. J. Automat. Reason. 63(3), 695–722 (2018). https://doi.org/10.1007/s10817-018-9490-4

  5. Daniels, H., Velikova, M.: Monotone and partially monotone neural networks. IEEE Trans. Neural Netw. 21(6), 906–917 (2010)

  6. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 93:1–93:42 (2019)

7. Gunning, D.: Explainable artificial intelligence (XAI). DARPA-BAA-16-53 (2016). https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf

8. Gunning, D., Aha, D.W.: DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019). https://doi.org/10.1609/aimag.v40i2.2850

  9. Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., Yang, G.: XAI - explainable artificial intelligence. Sci. Robot. 4(37) (2019). https://doi.org/10.1126/scirobotics.aay7120

10. Huang, X., Marques-Silva, J.: The inadequacy of Shapley values for explainability. CoRR abs/2302.08160 (2023)

  11. Ignatiev, A.: Towards trustable explainable AI. In: IJCAI, pp. 5154–5158 (2020)

  12. Ignatiev, A., Narodytska, N., Asher, N., Marques-Silva, J.: From contrastive to abductive explanations and back again. In: AIxIA, pp. 335–355 (2020)

  13. Ignatiev, A., Narodytska, N., Marques-Silva, J.: Abduction-based explanations for machine learning models. In: AAAI, pp. 1511–1519 (2019)

14. Ignatiev, A., Narodytska, N., Marques-Silva, J.: On validating, repairing and refining heuristic ML explanations. CoRR abs/1907.02509 (2019)

  15. Liu, X., Han, X., Zhang, N., Liu, Q.: Certified monotonic neural networks. Adv. Neural Inf. Process. Syst. 33 (2020)

  16. Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: NeurIPS, pp. 4765–4774 (2017)

17. Marques-Silva, J.: Logic-based explainability in machine learning. CoRR abs/2211.00541 (2022)

  18. Marques-Silva, J., Gerspacher, T., Cooper, M.C., Ignatiev, A., Narodytska, N.: Explanations for monotonic classifiers. In: ICML, pp. 7469–7479 (2021)

  19. Marques-Silva, J., Ignatiev, A.: Delivering trustworthy AI through formal XAI. In: AAAI, pp. 12342–12350 (2022)

  20. Marques-Silva, J., Janota, M., Mencía, C.: Minimal sets on propositional formulae, problems and reductions. Artif. Intell. 252, 22–50 (2017). https://doi.org/10.1016/j.artint.2017.07.005

  21. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)

  22. Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32(1), 57–95 (1987)

23. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: KDD, pp. 1135–1144 (2016)

  24. Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI, pp. 1527–1535 (2018)

  25. Seshia, S.A., Sadigh, D., Sastry, S.S.: Toward verified artificial intelligence. Commun. ACM 65(7), 46–55 (2022). https://doi.org/10.1145/3503914

  26. Shih, A., Choi, A., Darwiche, A.: A symbolic approach to explaining Bayesian network classifiers. In: IJCAI, pp. 5103–5111 (2018)

27. Sivaraman, A., Farnadi, G., Millstein, T.D., Van den Broeck, G.: Counterexample-guided learning of monotonic neural networks. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (eds.) Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 6–12 December 2020, virtual (2020). https://proceedings.neurips.cc/paper/2020/hash/8ab70731b1553f17c11a3bbc87e0b605-Abstract.html

28. You, S., Ding, D., Canini, K.R., Pfeifer, J., Gupta, M.R.: Deep lattice networks and partial monotonic functions. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, NIPS 2017, 4–9 December 2017, Long Beach, CA, USA, pp. 2981–2989 (2017). https://proceedings.neurips.cc/paper/2017/hash/464d828b85b0bed98e80ade0a5c43b0f-Abstract.html

Author information

Corresponding author

Correspondence to Aurélie Hurault.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hurault, A., Marques-Silva, J. (2023). Certified Logic-Based Explainable AI – The Case of Monotonic Classifiers. In: Prevosto, V., Seceleanu, C. (eds) Tests and Proofs. TAP 2023. Lecture Notes in Computer Science, vol 14066. Springer, Cham. https://doi.org/10.1007/978-3-031-38828-6_4

  • DOI: https://doi.org/10.1007/978-3-031-38828-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-38827-9

  • Online ISBN: 978-3-031-38828-6

  • eBook Packages: Computer Science (R0)
