Commit

update XAI chapter from overleaf

XAI updated up to the "knowledge-aware explanations" section

update XAI ref

update XAI

update XAI

update XAI

update XAI all

update XAI index.md
HaoyangLee committed Apr 29, 2023
1 parent 4c9dfc7 commit 9b323ac
Showing 8 changed files with 95 additions and 97 deletions.
176 changes: 87 additions & 89 deletions chapter_explainable_AI/explainable_ai.md

Large diffs are not rendered by default.

8 changes: 1 addition & 7 deletions chapter_explainable_AI/index.md
@@ -1,12 +1,6 @@
# Explainable AI Systems

Over the past decade, as the cost-effectiveness of compute and data scale crossed a critical threshold, the connectionist model architectures and statistical learning paradigm represented by deep neural networks (hereafter, deep learning) achieved a leap in feature-representation capability. This greatly advanced artificial intelligence and produced striking results in many scenarios: face recognition accuracy now exceeds 97%, and Google's voice assistant answered 92.9% of questions correctly in a 2019 test. In such typical scenarios, deep learning already outperforms ordinary people (and even experts), reaching the tipping point at which it can displace incumbent technology. Over the past few years, in domains where the business logic is technology-friendly or ethical and legal regulation is still sparse, such as security surveillance, real-time scheduling, process optimization, competitive gaming, and information-feed distribution, AI and deep learning have achieved rapid technical and commercial breakthroughs.

Having tasted these gains, no field wants to be left out. Yet when the commercial use of deep learning reaches technology-sensitive domains closely tied to human life and safety, high-risk applications such as autonomous driving, finance, healthcare, and the judiciary, the incumbent business logic resists technology replacement, slowing or even derailing commercialization. The root cause is that one pillar of the business logic in these scenarios, and of the ethics and regulations behind it, is a stable, traceable assignment and distribution of responsibility. A deep learning model, however, is a black box: its structure and weights reveal nothing about its behavior, so the responsibility-tracing mechanisms these scenarios depend on cannot be reused, and AI runs into technical and structural obstacles in business deployment. Model explainability has also drawn attention at the national level, and regulators have issued corresponding policies and regulations.

Therefore, for both commercial adoption and regulatory compliance, we need to open up black-box models and explain them; explainable AI (XAI) is the technology that addresses this class of problems.
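
To make "explaining" concrete: many XAI methods treat the trained model as a black box and probe it from the outside. The sketch below is a minimal illustration of one such method, permutation feature importance; the `model` (anything with a `predict` method) and `metric` (e.g., a scorer like scikit-learn's `accuracy_score`) are assumed placeholders, not code from this chapter.

```python
# Minimal sketch of permutation feature importance (a black-box XAI method).
# Assumptions: `model` exposes predict(X); `metric(y_true, y_pred)` returns a
# score where higher is better; X is a 2-D NumPy array of tabular features.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Score drop when each feature column is shuffled, averaged over repeats."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and y
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances  # larger drop => the model relies more on that feature
```

A large score drop for a feature suggests the model depends on it, which is one way to audit a black box without inspecting its structure or weights.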

The learning objectives of this chapter include:
In this chapter, we introduce an important topic in deep learning: explainable AI and the systems knowledge around it. The learning objectives of this chapter include:

- Understand the goals and application scenarios of explainable AI

Binary file modified img/ch11/tabular.png
Binary file modified img/ch11/tb_net.png
Binary file modified img/ch11/xai_global_feature_importance.png
Binary file modified img/ch11/xai_gradient_based.PNG
Binary file modified img/ch11/xai_kg_recommendation.png
8 changes: 7 additions & 1 deletion references/explainable.bib
@@ -1,3 +1,9 @@
@misc{darpaxai2016,
title = {Broad Agency Announcement: Explainable Artificial Intelligence (XAI), DARPA-BAA-16-53},
year = {2016},
howpublished = {\url{https://research-vp.tau.ac.il/sites/resauth.tau.ac.il/files/DARPA-BAA-16-53_Explainable_Artificial_Intelligence.pdf}}
}

@ARTICLE{2020tkde_li,
author={Li, Xiao-Hui and Cao, Caleb Chen and Shi, Yuhan and Bai, Wei and Gao, Han and Qiu, Luyu and Wang, Cong and Gao, Yuanyuan and Zhang, Shenjia and Xue, Xun and Chen, Lei},
journal={IEEE Transactions on Knowledge and Data Engineering},
@@ -16,7 +22,7 @@ @article{erhan2009visualizing
journal = {Technical Report, Université de Montréal}
}

@InProceedings{kim2017interpretability,
@InProceedings{kim2018interpretability,
title={Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors ({TCAV})},
author={Kim, Been and Wattenberg, Martin and Gilmer, Justin and Cai, Carrie and Wexler, James and Viegas, Fernanda and Sayres, Rory},
booktitle={Proceedings of the 35th International Conference on Machine Learning},
Expand Down
