
Design, Interpretability, and Explainability of Models in the Environment of Granular Computing and Federated Learning


Published: 2021-05-19

Title: Design, Interpretability, and Explainability of Models in the Environment of Granular Computing and Federated Learning

Time: May 17, 2021, 9:30 AM, online via Tencent Meeting

Meeting link: https://meeting.tencent.com/s/PT3ljQRHTNLL

Meeting ID: 976 988 183

Speaker: Prof. Witold Pedrycz, University of Alberta

Host: Prof. Derui Ding (丁德锐)

 

About the Speaker:

Witold Pedrycz (IEEE Life Fellow) is Professor and Canada Research Chair (CRC) in Computational Intelligence in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland. In 2009 Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences, and in 2012 he was elected a Fellow of the Royal Society of Canada. In 2007 he received the prestigious Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Society. He is a recipient of the IEEE Canada Computer Engineering Medal, the Cajastur Prize for Soft Computing from the European Centre for Soft Computing, a Killam Prize, the Fuzzy Pioneer Award from the IEEE Computational Intelligence Society, and the 2019 Meritorious Service Award from the IEEE Systems, Man, and Cybernetics Society.

 

His main research interests include Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data science, pattern recognition, and knowledge-based neural networks, among others.

 

Dr. Pedrycz is vigorously involved in editorial activities. He is Editor-in-Chief of Information Sciences, Editor-in-Chief of WIREs Data Mining and Knowledge Discovery (Wiley), and Co-Editor-in-Chief of the Int. J. of Granular Computing (Springer) and the J. of Data, Information and Management (Springer). He serves on the Advisory Board of IEEE Transactions on Fuzzy Systems and is a member of the editorial boards of a number of international journals.

Abstract: In data analytics, system modeling, and decision-making, interpretability and explainability are of paramount relevance; witness the rise of explainable Artificial Intelligence (XAI). They are especially timely in light of the increasing complexity of the systems one has to cope with and mounting concerns about the privacy and security of data and models. With the omnipresence of mobile devices, distributed data, and security and privacy restrictions, federated learning becomes a feasible development alternative.

We advocate that two factors contribute immensely to the realization of these requirements: (i) a suitable level of abstraction, along with its hierarchical aspects, in describing the problem, and (ii) a logic fabric of the resultant constructs. It is demonstrated that their conceptualization and subsequent realization can be conveniently carried out with the use of information granules (for example, fuzzy sets, sets, rough sets, and the like).
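To make the notion concrete, the following minimal sketch (in Python, with illustrative names and parameter values that are not taken from the talk) represents a single information granule over a numeric attribute as a triangular fuzzy set:

# Hypothetical illustration: an information granule modeled as a triangular
# fuzzy set over one numeric attribute; names and numbers are assumed.
def triangular_membership(x, a, m, b):
    """Degree of membership of x in the fuzzy set with support [a, b] and modal value m."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

# The granule "about 20" abstracts from individual values to a meaningful range.
about_20 = lambda x: triangular_membership(x, a=15.0, m=20.0, b=25.0)
print(about_20(18.0))  # 0.6 -> partial membership
print(about_20(20.0))  # 1.0 -> full membership

The granule captures the essence of a range of values at a chosen level of abstraction rather than any single numeric datum.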

Information granules are building blocks forming the interpretable environment that captures the essence of data and reveals the key relationships existing there. Their emergence is supported by a systematic and focused analysis of data. At the same time, their initialization is specified by stakeholders and/or the owners and users of data. We present a comprehensive discussion of the design of information granules and their description, engaging an innovative mechanism of federated unsupervised learning in which information granules are constructed and refined with the use of collaborative schemes of clustering.
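As a minimal, hypothetical sketch of such federated, unsupervised construction of granules: each client clusters its own data locally and shares only the resulting prototypes and their sizes, which a server then merges. The plain k-means (scikit-learn) and weighted averaging used here are simplifications of the collaborative clustering schemes discussed in the talk, and all function names are assumed:

# Sketch of federated construction of information granules: no raw data leaves
# a client; only cluster prototypes and cardinalities are communicated.
import numpy as np
from sklearn.cluster import KMeans

def local_granulation(data, n_granules=3, seed=0):
    """Client-side step: cluster local data; return prototypes and cluster sizes."""
    km = KMeans(n_clusters=n_granules, n_init=10, random_state=seed).fit(data)
    sizes = np.bincount(km.labels_, minlength=n_granules)
    return km.cluster_centers_, sizes

def federated_merge(client_results):
    """Server-side step: average matching prototypes, weighted by cluster size.
    (Real schemes align prototypes across clients; here a shared initialization
    is assumed to make index i comparable across clients.)"""
    centers = np.array([c for c, _ in client_results])           # (clients, k, d)
    weights = np.array([s for _, s in client_results], float)    # (clients, k)
    w = weights[:, :, None]
    return (centers * w).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(1)
clients = [rng.normal(loc=mu, scale=1.0, size=(200, 2)) for mu in (0.0, 0.5, 1.0)]
global_prototypes = federated_merge([local_granulation(d) for d in clients])
print(global_prototypes.shape)  # (3, 2): three granules in a 2-D feature space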

We offer a detailed study of the quantification of interpretability of functional rule-based models, with rules of the form "if x is A then y = f(x)" whose condition parts are described by information granules. The interpretability mechanisms focus on a systematic elevation of the interpretability of the conditions and conclusions of the rules. It is shown that augmenting the interpretability of conditions is achieved by (i) decomposing a multivariable information granule into its one-dimensional components, (ii) delivering their symbolic characterization, and (iii) carrying out a process of linguistic approximation. A hierarchy of interpretation mechanisms is systematically established. We also discuss how this increased interpretability is associated with reduced accuracy of the rules and how sound trade-offs between these two features are formed.
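As a concrete, assumed illustration (not the speaker's implementation), the sketch below encodes two functional rules of the form "if x is A then y = f(x)" with Gaussian condition granules and linear conclusions, and shows a simple form of linguistic approximation that replaces a numeric granule by the closest term from a small vocabulary:

# Sketch of a functional rule-based model and linguistic approximation of its
# condition part; membership functions, local models, and terms are assumed.
import numpy as np

def gauss(x, center, spread):
    """Gaussian membership function serving as the information granule A."""
    return np.exp(-((x - center) ** 2) / (2.0 * spread ** 2))

# Two rules: condition = granule over x, conclusion = local linear model f(x).
rules = [
    {"A": lambda x: gauss(x, center=-2.0, spread=1.5), "f": lambda x: 0.5 * x - 1.0},
    {"A": lambda x: gauss(x, center=+2.0, spread=1.5), "f": lambda x: -0.3 * x + 2.0},
]

def infer(x):
    """Weighted-average (Takagi-Sugeno style) aggregation of rule conclusions."""
    acts = np.array([r["A"](x) for r in rules])
    outs = np.array([r["f"](x) for r in rules])
    return float((acts * outs).sum() / acts.sum())

# Linguistic approximation: map a numeric granule to the closest human-readable term.
vocabulary = {"low": -2.0, "medium": 0.0, "high": 2.0}

def linguistic_approximation(center):
    return min(vocabulary, key=lambda term: abs(vocabulary[term] - center))

print(infer(0.0))                       # blended output of both rules
print(linguistic_approximation(-2.0))   # 'low' -> interpretable label for the condition

Replacing the numeric description of a condition by a vocabulary term is exactly the kind of step that raises interpretability while potentially lowering accuracy, which is where the trade-offs mentioned above arise.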

 

