The Robotics community has started to heavily rely on increasingly realistic 3D simulators for large-scale training of robots on massive amounts of data. But once robots are deployed in the real world, the simulation gap, as well as changes in the real world (e.g. lights, object displacements), leads to errors. In this paper, we introduce Sim2RealViz, a visual analytics tool to assist experts in understanding and reducing this gap for robot ego-pose estimation tasks, i.e. the estimation of a robot's position using trained models. Sim2RealViz displays details of a given model and the performance of its instances in both simulation and the real world. Experts can identify environment differences that impact model predictions at a given location and explore, through direct interactions with the model, hypotheses to fix them.

Shixia Liu is a professor at Tsinghua University. Her research interests include explainable machine learning, visual text analytics, and text mining. Shixia was elevated to an IEEE Fellow in 2021 and inducted into the IEEE Visualization Academy in 2020. She is an associate editor-in-chief of IEEE Transactions on Visualization and Computer Graphics and an associate editor of Artificial Intelligence, IEEE Transactions on Big Data, and ACM Transactions on Intelligent Systems and Technology. She was one of the Papers Co-Chairs of IEEE VIS (VAST) 2020 and is on the steering committee of IEEE VIS (2020-2023).

Machine learning has proven highly successful at solving many real-world applications, ranging from information retrieval, data mining, and speech recognition to computer graphics, visualization, and human-computer interaction. However, most users treat a machine learning model as a "black box" because of its incomprehensible functions and unclear working mechanism. Without a clear understanding of how and why a model works, the development of high-performance models typically relies on a time-consuming trial-and-error procedure. This talk presents the major challenges of explainable machine learning and exemplifies the solutions with several visual analytics techniques and examples, including data quality diagnosis and model understanding and diagnosis.
Recently, artificial intelligence (AI) has seen the explosion of deep learning (DL) models, which are able to reach super-human performance in several tasks. These improvements, however, come at a cost: DL models are "black boxes", where one feeds an input and obtains an output without understanding the motivations behind that prediction or decision. The eXplainable AI (XAI) field tries to address such problems by proposing methods that explain the behavior of these networks. In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.

This is an important topic for several reasons. For example, domains like healthcare and justice require that experts are able to validate DL models before deployment. Despite this, the development of novel deep learning models is dominated by trial-and-error phases guided by aggregated metrics and old benchmarks that tell us very little about the skills and utility of these models. Moreover, the debugging phase is a nightmare for practitioners too. Another community that is working on tracking and debugging machine learning models is the visual analytics one, which proposes systems that help users understand and interact with machine learning models. In recent years, the usage of methodologies that explain DL models has become central in these systems. As a result, the interaction between the XAI and visual analytics communities has become more and more important. The workshop aims at advancing the discourse by collecting novel methods and discussing challenges, issues, and goals around the usage of XAI approaches to debug and improve current deep learning models. To achieve this goal, the workshop aims at bringing together researchers and practitioners from both fields, strengthening their collaboration.

Join our Slack channel for live and offline Q/A with authors and presenters!