Proceedings of the AAAI Symposium Series https://ojs.aaai.org/index.php/AAAI-SS <p>The AAAI Symposium Series, previously published as AAAI Technical Reports, is held three times a year (Spring, Summer, Fall) and is designed to bring colleagues together to share ideas and learn from each other’s artificial intelligence research. The series affords participants a smaller, more intimate setting. Topics for the symposia change each year, and the limited seating capacity and relaxed atmosphere allow for workshop-like interaction. The format of the series allows participants to devote considerably more time to feedback and discussion than typical one-day workshops. It is an ideal venue for bringing together new communities in emerging fields.<br /><br />The AAAI Spring Symposium Series is typically held during spring break (generally in March) on the west coast. The AAAI Summer Symposium Series is the newest in the annual set of meetings run in parallel at a common site. The inaugural 2023 Summer Symposium Series was held July 17-19, 2023, in Singapore. The AAAI Fall Symposium Series is usually held on the east coast during late October or early November.</p> en-US publications@aaai.org (Publications Manager) Fri, 08 Nov 2024 03:34:02 -0800 OJS 3.2.1.1 Cause and Effect: Can Large Language Models Truly Understand Causality? https://ojs.aaai.org/index.php/AAAI-SS/article/view/31764 With the rise of Large Language Models (LLMs), it has become crucial to understand their capabilities and limitations in deciphering and explaining the complex web of causal relationships that language entails. Current methods use either explicit or implicit causal reasoning, yet there is a strong need for a unified approach combining both to tackle a wide array of causal relationships more effectively. 
This research proposes a novel architecture called Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA) to enhance causal reasoning and explainability. The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through LLMs. Our framework goes one step further with a layer of counterfactual explanations to accentuate LLMs' understanding of causality. The knowledge from ConceptNet enhances the performance of multiple causal reasoning tasks such as causal discovery, causal identification, and counterfactual reasoning. The counterfactual sentences add explicit knowledge of 'not caused by' scenarios. By combining these powerful modules, our model aims to provide a deeper understanding of causal relationships, enabling enhanced interpretability. Evaluation on benchmark datasets shows improved performance across all metrics, such as accuracy, precision, recall, and F1 scores. We also present CausalNet, a novel dataset specifically curated to benchmark and enhance the causal reasoning capabilities of LLMs. This dataset is accompanied by code designed to facilitate further research in this domain. Swagata Ashwani, Kshiteesh Hegde, Nishith Reddy Mannuru, Dushyant Singh Sengar, Mayank Jindal, Krishna Chaitanya Rao Kathala, Dishant Banga, Vinija Jain, Aman Chadha Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31764 Fri, 08 Nov 2024 00:00:00 -0800 Limitations of Feature Attribution in Long Text Classification of Standards https://ojs.aaai.org/index.php/AAAI-SS/article/view/31765 Managing complex AI systems requires insight into a model's decision-making processes. Understanding how these systems arrive at their conclusions is essential for ensuring reliability. In the field of explainable natural language processing, many approaches have been developed and evaluated. 
However, experimental analysis of explainability for text classification has been largely constrained to short text and binary classification. In this applied work, we study explainability for a real-world task where the goal is to assess the technological suitability of standards. This prototypical use case is characterized by large documents, technical language, and a multi-label setting, making it a complex modeling challenge. We provide an analysis of approximately 1,000 documents with human-annotated evidence. We then present experimental results with two explanation methods, evaluating the plausibility and runtime of explanations. We find that the average runtime for explanation generation is at least 5 minutes and that the model explanations do not overlap with the ground truth. These findings reveal limitations of current explanation methods. In a detailed discussion, we identify possible reasons and how to address them on three different dimensions: task, model, and explanation method. We conclude with risks and recommendations for the use of feature attribution methods in similar settings. Katharina Beckh, Joann Rachel Jacob, Adrian Seeliger, Stefan Rüping, Najmeh Mousavi Nejad Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31765 Fri, 08 Nov 2024 00:00:00 -0800 Artificial Trust in Mutually Adaptive Human-Machine Teams https://ojs.aaai.org/index.php/AAAI-SS/article/view/31766 As machines' autonomy increases, their capacity to learn and adapt to humans in collaborative scenarios increases too. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions and the way these are communicated can affect the human's trust, which in turn affects how the human collaborates. 
With the goal of maintaining appropriate mutual trust between the human and the machine in mind, we reflect on the requirements for having an AT-based decision-making model on an artificial teammate. Furthermore, we propose a user study to investigate the role of task-based willingness (e.g. human preferences on tasks) and its communication in AT-based decision-making. Carolina Centeio Jorge, Ewart J de Visser, Myrthe L Tielman, Catholijn M Jonker, Lionel P Robert Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31766 Fri, 08 Nov 2024 00:00:00 -0800 DQM: Data Quality Metrics for AI Components in the Industry https://ojs.aaai.org/index.php/AAAI-SS/article/view/31767 In industrial settings, measuring the quality of data used to represent an intended domain of use and its operating conditions is crucial and challenging. Thus, this paper presents a set of metrics addressing this data quality issue in the form of a library, named DQM (Data Quality Metrics), for Machine Learning (ML) use. Additional metrics specific to industrial applications are developed in the proposed library. This work also aims to assess various data and dataset types. These metrics are used to characterize the training and evaluation datasets involved in the process of building ML models for industrial use cases. Two categories of metrics are implemented in DQM: inherent data metrics, which evaluate the quality of a given dataset independently of the ML model (such as statistical properties and attributes), and model-dependent metrics, which measure the quality of the dataset by considering the ML model outputs (such as the gap between two datasets with regard to a given ML model). DQM is used in the scope of the Confiance.ai program to evaluate datasets used for industrial purposes such as autonomous driving. 
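To make the two metric categories in the DQM abstract concrete, here is a minimal illustrative sketch, not the actual DQM library API: two hypothetical "inherent" metrics that inspect data alone, and one hypothetical "model-dependent" metric that compares datasets through a model's predictions. All function names are invented for illustration.

```python
# Illustrative sketch of the two DQM metric categories -- hypothetical
# functions, not the real library's interface.
from statistics import mean, stdev

def completeness(column):
    """Inherent metric: fraction of non-missing values in a column."""
    return sum(v is not None for v in column) / len(column)

def standardized_mean_gap(train_col, eval_col):
    """Inherent metric: distribution gap between training and evaluation
    data, as the difference of means in units of the training std."""
    s = stdev(train_col)
    return abs(mean(train_col) - mean(eval_col)) / s if s else 0.0

def accuracy_gap(model, train, eval_, labels_train, labels_eval):
    """Model-dependent metric: accuracy difference of one model
    between the training and evaluation datasets."""
    acc = lambda xs, ys: mean(model(x) == y for x, y in zip(xs, ys))
    return acc(train, labels_train) - acc(eval_, labels_eval)

train = [1.0, 1.2, 0.9, 1.1, 1.0]
shifted = [2.0, 2.1, 1.9, 2.2, 2.0]
print(completeness([1, None, 3, 4]))              # 0.75
print(standardized_mean_gap(train, shifted) > 1)  # True: large shift detected
```

The key design point the abstract makes is that an inherent metric needs no model at all, while a model-dependent metric is only meaningful relative to a specific trained model.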
Sabrina Chaouche, Yoann Randon, Faouzi Adjed, Nadira Boudjani, Mohamed Ibn Khedher Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31767 Fri, 08 Nov 2024 00:00:00 -0800 ML Model Coverage Assessment by Topological Data Analysis Exploration https://ojs.aaai.org/index.php/AAAI-SS/article/view/31768 The increasing complexity of deep learning models necessitates advanced methods for model coverage assessment, a critical factor for their reliable deployment. In this study, we introduce a novel approach leveraging topological data analysis to jointly evaluate the coverage of a dataset and classification model. By using tools from topological data analysis, our method identifies underrepresented regions within the data, thereby enhancing the understanding of both model performance and data completeness. This approach simultaneously evaluates the dataset and the model, highlighting areas of potential risk. We report experimental evidence demonstrating the effectiveness of this topological framework in providing a comprehensive and interpretable coverage assessment. As such, we aim to open new avenues for improving the reliability and trustworthiness of classification models, laying the groundwork for future research in this domain. Ayman Fakhouri, Faouzi Adjed, Martin Gonzalez, Martin Royer Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31768 Fri, 08 Nov 2024 00:00:00 -0800 Influence Reasoning Capabilities of Large Language Models in Social Environments https://ojs.aaai.org/index.php/AAAI-SS/article/view/31769 We ask whether state-of-the-art large language models can provide a viable alternative to human annotators for detecting and explaining behavioural influence online. 
Working with a large corpus of online interactions retrieved from the social media platform Mastodon, we cross-examine a dataset containing 11,000 LLM influence labels and explanations across nine state-of-the-art large language models from 312 scenarios. We use a range of resolution categories and four stages of shot prompting to further measure the importance of context to language model performance. We also consider the impact of model architecture, and how social media content and features from the explanation impact model labelling accuracy. Our experiment shows that whilst most large language models struggle to identify the correct framing of influence from an interaction, at lower label resolutions, models like Flan and GPT-4 Turbo perform with an accuracy of 70%-80%, demonstrating encouraging potential for future social influence identification and explanation, and contributing to our understanding of the general social reasoning capabilities of large language models. Luke Gassmann, Jimmy Campbell, Matthew Edwards Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31769 Fri, 08 Nov 2024 00:00:00 -0800 Unboxing Occupational Bias: Debiasing LLMs with U.S. Labor Data https://ojs.aaai.org/index.php/AAAI-SS/article/view/31770 Large Language Models (LLMs) are prone to inheriting and amplifying societal biases embedded within their training data, potentially reinforcing harmful stereotypes related to gender, occupation, and other sensitive categories. This issue becomes particularly problematic as biased LLMs can have far-reaching consequences, leading to unfair practices and exacerbating social inequalities across various domains, such as recruitment, online content moderation, or even the criminal justice system. 
Although prior research has focused on detecting bias in LLMs using specialized datasets designed to highlight intrinsic biases, there has been a notable lack of investigation into how these findings correlate with authoritative datasets, such as those from the U.S. National Bureau of Labor Statistics (NBLS). To address this gap, we conduct empirical research that evaluates LLMs in a “bias-out-of-the-box” setting, analyzing how the generated outputs compare with the distributions found in NBLS data. Furthermore, we propose a straightforward yet effective debiasing mechanism that directly incorporates NBLS instances to mitigate bias within LLMs. Our study spans seven different LLMs, including instructable, base, and mixture-of-expert models, and reveals significant levels of bias that are often overlooked by existing bias detection techniques. Importantly, our debiasing method, which does not rely on external datasets, demonstrates a substantial reduction in bias scores, highlighting the efficacy of our approach in creating fairer and more reliable LLMs. Atmika Gorti, Aman Chadha, Manas Gaur Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31770 Fri, 08 Nov 2024 00:00:00 -0800 Enhancing Fairness in LLM Evaluations: Unveiling and Mitigating Biases in Standard-Answer-Based Evaluations https://ojs.aaai.org/index.php/AAAI-SS/article/view/31771 Large Language Models (LLMs) are recognized for their effectiveness in comparing two answers. However, LLMs can still exhibit biases when comparing one answer to a standard answer, particularly in real-world scenarios like new employee orientations. This paper identifies positional and verbosity biases in LLM evaluators in such contexts. To mitigate these biases, we apply Chain of Thought prompting and Multi-Agent Debate strategies. 
Our research reveals that bias prevalence varies among different models, indicating the need for tailored approaches to ensure unbiased and constructive feedback. Tong Jiao, Jian Zhang, Kui Xu, Rui Li, Xi Du, Shangqi Wang, Zhenbo Song Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31771 Fri, 08 Nov 2024 00:00:00 -0800 A Black-Box Watermarking Modulation for Object Detection Models https://ojs.aaai.org/index.php/AAAI-SS/article/view/31772 Training a Deep Neural Network (DNN) from scratch comes with a substantial cost in terms of money, energy, data, and hardware. When such models are misused or redistributed without authorisation, the owner faces significant financial and intellectual property (IP) losses. Therefore, there is a pressing need to protect the IP of Machine Learning models to avoid these issues. ML watermarking emerges as a promising solution for model traceability. Watermarking has been well-studied for image classification models, but there is a significant research gap in its application to other tasks like object detection, for which no effective methods have been proposed yet. In this paper, we introduce a novel black-box watermarking method for object detection models. Our contributions include a watermarking technique that maps visual information to text semantics and a comparative study of fine-tuning techniques’ impact on watermark detectability. We present the model’s detection performance and evaluate fine-tuning strategies’ effectiveness in preserving watermark integrity. 
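The watermarking abstract above relies on black-box model access. A common form of black-box verification, sketched below with invented names and a toy stand-in for a real model (this is a generic trigger-set scheme, not the paper's specific visual-to-text method), is to query the suspect model on a secret trigger set and check whether it reproduces the owner's pre-chosen target labels far above chance.

```python
# Generic sketch of black-box trigger-set watermark verification.
# All names and the toy models are hypothetical illustrations.
import random

def verify_watermark(model, triggers, targets, threshold=0.8):
    """Query the suspect model on the secret triggers (black-box access
    only) and report (match rate, watermark detected?)."""
    hits = sum(model(x) == y for x, y in zip(triggers, targets))
    rate = hits / len(triggers)
    return rate, rate >= threshold

# Toy demo: a "watermarked" model memorized the trigger->target mapping,
# while an unrelated "clean" model did not.
secret = {i: i % 10 for i in range(20)}            # hypothetical trigger set
watermarked = lambda x: secret.get(x, random.randrange(10))
clean = lambda x: (x * 7 + 3) % 10                 # unrelated decision rule
triggers, targets = list(secret), list(secret.values())
print(verify_watermark(watermarked, triggers, targets))  # (1.0, True)
print(verify_watermark(clean, triggers, targets))        # (0.0, False)
```

With enough triggers, a chance match rate above the threshold becomes vanishingly unlikely, which is what makes the ownership claim statistically meaningful.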
Mohammed Lansari, Lucas Mattioli, Boussad Addad, Paul-Marie Raffi, Katarzyna Kapusta, Martin Gonzalez, Mohamed Ibn Khedher Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31772 Fri, 08 Nov 2024 00:00:00 -0800 Datamodel Distance: A New Metric for Privacy https://ojs.aaai.org/index.php/AAAI-SS/article/view/31773 Recent work developing Membership Inference Attacks has demonstrated that certain points in the dataset are often intrinsically easier to attack than others. In this paper, we introduce a new pointwise metric, the Datamodel Distance, and show that it is empirically correlated to, and establishes a theoretical lower bound for, the success probability for a point under the LiRA Membership Inference Attack. This establishes a connection between the concepts of Datamodels and Membership Inference, and also gives new intuitive explanations for why certain points are more susceptible to attack than others. We then use datamodels as a lens through which to investigate the Privacy Onion Effect. Paul Lintilhac, Henry Scheible, Nathaniel D. Bastian Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31773 Fri, 08 Nov 2024 00:00:00 -0800 Verification and Validation of AI Systems Using Explanations https://ojs.aaai.org/index.php/AAAI-SS/article/view/31774 Verification and validation of AI systems, particularly learning-enabled systems, is hard because often they lack formal specifications and rely instead on incomplete data and human subjective feedback. Aligning the behavior of such systems with the intended objectives and values of human designers and stakeholders is very challenging, and deploying AI systems that are misaligned can be risky. We propose to use both existing and new forms of explanations to improve the verification and validation of AI systems. 
Toward that goal, we present a framework in which the agent explains its behavior and a critic signals whether the explanation passes a test. In cases where the explanation fails, the agent offers alternative explanations to gather feedback, which is then used to improve the system's alignment. We discuss examples of this approach that proved to be effective, and how to extend the scope of explanations and minimize human effort involved in this process. Saaduddin Mahmud, Sandhya Saisubramanian, Shlomo Zilberstein Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31774 Fri, 08 Nov 2024 00:00:00 -0800 Leveraging Tropical Algebra to Assess Trustworthy AI https://ojs.aaai.org/index.php/AAAI-SS/article/view/31775 Given the complexity of the application domain, the qualitative and quantifiable nature of the concepts involved, the wide heterogeneity and granularity of trustworthy attributes, and in some cases the non-comparability of the latter, assessing the trustworthiness of AI-based systems is a challenging process. In order to overcome these challenges, the Confiance.ai program proposes an innovative solution based on a Multi-Criteria Decision Aiding (MCDA) methodology. This approach involves several stages: framing trustworthiness as a set of well-defined attributes, exploring attributes to determine related Key Performance Indicators (KPIs) or metrics, selecting evaluation protocols, and defining a method to aggregate multiple criteria to estimate an overall assessment of trust. This approach is illustrated by applying the RUM methodology (Robustness, Uncertainty, Monitoring) to the ML context, with the aggregation methods based on Tropical Algebra. 
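To give a flavor of tropical-algebra aggregation as mentioned in the abstract above, here is a minimal sketch under stated assumptions; it is one plausible reading, not the Confiance.ai implementation. In the (min, +) tropical semiring, "addition" is min and "multiplication" is +, so a weighted aggregate min_i (w_i + x_i) scores a system by its weakest penalised attribute instead of averaging weaknesses away. The attribute names and penalty values below are hypothetical.

```python
# Illustrative (min, +) tropical aggregation of per-attribute trust scores.
# Attribute names, scores, and penalties are hypothetical examples.

def tropical_aggregate(scores, penalties):
    """Aggregate attribute scores x_i with penalties w_i as
    min_i (w_i + x_i): one weak attribute dominates the result,
    mirroring a worst-case view of trustworthiness."""
    return min(w + x for x, w in zip(scores, penalties))

# Hypothetical RUM-style attributes: robustness, uncertainty, monitoring.
scores = {"robustness": 0.9, "uncertainty": 0.4, "monitoring": 0.8}
penalties = {"robustness": 0.0, "uncertainty": 0.1, "monitoring": 0.2}
overall = tropical_aggregate(scores.values(), penalties.values())
print(overall)  # 0.5 -- dominated by the weak uncertainty attribute
```

Compared with a weighted arithmetic mean, this aggregation cannot be "bought back" by strong scores elsewhere, which matches the non-compensatory spirit often sought in safety assessments.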
Juliette Mattioli, Martin Gonzalez, Lucas Mattioli, Karla Quintero, Henri Sohier Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31775 Fri, 08 Nov 2024 00:00:00 -0800 S-RAF: A Simulation-Based Robustness Assessment Framework for Responsible Autonomous Driving https://ojs.aaai.org/index.php/AAAI-SS/article/view/31776 As artificial intelligence (AI) technology advances, ensuring the robustness and safety of AI-driven systems has become paramount. However, varying perceptions of robustness among AI developers create misaligned evaluation metrics, complicating the assessment and certification of safety-critical and complex AI systems such as autonomous driving (AD) agents. To address this challenge, we introduce Simulation-Based Robustness Assessment Framework (S-RAF) for autonomous driving. S-RAF leverages the CARLA Driving simulator to rigorously assess AD agents across diverse conditions, including faulty sensors, environmental changes, and complex traffic situations. By quantifying robustness and its relationship with other safety-critical factors, such as carbon emissions, S-RAF aids developers and stakeholders in building safe and responsible driving agents, and streamlining safety certification processes. Furthermore, S-RAF offers significant advantages, such as reduced testing costs, and the ability to explore edge cases that may be unsafe to test in the real world. Daniel Omeiza, Pratik Somaiya, Jo-Ann Pattison, Carolyn Ten-Holter, Marina Jirotka, Jack Stilgoe, Lars Kunze Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31776 Fri, 08 Nov 2024 00:00:00 -0800 QUARL: Quantifying Adversarial Risks in Language Models https://ojs.aaai.org/index.php/AAAI-SS/article/view/31777 It is well documented that artificial intelligence (AI) systems have various types of vulnerabilities and associated risks. 
As such systems are deployed in safety-critical domains, it has become necessary not only to identify and enumerate the vulnerabilities but also to quantify the resulting risks. In this position paper, we discuss approaches for the challenge of quantifying AI risks. The approach is based on a general framework for testing and evaluating language model systems that we have previously developed (called TEL'M). In particular, we extend TEL'M to deal with the problem of quantifying the effort required by an adversary to discover and exploit a language model vulnerability. Joshua Ackerman, George Cybenko, Paul Lintilhac, Henry Scheible, Nathaniel D. Bastian Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31777 Fri, 08 Nov 2024 00:00:00 -0800 From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness https://ojs.aaai.org/index.php/AAAI-SS/article/view/31778 It is well known that successful human-AI collaboration depends on the perceived trustworthiness of the AI. We argue that a key to securing trust in such collaborations is ensuring that the AI competently addresses ethics' foundational role in engagements. Specifically, developers need to identify, address, and implement mechanisms for accommodating ethical components of AI choices. We propose an approach that instantiates ethics semantically as ontology-based moral policies. To accommodate the wide variation and interpretation of ethics, we capture such variations into ethics sets, which are situationally specific aggregations of relevant moral policies. We are extending our ontology-based policy management systems with new representations and capabilities to allow trustworthy AI-human ethical collaborative behavior. 
Moreover, we believe that such AI-human ethical encounters demand that trustworthiness be bi-directional – humans need to be able to assess and calibrate their actions to be consistent with the trustworthiness of AI in a given context, and AIs need to be able to do the same with respect to humans. Jeffrey M. Bradshaw, Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, Kristen Brent Venable Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31778 Fri, 08 Nov 2024 00:00:00 -0800 Towards Linking Local and Global Explanations for AI Assessments with Concept Explanation Clusters https://ojs.aaai.org/index.php/AAAI-SS/article/view/31779 Understanding the inner workings of artificial intelligence (AI) systems is important both in light of regulation (e.g., the EU AI Act) and to uncover hidden weaknesses. Although local and global explanation methods can support this, a scalable and human-centered combination is required to combine the detail of the former with the latter's efficiency. Therefore, we present our method, concept explanation clusters, as a step towards explaining (sub-)strategies of the model through human-understandable concepts by identifying clusters in the input data while accounting for model predictions by local explanations. In this way, all the benefits of local explanations can be retained while allowing contextualisation on a larger (i.e., data-global) scale. Elena Haedecke, Maram Akila, Laura von Rueden Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31779 Fri, 08 Nov 2024 00:00:00 -0800 Mitigating Large Vision-Language Model Hallucination at Post-hoc via Multi-agent System https://ojs.aaai.org/index.php/AAAI-SS/article/view/31780 This paper addresses the critical issue of hallucination in Large Vision-Language Models (LVLMs) by proposing a novel multi-agent framework. 
We integrate three post-hoc correction techniques: self-correction, external feedback, and agent debate, to enhance LVLM trustworthiness. Our approach tackles key challenges in LVLM hallucination, including weak visual encoders, parametric knowledge bias, and loss of visual attention during inference. The framework employs a Plug-in LVLM as the base model to reduce its hallucination, a Large Language Model (LLM) for guided refinement, external toolbox models for factual grounding, and an agent debate system for consensus-building. While promising, we also discuss potential limitations and technical challenges in implementing such a complex system. This work contributes to the ongoing effort to create more reliable and trustworthy multimodal multi-agent systems. Chung-En (Johnny) Yu, Brian Jalaian, Nathaniel D. Bastian Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31780 Fri, 08 Nov 2024 00:00:00 -0800 Technology-Supported Reminiscence Therapy for Those Living with Dementia https://ojs.aaai.org/index.php/AAAI-SS/article/view/31781 The AMPER application is designed to support reminiscence therapy in a domestic setting for those living with dementia through the use of an Intelligent Virtual Agent (IVA). Such agents have been shown to increase functionality, accessibility and user satisfaction. We describe the application and summarise the design process used to choose the graphical agent for the project. We briefly discuss the implications of using stereotypes or archetypes in this process. 
Ruth Aylett, Bruce Wilson, MeiYii Lim, Katerina Pappa, Matthew Aylett, Mario Parra-Rodriguez Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31781 Fri, 08 Nov 2024 00:00:00 -0800 Using Gerontology Theory to Guide the Development of Artificial Intelligence to Support Aging-in-Place https://ojs.aaai.org/index.php/AAAI-SS/article/view/31782 If artificial intelligence (AI) is to support aging-in-place, determining how, when, and why to apply AI is a crucial endeavor. A seminal gerontology meta-theory, the Selection, Optimization, and Compensation (SOC) model, has promise to conceptualize how AI can play a role in aging-in-place. The model posits that successful aging requires selecting goals/domains to apply resources, optimizing means to best achieve those goals, and compensating for losses by attaining new resources or tapping into unused resources for alternative means of pursuing those goals. In this short paper, we describe the SOC model and draw links to domains in which AI can support aging in place. For example, AI can assist with health-related decision making (selection), cognitive training and reminders (optimization), and domestic task assistance (compensation). Human-centered considerations are provided for implementation of AI in the home. Jenay M. Beer, Otis L. Owens Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31782 Fri, 08 Nov 2024 00:00:00 -0800 Better Apprenticeship Learning with LLM Explanations https://ojs.aaai.org/index.php/AAAI-SS/article/view/31783 As the population ages, care robots will play an increasing role in assisting caregiving by taking on repetitive or physically cumbersome activities. To effectively provide care, robotic agents must be able to meet the needs and preferences of care receivers. 
However, these needs and preferences may change over time, making it intractable to pre-define the way the care robot should act before deployment. Instead, the care robot should be able to learn directly from non-expert end-user demonstrations. However, prior work investigating the feasibility of learning a policy from older adult demonstrations finds that older adult demonstrators desire a better understanding of what the robot needs them to do, and how. To help demonstrators understand how to improve on suboptimal or heterogeneous demonstrations, we propose to utilize a Large Language Model to provide human-interpretable explanations of Shapley values of a policy. These explanations enable the demonstrator to understand how the policy is performing, and what changes are needed, informing their corrective demonstrations. We showcase our framework's performance in deterministic and stochastic versions of Wumpus World. Rynaa Grover, Aryan Vats, Nina Moorman, Aviral Agrawal, Matthew Gombolay Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31783 Fri, 08 Nov 2024 00:00:00 -0800 Towards a Common Metrics and Evaluation Framework for Assessment of Older Adults and Caregivers Interacting with Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31784 Artificial intelligence (AI) has applications in assisting older adults to age in place and provide support to them and their caregivers as their cognition declines with age. However, effective assessment methods of this technology are needed in order to benchmark their performance and a common set of metrics and evaluation methods would enable such assessments to be compared to one another. To this end, we propose a common framework for human-AI interaction involving care recipients and their care networks. 
Based on the results of a literature review, we present a framework with sample metrics, related measures, qualified evaluation tools, and contextual factors that impact assessment. This paper provides a sample of common metrics in one of the framework’s measurement spaces (human-AI interaction) and discusses some of the impacts of contextual factors and how the common metrics and evaluation framework can be used for meta-analysis and to guide future research. Additional future articles are planned to cover the other measurement spaces in the framework (system performance, task performance, and well-being), including their particular common metrics and evaluation methods. This effort aims to provide guidance for researchers in this domain as well as highlight measurement gaps that can be filled by future research. Jasmin Marwad, Daisy M. Kiyemba, Elizabeth J. Carter, Adam Norton Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31784 Fri, 08 Nov 2024 00:00:00 -0800 Talk2Care: Facilitating Asynchronous Patient-Provider Communication with Large-Language-Model https://ojs.aaai.org/index.php/AAAI-SS/article/view/31785 Despite the plethora of telehealth applications to assist home-based older adults and healthcare providers, basic messaging and phone calls are still the most common communication methods, which suffer from limited availability, information loss, and process inefficiencies. One promising solution to facilitate patient-provider communication is to leverage large language models (LLMs) with their powerful natural conversation and summarization capability. However, there is a limited understanding of LLMs' role during the communication. We first conducted two interview studies with both older adults (N=10) and healthcare providers (N=9) to understand their needs and opportunities for LLMs in patient-provider asynchronous communication. 
Based on the insights, we built an LLM-powered communication system, Talk2Care, and designed interactive components for both groups: (1) For older adults, we leveraged the convenience and accessibility of voice assistants (VAs) and built an LLM-powered conversational interface for effective information collection. (2) For health providers, we built an LLM-based dashboard to summarize and present important health information based on older adults' conversations with the VA. We further conducted two user studies with older adults and providers to evaluate the usability of the system. The results showed that Talk2Care could facilitate the communication process, enrich the health information collected from older adults, and considerably save providers' efforts and time. We envision our work as an initial exploration of LLMs' capability in the intersection of healthcare and interpersonal communication. Xuhai Xu, Bingsheng Yao, Ziqi Yang, Shao Zhang, Ethan Rogers, Stephen Intille, Nawar Shara, Guodong Gao, Dakuo Wang Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31785 Fri, 08 Nov 2024 00:00:00 -0800 Robot Age: Considering Robot Origin and Voice Accent https://ojs.aaai.org/index.php/AAAI-SS/article/view/31786 This paper investigates how social cues may influence judgements of a robot’s perceived age. To investigate this topic, a 3 x 3 factorial study was run with the Misty II robot presenting with American, Chinese, and Mexican national origins and voice accents. In the study, the participant’s task was to estimate the robot’s age as a function of the robot’s origin and spoken accent. The results showed that Misty was thought to be the age of a teenager, but that females judged Misty to be significantly older than males. 
However, versions of Misty that presented with an American accent tended to produce more agreement among males and females in estimates of robot age than versions presenting with an accent identifying a Chinese or Mexican identity. Implications for the design of robot support systems for the elderly are discussed. Jessica K. Barfield Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31786 Fri, 08 Nov 2024 00:00:00 -0800 Designing AI for Partnership in Care https://ojs.aaai.org/index.php/AAAI-SS/article/view/31787 As the global population ages, promoting active aging and enabling older adults to age in place has become increasingly important. This paper explores the role of technology in this process, particularly in helping older adults identify and pursue meaningful activities and provide care for loved ones. Through two example projects -- the design of a social robot that supports older adults' experience of a sense of meaning and purpose (ikigai), and an exploration of how technology can assist with caregiving -- we demonstrate how technology can enhance care for older adults. This short paper highlights how technology can create a more fulfilling and supportive active aging experience, empowering older adults to engage with what and who they care about and enhancing their caring relationships. Long-Jing Hsu, Selma Šabanović Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31787 Fri, 08 Nov 2024 00:00:00 -0800 Empowering Dementia Communication via Virtual Reality AI-Driven Simulations https://ojs.aaai.org/index.php/AAAI-SS/article/view/31788 The use of effective communication skills is essential for direct care workers to provide quality care for persons living with dementia (PLWD) in long-term service and support settings (LTSS).
However, direct care workers often report feeling unprepared, lacking competence in care, and fearful of interacting with PLWD. This proof-of-concept study proposes the Virtual reality Communication Training Optimizing Real-world Interactions (VICTORI) intervention, aiming to develop and evaluate the usability, acceptability, and preliminary effectiveness of virtual reality (VR) simulated communication training for direct care workers using artificial intelligence (AI)-generated patients with dementia. Care-related and social communication scenarios in dementia care will be developed, based on current literature, our validated communication instrument, and AI technology. Six modules consisting of pre-briefing, simulation facilitation, and de-briefing will be developed, refined, and evaluated based on an advisory board of experts and direct care workers’ quantitative and qualitative feedback. To test the usability, acceptability, and effectiveness of the VICTORI intervention, we will use a single-group pre- and post-test intervention design, integrating quantitative measures and qualitative semi-structured interviews in this mixed-methods study. Thirty direct care workers will be recruited at two LTSS in North Texas. Direct care workers will participate in six VR-AI simulation communication training sessions that include communication with AI-generated patients, real-time feedback, and detailed evaluation of communication behaviors. Each simulation will be recorded to further evaluate the participant’s communication skills using the Dyadic Communication Observational coding scheme in Dementia care (DCODE). Additionally, participants' potential side effects during VR simulation will be assessed using continuous physiological data and self-reported questionnaires at each simulation session. Communication knowledge and competence in dementia care will be assessed before and after training using self-reported questionnaires.
Direct care workers will also be interviewed to assess acceptability and satisfaction with the VICTORI training. This proposal will introduce innovative training methods that integrate AI technology into communication training in dementia care. Objective assessments, including AI-based feedback, physiological data, and observational assessments, ensure the reliability and validity of the data used to evaluate the plausibility of the VICTORI intervention. Sohyun Kim, Noelle Fields, Jennifer Roye Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31788 Fri, 08 Nov 2024 00:00:00 -0800 Transactive Memory in Caregiver Networks Using Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31789 As the population ages and an increasing number of adults want to age in place in their homes, they will rely on a network of family, friends, and other caregivers to provide various forms of assistance. Coordination across this loosely connected network is a common challenge, requiring information sharing, schedule alignment, and task coordination. Here, we propose that artificial intelligence (AI) may be used to develop tools to help loosely connected care networks develop better collective cognition. Specifically, we focus on helping members of care networks develop a transactive memory system, or a shared system for storing and retrieving knowledge that expands the capacity of a group to effectively use information. In this paper, we describe the motivation for our study and our planned research program, based on the use of an online experimental platform facilitating human-AI collaboration to develop and test tools to enhance collective cognition in care networks.
Andrew Kuznetsov, Ping-Ya Chao, Christopher Dishop, Allen Brown, Anita Williams Woolley Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31789 Fri, 08 Nov 2024 00:00:00 -0800 Understanding the Daily Lives of Older Adults: Integrating Multi-modal Personal Health Tracking Data through Visualization and Large Language Models https://ojs.aaai.org/index.php/AAAI-SS/article/view/31790 Understanding the daily lives and routines of older adults is crucial to facilitate aging in place. Ubiquitous computing technologies like smartphones and wearables that are easy to deploy and scale have become a popular method to collect comprehensive and longitudinal data for various demographics. Despite their popularity, several challenges persist when targeting the older adult population, such as low compliance and difficulty obtaining feedback. In this work-in-progress paper, we present the design and development of a multi-modal sensing system that includes a phone, watch, and voice assistant. We are conducting an initial longitudinal study with one older adult participant over 30 days to explore how various types of data can be integrated through visualization techniques and large language models (LLMs). As a work in progress, we discuss our preliminary insights from the collected data and conclude with a discussion of our future plans and directions for this research. Jiachen Li, Justin Steinberg, Xiwen Li, Bingsheng Yao, Dakuo Wang, Elizabeth Mynatt, Varun Mishra Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31790 Fri, 08 Nov 2024 00:00:00 -0800 Supporting Aging in Place with the Introduction of Artificial Intelligence Technologies https://ojs.aaai.org/index.php/AAAI-SS/article/view/31791 As our growing population ages, there is a stronger push to age in place.
Adults want to stay in their homes and communities as long as possible despite degenerative health issues. With adequate education and support, the growing desire for adults to age in their homes can be met with the strategic integration of AI technologies to maximize safety and bolster engagement in meaningful activities. This seemingly simple solution is complicated by the intersection of unique learning styles and the complexities that come with a generation that is cautious of even the simplest technology. Adopting a person-centered framework is vital for the success of AI-supported aging in place. As an occupational therapist with experience across geriatric populations and a special interest in adaptive technology, I provide unique insight into effectively introducing AI technologies into the daily lives of older adults throughout this experiential report. Tracy Moon Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31791 Fri, 08 Nov 2024 00:00:00 -0800 Supporting the Digital Autonomy of Elders Through LLM Assistance https://ojs.aaai.org/index.php/AAAI-SS/article/view/31792 The internet offers tremendous access to services, social connections, and needed products. However, to those without sufficient experience, engaging with businesses and friends across the internet can be daunting due to the ever-present danger of scammers and thieves, to say nothing of the myriad of potential computer viruses. Like a forest rich with both edible and poisonous plants, those familiar with the norms inhabit it safely with ease, while newcomers need a guide. However, reliance on a human digital guide can be taxing and often impractical. We propose and pilot a simple but unexplored idea: could an LLM provide the necessary support to help the elderly who are separated by the digital divide safely achieve digital autonomy?
Jesse Roberts, Lindsey Roberts, Alice Reed Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31792 Fri, 08 Nov 2024 00:00:00 -0800 Investigating Open Source LLMs to Retrofit Competency Questions in Ontology Engineering https://ojs.aaai.org/index.php/AAAI-SS/article/view/31793 Competency Questions (CQs) are essential in ontology engineering; they express an ontology's functional requirements as natural language questions, offer crucial insights into an ontology's scope, and are pivotal for various tasks, e.g. ontology reuse, testing, requirement specification, and pattern definition. Despite their importance, the practice of publishing CQs alongside ontological artefacts is not commonly adopted. We propose an approach based on Generative AI, specifically Large Language Models (LLMs), for retrofitting CQs from existing ontologies, and we investigate how open LLMs (i.e. Llama-2-70b, Mistral 7B and Flan-T5-xl) perform in generating CQs for existing ontologies. We compare these results with our previous efforts using closed-source LLMs and reflect on the results.
Medical diagnoses in health insurance claims are typically represented by International Classification of Diseases (ICD) codes, established by the World Health Organization. For example, given a policy that excludes "all respiratory illness", a claim with the ICD code J45 (Asthma) will be subject to rejection, as J45 is a respiratory-related diagnosis that falls within the scope of the policy's UW exclusion. The key challenge in automating this process lies in the wide range of available ICD codes. The ICD-10-CM coding scheme consists of over 40,000 codes, which often results in scenarios where codes encountered during inference are absent from the training data. These unseen ICD codes limit the effectiveness of data-driven approaches, which depend on the training data to discern medically relevant associations between UW exclusions and ICD codes. This underscores the need to supplement data-driven approaches with additional domain knowledge. We hypothesize that integrating implicit medical domain knowledge inherent in Large Language Models (LLMs) with explicit domain knowledge from medical ontologies will enhance data-driven approaches for UW Exclusion Detection. Thoroughly validated on real-world health insurance claims data, our proposed approach proved effective in accurately establishing medically relevant associations between UW exclusions and ICD codes. Sheng Jie Lui, Cheng Xiang, Shonali Krishnaswamy Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31794 Fri, 08 Nov 2024 00:00:00 -0800 LLMasMMKG: LLM Assisted Synthetic Multi-Modal Knowledge Graph Creation For Smart City Cognitive Digital Twins https://ojs.aaai.org/index.php/AAAI-SS/article/view/31795 The concept of a Smart City (SC) Cognitive Digital Twin (CDT) presents significant potential for optimizing urban environments through sophisticated simulations, predictions, and informed decision-making.
Comprehensive Knowledge Representations (KRs) that effectively integrate the diverse data streams generated by a city are crucial to realizing this potential. This paper addresses this by introducing a novel approach that leverages Large Language Models (LLMs) to automate the construction of synthetic Multi-Modal (MM) Knowledge Graphs (KGs) specifically designed for a SC CDT. Recognizing the challenges in fusing and aligning information from disparate sources, our method harnesses the power of LLMs for natural language understanding, entity recognition, and relationship extraction to seamlessly integrate data from sensor networks, social media feeds, official reports, and other relevant sources. Furthermore, we explore the use of LLM-driven synthetic data generation to address data sparsity issues, leading to more comprehensive and robust KGs. Initial outputs demonstrate the effectiveness of our approach in constructing semantically rich and interconnected synthetic KGs, highlighting the significant potential of LLMs for advancing SC CDT technology. Sukanya Mandal, Noel E. O'Connor Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31795 Fri, 08 Nov 2024 00:00:00 -0800 Knowledge Graph Modeling-Driven Large Language Model Operating System (LLM OS) for Task Automation in Process Engineering Problem-Solving https://ojs.aaai.org/index.php/AAAI-SS/article/view/31796 We present the Process Engineering Operations Assistant (PEOA), an AI-driven framework designed to solve complex problems in the chemical and process industries. The framework employs a modular architecture orchestrated by a meta-agent, which serves as the central coordinator, managing an action generator and instruction-tuned small-scale language models (expert models). 
The action generator decomposes complex problems into sub-tasks and identifies suitable expert models to execute each, delivering precise solutions for multi-step problem-solving. Key techniques include advanced knowledge modeling using property graphs for improved information retrieval, facilitating more accurate and contextually relevant solutions. Additionally, the framework utilizes a teacher-student transfer-learning approach with GPT-4 (Omni) to fine-tune the action generator and expert models for domain adaptation, alongside an iterative problem-solving mechanism with sophisticated error handling. Custom datasets were developed to evaluate the framework against leading proprietary language models on various engineering tasks. The results demonstrate the framework’s effectiveness in automating calculations, accelerating prototyping, and providing AI-augmented decision support for industrial processes, marking a significant advancement in process engineering capabilities. Sagar Srinivas Sakhinana, Vijay Sri Vaikunth, Venkataramana Runkana Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31796 Fri, 08 Nov 2024 00:00:00 -0800 Generating Ontology-Learning Training-Data through Verbalization https://ojs.aaai.org/index.php/AAAI-SS/article/view/31797 Ontologies play an important role in the organization and representation of knowledge. However, in most cases, ontologies do not fully cover domain knowledge, resulting in a gap. This gap, often expressed as a lack of concepts, relations, or axioms, is usually filled by domain experts in a manual and tedious process. Utilizing large language models (LLMs) can ease this process; a fine-tuned LLM could receive as input natural text containing up-to-date and reliable domain knowledge and output a structured graph in OWL RDF/Turtle format, which is the standard format of ontologies.
Thus, to fine-tune a model, text-OWL sentence pairs that constitute such a dataset must be acquired. Unfortunately, such a dataset does not exist in the literature or within the open-source community. Therefore, this paper introduces our LLM-assisted verbalizer to create such a dataset by converting OWL statements from existing ontologies into natural text. We evaluate the verbalizer on 322 classes from four different ontologies using two different LLMs, achieving precision and recall as high as 0.99 and 0.96, respectively. Antonio Zaitoun, Tomer Sagi, Mor Peleg Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31797 Fri, 08 Nov 2024 00:00:00 -0800 StructuGraphRAG: Structured Document-Informed Knowledge Graphs for Retrieval-Augmented Generation https://ojs.aaai.org/index.php/AAAI-SS/article/view/31798 Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating external data sources beyond their training sets and querying predefined knowledge bases to generate accurate, context-rich responses. Most RAG implementations use vector similarity searches, but the effectiveness of this approach and the representation of knowledge bases remain underexplored. Emerging research suggests knowledge graphs as a promising solution. Therefore, this paper presents StructuGraphRAG, which leverages document structures to inform the extraction process and constructs knowledge graphs to enhance RAG for social science research, specifically using NSDUH datasets. Our method parses document structures to extract entities and relationships, constructing comprehensive and relevant knowledge graphs. Experimental results show that StructuGraphRAG outperforms traditional RAG methods in accuracy, comprehensiveness, and contextual relevance.
This approach provides a robust tool for social science researchers, facilitating precise analysis of social determinants of health and justice, and underscores the potential of structured document-informed knowledge graph construction in AI and social science research. Xishi Zhu, Xiaoming Guo, Shengting Cao, Shenglin Li, Jiaqi Gong Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31798 Fri, 08 Nov 2024 00:00:00 -0800 Improving Ontology Requirements Engineering with OntoChat and Participatory Prompting https://ojs.aaai.org/index.php/AAAI-SS/article/view/31799 Past ontology requirements engineering (ORE) has primarily relied on manual methods, such as interviews and collaborative forums, to gather user requirements from domain experts, especially in large projects. The current OntoChat offers a framework for ORE that utilises large language models (LLMs) to streamline the process through four key functions: user story creation, competency question (CQ) extraction, CQ filtration and analysis, and ontology testing support. In OntoChat, users are expected to prompt the chatbot to generate user stories. However, preliminary evaluations revealed that they struggle to do this effectively. To address this issue, we experimented with a research method called participatory prompting, which involves researcher-mediated interactions to help users without deep knowledge of LLMs use the chatbot more effectively. The participatory prompting user study produces pre-defined prompt templates based on user queries, focusing on creating and refining personas, goals, scenarios, sample data, and data resources for user stories. These refined user stories will subsequently be converted into CQs.
Yihang Zhao, Bohui Zhang, Xi Hu, Shuyin Ouyang, Jongmo Kim, Nitisha Jain, Jacopo de Berardinis, Albert Meroño-Peñuela, Elena Simperl Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31799 Fri, 08 Nov 2024 00:00:00 -0800 Equitable Skin Disease Prediction Using Transfer Learning and Domain Adaptation https://ojs.aaai.org/index.php/AAAI-SS/article/view/31800 In the realm of dermatology, the complexity of diagnosing skin conditions manually necessitates the expertise of dermatologists. Accurate identification of various skin ailments, ranging from cancer to inflammatory diseases, is paramount. However, existing artificial intelligence (AI) models in dermatology face challenges, particularly in accurately diagnosing diseases across diverse skin tones, with a notable performance gap in darker skin. Additionally, the scarcity of publicly available, unbiased datasets hampers the development of inclusive AI diagnostic tools. To tackle the challenges in accurately predicting skin conditions across diverse skin tones, we employ a transfer-learning approach that capitalizes on the rich, transferable knowledge from various image domains. Our method integrates multiple pre-trained models from a wide range of sources, including general and specific medical images, to improve the robustness and inclusiveness of the skin condition predictions. We rigorously evaluated the effectiveness of these models using the Diverse Dermatology Images (DDI) dataset, which uniquely encompasses both underrepresented and common skin tones, making it an ideal benchmark for assessing our approach. Among all methods, Med-ViT emerged as the top performer due to its comprehensive feature representation learned from diverse image sources. To further enhance performance, we conducted domain adaptation using additional skin image datasets such as HAM10000. 
This adaptation significantly improved performance across all models. Sajib Acharjee Dip, Kazi Hasan Ibn Arif, Uddip Acharjee Shuvo, Ishtiaque Ahmed Khan, Na Meng Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31800 Fri, 08 Nov 2024 00:00:00 -0800 DeepAge: Harnessing Deep Neural Network for Epigenetic Age Estimation From DNA Methylation Data of Human Blood Samples https://ojs.aaai.org/index.php/AAAI-SS/article/view/31801 Accurate prediction of biological age from DNA methylation data is a critical endeavor in understanding the molecular mechanisms of aging and developing age-related disease interventions. Traditional epigenetic clocks rely on linear regression or basic machine learning models, which often fail to capture the complex, non-linear interactions within methylation data. This study introduces DeepAge, a novel deep learning framework utilizing Temporal Convolutional Networks (TCNs) to enhance the prediction of biological age from DNA methylation profiles, using CpGs selected by a dual-correlation-based approach. DeepAge leverages a sequence-based approach with dilated convolutions to effectively capture long-range dependencies between CpG sites, addressing the limitations of prior models by incorporating advanced network architectures including residual connections and dropout regularization. The dual-correlation feature selection enhances our model's predictive capabilities by identifying the most age-relevant CpG sites. Our model outperforms existing epigenetic clocks across multiple datasets, offering significant improvements in accuracy and providing deeper insights into the epigenetic determinants of aging.
The proposed method not only sets a new standard in age estimation but also highlights the potential of deep learning in biologically relevant feature extraction and interpretation, contributing to the broader field of computational biology and precision medicine. Sajib Acharjee Dip, Da Ma, Liqing Zhang Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31801 Fri, 08 Nov 2024 00:00:00 -0800 Health Equity in AI Development and Policy: An AI-enabled Study of International, National and Intra-national AI Infrastructures https://ojs.aaai.org/index.php/AAAI-SS/article/view/31802 This study examines how concerns related to equity in AI for health are reflected at the international, national, and sub-national level. Utilizing unsupervised learning over corpora of published AI policy documents and graph structurization and analysis, the research identifies and visualizes the presence and variation of these concerns across different geopolitical contexts. The findings reveal interesting differences in how these issues are prioritized and addressed, highlighting the influence of local policies and cultural factors. The study underscores the importance of tailored approaches to AI governance in healthcare, advocating for increased global collaboration and knowledge sharing to ensure equitable and ethical AI deployment. By providing a comprehensive analysis of policy documents, this research contributes to a deeper understanding of the global landscape of AI in health, potentially offering insights for policymakers and stakeholders. Manpriya Dua, J.P. 
Singh, Amarda Shehu Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31802 Fri, 08 Nov 2024 00:00:00 -0800 Investigating Remote Healthcare Accessibility with AI: Deep Learning and NLP-Based Knowledge Graph for Digitalized Diagnostics https://ojs.aaai.org/index.php/AAAI-SS/article/view/31803 Introduction: Recent advances in modern AI techniques have facilitated access to digital healthcare diagnosis, with capabilities for detecting illnesses. Problem: There is considerable scepticism toward e-health, coupled with high similarity among health symptoms, which hinders text data analysis for remote diagnosis, limiting remote services and affecting technology development. Objective: This research investigates and substantiates opportunities associated with computational leverage of text data analytics and cognitive extraction of knowledge insights to improve healthcare outcomes. Significance: The study presents an overview of an integrated deep learning (DL) and AI knowledge graph (KG) approach for public healthcare accessibility through remote diagnostics with NLP assistance. Method: This research applied both qualitative and quantitative analysis. Questionnaires were used to understand the computational analytics and cognitive extraction of AI knowledge graphs on healthcare data. Also, an AI model was built to detect and diagnose based on text data and to streamline five (5) related disease symptoms for each given text input. Results: The survey results were tested against hypotheses H1, H2, H3, H4, and H5. Results show that deep learning models and knowledge graphs can effectively lead to a well-defined class of data classification. Our model also exhibits an acceptable level of prediction of health symptoms based on text data. The significant group was accepted as an identified health issue, and the non-significant group was identified as a non-health issue.
Conclusion: The study concludes that a well-defined system based on a rigorous ethical healthcare standard can readily support determining a feasible remote diagnosis. Pascal Muam Mah Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31803 Fri, 08 Nov 2024 00:00:00 -0800 Robust and Explainable Stage Prediction in Duchenne Muscular Dystrophy https://ojs.aaai.org/index.php/AAAI-SS/article/view/31804 Duchenne muscular dystrophy (DMD) is one of the life-threatening rare genetic diseases, affecting millions of male minors across the globe. Given its progressive nature, we can demarcate the various stages of DMD through the loss of muscular movements, ambulation, respiratory difficulties, and cardiac dysfunction. In this work, we employ machine learning models for understanding the progression of DMD through the prediction of its stages. Our attempts to predict the stages of DMD on the data collected by the Molecular Diagnostics, Counseling, Care and Research Center (MDCRC) from 223 visits of 90 subjects demonstrate more than 80% accuracy with the state-of-the-art methods. We further study the biological/physiological importance of features in characterizing the stages of DMD. Promita Ghosh, Ragav Krishna, Lakshmi B. Raman, Malay Bhattacharyya Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31804 Fri, 08 Nov 2024 00:00:00 -0800 Emerging Directions in Leveraging Machine Intelligence for Explainable and Equity-Focused Simulation Models of Mental Health https://ojs.aaai.org/index.php/AAAI-SS/article/view/31805 Simulation models support policymakers, clinicians, and community members in identifying and evaluating interventions to improve population health.
While these models are particularly valuable to measure the fairness of interventions, such measurements may require simulating massive populations in order to isolate effects for specific groups (e.g., by race and ethnicity, gender, age). This can create a computational bottleneck, forcing tradeoffs such as simplifying a model (thus potentially losing accuracy) or running fewer simulations (thus accepting wider confidence intervals) in exchange for sufficiently large populations. In addition, policymakers, clinicians, and community members can be involved at the design stage of a simulation model but its complex set of rules often tends to preclude participation at later stages. This discussion considers the use of Machine Intelligence to tackle both challenges, by automatically scaling up simulations and explaining them to stakeholders. This potential is illustrated through the public health challenge of mental health, focusing on agent-based models for suicide prevention. Philippe J. Giabbanelli Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31805 Fri, 08 Nov 2024 00:00:00 -0800 Investigating the Fairness of Deep Learning Models in Breast Cancer Diagnosis Based on Race and Ethnicity https://ojs.aaai.org/index.php/AAAI-SS/article/view/31806 Breast cancer is the leading cancer affecting women globally. Despite deep learning models making significant strides in diagnosing and treating this disease, ensuring fair outcomes across diverse populations presents a challenge, particularly when certain demographic groups are underrepresented in training datasets. Addressing the fairness of AI models across varied demographic backgrounds is crucial. This study analyzes demographic representation within the publicly accessible Emory Breast Imaging Dataset (EMBED), which includes de-identified mammography and clinical data. 
We spotlight the data disparities among racial and ethnic groups and assess the biases in mammography image classification models trained on this dataset, specifically ResNet-50 and Swin Transformer V2. Our evaluation of classification accuracies across these groups reveals significant variations in model performance, highlighting concerns regarding the fairness of AI diagnostic tools. This paper emphasizes the imperative need for fairness in AI and suggests directions for future research aimed at increasing the inclusiveness and dependability of these technologies in healthcare settings. Code is available at: https://github.com/kuanhuang0624/EMBEDFairModels. Kuan Huang, Yingfeng Wang, Meng Xu Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31806 Fri, 08 Nov 2024 00:00:00 -0800 Automatic Screening for Children with Speech Disorder Using Automatic Speech Recognition: Opportunities and Challenges https://ojs.aaai.org/index.php/AAAI-SS/article/view/31807 Speech is a fundamental aspect of human life, crucial not only for communication but also for cognitive, social, and academic development. Children with speech disorders (SD) face significant challenges that, if unaddressed, can result in lasting negative impacts. Traditionally, speech and language assessments (SLA) have been conducted by skilled speech-language pathologists (SLPs), but there is a growing need for efficient and scalable SLA methods powered by artificial intelligence. This position paper presents a survey of existing techniques suitable for automating SLA pipelines, with an emphasis on adapting automatic speech recognition (ASR) models for children’s speech, an overview of current SLAs and their automated counterparts to demonstrate the feasibility of AI-enhanced SLA pipelines, and a discussion of practical considerations, including accessibility and privacy concerns, associated with the deployment of AI-powered SLAs. 
Dancheng Liu, Jason Yang, Ishan Albrecht-Buehler, Helen Qin, Sophie Li, Yuting Hu, Amir Nassereldine, Jinjun Xiong Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31807 Fri, 08 Nov 2024 00:00:00 -0800 Remote Kinematic Analysis for Mobility Scooter Riders Leveraging Edge AI https://ojs.aaai.org/index.php/AAAI-SS/article/view/31808 Current kinematic analysis for patients with upper or lower extremity challenges is usually performed indoors at the clinics, which may not always be accessible for all patients. On the other hand, the mobility scooter is a popular assistive tool used by people with mobility disabilities. In this study, we introduce a remote kinematic analysis system for mobility scooter riders to use in their local communities. In order to train the human pose estimation model for the kinematic analysis application, we have collected our own mobility scooter riding video dataset, which captures riders’ upper-body movements. The ground truth data is labeled by the collaborating clinicians. The evaluation results show high system accuracy both in the keypoints prediction and in the downstream kinematic analysis, compared with the general-purpose pose models. Our efficiency test results on NVIDIA Jetson Orin Nano also validate the feasibility of running the system in real-time on edge devices. 
Thanh-Dat Nguyen, Chenrui Zhang, Melvin Gitbumrungsin, Amar Raheja, Tingting Chen Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31808 Fri, 08 Nov 2024 00:00:00 -0800 Machine Unlearning in Digital Healthcare: Addressing Technical and Ethical Challenges https://ojs.aaai.org/index.php/AAAI-SS/article/view/31809 The "Right to be Forgotten," as outlined in regulatory frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), allows individuals to request the deletion of their personal data from deployed machine learning models. This provision ensures that individuals can maintain control over their personal information. In the digital health era, this right has become a critical concern for both patients and healthcare providers. To facilitate the effective removal of personal data from machine learning models, the concept of "machine unlearning" has been introduced. This paper highlights the technical and ethical challenges associated with machine unlearning in digital healthcare. By examining current unlearning methodologies and their limitations, we propose a roadmap for future research and development in this field. Shahnewaz Karim Sakib, Mengjun Xie Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31809 Fri, 08 Nov 2024 00:00:00 -0800 Promoting Equity in AI-Driven Mental Health Care for Marginalized Populations https://ojs.aaai.org/index.php/AAAI-SS/article/view/31810 Artificial Intelligence (AI) is increasingly used in mental health care, but its equitability is a pressing concern. This paper examines the potential biases in AI-driven mental health tools and their impact on marginalized communities. It explores several strategies to mitigate bias in AI-driven mental health tools, focusing on promoting equity and inclusivity. Nii Tawiah, Judith P. 
Monestime Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31810 Fri, 08 Nov 2024 00:00:00 -0800 Building Trustworthy AI: The Role of Patient and Public Involvement in Healthcare AI Development https://ojs.aaai.org/index.php/AAAI-SS/article/view/31811 AI is helping researchers make great strides in healthcare. However, there is a trust deficit in AI when applied to critical areas like healthcare. Hence, communicating the beneficial medical applications of AI and engaging the public with healthcare AI research is critical. One way to achieve this is by getting the community involved in co-designing better AI systems in healthcare projects. We argue that AI algorithms for healthcare should be co-designed with patients and healthcare workers, so that they are useful and trustworthy. We suggest a roadmap for this collaborative approach in AI model building. This will involve actively including patients with lived experience of a disease, as well as creating a research advisory group to walk patients through the process of AI model building. We suggest formulating and scoping a problem, and then generating a hypothesis that patients and scientists agree on. The road to building trustworthy AI systems may become easier if all stakeholders are involved in co-creating AI models. Soumya Banerjee Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31811 Fri, 08 Nov 2024 00:00:00 -0800 The Need for a Feminist Approach to Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31812 Artificial intelligence (AI) presents immense potential and significant challenges concerning algorithmic bias. This paper explores how feminist theory provides a critical lens for understanding and addressing algorithmic bias’s root causes and impacts. 
The historical context of systemic discrimination reveals how power imbalances have shaped data collection and analysis, leading to biased datasets that perpetuate inequalities through AI systems. The "black box" problem further obscures these biases, amplifying discriminatory outcomes in various domains. Feminist interventions, particularly intersectional feminism, offer a framework for uncovering how algorithmic bias interacts with multiple forms of oppression. Feminist data science challenges traditional methodologies and advocates for transparency, accountability, and diversity in AI development. Critiques of techno-solutionism highlight the need for broader societal change alongside technical fixes. By embracing a feminist approach, we can envision and work toward a future where AI technology is used for social justice, inclusivity, and collective liberation. Christo El Morr Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31812 Fri, 08 Nov 2024 00:00:00 -0800 Self-Attention Mechanisms as Representations for Gene Interaction Networks in Hypothesis-Driven Gene-based Transformer Genomics AI Models https://ojs.aaai.org/index.php/AAAI-SS/article/view/31813 In this position paper, we propose a framework for hypothesis-driven genomic AI using self-attention mechanisms in gene-based transformer models to represent gene interaction networks. Hypotheses can be introduced as attention masks in these transformer models with genes treated as tokens. This approach can bridge the gap between genotypic data and phenotypic observations by using prior knowledge-based masks in the transformer models. By using attention masks as hypotheses to guide the model fitting, the proposed framework can potentially assess various hypotheses to determine which best explains the experimental observations. 
The proposed framework can enhance the interpretability and predictive power of genomic AI to advance personalized medicine and promote healthcare equity. Hong Qin Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31813 Fri, 08 Nov 2024 00:00:00 -0800 Large-Scale Knowledge Graphs as a Tool for Enhanced Robotic Perception https://ojs.aaai.org/index.php/AAAI-SS/article/view/31814 Autonomous robotic systems depend on their perception and understanding of their environment for informed decision-making. One of the goals of the Semantic Web is to make knowledge on the Web machine-readable, which can significantly aid robots by providing background knowledge, and thereby support their understanding. In this paper, we present a reasoning system that uses the Ontology for Robotic Knowledge Acquisition (ORKA) to integrate the sensory data and perception algorithms of the robot, thereby enhancing its autonomous capabilities. This reasoning system is subsequently employed to retrieve and integrate information from the Semantic Web, thereby improving the robot's comprehension of its environment. To achieve this, the system employs a Perceived-Entity Linking (PEL) pipeline that associates regions in the sensory data of the robotic agent with concepts in a target knowledge graph. As a use-case for the linking process, the Perceived-Entity Typing task is used to determine the more fine-grained subclass of the perceived entities. Specifically, we provide an analysis of the performance of different knowledge graph embedding methods on the task using annotated observations and WikiData as a target knowledge graph. The experiments indicate that relying on pre-trained embedding methods results in increased performance when using TransE as the embedding method for the observations of the robot. 
This contribution advances the field by demonstrating the potential of integrating Semantic Web technologies with robotic perception, thereby enabling more nuanced and context-aware decision-making in autonomous systems. Mark Adamik, Ilaria Tiddi, Romana Pernisch, Stefan Schlobach Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31814 Fri, 08 Nov 2024 00:00:00 -0800 Agreeing to Disagree: Translating Representations to Uncover a Unified Representation for Social Robot Actions https://ojs.aaai.org/index.php/AAAI-SS/article/view/31815 Researchers and designers of social robots often approach robot control system design from a single perspective, such as designing autonomous robots, teleoperated robots, or robots programmed by an end-user. While each design approach presents tradeoffs between advantages and limitations, there is an opportunity to integrate these approaches so that people benefit from the best-fit approach for their use case. In this work, we propose integrating these seemingly distinct robot control approaches to uncover a common data representation of social actions defining social expression by a robot. We demonstrate the value of integrating an authoring system, teleoperation interface, and robot planning system by combining instances of these systems for robot storytelling. By relying on an integrated system, domain experts can define behaviors through end-user interfaces that teleoperators and autonomous robot programmers can use directly, thus providing a cohesive expert-driven robot system. Saad Elbeleidy, Jason R. 
Wilson Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31815 Fri, 08 Nov 2024 00:00:00 -0800 The Role of Embodiment in Learning Representations: Discovering Space Topology via Sensory-Motor Prediction https://ojs.aaai.org/index.php/AAAI-SS/article/view/31816 This paper explores the crucial role of embodiment in learning representations for space topology in robotics. Embodiment, the ability of an agent to interact with its environment and receive sensory feedback, is fundamental to developing accurate and efficient representations. In this work, we investigate this by applying an action-conditional prediction algorithm to data collected from a simulated environment, aiming to learn the topology of the environment through sequences of random interactions. In a simple mobile-robot-like scenario, we demonstrate how the agent can discover the topology of its environment by leveraging sensory-motor interactions. Our results demonstrate the importance of embodiment in the development of representations, its potential applicability to robotic tasks, and a simple but effective method of integrating actions into a learning loop. We suggest that building abstract representations through the use of action-conditional prediction is a step towards unification of the representations used in robotics. Oksana Hagen, Swen Gaudl Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31816 Fri, 08 Nov 2024 00:00:00 -0800 Using Social Robots and AI to Perform Genetic Risk Assessment for Cancer https://ojs.aaai.org/index.php/AAAI-SS/article/view/31817 Genetic risk assessment (GRA) and genetic counseling have become integral to optimal patient care for patients with cancer. At present, there is a limited number of qualified healthcare providers who provide this service. 
To assist professionals in the GRA process, we have combined social robotics and retrieval-augmented generative artificial intelligence (RAG AI) to provide education related to hereditary cancer to be included in GRA sessions for individuals at risk. This GRA application pushes the boundary on previously available chatbots and AI systems by creating a novel and interactive experience enhanced by professionally verified information. In the future, we seek to further improve the application and obtain feedback from both GRA professionals and potential end-users, which will be used to enhance the system and provide customized risk assessment. Overall, our GRA system takes the next step towards informing patients of their hereditary cancer risk and pertinent care options. Tyler Morris, Conor Brown, Seungwoo An, Jeremiah Augustine, Andrew Ward, Laura Enomoto, Erin Campbell, Xiaopeng Zhao Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31817 Fri, 08 Nov 2024 00:00:00 -0800 4D-based Robot Navigation Using Relativistic Image Processing https://ojs.aaai.org/index.php/AAAI-SS/article/view/31818 Machine perception is an important prerequisite for safe interaction and locomotion in dynamic environments. This requires not only the timely perception of surrounding geometries and distances but also the ability to react to changing situations through a robot's predefined, learned, but also reusable skills, so that physical damage or bodily harm can be avoided. In this context, 4D perception offers the possibility of predicting one’s own position and changes in the environment over time. In this paper, we present a 4D-based approach to robot navigation using relativistic image processing. Relativistic image processing handles the temporally related sensor information in a tensor model within a constructive 4D space. 
4D-based navigation expands the causal understanding and the resulting interaction radius of a robot through the use of visual and sensory 4D information. Simone Müller, Dieter Kranzlmüller Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31818 Fri, 08 Nov 2024 00:00:00 -0800 Human Perception of Robot Failure and Explanation During a Pick-and-Place Task https://ojs.aaai.org/index.php/AAAI-SS/article/view/31819 In recent years, researchers have extensively used non-verbal gestures, such as head and arm movements, to express a robot's intentions and capabilities to humans. Inspired by past research, we investigated how different explanation modalities can aid human understanding and perception of how robots communicate failures and provide explanations during block pick-and-place tasks. Through an in-person, within-subjects experiment with 24 participants, we studied four modes of explanation across four types of failures. Some of these were chosen to mimic combinations from prior work in order to both extend and replicate past findings by the community. We found that speech explanations were preferred to non-verbal and visual cues in terms of similarity to humans. Additionally, projected images had an explanatory effect comparable to that of other non-verbal modalities. Our results were also consistent with a prior online study. Huy Quyen Ngo, Elizabeth J. Carter, Aaron Steinfeld Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31819 Fri, 08 Nov 2024 00:00:00 -0800 Statewise: A Petri Net-Based Visual Editor for Specifying Robotic Systems https://ojs.aaai.org/index.php/AAAI-SS/article/view/31820 We present Statewise, a visual editor designed to enable developers to model and simulate complex systems using colored Petri nets in an intuitive, graphical way. 
Utilizing Statewise, we explore two use cases to demonstrate its capabilities. We also discuss potential enhancements to further extend its applicability in more complex scenarios. Zejun Zhou, Yuchen Jin, Pragathi Praveena Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31820 Fri, 08 Nov 2024 00:00:00 -0800 Towards Pragmatic Temporal Alignment in Stateful Generative AI Systems: A Configurable Approach https://ojs.aaai.org/index.php/AAAI-SS/article/view/31821 Temporal alignment in stateful generative artificial intelligence (AI) systems remains an underexplored area, particularly beyond goal-driven approaches in planning. Stateful refers to maintaining a persistent memory or "state" across runs or sessions. This helps with referencing past information to make system outputs more contextual and relevant. This position paper proposes a framework for temporal alignment with configurable toggles. We present four alignment mechanisms: knowledge graph path-based, neural score-based, vector similarity-based, and sequential process-guided alignment. By offering these interchangeable approaches, we aim to provide a flexible solution adaptable to complex and real-world application scenarios. This paper discusses the potential benefits and challenges of each alignment method and argues for the importance of a configurable system in advancing progress in stateful generative AI systems. 
Kaushik Roy, Yuxin Zi, Amit Sheth Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31821 Fri, 08 Nov 2024 00:00:00 -0800 Promoting Transparent and Consistent Frameworks for Interactive Digital Testimonies: A Case Study on Preserving Zilli Schmidt’s Story https://ojs.aaai.org/index.php/AAAI-SS/article/view/31822 Interactive Digital Testimonies (IDTs) combine digital archives of purpose-made recordings, conversational agents, and immersive display technology to preserve and recreate interactive conversations with contemporary witnesses in a lifelike manner. IDTs represent a specific subcategory of (Embodied) Conversational Agents (ECAs) due to the constraint of not including AI-generated or otherwise synthetic responses or actions. While numerous IDTs have been developed over the last few years, the descriptions of these systems and their respective evaluations frequently lack consistency and transparency, which has led to considerable heterogeneity and a lack of comparability. To counteract these developments, we present the IDT of Holocaust survivor and member of the German-speaking Romani community Zilli Schmidt, which we have been developing since 2021. We transparently share both content-related and technical features of this IDT. Fabian Heindl, Daniel Kolb, Markus Gloe Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31822 Fri, 08 Nov 2024 00:00:00 -0800 Towards a Verifiable Toolchain for Robotics https://ojs.aaai.org/index.php/AAAI-SS/article/view/31823 There is a growing need for autonomous robots to complete complex tasks robustly in dynamic and unstructured environments. However, current robot performance is limited to simple tasks in controlled environments. 
To improve robot autonomy in complex environments, the robot's deliberation system must be able to synthesise correct plans for a task and generate contingency plans for handling anomalous scenarios that were not expected at design time. The robustness of such a system can be quantified using techniques for formal verification and validation. This paper outlines the progress of EU project CONVINCE (CONtext-aware Verifiable and adaptIve dyNamiC dEliberation), which aims to develop a software toolchain that aids developers in designing, developing, and deploying robot deliberation systems that are fully verified. We describe our modelling approach, each of the toolchain components, and how they interact. We also discuss survey results that demonstrate the demand for a verifiable toolchain among the robotics community. Charlie Street, Yazz Warsame, Masoumeh Mansouri, Michaela Klauck, Christian Henkel, Marco Lampacrescia, Matteo Palmas, Ralph Lange, Enrico Ghiorzi, Armando Tacchella, Razane Azrou, Raphaël Lallement, Matteo Morelli, Ginny I. Chen, Danielle Wallis, Stefano Bernagozzi, Stefano Rosa, Marco Randazzo, Sofia Faraci, Lorenzo Natale Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31823 Fri, 08 Nov 2024 00:00:00 -0800 2022 Flood Impact in Pakistan: Remote Sensing Assessment of Agricultural and Urban Damage https://ojs.aaai.org/index.php/AAAI-SS/article/view/31824 Pakistan was hit by the world's deadliest flood in June 2022, causing agricultural and infrastructure damage across the country. Remote sensing technology offers a cost-effective and efficient method for flood impact assessment. This study aims to assess the impact of flooding on crops and built-up areas. 
Landsat 9 imagery, European Space Agency-Land Use/Land Cover (ESA-LULC) and Soil Moisture Active Passive (SMAP) data are used to identify and quantify the extent of flood-affected areas, crop damage, and built-up area destruction. The findings indicate that Sindh, a province in Pakistan, suffered the most; the flooding destroyed most Kharif season crops, which are typically cultivated from March to November. SMAP satellite data further indicate that high post-flood soil moisture also caused a significant delay in the cultivation of Rabi crops. The findings of this study provide valuable information for decision-makers and stakeholders involved in flood risk management and disaster response. Hafiz Muhammad Abubakar, Arbaz Khan, Aqs Younas, Zia Tahseen, Aqeel Arshad, Murtaza Taj, Usman Nazir Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31824 Fri, 08 Nov 2024 00:00:00 -0800 Continual Learning and Adaptation In Resource-Constrained Environments (CLAIRE) https://ojs.aaai.org/index.php/AAAI-SS/article/view/31825 As climate-related extreme events continue to increase and impact the world, of particular importance are the threats posed to food, agriculture, and water (FAW) systems. Deep learning could benefit FAW systems for classification of threats and for forecasting potential future events given historical patterns. However, many FAW systems are faced with operational environments that are resource-constrained, which could present challenges in deploying deep learning models. Continual learning offers a way to overcome certain deployment challenges by enabling deep learning models that are more robust to data distribution changes, without the need for GPUs or off-line training. 
We describe a continual learning approach to forecasting extreme air quality events developed for the National Oceanic and Atmospheric Administration to provide operational air quality guidance to the Continental United States. We describe how this deep learning model is resilient to future data distribution changes by performing curriculum learning, and how it can be deployed as a continual learner, offering better predictive performance for resource-constrained environments. Megan Baker, Jennifer Sleeman, Christopher Ribaudo, Ivanka Stajner, Kai Wang, Jianping Huang, Ho-Chun Huang, Raffaele Montuoro, Haixia Liu Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31825 Fri, 08 Nov 2024 00:00:00 -0800 ML-based Anomaly Detection for CAN Bus Network in Agriculture Machinery https://ojs.aaai.org/index.php/AAAI-SS/article/view/31826 The adoption of advanced automation and next-generation technologies like the Internet of Things (IoT) and modern communication networks has revolutionized the food and agriculture sector, boosting the efficiency and precision of farm machinery. However, this increased inter-connectivity has also exposed significant vulnerabilities, particularly in Controller Area Network (CAN) protocols, widely used in advanced agricultural machinery and equipment. Due to its lack of inherent security features, CAN is susceptible to various cyber-attacks, potentially leading to severe consequences if these attacks remain undetected and unmitigated. This paper introduces a supervised machine learning (ML)-based anomaly detection system (CAN-ADS) designed to detect various cyber-attacks on CAN-based agricultural machinery. The system leverages network traffic augmentation and data balancing techniques to train ML algorithms on CAN-specific datasets. 
Experimental results show that CAN-ADS achieves high accuracy (approximately 98%) and true-positive rates with low false-negative rates (approximately 1%). Souradeep Bhattacharya, Ranuka G. Gallolukankanamalage, Brian L. Steward, Manimaran Govindarasu Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31826 Fri, 08 Nov 2024 00:00:00 -0800 Self-attention-based Diffusion Model for Time-series Imputation https://ojs.aaai.org/index.php/AAAI-SS/article/view/31827 Time-series modeling is essential for applications in agriculture, weather forecasting, food production, and more. However, missing data due to sensor malfunctions, power outages, and human errors is a common issue, complicating the training of machine learning models. We propose a diffusion-based generative model to address this problem and fill the gaps in the data. Our approach captures feature and time correlations through a two-stage imputation process. Our model outperforms state-of-the-art imputation methods and is more scalable in its use of GPU resources. Mohammad Rafid Ul Islam, Prasad Tadepalli, Alan Fern Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31827 Fri, 08 Nov 2024 00:00:00 -0800 Towards an AI-Driven Cyber-Physical System for Closed-Loop Control of Plant Diseases https://ojs.aaai.org/index.php/AAAI-SS/article/view/31828 Plant diseases are a major biosecurity threat to food production and the bio-energy industry. Early detection and control of plant diseases can improve producers’ profitability and reduce environmental impacts from chemical inputs. We propose to develop a cyber-physical system with three major components: an AI-driven imaging system for early stress detection, an autonomous robotic system to collect plant samples, and a sequencing pipeline to detect molecular signatures of pathogens for disease confirmation. 
This system is envisioned to control a detected disease by removing or pruning infected plants. This manuscript describes the major milestones achieved by this CPS project and provides a future perspective on disease control automation in agriculture. Abhisesh Silwal, Xuemei M. Zhang, Thomas Hadlock, Jacob Neice, Shadab Haque, Adwait Kaundanya, Chang Lu, Boris A. Vinatzer, George Kantor, Song Li Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31828 Fri, 08 Nov 2024 00:00:00 -0800 Towards Water Systems Security and Sustainability Using Deep Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31829 Wastewater treatment plants (WWTPs) face significant challenges due to varying influent conditions, multiple operational constraints, and a constant lack of reliable datasets to manage and monitor water quality and flow using automated approaches. This paper introduces a novel framework showcasing soft sensors that are aimed at enhancing the sustainability and security of wastewater quality indicators using deep learning. We develop a trustworthy soft sensor that utilizes artificial intelligence (AI) approaches to provide nitrate (NO3) predictions at the WWTP, as well as context-based evaluations to estimate overall predictive uncertainty. Contextual elements are injected into the model to allow for more accurate and relevant water quality monitoring, especially in different conditions (such as rain and snow). In addition, in this paper, we present a time-series Generative Adversarial Network (GAN), namely H2OGAN, to address data scarcity and to improve model training by generating synthetic data that mirrors the statistical properties of water datasets from both controlled and real-world environments. The synthetic data, in turn, also support training against data poisoning attacks on water supply systems, rendering these systems more secure. 
Our results indicate that integrating soft sensors and H2OGAN can significantly improve the operational efficiency of WWTPs, providing robust AI-driven tools for secure and sustainable water monitoring solutions. Chhayly Sreng, Justice Lin, Dong Sam Ha, Sook S. Ha, A. Lynn Abbott, Feras A. Batarseh Copyright (c) 2024 Association for the Advancement of Artificial Intelligence https://ojs.aaai.org/index.php/AAAI-SS/article/view/31829 Fri, 08 Nov 2024 00:00:00 -0800