Proceedings of the AAAI Symposium Series
https://ojs.aaai.org/index.php/AAAI-SS

The AAAI Symposium Series, previously published as AAAI Technical Reports, is held three times a year (Spring, Summer, Fall) and is designed to bring colleagues together to share ideas and learn from each other's artificial intelligence research. The series affords participants a smaller, more intimate setting; topics change each year, and the limited seating capacity and relaxed atmosphere allow for workshop-like interaction. The format allows participants to devote considerably more time to feedback and discussion than typical one-day workshops, making it an ideal venue for bringing together new communities in emerging fields.

The AAAI Spring Symposium Series is typically held during spring break (generally in March) on the west coast. The AAAI Summer Symposium Series is the newest of the annual meetings run in parallel at a common site; the inaugural 2023 Summer Symposium Series was held July 17-19, 2023, in Singapore. The AAAI Fall Symposium Series is usually held on the east coast during late October or early November.

AAAI Press. en-US. Proceedings of the AAAI Symposium Series. ISSN 2994-4317.

Centering Humans in Artificial Intelligence
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31170
AI systems are breaking into new domains and applications, and it is pivotal to center humans in contemporary AI systems and contemplate what this means. This discussion considers three perspectives, or human roles, in AI (users, contributors, and researchers-in-training) to illustrate this notion.
Cecilia O. Alm. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 2–3. DOI: 10.1609/aaaiss.v3i1.31170

The Arithmetic of Machine Decision: How to Find the Symmetries of Complete Chaos
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31171
The present work is deliberately placed in a context capable of defining the requirements expressed by machine decision-making calculations. The informational nature of a decision requires abandoning any structure-preserving invariant and, on the contrary, switching into total chaos, a necessary and sufficient condition for exploiting the symmetries that allow the calculation to converge. Decision arithmetic is the best way to precisely define the nature of these symmetries.
Olivier Bartheye, Laurent Chaudron. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 4–11. DOI: 10.1609/aaaiss.v3i1.31171

Toward Risk Frameworks for Autonomous Systems that Take Societal Safety-related Benefits into Account
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31172
Current risk frameworks, such as probabilistic risk analysis methodologies, do not take societal safety-related benefits into account. To inform human-AI collaborative system development, this manuscript highlights the need for updated risk frameworks and offers suggestions for relevant considerations.
Ellen J. Bass, Steven Weber. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 12–13. DOI: 10.1609/aaaiss.v3i1.31172

Communicating Unnamable Risks: Aligning Open World Situation Models Using Strategies from Creative Writing
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31173
How can a machine warn its human collaborator about an unexpected risk if the machine does not possess the explicit language required to name it? This research transfers techniques from creative writing into a conversational format that could enable a machine to convey a novel, open-world threat. Professional writers specialize in communicating unexpected conditions with inadequate language, using overlapping contextual and analogical inferences to adjust a reader's situation model. This paper explores how a similar approach could be used in conversation by a machine to adapt its human collaborator's situation model to include unexpected information. This method is necessarily bi-directional, as the process of refining unexpected meaning requires each side to check in with the other and incrementally adjust. A proposed method and example are presented, set five years hence, to envisage a new kind of capability in human-machine interaction. A near-term goal is to develop foundations for autonomous communication that can adapt across heterogeneous contexts, especially when a trusted outcome is critical. A larger goal is to make visible the level of communication above explicit communication, where language is collaboratively adapted.
Beth Cardier. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 14–21. DOI: 10.1609/aaaiss.v3i1.31173

Subjectivity in Unsupervised Machine Learning Model Selection
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31174
Model selection is a necessary step in unsupervised machine learning. Despite numerous criteria and metrics, model selection remains subjective. A high degree of subjectivity may lead to questions about repeatability and reproducibility of various machine learning studies and doubts about the robustness of models deployed in the real world. Yet, the impact of modelers' preferences on model selection outcomes remains largely unexplored. This study uses the Hidden Markov Model as an example to investigate the subjectivity involved in model selection. We asked 33 participants and three Large Language Models (LLMs) to make model selections in three scenarios. Results revealed variability and inconsistencies in both the participants' and the LLMs' choices, especially when different criteria and metrics disagree. Sources of subjectivity include varying opinions on the importance of different criteria and metrics, differing views on how parsimonious a model should be, and how the size of a dataset should influence model selection. The results underscore the importance of developing a more standardized way to document subjective choices made in model selection processes.
Wanyi Chen, Mary Cummings. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 22–29. DOI: 10.1609/aaaiss.v3i1.31174
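As a concrete illustration of how selection criteria can point in different directions, the short Python sketch below scores three hypothetical HMMs with two common information criteria, AIC and BIC. The specific criteria, parameter counts, and log-likelihoods are invented for illustration and are not taken from the study above.

```python
import math

n_obs = 500  # length of the (hypothetical) observation sequence

# candidate HMMs: hidden states -> (free parameters, fitted log-likelihood)
candidates = {
    2: (10, -1450.0),
    3: (21, -1415.0),
    4: (36, -1395.0),
}

def aic(k, loglik):
    # Akaike information criterion: 2k - 2 ln L
    return 2 * k - 2 * loglik

def bic(k, loglik, n):
    # Bayesian information criterion: k ln n - 2 ln L (penalizes size more)
    return k * math.log(n) - 2 * loglik

for s, (k, ll) in candidates.items():
    print(f"{s} states: AIC={aic(k, ll):7.1f}  BIC={bic(k, ll, n_obs):7.1f}")

best_aic = min(candidates, key=lambda s: aic(*candidates[s]))
best_bic = min(candidates, key=lambda s: bic(*candidates[s], n_obs))
print(f"AIC prefers {best_aic} states; BIC prefers {best_bic} states.")
```

With these numbers AIC favors the 4-state model while BIC favors the 3-state one, leaving the final choice, and its documentation, to the modeler.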
Learning Subjective Knowledge with Designer-Like Thinking and Interactive Machine Teaching
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31175
Aesthetics is a crucial aspect of design that plays a critical role in the creation process and customers' perception of outcomes. However, aesthetic expressions are highly subjective and nuanced. Getting them right often relies on designers' experience and much trial and error. Our research first investigated how designers and artists curated aesthetic materials and utilized them in their daily practice. Based on the results, we applied Langley's human-like learning framework to develop an interactive Style Agent system. It aims to learn designers' aesthetic expertise and utilize AI's capability to empower practitioners' creativity. In this paper, we used typographic posters as examples and conducted a preliminary evaluation of our prototype. The results showed that our system provided a modular structure for effortlessly annotating users' subjective perceptions and making the visualizations easy to interpret through performance. Overall, it acts as a facilitator that helps designers enhance their own aesthetic awareness and empowers them to expand their design space.
Yaliang Chuang, Poyang David Huang. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 30–34. DOI: 10.1609/aaaiss.v3i1.31175

Shaped-Charge Architecture for Neuro-Symbolic Systems
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31176
In spite of the great progress of large language models (LLMs) in recent years, there is a popular belief that their limitations need to be addressed "from outside," by building hybrid neuro-symbolic systems that add robustness, explainability, perplexity, and verification at a symbolic level. We propose shaped-charge learning in the form of a Meta-learning/DNN-kNN architecture that enables the above features by integrating an LLM with explainable nearest-neighbor learning (kNN) to form the object level, with deductive, reasoning-based meta-level control learning processes performing validation and correction of predictions in a way that is more interpretable by humans.
Boris Galitsky. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 35–42. DOI: 10.1609/aaaiss.v3i1.31176

Perception-Dominant Control Types for Human/Machine Systems
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31177
We explore a novel approach to complex domain modelling by emphasising primitives based on perception. The usual approach focuses either on actors or on cognition associated with tokens that convey information. In related research, we have examined using effects and/or outcomes as primitives, and influences as the generator of those outcomes via categoric functors. That approach (influences, effects) has advantages: it leverages what is known and supports the expanded logics we use, where we want to anticipate and engineer possible futures. But it has weaknesses when placed in a dynamic human-machine system where what is perceived or assumed matters more than what is known. The work reported here builds on previous advances in type specification and reasoning to 'move the primitives forward' more toward situation encounter and away from situation understanding. The goal concerns shared human-machine systems where:
• reaction times are shorter than the traditional ingestion/comprehension/response loop can support;
• situations are too complex or dynamic for current comprehension by any means;
• there simply is insufficient knowledge about governing situations for the comprehension model to support action; and/or
• the many machine/human and system/system interfaces are incapable of conveying the needed insights; that is, the communication channels choke the information or influence flows.
While the approach is motivated by the above unfriendly conditions, we expect significant benefits. We will explore these but engineer toward a federated decision paradigm where decisions by a local human, machine, or synthesis are not whole-situation-aware, but collectively 'swarm' locally across the larger system to be more effective, 'wiser' than a conventional paradigm may produce. The proposed implementation strategy extends an existing 'playbooks as code' project whose goals are to advise on local action by modelling and gaming complex system dynamics. A sponsoring context is 'grey zone' competition that avoids armed conflict but can segue to a mixed-system course-of-action advisory. The general context is costly 'blue swan' risk in large commercial and government enterprises. The method will focus on patterns and relationships in synthetic categories used to model type transitions within topological models of system influence. One may say this is applied intuitionistic type theory, following mechanisms generally described by synthetic differential geometry. In this context, the motivating supposition of this study is that information-carrying influence channels are best modelled in our challenging domain as perceived types rather than understood types.
Ted Goranson. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 43–44. DOI: 10.1609/aaaiss.v3i1.31177

On Replacing Humans with Large Language Models in Voice-Based Human-in-the-Loop Systems
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31178
It is easy to assume that Large Language Models (LLMs) will seamlessly take over applications, especially those that are largely automated. In the case of conversational voice assistants, commercial systems have been widely deployed and used over the past decade. However, are we indeed on the cusp of the future we envisioned? There exists a socio-technical gap between what people want to accomplish and the actual capability of technology. In this paper, we present a case study comparing two voice assistants built on Amazon Alexa: one employing a human-in-the-loop workflow, the other utilizing an LLM to engage in conversations with users. In our comparison, we discovered that the issues arising in current human-in-the-loop and LLM systems are not identical. However, the presence of a set of similar issues in both systems leads us to believe that focusing on the interaction between users and systems is crucial, perhaps even more so than focusing solely on the underlying technology itself. Merely enhancing the performance of the workers or the models may not adequately address these issues. This observation prompts our research question: What are the overlooked contributing factors in the effort to improve the capabilities of voice assistants, which might not have been emphasized in prior research?
Shih-Hong Huang, Ting-Hao 'Kenneth' Huang. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 45–49. DOI: 10.1609/aaaiss.v3i1.31178

Responsible Integration of Large Language Models (LLMs) in Navy Operational Plan Generation
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31179
This paper outlines an approach for assessing and quantifying the risks associated with integrating Large Language Models (LLMs) in generating naval operational plans.
It aims to explore the potential benefits and challenges of LLMs in this context and to suggest a methodology for a comprehensive risk assessment framework.
Simon Kapiamba, Hesham Fouad, Ira S. Moskowitz. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 50–53. DOI: 10.1609/aaaiss.v3i1.31179

Credit Assignment: Challenges and Opportunities in Developing Human-like Learning Agents
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31180
Temporal credit assignment is the process of distributing delayed outcomes to each action in a sequence, which is essential for learning to adapt and make decisions in dynamic environments. While computational methods in reinforcement learning, such as temporal difference (TD), have shown success in tackling this issue, it remains unclear whether these mechanisms accurately reflect how humans handle feedback delays. Furthermore, cognitive science research has not fully explored the credit assignment problem in humans and cognitive models. Our study uses a cognitive model based on Instance-Based Learning Theory (IBLT) to investigate various credit assignment mechanisms, including equal credit, exponential credit, and TD credit, using the IBL decision mechanism in a goal-seeking navigation task with feedback delays and varying levels of decision complexity. We compare the performance and process measures of the different models with human decision-making in two experiments. Our findings indicate that the human learning process cannot be fully explained by any of the mechanisms. We also observe that decision complexity affects human behavior but not model behavior. By examining the similarities and differences between human and model behavior, we summarize the challenges and opportunities for developing learning agents that emulate human decisions in dynamic environments.
Thuy Ngoc Nguyen, Chase McDonald, Cleotilde Gonzalez. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 54–57. DOI: 10.1609/aaaiss.v3i1.31180
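The three credit assignment mechanisms named above can be stated compactly in code. The toy Python sketch below distributes a delayed outcome over a five-step action sequence using equal credit, exponential credit, and a TD(0)-style update; it is a schematic illustration, not the authors' IBL implementation.

```python
# Toy credit assignment schemes: spread a delayed outcome of 1.0
# across a 5-step action sequence.
def equal_credit(n_steps, outcome):
    # every action gets the same share of the outcome
    return [outcome / n_steps] * n_steps

def exponential_credit(n_steps, outcome, decay=0.5):
    # actions closer to the outcome receive exponentially more credit
    weights = [decay ** (n_steps - 1 - t) for t in range(n_steps)]
    total = sum(weights)
    return [outcome * w / total for w in weights]

def td_credit(rewards, values, alpha=0.1, gamma=0.9):
    # one TD(0) sweep over estimated state values; repeated episodes
    # propagate credit backward from the rewarded step
    updated = list(values)
    for t in range(len(rewards)):
        next_v = updated[t + 1] if t + 1 < len(updated) else 0.0
        target = rewards[t] + gamma * next_v
        updated[t] += alpha * (target - updated[t])
    return updated

print("equal:      ", equal_credit(5, 1.0))
print("exponential:", [round(c, 3) for c in exponential_credit(5, 1.0)])
print("TD(0):      ", [round(v, 3) for v in td_credit([0, 0, 0, 0, 1.0], [0.0] * 5)])
```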
Exploiting Machine Learning Bias: Predicting Medical Denials
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31181
For a large healthcare system, even ignoring the costs associated with managing the patient encounter denial process (staffing, contracts, etc.), total denial-related amounts can exceed $1B annually in gross charges. Being able to predict a denial before it occurs has the potential for tremendous savings, and using machine learning to predict denials could enable denial-preventing interventions. However, the challenges of data imbalance make creating a single generalized model difficult. We employ two biased models in a hybrid voting scheme to achieve results that exceed the state of the art and allow for incremental predictions as the encounter progresses. The model has the added benefit of monitoring the human-driven denial process that affects the underlying distribution on which the models' bias is based.
Stephen Russell, Fabio Montes Suros, Ashwin Kumar. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 58–63. DOI: 10.1609/aaaiss.v3i1.31181

A Generative AI-Based Virtual Physician Assistant
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31182
We describe "Dr. A.I.", a virtual physician assistant that uses generative AI to conduct a pre-visit patient interview and to create a draft clinical note for the physician. We document the effectiveness of Dr. A.I. by measuring the concordance of the actual diagnosis made by the doctor with the generated differential diagnosis (DDx) list. This application demonstrates the practical healthcare capabilities of a large language model to improve the efficiency of doctor visits while also addressing safety concerns for the use of generative AI in the workflow of patient care.
Geoffrey W. Rutledge, Alexander Sivura. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 64–65. DOI: 10.1609/aaaiss.v3i1.31182

Human-AI Interaction in the Age of Large Language Models
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31183
Large language models (LLMs) have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI interaction using LLMs. The first explores how LLMs transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second looks at social skill learning via LLMs by empowering therapists and learners with LLM-empowered feedback and deliberative practices. These two works demonstrate how human-AI collaboration via LLMs can empower individuals and foster positive change. We conclude by discussing how LLMs enable collaborative intelligence by redefining the interactions between humans and AI systems.
Diyi Yang. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 66–67. DOI: 10.1609/aaaiss.v3i1.31183

Accounting for Human Engagement Behavior to Enhance AI-Assisted Decision Making
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31184
Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the usage of AI-based decision aids in human decision-making processes has resulted in a new paradigm of AI-assisted decision making: the AI-based decision aid provides a decision recommendation to the human decision makers, while humans make the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans engage with the AI-based decision aid in these decision-making processes, and how to promote the effectiveness of the human-AI team in decision making. In this talk, I'll discuss a few examples illustrating that when AI is used to assist humans, both individual decision makers and groups of decision makers, people's engagement with the AI assistance is largely subject to their heuristics and biases, rather than careful deliberation of the respective strengths and limitations of AI and themselves. I'll then describe how to enhance AI-assisted decision making by accounting for human engagement behavior in the designs of AI-based decision aids. For example, AI recommendations can be presented to decision makers in a way that promotes their appropriate trust and reliance on AI by leveraging or mitigating human biases, informed by the analysis of human competence in decision making. Alternatively, AI-assisted decision making can be improved by developing AI models that can anticipate and adapt to the engagement behavior of human decision makers.
Ming Yin. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 68–70. DOI: 10.1609/aaaiss.v3i1.31184

Personalised Course Recommender: Linking Learning Objectives and Career Goals through Competencies
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31185
This paper presents a Knowledge-Based Recommender System (KBRS) that aims to align course recommendations with students' career goals in the field of information systems. The developed KBRS uses the European Skills, Competences, Qualifications and Occupations (ESCO) ontology, course descriptions, and a Large Language Model (LLM) such as ChatGPT 3.5 to bridge course content with the skills required for specific careers in information systems. In this context, no reference is made to the previous behavior of students. The system links course content to the skills required for different careers, adapts to students' changing interests, and provides clear reasoning for the courses proposed. An LLM is used to extract learning objectives from course descriptions and to map the promoted competency. The system evaluates the degree of relevance of courses based on the number of job-related skills supported by the learning objectives. This recommendation is supported by information that facilitates decision-making. The paper describes the system's development, methodology, and evaluation, and highlights its flexibility, user orientation, and adaptability. It also discusses the challenges that arose during the development and evaluation of the system.
Nils Beutling, Maja Spahic-Bogdanovic. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 72–81. DOI: 10.1609/aaaiss.v3i1.31185

GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31186
Integrating large language models with knowledge graphs derived from domain-specific data represents an important advancement towards more powerful and factual reasoning. As these models grow more capable, it is crucial to enable them to perform multi-step inferences over real-world knowledge graphs while minimizing hallucination. While large language models excel at conversation and text generation, their ability to reason over domain-specialized graphs of interconnected entities remains limited. For example, can we query a model to identify the optimal contact in a professional network for a specific goal, based on relationships and attributes in a private database? The answer is no – such capabilities lie beyond current methods. However, this question underscores a critical technical gap that must be addressed. Many high-value applications in areas such as science, security, and e-commerce rely on proprietary knowledge graphs encoding unique structures, relationships, and logical constraints. We introduce a fine-tuning framework for developing Graph-aligned Language Models (GLaM) that transforms a knowledge graph into an alternate text representation with labeled question-answer pairs. We demonstrate that grounding the models in specific graph-based knowledge expands the models' capacity for structure-based reasoning. Our methodology leverages the large language model's generative capabilities to create the dataset and proposes an efficient alternative to retrieval-augmented-generation-style methods.
Stefan Dernbach, Khushbu Agarwal, Alejandro Zuniga, Michael Henry, Sutanay Choudhury. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 82–89. DOI: 10.1609/aaaiss.v3i1.31186
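The core move described above, turning a graph neighborhood into text with labeled question-answer pairs, can be sketched in a few lines of Python. The tiny triple store and templates below are invented for illustration; the paper's actual neighborhood partitioning and generative subgraph encoding are more involved.

```python
# Sketch: verbalize a node's graph neighborhood and pair it with a
# labeled QA example, the unit a fine-tuning dataset is built from.
triples = [
    ("alice", "works_at", "AcmeBio"),
    ("alice", "coauthor_of", "bob"),
    ("bob", "expert_in", "protein folding"),
]

def verbalize_neighborhood(entity, triples):
    # render every triple touching the entity as a plain-text fact
    facts = [f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples
             if entity in (h, t)]
    return " ".join(facts)

def make_qa_example(entity, question, answer, triples):
    return {
        "context": verbalize_neighborhood(entity, triples),
        "question": question,
        "answer": answer,
    }

example = make_qa_example(
    "alice",
    "Who in alice's network should she contact about protein folding?",
    "bob",
    triples,
)
print(example)
```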
Modeling Patterns for Neural-Symbolic Reasoning Using Energy-based Models
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31187
Neural-symbolic (NeSy) AI strives to empower machine learning and large language models with fast, reliable predictions that exhibit commonsense and trustworthy reasoning by seamlessly integrating neural and symbolic methods. With such a broad scope, several taxonomies have been proposed to categorize this integration, emphasizing knowledge representation, reasoning algorithms, and applications. We introduce a knowledge representation-agnostic taxonomy focusing on the neural-symbolic interface, capturing methods that reason with probability, logic, and arithmetic constraints. Moreover, we derive expressions for the gradients of a prominent class of learning losses and provide a formalization of reasoning and learning. Through a rigorous empirical analysis spanning three tasks, we show NeSy approaches reach up to a 37% improvement over neural baselines in a semi-supervised setting and a 19% improvement over GPT-4 on question-answering.
Charles Dickens, Connor Pryor, Lise Getoor. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 90–99. DOI: 10.1609/aaaiss.v3i1.31187

Concept-Guided LLM Agents for Human-AI Safety Codesign
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31188
Generative AI is increasingly important in software engineering, including safety engineering, where its use ensures that software does not cause harm to people. This also places high quality requirements on generative AI; the simplistic use of Large Language Models (LLMs) alone will not meet these demands. It is crucial to develop more advanced and sophisticated approaches that can effectively address the complexities and safety concerns of software systems. Ultimately, humans must understand and take responsibility for the suggestions provided by generative AI to ensure system safety. To this end, we present an efficient, hybrid strategy to leverage LLMs for safety analysis and human-AI codesign. In particular, we develop a customized LLM agent that uses elements of prompt engineering, heuristic reasoning, and retrieval-augmented generation to solve tasks associated with predefined safety concepts, in interaction with a system model graph. The reasoning is guided by a cascade of micro-decisions that help preserve structured information. We further suggest a graph verbalization which acts as an intermediate representation of the system model to facilitate LLM-graph interactions. Selected pairs of prompts and responses relevant for safety analytics illustrate our method for the use case of a simplified automated driving system.
Florian Geissler, Karsten Roscher, Mario Trapp. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 100–104. DOI: 10.1609/aaaiss.v3i1.31188

Exploring Failure Cases in Multimodal Reasoning About Physical Dynamics
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31189
In this paper, we present an exploration of LLMs' abilities to problem-solve with physical reasoning in situated environments.
We construct a simple simulated environment and demonstrate examples where, in a zero-shot setting, both text and multimodal LLMs display atomic world knowledge about various objects but fail to compose this knowledge into correct solutions for an object manipulation and placement task. We also use BLIP, a vision-language model trained with more sophisticated cross-modal attention, to identify cases relevant to object physical properties that the model fails to ground. Finally, we present a procedure for discovering the relevant properties of objects in the environment and propose a method to distill this knowledge back into the LLM.
Sadaf Ghaffari, Nikhil Krishnaswamy. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 105–114. DOI: 10.1609/aaaiss.v3i1.31189

Fusing Domain-Specific Content from Large Language Models into Knowledge Graphs for Enhanced Zero Shot Object State Classification
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31190
Domain-specific knowledge can significantly contribute to addressing a wide variety of vision tasks. However, the generation of such knowledge entails considerable human labor and time costs. This study investigates the potential of Large Language Models (LLMs) in generating and providing domain-specific information through semantic embeddings. To achieve this, an LLM is integrated into a pipeline that utilizes Knowledge Graphs and pre-trained semantic vectors in the context of the Vision-based Zero-shot Object State Classification task. We thoroughly examine the behavior of the LLM through an extensive ablation study. Our findings reveal that the integration of LLM-based embeddings, in combination with general-purpose pre-trained embeddings, leads to substantial performance improvements. Drawing insights from this ablation study, we conduct a comparative analysis against competing models, thereby highlighting the state-of-the-art performance achieved by the proposed approach.
Filippos Gouidis, Katerina Papantoniou, Konstantinos Papoutsakis, Theodore Patkos, Antonis Argyros, Dimitris Plexousakis. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 115–124. DOI: 10.1609/aaaiss.v3i1.31190
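A stripped-down version of zero-shot classification with semantic embeddings, the mechanism at the heart of the pipeline above, is sketched below. The three-dimensional vectors are invented stand-ins for real LLM or word-embedding vectors; a deployed system would also fuse multiple embedding sources.

```python
# Sketch: score an unseen object-state label by cosine similarity between
# a (projected) visual feature vector and label embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# hypothetical embeddings for object-state labels never seen in training...
state_embeddings = {
    "open":   [0.9, 0.1, 0.0],
    "closed": [0.1, 0.9, 0.1],
    "folded": [0.2, 0.3, 0.9],
}
# ...and for an image, projected into the same space
image_embedding = [0.15, 0.85, 0.2]

scores = {s: cosine(image_embedding, e) for s, e in state_embeddings.items()}
print(scores)
print("predicted state:", max(scores, key=scores.get))  # expected: "closed"
```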
Can LLMs Answer Investment Banking Questions? Using Domain-Tuned Functions to Improve LLM Performance on Knowledge-Intensive Analytical Tasks
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31191
Large Language Models (LLMs) can increase the productivity of general-purpose knowledge work, but accuracy is a concern, especially in professional settings requiring domain-specific knowledge and reasoning. To evaluate the suitability of LLMs for such work, we developed a benchmark of 16 analytical tasks representative of the investment banking industry. We evaluated LLM performance without special prompting, with relevant information provided in the prompt, and as part of a system giving the LLM access to domain-tuned functions for information retrieval and planning. Without access to functions, state-of-the-art LLMs performed poorly, completing two or fewer tasks correctly. Access to appropriate domain-tuned functions yielded dramatically better results, although performance was highly sensitive to the design of the functions and the structure of the information they returned. The most effective designs yielded correct answers on 12 out of 16 tasks. Our results suggest that domain-specific functions and information structures, by empowering LLMs with relevant domain knowledge and enabling them to reason in domain-appropriate ways, may be a powerful means of adapting LLMs for use in demanding professional settings.
Nicholas Harvel, Felipe Bivort Haiek, Anupriya Ankolekar, David James Brunner. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 125–133. DOI: 10.1609/aaaiss.v3i1.31191

GPT-4V Takes the Wheel: Promises and Challenges for Pedestrian Behavior Prediction
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31192
Predicting pedestrian behavior is key to ensuring the safety and reliability of autonomous vehicles. While deep learning methods have shown promise by learning from annotated video frame sequences, they often fail to fully grasp the dynamic interactions between pedestrians and traffic that are crucial for accurate predictions. These models also lack nuanced common sense reasoning. Moreover, the manual annotation of datasets for these models is expensive and challenging to adapt to new situations. The advent of Vision Language Models (VLMs) introduces promising alternatives to these issues, thanks to their advanced visual and causal reasoning skills. To our knowledge, this research is the first to conduct both quantitative and qualitative evaluations of VLMs in the context of pedestrian behavior prediction for autonomous driving. We evaluate GPT-4V(ision) on publicly available pedestrian datasets: JAAD and WiDEVIEW. Our quantitative analysis focuses on GPT-4V's ability to predict pedestrian behavior in current and future frames. The model achieves 57% accuracy in a zero-shot manner, which, while impressive, is still behind state-of-the-art domain-specific models (70%) in predicting pedestrian crossing actions. Qualitatively, GPT-4V shows an impressive ability to process and interpret complex traffic scenarios, differentiate between various pedestrian behaviors, and detect and analyze groups. However, it faces challenges, such as difficulty in detecting smaller pedestrians and assessing the relative motion between pedestrians and the ego vehicle.
Jia Huang, Peng Jiang, Alvika Gautam, Srikanth Saripalli. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 134–142. DOI: 10.1609/aaaiss.v3i1.31192

LLMs in Automated Essay Evaluation: A Case Study
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31193
This study delves into the application of large language models (LLMs), such as ChatGPT-4, for the automated evaluation of student essays, with a focus on a case study conducted at the Swiss Institute of Business Administration. It explores the effectiveness of LLMs in assessing German-language student transfer assignments, and contrasts their performance with traditional evaluations by human lecturers. The primary findings highlight the challenges faced by LLMs in terms of accurately grading complex texts according to predefined categories and providing detailed feedback. This research illuminates the gap between the capabilities of LLMs and the nuanced requirements of student essay evaluation. The conclusion emphasizes the necessity for ongoing research and development in the area of LLM technology to improve the accuracy, reliability, and consistency of automated essay assessments in educational contexts.
Milan Kostic, Hans Friedrich Witschel, Knut Hinkelmann, Maja Spahic-Bogdanovic. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 143–147. DOI: 10.1609/aaaiss.v3i1.31193

An LLM-Aided Enterprise Knowledge Graph (EKG) Engineering Process
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31194
Conventional knowledge engineering approaches aiming to create Enterprise Knowledge Graphs (EKG) still require a high level of manual effort and high ontology expertise, which hinder their adoption across industries. To tackle this issue, we explored the use of Large Language Models (LLMs) for the creation of EKGs through the lens of a design-science approach. Findings from the literature and from expert interviews led to the creation of the proposed artefact, which takes the form of a six-step process for EKG development. Scenarios on how to use LLMs are proposed and implemented for each of the six steps. The process is then evaluated with an anonymised data set from a large Swiss company. Results demonstrate that LLMs can support the creation of EKGs, offering themselves as a new aid for knowledge engineers.
Emanuele Laurenzi, Adrian Mathys, Andreas Martin. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 148–156. DOI: 10.1609/aaaiss.v3i1.31194

ASMR: Aggregated Semantic Matching Retrieval Unleashing Commonsense Ability of LLM through Open-Ended Question Answering
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31195
Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and commonsense. Whether Large Language Models (LLMs) have commonsense reasoning ability remains a topic of debate among researchers and experts. When confronted with multiple-choice commonsense reasoning tasks, humans typically rely on their prior knowledge and commonsense to formulate a preliminary answer in mind. Subsequently, they compare this preliminary answer to the provided choices, and select the most likely choice as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic the process of humans solving commonsense reasoning tasks with multiple choices, we leverage the capabilities of LLMs to first generate preliminary possible answers through open-ended questioning, which aids the process of retrieving relevant answers to the question from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmark datasets, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance with a peak of +15.3% accuracy improvement over the previous SOTA on the SIQA dataset.
Pei-Ying Lin, Erick Chandra, Jane Yung-jen Hsu. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 157–166. DOI: 10.1609/aaaiss.v3i1.31195
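The two-stage mechanism described above, answer first, then match, can be illustrated with a minimal sketch. Here `generate_open_answer` and `embed` are hypothetical stand-ins for an LLM call and a sentence encoder, stubbed so the script runs end to end; ASMR's actual aggregation is richer than this.

```python
# Sketch: generate a preliminary open-ended answer, then select the
# multiple-choice option most similar to it.
def generate_open_answer(question: str) -> str:
    # stand-in for an open-ended LLM completion
    return "they would put it in the refrigerator to keep it fresh"

def embed(text: str) -> set:
    # crude bag-of-words "embedding"; a real system would use dense vectors
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def asmr_answer(question: str, choices: list) -> str:
    preliminary = embed(generate_open_answer(question))
    return max(choices, key=lambda c: jaccard(preliminary, embed(c)))

choices = ["throw it away", "store it in the refrigerator", "eat it immediately"]
print(asmr_answer("What would someone do with leftover food?", choices))
```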
Empowering Large Language Models in Hybrid Intelligence Systems through Data-Centric Process Models
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31196
Hybrid intelligence systems aim to leverage synergies in closely collaborating teams of humans and artificial intelligence (AI). To guide the realization of such teams, recent research proposed design patterns that capture role-based knowledge on human-AI collaborations. Building on these patterns requires hybrid intelligence systems to provide mechanisms that orchestrate human and AI contributions accordingly. So far, it is unclear if such mechanisms can be provided based on shared representations of the required knowledge. In this regard, we expect ontology-based data-centric process modeling to be a promising direction for hybrid intelligence systems that aim to support knowledge-intensive processes (KiPs). We illustrate this through exemplary process models (realized with our ontology- and data-driven business process model, ODD-BP) that reflect the team design patterns for hybrid intelligence systems. We point out that relying on such process models enables multiple actors to fulfill roles jointly and allows them to address individual shortcomings. This is examined by discussing the integration of large language models (LLMs) into the process models and describing how complementary AI actors could help to empower LLMs to fulfill their role in human-AI collaboration more comprehensively. Future work will extend the provided concepts, while their evaluation initially focuses on the KiP of medical emergency call handling.
Carsten Maletzki, Eric Rietzke, Ralph Bergmann. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 167–174. DOI: 10.1609/aaaiss.v3i1.31196

Domain-specific Embeddings for Question-Answering Systems: FAQs for Health Coaching
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31197
FAQs are widely used to respond to users' knowledge needs within knowledge domains. While LLMs might be a promising way to address user questions, they are still prone to hallucinations, i.e., inaccurate or wrong responses, which can, inter alia, lead to serious problems, including ethical issues. As part of a healthcare coach chatbot for young Nigerian HIV clients, meeting their information needs through FAQs is one of the main coaching requirements. In this paper, we explore whether domain knowledge in HIV FAQs can be represented as text embeddings to retrieve similar questions matching user queries, thus improving the understanding of the chatbot and the satisfaction of the users. Specifically, we describe our approach to developing an FAQ chatbot for the domain of HIV. We used a predefined FAQ question-answer knowledge base in English and Pidgin co-created by HIV clients and experts from Nigeria and Switzerland. The results of the post-engagement survey show that the chatbot mostly understood users' questions and could identify relevant matching questions and retrieve an appropriate response.
Andreas Martin, Charuta Pande, Sandro Schwander, Ademola J. Ajuwon, Christoph Pimmer. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 175–179. DOI: 10.1609/aaaiss.v3i1.31197
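The retrieval step described above reduces to nearest-neighbor search over question embeddings. The sketch below shows that shape; `embed` here is a toy hashing trick standing in for a real sentence-embedding model, and the FAQ content is invented, not drawn from the co-created knowledge base.

```python
# Sketch: match a user query against stored FAQ questions by embedding
# similarity, with a fallback instead of a hallucinated answer.
import hashlib
import math

def embed(text, dim=64):
    # toy deterministic embedding: hash each token into a fixed-size vector
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

faq = {
    "How is HIV transmitted?": "HIV is transmitted through ...",
    "How often should I take my medication?": "Take your medication ...",
}
faq_vectors = {q: embed(q) for q in faq}

def answer(user_query, threshold=0.3):
    scored = [(cosine(embed(user_query), v), q) for q, v in faq_vectors.items()]
    score, best_q = max(scored)
    # fall back rather than guess when nothing matches well enough
    return faq[best_q] if score >= threshold else "Sorry, I don't know that yet."

print(answer("how do people catch HIV"))
```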
ChEdBot: Designing a Domain-Specific Conversational Agent in a Simulational Learning Environment Using LLMs
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31198
We propose conversational agents as a means to simulate expert interviews, integrated into a simulational learning environment: ChEdventure. Designing and developing conversational agents using existing tools and frameworks requires technical knowledge and involves a considerable learning curve. Recently, LLMs have been leveraged for their adaptability to different domains and their ability to perform various tasks in a natural, human-like conversational style. In this work, we explore whether LLMs can help educators easily create conversational agents for their individual teaching goals. We propose a generalized, template-based approach using LLMs that can instantiate conversational agents as an integrable component of teaching and learning activities. We evaluate our approach using prototypes generated from this template and identify guidelines to improve the experience of educators.
Andreas Martin, Charuta Pande, Hans Friedrich Witschel, Judith Mathez. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 180–187. DOI: 10.1609/aaaiss.v3i1.31198

Semantic Verification in Large Language Model-based Retrieval Augmented Generation
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31199
This position paper presents a novel approach to semantic verification in Large Language Model-based Retrieval Augmented Generation (LLM-RAG) systems, focusing on the critical need for factually accurate information dissemination during public debates, especially prior to plebiscites, e.g., in direct democracies, and particularly in the context of Switzerland. Recognizing the unique challenges posed by the current generation of Large Language Models (LLMs) in maintaining factual integrity, this research proposes an innovative solution that integrates retrieval mechanisms with enhanced semantic verification processes. The paper outlines a comprehensive methodology following a Design Science Research approach, which includes defining user personas, designing conversational interfaces, and iteratively developing a hybrid dialogue system. Central to this system is a robust semantic verification framework that leverages a knowledge graph for fact-checking and validation, ensuring the correctness and consistency of information generated by LLMs. The paper discusses the significance of this research in the context of Swiss direct democracy, where informed decision-making is pivotal. By improving the accuracy and reliability of information provided to the public, the proposed system aims to support the democratic process, enabling citizens to make well-informed decisions on complex issues. The research contributes to advancing the field of natural language processing and information retrieval, demonstrating the potential of AI and LLMs in enhancing civic engagement and democratic participation.
Andreas Martin, Hans Friedrich Witschel, Maximilian Mandl, Mona Stockhecke. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 188–192. DOI: 10.1609/aaaiss.v3i1.31199

Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31200
The use of symbolic knowledge representation and reasoning to resolve the lack of transparency of machine learning classifiers is a research area that has lately gained a lot of traction. In this work, we use knowledge graphs as the underlying framework providing the terminology for representing explanations for the operation of a machine learning classifier. This escapes the constraints of using the features of raw data as the means of expressing explanations and offers a promising solution to the problem of the understandability of explanations.
In particular, given a description of the application domain of the classifier in the form of a knowledge graph, we introduce a novel theoretical framework for representing explanations of its operation, in the form of query-based rules expressed in the terminology of the knowledge graph. This allows for explaining opaque black-box classifiers using terminology and information that is independent of the features of the classifier and its domain of application, leading to more understandable explanations and also allowing the creation of different levels of explanation according to the final end-user.
Orfeas Menis Mastromichalakis, Edmund Dervakos, Alexandros Chortaras, Giorgos Stamou. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 193–202. DOI: 10.1609/aaaiss.v3i1.31200

Enhancing Knowledge Graph Consistency through Open Large Language Models: A Case Study
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31201
High-quality knowledge graphs (KGs) play a crucial role in many applications. However, KGs created by automated information extraction systems can suffer from erroneous extractions or be inconsistent with provenance/source text. It is important to identify and correct such problems. In this paper, we study leveraging the emergent reasoning capabilities of large language models (LLMs) to detect inconsistencies between extracted facts and their provenance. With a focus on "open" LLMs that can be run and trained locally, we find that few-shot approaches can yield an absolute performance gain of 2.5-3.4% over the state-of-the-art method with only 9% of the training data. We examine the effect of LLM architecture and show that Decoder-Only models underperform Encoder-Decoder approaches. We also explore how model size impacts performance and, counterintuitively, find that larger models do not result in consistent performance gains. Our detailed analyses suggest that while LLMs can improve KG consistency, different LLM models learn different aspects of KG consistency and are sensitive to the number of entities involved.
Ankur Padia, Francis Ferraro, Tim Finin. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 203–208. DOI: 10.1609/aaaiss.v3i1.31201
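A few-shot setup of the kind evaluated above can be as simple as a prompt template pairing provenance sentences with extracted facts. The sketch below builds such a prompt; `call_llm` is a placeholder for whichever locally run open model is used, and the demonstration examples are invented.

```python
# Sketch: few-shot prompt asking an LLM whether an extracted triple is
# consistent with its provenance sentence.
FEW_SHOT = """\
Sentence: "Marie Curie won the Nobel Prize in Physics in 1903."
Fact: (Marie Curie, award, Nobel Prize in Physics)
Consistent: yes

Sentence: "The plant opened its Texas facility in 2019."
Fact: (plant, located_in, Ohio)
Consistent: no
"""

def build_prompt(sentence, fact):
    head, relation, tail = fact
    return (FEW_SHOT
            + f'\nSentence: "{sentence}"\n'
            + f"Fact: ({head}, {relation}, {tail})\n"
            + "Consistent:")

def call_llm(prompt):
    # placeholder: plug in a locally hosted open LLM here
    raise NotImplementedError

prompt = build_prompt(
    "Acme acquired Beta Corp for $2M in 2021.",
    ("Acme", "acquired", "Gamma Inc"),
)
print(prompt)  # the model's yes/no completion flags the inconsistency
```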
LLMs Among Us: Generative AI Participating in Digital Discourse
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31202
The emergence of Large Language Models (LLMs) has great potential to reshape the landscape of many social media platforms. While this can bring promising opportunities, it also raises many threats, such as biases and privacy concerns, and may contribute to the spread of propaganda by malicious actors. We developed the "LLMs Among Us" experimental framework on top of the Mastodon social media platform for bot and human participants to communicate without knowing the ratio or nature of bot and human participants. We built 10 personas with three different LLMs: GPT-4, Llama 2 Chat, and Claude. We conducted three rounds of the experiment and surveyed participants after each round to measure the ability of LLMs to pose as human participants without human detection. We found that participants correctly identified the nature of other users in the experiment only 42% of the time despite knowing the presence of both bots and humans. We also found that the choice of persona had substantially more impact on human perception than the choice of mainstream LLMs.
Kristina Radivojevic, Nicholas Clark, Paul Brenner. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 209–218. DOI: 10.1609/aaaiss.v3i1.31202

K-PERM: Personalized Response Generation Using Dynamic Knowledge Retrieval and Persona-Adaptive Queries
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31203
Personalizing conversational agents can enhance the quality of conversations and increase user engagement. However, such agents often lack the external knowledge to appropriately tend to a user's persona. This is crucial for practical applications like mental health support, nutrition planning, culturally sensitive conversations, or reducing toxic behavior in conversational agents. To enhance the relevance and comprehensiveness of personalized responses, we propose using a two-step approach that involves (1) selectively integrating user personas and (2) contextualizing the response by supplementing information from a background knowledge source. We develop K-PERM (Knowledge-guided PErsonalization with Reward Modulation), a dynamic conversational agent that combines these elements. K-PERM achieves state-of-the-art performance on the popular FoCus dataset, containing real-world personalized conversations concerning global landmarks. We show that using responses from K-PERM can improve the performance of state-of-the-art LLMs (GPT-3.5) by 10.5%, highlighting the impact of K-PERM for personalizing chatbots.
Kanak Raj, Kaushik Roy, Vamshi Bonagiri, Priyanshul Govil, Krishnaprasad Thirunarayan, Raxit Goswami, Manas Gaur. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 219–226. DOI: 10.1609/aaaiss.v3i1.31203

Causal Event Graph-Guided Language-based Spatiotemporal Question Answering
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31204
Large Language Models have excelled at encoding and leveraging language patterns in large text-based corpora for various tasks, including spatiotemporal event-based question answering (QA). However, due to encoding a text-based projection of the world, they have also been shown to lack a full-bodied understanding of such events, e.g., a sense of intuitive physics, and cause-and-effect relationships among events. In this work, we propose using causal event graphs (CEGs) to enhance language understanding of spatiotemporal events in language models, using a novel approach that also provides proofs for the model's capture of the CEGs. A CEG consists of events denoted by nodes, and edges that denote cause-and-effect relationships among the events. We perform experimentation and evaluation of our approach for benchmark spatiotemporal QA tasks and show effective performance, both quantitative and qualitative, over state-of-the-art baseline methods.
Kaushik Roy, Alessandro Oltramari, Yuxin Zi, Chathurangi Shyalika, Vignesh Narayanan, Amit Sheth. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 227–233. DOI: 10.1609/aaaiss.v3i1.31204

Multi-Modal Instruction-Tuning Small-Scale Language-and-Vision Assistant for Semiconductor Electron Micrograph Analysis
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31205
We present a novel framework for analyzing and interpreting electron microscopy images in semiconductor manufacturing using vision-language instruction tuning.
The framework employs a unique teacher-student approach, leveraging pretrained multimodal large language models such as GPT-4 to generate instruction-following data for zero-shot visual question answering (VQA) and classification tasks, customizing smaller multimodal models (SMMs) for microscopy image analysis, and resulting in an instruction-tuned language-and-vision assistant. Our framework merges knowledge engineering with machine learning to integrate domain-specific expertise from larger into smaller multimodal models within this specialized field, greatly reducing the need for extensive human labeling. Our study presents a secure, cost-effective, and customizable approach for analyzing microscopy images, addressing the challenges of adopting proprietary models in semiconductor manufacturing.
Sagar Srinivas Sakhinana, Geethan Sannidhi, Venkataramana Runkana. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 234–242. DOI: 10.1609/aaaiss.v3i1.31205

A Framework for Enhancing Behavioral Science Research with Human-Guided Language Models
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31206
Many behavioral science studies result in large amounts of unstructured data that are costly to code and analyze, requiring multiple reviewers to agree on systematically chosen concepts and themes to categorize responses. Large language models (LLMs) have potential to support this work, demonstrating capabilities for categorizing, summarizing, and otherwise organizing unstructured data. In this paper, we consider that although LLMs have the potential to save time and resources performing coding on qualitative data, the implications for behavioral science research are not yet well understood. Model bias and inaccuracies, reliability, and lack of domain knowledge all necessitate continued human guidance. New methods and interfaces must be developed to enable behavioral science researchers to efficiently and systematically categorize unstructured data together with LLMs. We propose a framework for incorporating human feedback into an annotation workflow, leveraging interactive machine learning to provide oversight while improving a language model's predictions over time.
Jaelle Scheuerman, Dina Acklin. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 243–247. DOI: 10.1609/aaaiss.v3i1.31206

What Can Computers Do Now? Dreyfus Revisited for the Third Wave of Artificial Intelligence
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31207
In recent years, artificial intelligence (AI) has seen significant advances that have in fact exceeded even optimistic prognoses. Using data-driven AI, namely deep learning techniques, it has been demonstrated that computers may now be equipped with abilities of remarkable scope and quality, such as solving image and text processing tasks at human level. Large language models, in particular, have sparked debates regarding the opportunities and challenges of this rapidly developing area. Will the remaining fundamental challenges of data-driven AI, such as factual or logical mistakes, be overcome for good if it is complemented and hybridized with symbolic AI techniques, such as knowledge representation and reasoning? Will systems of artificial general intelligence (AGI) emerge from this, possessing common sense and in fact completing the decades-old quest for AI that motivated the rise of the field in the 1950s?
In light of these questions, we review the likewise decades-old philosophical debate about the capabilities and limitations of computers from a hybrid AI point of view. Here, we discuss how hybrid AI is coming closer to disproving Hubert Dreyfus' famous statements regarding what computers cannot do. At the same time, we shed light on a lesser-discussed challenge for hybrid AI: the possibility that its developers might be its biggest limiters.
Ben Schuering, Thomas Schmid. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 248–252. DOI: 10.1609/aaaiss.v3i1.31207

Advancing Ontology Alignment in the Labor Market: Combining Large Language Models with Domain Knowledge
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31208
One approach to help address the demand and supply problem in the labor market domain is to change from degree-based hiring to skill-based hiring. The link between occupations, degrees, and skills is captured in domain ontologies such as ESCO in Europe and O*NET in the US. Several countries are also building or extending these ontologies, so their alignment is important: it should be clear how they all relate. Aligning two ontologies by creating a mapping between them is a tedious task to do manually, and with the rise of generative large language models like GPT-4, we explore how language models and domain knowledge can be combined in matching the instances in the ontologies and in finding the specific relation between the instances (mapping refinement). We specifically focus on the process of updating a mapping, but the methods could also be used to create a first-time mapping. We compare the performance of several state-of-the-art methods, such as GPT-4 and fine-tuned BERT models, on the mapping between ESCO and O*NET and between ESCO and CompetentNL (the Dutch variant) for both ontology matching and mapping refinement. Our findings indicate that: 1) Match-BERT-GPT, an integration of BERT and GPT, performs best in ontology matching, while 2) TaSeR outperforms GPT-4, albeit marginally, in the task of mapping refinement. These results show that domain knowledge is still important in ontology alignment, especially in the updating of a mapping in our use cases in the labor domain.
Lucas L. Snijder, Quirine T. S. Smit, Maaike H. T. de Boer. Copyright (c) 2024 Association for the Advancement of Artificial Intelligence. Published 2024-05-20. 3(1): 253–262. DOI: 10.1609/aaaiss.v3i1.31208

Faithful Reasoning over Scientific Claims
https://ojs.aaai.org/index.php/AAAI-SS/article/view/31209
Claim verification in scientific domains requires models that faithfully incorporate relevant knowledge from the ever-growing, vast existing literature. Unfaithful claim verification can lead to misinformation such as that observed during the COVID-19 pandemic. Fact-checking systems often fail to capture the complex relationship between claims and evidence, especially with ambiguous claims and implicit assumptions. Relying only on current LLMs poses challenges due to hallucinations and information traceability issues. To address these challenges, our approach considers multiple viewpoints onto the scientific literature, enabling the assessment of contradictory arguments and implicit assumptions. Our proposed inference method adds faithful reasoning to large language models by distilling information from diverse, relevant scientific abstracts.
This method provides a verdict label that can be weighted by the reputation of the scientific articles and an explanation that can be traced back to sources. Our findings demonstrate that humans not only perceive our explanation to be significantly superior to that of the off-the-shelf model, but they also evaluate it as faithfully enabling the tracing of evidence back to its original sources. Neşet Özkan Tan Niket Tandon David Wadden Oyvind Tafjord Mark Gahegan Michael Witbrock Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 263 272 10.1609/aaaiss.v3i1.31209 Retrieval-Augmented Generation and LLM Agents for Biomimicry Design Solutions https://ojs.aaai.org/index.php/AAAI-SS/article/view/31210 We present BIDARA, a Bio-Inspired Design And Research Assistant, to address the complexity of biomimicry -- the practice of designing modern-day engineering solutions inspired by biological phenomena. Large Language Models (LLMs) have been shown to act as capable general-purpose task solvers, but they often hallucinate and fail in regimes that require domain-specific and up-to-date knowledge. We integrate Retrieval-Augmented Generation (RAG) and Reasoning-and-Action agents to aid LLMs in avoiding hallucination and utilizing updated knowledge during generation of biomimetic design solutions. We find that incorporating RAG increases the feasibility of the design solutions in both prompting and agent settings, and we use these findings to guide our ongoing work. To the best of our knowledge, this is the first work that integrates and evaluates Retrieval-Augmented Generation within LLM-generated biomimetic design solutions (a minimal RAG sketch follows below). Christopher Toukmaji Allison Tee Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 273 278 10.1609/aaaiss.v3i1.31210 Exploring Alternative Approaches to Language Modeling for Learning from Data and Knowledge https://ojs.aaai.org/index.php/AAAI-SS/article/view/31211 Despite their extensive application in language understanding tasks, large language models (LLMs) still encounter challenges including hallucinations (occasional fabrication of information) and alignment issues (lack of associations with human-curated world models, e.g., intuitive physics or common-sense knowledge). Moreover, the black-box nature of LLMs presents significant obstacles in training them effectively to achieve desired behaviors. In particular, modifying the concept embedding spaces of LLMs can be highly intractable. This process involves analyzing the implicit impact of such adjustments on the myriad parameters within LLMs and the resulting inductive biases. We propose a novel architecture that wraps powerful function approximation architectures within an outer, interpretable read-out layer. This read-out layer can be scrutinized to explicitly observe the effects of concept modeling during the training of the LLM. Our method stands in contrast to gradient-based implicit mechanisms, which depend solely on adjustments to the LLM parameters and thus evade scrutiny. By conducting extensive experiments across both generative and discriminative language modeling tasks, we evaluate the capabilities of our proposed architecture relative to state-of-the-art LLMs of similar sizes. Additionally, we offer a qualitative examination of the interpretable read-out layer and visualize the concepts it captures.
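The RAG pattern used by BIDARA-style assistants can be summarized in a few lines: retrieve domain-specific passages, then condition the LLM's generation on them so answers stay grounded in up-to-date knowledge. In this hedged sketch, `search_abstracts` and `llm_complete` are hypothetical stand-ins for a vector-store query and an LLM API call; neither is from the paper.

```python
# Retrieval-Augmented Generation in miniature: ground the model's answer in
# retrieved passages instead of parametric memory alone, reducing
# hallucination. `search_abstracts` and `llm_complete` are hypothetical
# stand-ins for a vector-store query and an LLM API call.
def rag_answer(question, search_abstracts, llm_complete, k=5):
    passages = search_abstracts(question, top_k=k)      # domain-specific, current
    context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer using ONLY the sources below and cite them as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt), passages               # answer plus traceable evidence
```

Returning the passages alongside the answer is what makes the output traceable, the same property the faithful-reasoning work above evaluates.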
The results demonstrate the potential of our approach for effectively controlling LLM hallucinations and enhancing the alignment with human expectations. Yuxin Zi Kaushik Roy Vignesh Narayanan Amit Sheth Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 279 286 10.1609/aaaiss.v3i1.31211 Building Communication Efficient Asynchronous Peer-to-Peer Federated LLMs with Blockchain https://ojs.aaai.org/index.php/AAAI-SS/article/view/31212 Large language models (LLMs) have gathered attention with the advent of ChatGPT. However, developing personalized LLM models faces challenges in real-world applications due to data scarcity and privacy concerns. Federated learning addresses these issues, providing collaborative training while preserving clients’ data. Although it has made significant progress, federated learning still faces ongoing challenges, such as communication efficiency, heterogeneous data, and privacy-preserving methods. This paper presents a novel, fully decentralized federated learning framework for LLMs to address these challenges. We utilize different blockchain-federated LLM (BC-FL) algorithms, effectively balancing the trade-off between latency and accuracy in a decentralized-federated learning environment. Additionally, we address the challenge of communication overhead in peer-to-peer networks by optimizing the path for weight transfer and mitigating node anomalies. We conducted experiments to evaluate memory usage and latency in server and serverless environments. Our results demonstrate a 5X decrease in latency and a 13% increase in accuracy for serverless cases. Comparisons between synchronous and asynchronous scenarios revealed a 76% reduction in information passing time for the latter. The PageRank method is most efficient in eliminating anomalous nodes for better performance of the global federated LLM model. The code is available on GitHub (https://github.com/Sreebhargavibalijaa/Federated_finetuning_LLM-s_p2p_environment) Sree Bhargavi Balija Amitash Nanda Debashis Sahoo Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 288 292 10.1609/aaaiss.v3i1.31212 Is Federated Learning Still Alive in the Foundation Model Era? https://ojs.aaai.org/index.php/AAAI-SS/article/view/31213 Federated learning (FL) has arisen as an alternative to collecting large amounts of data in a central place to train a machine learning (ML) model. FL is privacy-friendly, allowing multiple parties to collaboratively train an ML model without exchanging or transmitting their training data. For this purpose, an aggregator iteratively coordinates the training process among parties, and parties simply share with the aggregator model updates, which contain information pertinent to the model such as neural network weights. Besides privacy, generalization has been another key driver for FL: parties who do not have enough data to train a well-performing model by themselves can now engage in FL to obtain an ML model suitable for their tasks. Products and real applications in the industry and consumer space have demonstrated the power of this learning paradigm. Recently, foundation models have taken the AI community by storm, promising to solve the shortage of labeled data. A foundation model is a powerful model that can be recycled for a variety of use cases by applying techniques such as zero-shot learning and full or parameter-efficient fine-tuning.
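A minimal sketch of the parameter-efficient adaptation premise just mentioned: freeze a pretrained backbone's general representations and train only a small task head, so far fewer labeled samples are needed than when training from scratch. The `features_dim` attribute on the backbone is an assumption for illustration, not a standard API.

```python
# The fine-tuning premise in code: freeze a pretrained backbone and train
# only a small task head on the new task's few samples. Illustrative
# PyTorch sketch; `features_dim` on the backbone is an assumed attribute.
import torch.nn as nn

def build_finetune_model(backbone, num_classes):
    for p in backbone.parameters():
        p.requires_grad = False                      # keep general representations
    head = nn.Linear(backbone.features_dim, num_classes)  # few trainable params
    return nn.Sequential(backbone, head)

# Only the head's parameters are passed to the optimizer, which is why a
# handful of task-specific samples can suffice for adaptation.
```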
The premise is that the amount of data required to fine-tune a foundation model for a new task is much smaller than fully training a traditional model from scratch. This is because a good foundation model has already learned relevant general representations, and thus, adapting it to a new task only requires a minimal number of additional samples. This raises the question: Is FL still alive in the era of foundation models? In this talk, I will address this question. I will present some use cases where FL is very much alive. In these use cases, finding a foundation model with a desired representation is difficult if not impossible. With this pragmatic point of view, I hope to shed some light on a real use case where disparate private data is available in isolation at different parties and where labels may be located at a single party that doesn’t have any other information, making it impossible for a single party to train a model on its own. Furthermore, in some vertically-partitioned scenarios, cleaning data is not an option for privacy-related reasons and it is not clear how to apply foundation models. Finally, I will also go over a few other requirements that are often overlooked, such as unlearning of data and its implications for the lifecycle management of FL and systems based on foundation models. Nathalie Baracaldo Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 293 293 10.1609/aaaiss.v3i1.31213 Advancing Federated Learning by Addressing Data and System Heterogeneity https://ojs.aaai.org/index.php/AAAI-SS/article/view/31214 In the emerging field of federated learning (FL), the challenge of heterogeneity, both in data and systems, presents significant obstacles to efficient and effective model training. This talk focuses on the latest advancements and solutions addressing these challenges. The first part of the talk delves into data heterogeneity, a core issue in FL, where data distributions across different clients vary widely and affect FL convergence. We will introduce the FedCor framework addressing this by modeling loss correlations between clients using Gaussian Processes and reducing expected global loss. External covariate shift in FL is uncovered, demonstrating that normalization layers are crucial, and layer normalization proves effective. Additionally, class imbalance in FL degrades performance, but our proposed Federated Class-balanced Sampling (Fed-CBS) mechanism reduces this imbalance by employing homomorphic encryption for privacy preservation (the selection logic is sketched in plaintext below). The second part of the talk shifts focus to system heterogeneity, an equally critical challenge in FL. System heterogeneity involves the varying computational capabilities, network speeds, and other resource-related constraints of participating devices in FL. To address this, we introduce FedSEA, a semi-asynchronous FL framework that addresses accuracy drops by balancing aggregation frequency and predicting local update arrival. Additionally, we discuss FedRepre, a framework specifically designed to enhance FL in real-world environments by addressing challenges including unbalanced local dataset distributions, uneven computational capabilities, and fluctuating network speeds. By introducing a client selection mechanism and a specialized server architecture, FedRepre notably improves the efficiency, scalability, and performance of FL systems.
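To make the class-balance idea concrete, here is a plaintext sketch of greedy client selection toward a uniform combined label histogram. The actual Fed-CBS mechanism performs this computation under homomorphic encryption so the server never sees raw histograms; that privacy layer is deliberately omitted here, and the greedy scoring rule is an illustrative assumption.

```python
# Class-balanced client selection in the spirit of Fed-CBS: greedily pick
# clients whose combined label histogram is closest to uniform. The real
# mechanism evaluates this under homomorphic encryption; this plaintext
# sketch shows only the selection logic.
import numpy as np

def select_clients(histograms, num_select):
    """histograms: dict client_id -> np.array of per-class sample counts."""
    chosen, total = [], None
    for _ in range(num_select):
        best_id, best_score = None, np.inf
        for cid, h in histograms.items():
            if cid in chosen:
                continue
            combined = h if total is None else total + h
            p = combined / combined.sum()
            score = np.square(p - 1.0 / len(p)).sum()   # distance from uniform
            if score < best_score:
                best_id, best_score = cid, score
        chosen.append(best_id)
        total = histograms[best_id] if total is None else total + histograms[best_id]
    return chosen
```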
Our talk aims to provide a comprehensive overview of the current research and advancements in tackling both data and system heterogeneity in federated learning. We hope to highlight the path forward for FL, underlining its potential in diverse real-world applications while maintaining data privacy and optimizing resource usage. Yiran Chen Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 294 294 10.1609/aaaiss.v3i1.31214 Operational Environments at the Extreme Tactical Edge https://ojs.aaai.org/index.php/AAAI-SS/article/view/31215 You can’t get more “on the tactical edge” than in space. No other operational domain suffers from the combination of distance from the operator, harsh environments, unreachable assets with aging hardware, and incredibly long communication delays as space systems do. Developing and deploying AI solutions in satellites and probes is far more difficult than deploying similar AI on Earth. This talk explores some of the considerations involved in deploying AI and machine learning (ML) in the space domain. Mark J. Gerken Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 295 295 10.1609/aaaiss.v3i1.31215 Confluence of Random Walks, Interacting Particle Systems, and Distributed Machine Learning: Federated Learning through Crawling over Networks https://ojs.aaai.org/index.php/AAAI-SS/article/view/31216 In this work, we aim to unveil a new class of intermediate FL architectures between centralized and decentralized schemes called “FedCrawl.” FedCrawl takes advantage of the benefits of D2D communications similar to decentralized schemes; however, it uses them in a nuanced way. FedCrawl is inspired by web crawlers, which effectively explore websites to find updated or new content posted on the internet. The cornerstone of FedCrawl is its innovative conceptualization of neural networks (NNs) or other ML models as autonomous entities, called random walkers, with the capability to move or jump across nodes in the network through peer-to-peer (P2P) or device-to-device (D2D) connections. We introduce five research aspects to study the nuanced intricacies governing random walker behavior in these environments. The first research aspect addresses the interplay between network topology and data distribution, emphasizing the importance of considering both factors for designing efficient random walks in FedCrawl. The second research aspect explores the applicability of node importance metrics in optimizing random walker paths for FedCrawl. We propose a dynamic perception-aware design, discussed in the third research aspect, where transition matrices adapt to the evolving state of random walkers, balancing exploration and exploitation. The fourth research aspect introduces innovative features like skipping, memory look-back, and caching/trailing to enhance random walker performance. Lastly, the fifth research aspect delves into the dynamics of multiple random walkers in networked environments, introducing the concept of multi-pole random walkers. Complementing these five research aspects, we present five conjectures, each introducing novel perspectives and methodologies in the domain of decentralized learning. These conjectures encompass areas such as temperature-based characterization of random walkers and network nodes, dynamic transition matrices, non-Markovian processes, and an evolutionary framework for random walker patterns.
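A toy rendering of the FedCrawl picture: the model itself is the random walker, hopping between neighboring devices over P2P/D2D links and taking a local training step at each stop, with no central aggregator. Uniform transitions are used here purely for illustration; the research aspects above concern smarter, perception-aware transition matrices.

```python
# A random walker in the FedCrawl sense: the model hops between neighboring
# devices and takes one local training step at each stop, so learning
# spreads without a central aggregator. Uniform transitions only; the
# paper's conjectures concern adaptive, perception-aware transitions.
import random

def crawl(model, graph, local_step, num_hops, start):
    """graph: dict node -> list of neighbor nodes; local_step(model, node)
    performs one local training step on that node's private data."""
    node = start
    for _ in range(num_hops):
        model = local_step(model, node)            # learn from this node's data
        node = random.choice(graph[node])          # jump via a D2D connection
    return model
```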
Seyyedali Hosseinalipour Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 296 296 10.1609/aaaiss.v3i1.31216 Revolutionizing AI-Assisted Education with Federated Learning: A Pathway to Distributed, Privacy-Preserving, and Debiased Learning Ecosystems https://ojs.aaai.org/index.php/AAAI-SS/article/view/31217 The majority of current research on the application of artificial intelligence (AI) and machine learning (ML) in science, technology, engineering, and mathematics (STEM) education relies on centralized model training architectures. Typically, this involves pooling data at a centralized location alongside an ML model training module, such as a cloud server. However, this approach necessitates transferring student data across the network, leading to privacy concerns. In this paper, we explore the application of federated learning (FL), a highly recognized distributed ML technique, within the educational ecosystem. We highlight the potential benefits FL offers to students, classrooms, and institutions. Also, we identify a range of technical, logistical, and ethical challenges that impede the sustainable implementation of FL in the education sector. Finally, we discuss a series of open research directions, focusing on nuanced aspects of FL implementation in educational contexts. These directions aim to explore and address the complexities of applying FL in varied educational settings, ensuring its deployment is technologically sound, beneficial, and equitable for all stakeholders involved. Anurata Prabha Hridi Rajeev Sahay Seyyedali Hosseinalipour Bita Akram Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 297 303 10.1609/aaaiss.v3i1.31217 Framework for Federated Learning and Edge Deployment of Real-Time Reinforcement Learning Decision Engine on Software Defined Radio https://ojs.aaai.org/index.php/AAAI-SS/article/view/31218 Machine learning promises to meet the dynamic resource allocation requirements of Next Generation (NextG) wireless networks, including 6G and tactical networks. Recently, we have seen the impact machine learning can make on various aspects of wireless networks. Yet, in most cases, the progress has been limited to simulations and/or relies on large processing units to run the decision engines, as opposed to deploying them on the radio at the edge. While relying on simulations for rapid and efficient training of deep reinforcement learning (DRL) may be necessary, it is key to mitigate the sim-real gap while trying to improve the generalization capability. To mitigate these challenges, we developed the Marconi-Rosenblatt Framework for Intelligent Networks (MR-iNet Gym), an open-source architecture designed for accelerating the deployment of novel DRL for NextG wireless networks. To demonstrate its impact, we tackled the problem of distributed frequency and power allocation while emphasizing the generalization capability of the DRL decision engine. The end-to-end solution was implemented on a GPU-embedded software-defined radio and validated using over-the-air evaluation. To the best of our knowledge, these were the first instances that established the feasibility of deploying DRL for optimized distributed resource allocation for the next generation of GPU-embedded radios.
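As a stand-in for the kind of DRL decision engine described above (not the MR-iNet Gym implementation), a tabular Q-learning toy over (channel, power) actions illustrates the distributed frequency and power allocation loop. The action grid, environment interface, and reward semantics are all assumptions for illustration.

```python
# Tabular Q-learning toy for frequency/power allocation: each action is a
# (channel, power level) pair; the reward could be measured link throughput
# minus an interference penalty. Illustrative only, not MR-iNet Gym.
import random
from collections import defaultdict

ACTIONS = [(ch, pw) for ch in range(4) for pw in range(3)]  # assumed grid

def q_learning(env_step, episodes, alpha=0.1, gamma=0.9, eps=0.1):
    """env_step(state, action) -> (next_state, reward); returns the Q table."""
    Q = defaultdict(float)
    state = 0
    for _ in range(episodes):
        if random.random() < eps:                  # explore
            action = random.choice(ACTIONS)
        else:                                      # exploit best known action
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = env_step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
    return Q
```

In practice the tabular policy would be replaced by a deep network trained in simulation, with the sim-real gap addressed before edge deployment, as the framework above emphasizes.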
Jithin Jagannath Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 304 304 10.1609/aaaiss.v3i1.31218 Resource-aware Federated Data Analytics in Edge-Enabled IoT Systems https://ojs.aaai.org/index.php/AAAI-SS/article/view/31219 In a resource-constrained environment like Internet-of-Things (IoT) systems, it is critical to make optimal decisions on how many resources to allocate to pre-processing and how many to model training, and on which specific combination of pre-processing and learning should be selected. This talk first provides an overview of some initial steps we took towards developing federated data pre-processing in IoT environments, and then offers a visionary overview of potential research problems related to developing an integrated resource-aware and Quality-of-Service (QoS)-aware data pre-processing and model training system. Hana Khamfroush Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 305 305 10.1609/aaaiss.v3i1.31219 Towards Fault-Tolerant Federated and Distributed Machine Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31220 Machine learning (ML) models are routinely trained and deployed among distributed devices, e.g., learning with geo-distributed data centers and federated learning with mobile devices. Such shared computing platforms are susceptible to hardware, software, and communication errors, as well as security concerns. This talk will outline some of the threat models in distributed learning, along with robust learning methods proposed to augment the fault tolerance of distributed machine learning, showing both theoretical and empirical evidence of robustness to benign faults and adversarial attacks. Sanmi Koyejo Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 306 306 10.1609/aaaiss.v3i1.31220 Federated Learning of Things - Expanding the Heterogeneity in Federated Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31221 The Internet of Things (IoT) has revolutionized how our devices are networked, connecting multiple aspects of our life from smart homes and wearables to smart cities and warehouses. IoT’s strength comes from the ever-expanding diverse heterogeneous sensors, applications, and concepts that are all centered around the core concept of collecting and sharing sensor data. Simultaneously, deep learning has changed how our systems operate, allowing them to learn from data and change the way we interface with the world. Federated learning brings these two paradigm shifts together, (securely) leveraging data from the IoT to train deep learning architectures for performant edge applications. However, today’s federated learning has not yet benefited from the scale of diversity that IoT and deep learning sensors and applications provide. This talk explores how we can better tap into the heterogeneity that surrounds the potential of federated learning and use it to build better models. This includes heterogeneity ranging from device hardware to training paradigms (supervised, unsupervised, reinforcement, self-supervised).
Scott Kuzdeba Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 307 307 10.1609/aaaiss.v3i1.31221 Towards Robust Multi-Agent Reinforcement Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31222 Stochastic gradient descent (SGD) is at the heart of large-scale distributed machine learning paradigms such as federated learning (FL). In these applications, the task of training high-dimensional weight vectors is distributed among several workers that exchange information over networks of limited bandwidth. While parallelization at such an immense scale helps to reduce the computational burden, it creates several other challenges: delays, asynchrony, and most importantly, a significant communication bottleneck. The popularity and success of SGD can be attributed in no small part to the fact that it is extremely robust to such deviations from ideal operating conditions. Inspired by these findings, we ask: Are common reinforcement learning (RL) algorithms also robust to similarly structured perturbations? Perhaps surprisingly, despite the recent surge of interest in multi-agent/federated RL, almost nothing is known about the above question. This paper collects some of our recent results in filling this void. Aritra Mitra Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 308 308 10.1609/aaaiss.v3i1.31222 Adaptive Federated Learning for Automatic Modulation Classification Under Class and Noise Imbalance https://ojs.aaai.org/index.php/AAAI-SS/article/view/31223 The ability to rapidly understand and label the radio spectrum in an autonomous way is key for monitoring spectrum interference, spectrum utilization efficiency, protecting passive users, monitoring and enforcing compliance with regulations, detecting faulty radios, dynamic spectrum access, opportunistic mesh networking, and numerous NextG regulatory and defense applications. We consider the problem of automatic modulation classification (AMC) by a distributed network of wireless sensors that monitor the spectrum for signal transmissions of interest over a large deployment area. Each sensor receives signals under a specific channel condition depending on its location and trains an individual model of a deep neural network (DNN) accordingly to classify signals. To improve modulation classification accuracy, we consider federated learning (FL) where each individual sensor shares its trained model with a centralized controller, which, after aggregation, initializes its model for the next round of training. Without exchanging any spectrum data (such as in cooperative spectrum sensing), this process is repeated over time. A common DNN is built across the network while preserving the privacy associated with signals collected at different locations. Given their distributed nature, the statistics of the data across these sensors are likely to differ significantly. We propose the use of adaptive federated learning for AMC. Specifically, we use FEDADAM (an algorithm using Adam for server optimization) and examine how it compares to the FEDAVG algorithm (one of the standard FL algorithms, which averages client parameters after some local iterations), in particular in challenging scenarios that include class imbalance and/or noise-level imbalance across the network.
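The two server-side updates just compared differ only at aggregation time, as the following minimal NumPy sketch of generic FEDAVG- and FEDADAM-style updates shows: FEDAVG averages client parameters directly, while FEDADAM treats the averaged client delta as a pseudo-gradient and applies an Adam step. Hyperparameter values are illustrative assumptions, not the study's settings.

```python
# Server-side contrast between FEDAVG and FEDADAM: FEDAVG averages client
# models directly; FEDADAM applies an Adam update to the averaged client
# delta treated as a pseudo-gradient. Minimal NumPy sketch.
import numpy as np

def fedavg(global_w, client_ws):
    return np.mean(client_ws, axis=0)               # plain parameter average

class FedAdamServer:
    def __init__(self, lr=0.01, b1=0.9, b2=0.99, tau=1e-3):
        self.lr, self.b1, self.b2, self.tau = lr, b1, b2, tau
        self.m = self.v = None                      # first/second moment estimates

    def step(self, global_w, client_ws):
        delta = np.mean(client_ws, axis=0) - global_w        # pseudo-gradient
        self.m = delta if self.m is None else self.b1 * self.m + (1 - self.b1) * delta
        self.v = delta**2 if self.v is None else self.b2 * self.v + (1 - self.b2) * delta**2
        return global_w + self.lr * self.m / (np.sqrt(self.v) + self.tau)
```

The per-coordinate scaling by the second-moment estimate is what gives the adaptive variant its edge when client updates are skewed by class or noise imbalance.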
Our extensive numerical studies over 11 standard modulation classes corroborate the merit of adaptive FL, outperforming its standard alternatives in various challenging cases and for various network sizes. Jose Angel Sanchez Viloria Dimitris Stripelis Panos P. Markopoulos George Sklivanitis Dimitris A. Pados Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 309 309 10.1609/aaaiss.v3i1.31223 Now It Sounds Like You: Learning Personalized Vocabulary On Device https://ojs.aaai.org/index.php/AAAI-SS/article/view/31224 In recent years, Federated Learning (FL) has shown significant advancements in its ability to perform various natural language processing (NLP) tasks. This work focuses on applying personalized FL for on-device language modeling. Due to limitations of memory and latency, these models cannot support the complexity of sub-word tokenization or beam search decoding, resulting in the decision to deploy a closed-vocabulary language model. However, closed-vocabulary models are unable to handle out-of-vocabulary (OOV) words belonging to specific users. To address this issue, we propose a novel technique called "OOV expansion" that improves OOV coverage and increases model accuracy while minimizing the impact on memory and latency. This method introduces a personalized "OOV adapter" that effectively transfers knowledge from a central model and learns word embeddings for a personalized vocabulary. OOV expansion significantly outperforms standard FL personalization methods on a set of common FL benchmarks. Ashish Shenoy Sid Wang Pierce Chuang John Nguyen Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 310 315 10.1609/aaaiss.v3i1.31224 You Can Have Your Cake and Eat It Too: Ensuring Practical Robustness and Privacy in Federated Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31225 Inherently, federated learning (FL) robustness is very challenging to guarantee, especially when trying to maintain privacy. Compared to standard ML settings, FL's open training process allows malicious clients to easily fly under the radar. Alongside this, malicious clients can easily collude to attack the training process continuously, and without detection. FL models are also still susceptible to attacks on standard ML training procedures. This massive attack surface makes balancing the tradeoff between utility, practicality, robustness, and privacy extremely challenging. While there have been proposed defenses to attacks using popular privacy-preserving primitives, such as fully homomorphic encryption, they often struggle with an all-important question that is present in all privacy-preserving systems: How much utility and practicality am I willing to give up to ensure privacy and robustness? In this work, we discuss a practical approach towards secure and robust FL and the challenges that face this field of emerging research. Nojan Sheybani Farinaz Koushanfar Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 316 316 10.1609/aaaiss.v3i1.31225 Advancing Neuro-Inspired Lifelong Learning for Edge with Co-Design https://ojs.aaai.org/index.php/AAAI-SS/article/view/31226 Lifelong learning, which refers to an agent's ability to continuously learn and enhance its performance over its lifespan, is a significant challenge in artificial intelligence (AI) that biological systems tackle efficiently.
This challenge is further exacerbated when AI is deployed in untethered environments with strict energy and latency constraints. We take inspiration from neural plasticity and investigate how to leverage it to build energy-efficient lifelong learning machines. Specifically, we study how a combination of neural plasticity mechanisms, namely neuromodulation, synaptic consolidation, and metaplasticity, enhances the continual learning capabilities of AI models. We further co-design architectures that leverage compute-in-memory topologies and sparse spike-based communication with quantization for the edge. Aspects of this co-design can be transferred to federated lifelong learning scenarios. Nicholas Soures Vedant Karia Dhireesha Kudithipudi Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 317 317 10.1609/aaaiss.v3i1.31226 Multi-Criterion Client Selection for Efficient Federated Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31227 Federated Learning (FL) has received tremendous attention as a decentralized machine learning (ML) framework that allows distributed data owners to collaboratively train a global model without sharing raw data. Since FL trains the model directly on edge devices, the heterogeneity of participating clients in terms of data distribution, hardware capabilities and network connectivity can significantly impact the overall performance of FL systems. Optimizing for model accuracy could extend the training time due to the diverse and resource-constrained nature of edge devices, while minimizing training time could compromise the model's accuracy. Effective client selection thus becomes crucial to ensure that the training process is not only efficient but also capitalizes on the diverse data and computational capabilities of different devices. To this end, we propose FedPROM, a novel framework that tackles client selection in FL as a multi-criteria optimization problem. By leveraging the PROMETHEE method, FedPROM ranks clients based on their suitability for a given FL task, considering multiple criteria such as system resources, network conditions, and data quality. This approach allows FedPROM to dynamically select the most appropriate set of clients for each learning round, optimizing both model accuracy and training efficiency. Our evaluations on diverse datasets demonstrate that FedPROM outperforms several state-of-the-art FL client selection protocols in terms of convergence speed and accuracy, highlighting the framework's effectiveness and the importance of multi-criteria client selection in FL (a simplified ranking sketch follows below). Elahe Vedadi Joshua V. Dillon Philip Andrew Mansfield Karan Singhal Arash Afkanpour Warren Richard Morningstar Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 318 322 10.1609/aaaiss.v3i1.31227 Federated Variational Inference: Towards Improved Personalization and Generalization https://ojs.aaai.org/index.php/AAAI-SS/article/view/31228 Conventional federated learning algorithms train a single global model by leveraging all participating clients’ data. However, due to heterogeneity in client generative distributions and predictive models, these approaches may not appropriately approximate the predictive process, converge to an optimal state, or generalize to new clients. We study personalization and generalization in stateless cross-device federated learning setups assuming heterogeneity in client data distributions and predictive models. We first propose a hierarchical generative model and formalize it using Bayesian Inference.
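Returning to the FedPROM selection step referenced above, a simplified PROMETHEE-style ranking can be sketched as follows: score clients on several criteria, compare them pairwise with the "usual" preference function, and rank by net outranking flow. The criteria, weights, and preference function are assumptions for illustration, not FedPROM's exact formulation.

```python
# Multi-criteria client ranking in the spirit of PROMETHEE: pairwise
# preference comparisons over weighted criteria (e.g., compute, bandwidth,
# data quality), ranked by net outranking flow. Simplified sketch using
# the "usual" (strict dominance) preference function.
import numpy as np

def promethee_rank(scores, weights):
    """scores: (n_clients, n_criteria), higher is better; weights sum to 1."""
    n = scores.shape[0]
    flow = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = ((scores[a] > scores[b]) * weights).sum()  # a preferred to b
            pref_ba = ((scores[b] > scores[a]) * weights).sum()  # b preferred to a
            flow[a] += (pref_ab - pref_ba) / (n - 1)             # net flow
    return np.argsort(-flow)            # client indices, best first
```

Selecting the top-ranked clients each round is then a direct balance between accuracy-oriented criteria (data quality) and efficiency-oriented ones (compute, network).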
We then approximate this process using Variational Inference to train our model efficiently. We call this algorithm Federated Variational Inference (FedVI). We use PAC-Bayes analysis to provide generalization bounds for FedVI. We evaluate our model on FEMNIST and CIFAR-100 image classification and show that FedVI beats the state-of-the-art on both tasks. Elahe Vedadi Joshua V. Dillon Philip Andrew Mansfield Karan Singhal Arash Afkanpour Warren Richard Morningstar Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 323 327 10.1609/aaaiss.v3i1.31228 Reconciling Privacy and Byzantine-robustness in Federated Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31229 In this talk, we will discuss how to make federated learning secure for the server and private for the clients simultaneously. Most prior efforts fall into one of two categories. At one end of the spectrum, some work uses techniques like secure aggregation to hide the individual client’s updates and only reveal the aggregated global update to a malicious server that strives to infer the clients’ privacy from their updates. At the other end of the spectrum, some work uses Byzantine-robust FL protocols to suppress the influence of malicious clients’ updates. We present a protocol that offers bidirectional defense to simultaneously combat the malicious centralized server and Byzantine malicious clients. Our protocol also improves the dimension dependence and achieves a near-optimal statistical rate for strongly convex cases (a standard Byzantine-robust aggregator is sketched below for background). Lun Wang Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 328 328 10.1609/aaaiss.v3i1.31229 GenAI and Socially Responsible AI in Natural Language Processing Applications: A Linguistic Perspective https://ojs.aaai.org/index.php/AAAI-SS/article/view/31230 It is a widely-accepted fact that the processing of very large amounts of data with state-of-the-art Natural Language Processing (NLP) practices (i.e., Machine Learning (ML) and language-agnostic approaches) has resulted in a dramatic improvement in the speed and efficiency of systems and applications. However, these developments are accompanied by several challenges and difficulties that have been voiced in recent years. Specifically, in regard to NLP, evident improvement in the speed and efficiency of systems and applications with GenAI also entails some aspects that may be problematic, especially when particular text types, languages and/or user groups are concerned. State-of-the-art NLP approaches with automated processing of vast amounts of data in GenAI are related to the observed problematic Aspects 1-7, namely (1) Underrepresentation and (2) Standardization, which result in (3) Barriers in Text Understanding, (4) Discouragement of HCI Usage for Special Text Types and/or User Groups, (5) Barriers in Accessing Information, (6) Likelihood of Errors and False Assumptions, and (7) Difficulties in Error Detection and Recovery. Additional problems arise in typical cases, such as less-resourced languages (A), less experienced users (B), and less agile users (C).
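For background on the Byzantine-robust side of the protocol above, the coordinate-wise trimmed mean is a standard robust aggregator: discard the k largest and k smallest values in every coordinate before averaging, so up to k malicious updates cannot drag the result arbitrarily. This sketch is generic background, not the talk's bidirectional protocol, which additionally hides individual updates from the server.

```python
# Coordinate-wise trimmed mean, a standard Byzantine-robust aggregator:
# drop the k largest and k smallest values in each coordinate, then
# average the rest, bounding the influence of up to k malicious clients.
import numpy as np

def trimmed_mean(updates, k):
    """updates: (n_clients, dim) array of client model updates; k: trim per side."""
    s = np.sort(updates, axis=0)          # sort each coordinate independently
    return s[k: len(updates) - k].mean(axis=0)
```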
A hybrid approach involving the re-introduction and integration of traditional concepts into state-of-the-art processing approaches, whether automatic or interactive, concerns the following targets: (i) making more types of information accessible to more types of recipients and user groups, (ii) making more types of services accessible and user-friendly to more types of user groups, and (iii) making more types of feelings, opinions, voices and reactions visible from more types of user groups. Specifically, in the above-presented cases, traditional and classical theories, principles and models are re-introduced and can be integrated into state-of-the-art data-driven approaches involving Machine Learning and neural networks, functioning as training data and seed data in Natural Language Processing applications where user requirements and customization are of particular interest and importance. A hybrid approach may be considered a compromise between speed and correctness/user-friendliness in (types of) NLP applications where the achievement of this balance plays a crucial role. In other words, the hybrid approach and the examples presented here aim to prevent mechanisms from adopting human biases, ensuring fairness, socially responsible outcomes, and responsible Social Media. The hybrid approach and the examples presented here also aim to customize content for different linguistic and cultural groups, ensuring equitable information distribution. Here, we present characteristic examples of cases employing the re-introduction of four typical types of traditional concepts concerning classical theories, principles and models. These four classical theories, principles and models are not considered flawless either; however, they can be transformed into practical strategies that can be integrated into evaluation modules, neural networks, training data (including knowledge graphs), and dialogue design. The proposed and discussed re-introduction of traditional concepts is not limited only to the particular models, principles and theories presented here. The first example concerns the application of a classic principle from Theoretical Linguistics. The concept employed in the second example concerns a model from the field of Linguistics and Translation. The third and the fourth examples demonstrate the interdisciplinary application of models and theoretical frameworks from the fields of Linguistics-Cognitive Science and Linguistics-Psychology respectively. Christina Alexandris Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 330 337 10.1609/aaaiss.v3i1.31230 A Dataset for Estimating Participant Inspiration in Meetings toward AI-Based Meeting Support System to Improve Worker Wellbeing https://ojs.aaai.org/index.php/AAAI-SS/article/view/31231 Various meetings are carried out in intellectual production activities, and workers have to spend much time creating ideas. In creative meetings, it is sometimes difficult for the meeting moderators and facilitators to efficiently conduct the meetings because the participants are required to come up with new ideas one after another and some participants hesitate to express unconventional ideas. Therefore, we propose to develop an AI-based meeting support system that estimates participants’ inspiration and helps to generate comfortable meeting environments for the improvement of worker wellbeing.
Participants’ inspiration is assumed to be estimated based on their speech and micro-behaviors, including smiles and nods. In this paper, a dataset we collected for the development of the proposed system is reported. The dataset consists of participants’ brain blood flows measured by near-infrared spectrometers, micro-behaviors annotated from video recordings, and the inspiration participants reported with buttons. In total, 1,020 minutes of data were collected by conducting simulated meetings. In future work, we plan to train an LSTM (long short-term memory) based neural network model to realize the proposed system. Soki Arai Yuki Yamamoto Yuji Nozaki Haruka Matsukura Maki Sakamoto Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 338 339 10.1609/aaaiss.v3i1.31231 How Can Generative AI Enhance the Well-being of Blind? https://ojs.aaai.org/index.php/AAAI-SS/article/view/31232 This paper examines the question of how generative AI can improve the well-being of blind or visually impaired people. It refers to a current example, the Be My Eyes app, into which the Be My AI feature, based on GPT-4 from OpenAI, was integrated in 2023. The author’s tests are described and evaluated. There is also an ethical and social discussion. The power of the tool, which can analyze still images in an amazing way, is demonstrated. Those affected gain a new independence and a new perception of their environment. At the same time, they are dependent on the world view and morality of the provider or developer, who prescribe or deny them certain descriptions. An outlook makes it clear that the analysis of moving images will mean a further leap forward. It is fair to say that generative AI can fundamentally improve the well-being of blind and visually impaired people and will change it in various ways. Oliver Bendel Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 340 347 10.1609/aaaiss.v3i1.31232 Diversity, Equity, and Inclusion, and the Deployment of Artificial Intelligence Within the Department of Defense https://ojs.aaai.org/index.php/AAAI-SS/article/view/31233 Artificial Intelligence (AI) adoption has seen substantial growth across industries. This paper explores the escalating use of AI within the United States Department of Defense (DoD) and the implications that diversity, equity, and inclusion (DEI) have on Service members and Civilians across the Department. More specifically, this paper explores DEI considerations within AI technologies and their effects on individual, team, and Department readiness. The DoD's AI usage spans various strategic and operational capabilities; however, this paper explores two critical domains: healthcare and recruitment. In healthcare, AI offers the promise of early disease detection, enhanced diagnostic capabilities, and streamlined administrative processes. However, potential biases stemming from homogeneous training data threaten the accuracy and reliability of these systems, jeopardizing Service member health and eroding trust in AI-assisted medical decision-making and potentially the DoD at large. In recruitment, while AI promises efficiency in identifying ideal candidates, its deployment can perpetuate biases, especially when the training data used is not representative of all demographics.
Despite efforts to design "unbiased" systems by excluding demographic data, such strategies may inadvertently overlook the unique challenges faced by marginalized communities, further entrenching existing disparities. Both case studies underscore the importance of considering DEI in the development and deployment of AI systems. As the DoD continues to integrate AI into its operations, this paper’s recommendations stress the necessity of continuous DEI assessment to ensure that AI serves as an asset rather than a liability. The authors recommend the following: 1) data diversity and review; 2) continuous monitoring and calibration; 3) stakeholder engagement; 4) adoption of DEI requirements within Ethical AI Frameworks; and 5) further research. Sara Darwish Alison Bragaw-Butler Paul Marcelli Kaylee Gassner Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 348 353 10.1609/aaaiss.v3i1.31233 How Can GenAI Foster Well-being in Self-regulated Learning? https://ojs.aaai.org/index.php/AAAI-SS/article/view/31234 This paper explores how generative AI (GenAI) can improve the well-being of learners within self-regulated learning (SRL) frameworks in the corporate context. In the “GenAI to Support SRL” section, it presents three custom versions of ChatGPT aimed at assisting learners. These so-called GPTs demonstrate GenAI’s potential to actively support learners in SRL and positively influence their well-being. The “Discussion” and “Summary and Outlook” sections provide a balanced overview of the opportunities and risks associated with GenAI in the field of learning and highlight directions for future research. The results indicate that GenAI could improve the well-being of learners in SRL by providing personalized guidance, reducing feelings of stress, and increasing motivation and self-efficacy. At the same time, there are several challenges for companies and employees that need to be overcome. Stefanie Hauske Oliver Bendel Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 354 361 10.1609/aaaiss.v3i1.31234 Engineering Approach to Explore Language Reflecting Well-Being https://ojs.aaai.org/index.php/AAAI-SS/article/view/31235 Although well-being is helpful in measuring the state of society from various perspectives, past research has been limited in two ways: (1) questionnaire surveys make it difficult to target a large number of people, and (2) the major indices focus on individual factors and do not incorporate group factors. To tackle these issues, we collected daily reports from company employees that included text, their individual subjective well-being, and team subjective well-being. Using the collected data, we constructed a well-being estimation model based on a Large Language Model and examined an indicator called the "sharedness index," a state of the team that influences individual well-being, measured using both score- and text-based methods.
Kazuhiro Ito Junko Hayashi Shoko Wakamiya Masae Manabe Yasushi Watanabe Masataka Nakayama Yukiko Uchida Eiji Aramaki Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 362 364 10.1609/aaaiss.v3i1.31235 The Challenges for GenAI in Social and Individual Well-Being https://ojs.aaai.org/index.php/AAAI-SS/article/view/31236 At the AAAI Spring Symposium 2024, we explore the important challenges facing Generative Artificial Intelligence (GenAI) concerning both social structures and individual welfare. Our discussion revolves around two perspectives. Individual impact of GenAI on well-being: this perspective focuses on the design of AI systems with keen consideration for individual well-being. It seeks to understand how digital experiences influence emotions and the quality of life at a personal level. By examining the effects of AI technologies on individuals, we aim to tailor solutions to enhance personal welfare and fulfillment. Social impact of GenAI on well-being: here, emphasis shifts to the broader societal implications of GenAI. We strive for decisions and implementations that foster fairness and benefit all members of society. This perspective acknowledges the interconnectedness of individuals within social structures and seeks to ensure that GenAI advancements positively contribute to collective well-being. In this paper, we provide an overview of the motivations driving our exploration, elucidate key terms essential for understanding the discourse, outline the primary areas of focus of our symposium, and pose research inquiries that will guide our discussions. Through this comprehensive approach, we aim to address the multifaceted challenges and opportunities presented by GenAI in promoting both social and individual well-being. Takashi Kido Keiki Takadama Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 365 367 10.1609/aaaiss.v3i1.31236 Sleep Stage Estimation by Introduction of Sleep Domain Knowledge to AI: Towards Personalized Sleep Counseling System with GenAI https://ojs.aaai.org/index.php/AAAI-SS/article/view/31237 As a first step towards realizing an AI sleep counselor capable of generating personalized advice, this paper proposes a method for monitoring daily sleep conditions with a mattress sensor. To improve the accuracy of sleep stage estimation and obtain an accurate sleep structure, this paper introduces sleep domain knowledge into machine learning. Concretely, the proposed method estimates the ultradian rhythm based on body movement density, updates the prediction probabilities of each sleep stage produced by the ML model, and applies WAKE/NR3 detection based on large/small body movements. Through the human subject experiment, the following implications have been revealed: (1) the proposed method improved Accuracy to 65.0% from the 61.5% of the conventional machine learning method, and the QWK score by 0.196 from 0.297; (2) the proposed method prevents over-estimation of NR12 and is useful for understanding sleep structure by estimating REM sleep and NR3 sleep correctly; and (3) the correct estimation of ultradian rhythms significantly improved the sleep stage estimation, with an Accuracy of 77.6% and a QWK score of 0.52 when all subjects' ultradian rhythms were estimated correctly.
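A hedged sketch of the knowledge-guided reconciliation step in the sleep work above: start from the ML model's per-epoch stage probabilities, then override with WAKE on large body movement and NR3 on prolonged stillness. The stage set follows the abstract; the thresholds are illustrative assumptions, not the paper's calibrated values.

```python
# Combine ML stage probabilities with body-movement rules: large movement
# forces WAKE, near-stillness forces NR3, otherwise trust the ML model.
# Thresholds are illustrative assumptions.
import numpy as np

STAGES = ["WAKE", "REM", "NR12", "NR3"]

def hybrid_stages(stage_probs, movement_counts, wake_thr=30, still_thr=2):
    """stage_probs: (epochs, 4) ML probabilities; movement_counts: per epoch."""
    stages = []
    for probs, moves in zip(stage_probs, movement_counts):
        if moves >= wake_thr:
            stages.append("WAKE")                  # large body movement
        elif moves <= still_thr:
            stages.append("NR3")                   # very small body movement
        else:
            stages.append(STAGES[int(np.argmax(probs))])
    return stages
```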
Iko Nakari Keiki Takadama Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 368 373 10.1609/aaaiss.v3i1.31237 Personalized Image Generation Through Swiping https://ojs.aaai.org/index.php/AAAI-SS/article/view/31238 Generating preferred images from GANs is a challenging task due to the high-dimensional nature of the latent space. In this study, we propose a novel approach that uses simple user-swipe interactions to generate users' preferred images. To effectively explore the latent space with only swipe interactions, we apply principal component analysis to the latent space of StyleGAN, creating meaningful subspaces. Additionally, we use a multi-armed bandit algorithm to decide which dimensions to explore, focusing on the user's preferences. Our experiments show that our method is more efficient in generating preferred images than the baseline. Yuto Nakashima Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 374 375 10.1609/aaaiss.v3i1.31238 Artificial Intelligence: The Biggest Threat to Democracy Today? https://ojs.aaai.org/index.php/AAAI-SS/article/view/31239 The impact of generative artificial intelligence (GenAI) on increasing misinformation is well-understood. But there remain questions on how GenAI impacts the well-being of individuals and societies at large. This paper tackles this question from a political science standpoint and considers the impact on democracy, which is linked to individual and social well-being. It examines aspects of AI systems, including GenAI systems, that threaten to undermine democracy the most, such as misinformation. This paper also clarifies the nature of these threats to democracy, makes the connection to epistemic agency and political trust, and outlines potential outcomes to society and political institutions, including accelerating the rise of populism, the enhancement of authoritarian governments, and the threat of rule by algorithms. Michelle Nie Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 376 379 10.1609/aaaiss.v3i1.31239 Cultural Algorithm Guided Policy Gradient with Parameter Exploration https://ojs.aaai.org/index.php/AAAI-SS/article/view/31240 This study explores the integration of cultural algorithms (CA) with the Policy Gradients with Parameter-Based Exploration (PGPE) algorithm for the task of MNIST hand-written digit classification within the EvoJAX framework. The PGPE algorithm is enhanced by incorporating a belief space, consisting of Domain, Situational, and History knowledge sources (KS), to guide the search process and improve convergence speed. The PGPE algorithm, implemented within the EvoJAX framework, can efficiently find an optimal parameter-space policy for the MNIST task. However, increasing the complexity of the task and policy space, such as the CheXpert dataset and DenseNet, requires a more sophisticated approach to efficiently navigate the search space. We introduce CA-PGPE, a novel approach that integrates CA with PGPE to guide the search process and improve convergence speed. Future work will focus on incorporating exploratory knowledge sources and evaluating the enhanced CA-PGPE algorithm on more complex datasets and model architectures, such as CIFAR-10 and CheXpert with DenseNet. Mark Nuppnau Khalid Kattan R. G. Reynolds
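A minimal sketch of the PGPE building block underlying CA-PGPE: sample symmetric perturbations of the policy's parameter vector, score both directions, and move the search distribution's mean toward the better one. Full PGPE also adapts the standard deviations, and the cultural algorithm's belief space would additionally bias the sampling; both refinements are omitted here as assumptions beyond this sketch.

```python
# Parameter-based exploration in miniature: PGPE samples symmetric
# perturbations of the policy parameters, scores both directions, and
# nudges the search distribution's mean toward the better one. Sigma
# adaptation and CA belief-space guidance are omitted in this sketch.
import numpy as np

def pgpe_step(mu, sigma, fitness, pop=32, lr=0.1):
    """mu, sigma: search distribution over parameters; fitness(theta) -> score."""
    grad = np.zeros_like(mu)
    for _ in range(pop):
        eps = np.random.randn(*mu.shape) * sigma
        r_plus, r_minus = fitness(mu + eps), fitness(mu - eps)  # symmetric pair
        grad += eps * (r_plus - r_minus) / 2.0     # estimated improvement direction
    return mu + lr * grad / pop
```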
Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 380 386 10.1609/aaaiss.v3i1.31240 Collect and Connect Data Leaves to Feature Concepts: Interactive Graph Generation Toward Wellbeing https://ojs.aaai.org/index.php/AAAI-SS/article/view/31241 Feature concepts and data leaves have been invented to foster thoughts for creating social and physical well-being through the use of datasets. The idea, simply put, is to attach selected and collected Data Leaves, which are summaries of event flows to be discovered from corresponding datasets, to the target Feature Concept representing the expected scenarios of well-being individuals and a well-being society. A graph of existing or expected datasets, attached in the form of Data Leaves on a Feature Concept, was generated semi-automatically. Rather than sheer automated generative AI, our work addresses the process of generative artificial and natural intelligence to create the basis for collecting and connecting useful data. Yukio Ohsawa Tomohide Maekawa Hiroki Yamaguchi Hiro Yoshida Kaira Sekiguchi Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 387 388 10.1609/aaaiss.v3i1.31241 Generating a Map of Well-being Regions Using Multi-scale Moving Direction Entropy on Mobile Sensors https://ojs.aaai.org/index.php/AAAI-SS/article/view/31242 The well-being of individuals in a crowd is interpreted as a product of individuals crossing over from heterogeneous communities, via interactions with other crowds. Here, the index moving-direction entropy, corresponding to the diversity of the moving directions of individuals, is introduced to represent such inter-community crossover and is extended with multiscale scopes (a computation sketch follows below). Multiscale moving direction entropies, computed over various geographical mesh sizes, are used to capture the flow and interaction of information owing to human movements from/to various crowds. The generated map of high values of multiscale moving direction entropy was visualized, and its peaks coincided significantly with people's preference to live in each region. Yukio Ohsawa Sae Kondo Yi Sun Kaira Sekiguchi Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 389 390 10.1609/aaaiss.v3i1.31242 Ethical Considerations of Generative AI: A Survey Exploring the Role of Decision Makers in the Loop https://ojs.aaai.org/index.php/AAAI-SS/article/view/31243 We explore the foresighted concerns that Norbert Wiener voiced in 1960 about the potential of machines to learn and create strategies that could not be anticipated, drawing parallels to the fable "The Sorcerer's Apprentice" by Goethe. The progress in artificial intelligence (AI) has brought these worries back to the forefront, as shown by a survey AI Impacts conducted in 2022 with more than 700 machine learning researchers. This survey found a five percent probability that advanced AI might cause "extremely adverse" outcomes, including the possibility of human extinction. Importantly, the introduction of OpenAI's ChatGPT, powered by GPT-4, has led to a surge in entrepreneurial activities, highlighting the ease of use of large language models (LLMs). AI's potential for adverse outcomes, such as military control and unregulated AI races, is explored alongside concerns about AI's role in governance, healthcare, media portrayal, and surpassing human intelligence.
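A computation sketch for the moving-direction entropy index above: bin each observed movement within a geographic mesh cell into compass directions and take the Shannon entropy of the resulting histogram; repeating this over several mesh sizes yields the multiscale index. The eight-direction binning is an assumption for illustration.

```python
# Moving-direction entropy: Shannon entropy of the direction histogram of
# movements observed in one geographic mesh cell; computing it over several
# mesh sizes gives the multiscale index. Binning choices are assumptions.
import numpy as np

def direction_entropy(dx, dy, bins=8):
    """dx, dy: arrays of movement vectors observed in one mesh cell."""
    angles = np.arctan2(dy, dx)                       # direction of each move
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())              # high = diverse directions

def multiscale_entropy(points_by_mesh):
    """points_by_mesh: {mesh_size: {cell_id: (dx_array, dy_array)}}."""
    return {size: {cell: direction_entropy(dx, dy)
                   for cell, (dx, dy) in cells.items()}
            for size, cells in points_by_mesh.items()}
```

Cells with high entropy at several scales mark regions where movement directions are most diverse, the inter-community crossover the map visualizes.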
Given their transformative impact on content creation, the prominence of generative AI tools such as ChatGPT is noted. The societal assessment of Artificial Intelligence (AI) has grown increasingly intricate and pressing in tandem with the rapid evolution of this technology, necessitating a thorough examination of its potential impact on various domains such as governance, healthcare, and media portrayal, as well as the prospect of AI surpassing human intelligence. This assessment is crucial in addressing ethical concerns related to bias, data misuse, technical limitations, and transparency gaps, and in integrating ethical and legal principles throughout AI algorithm lifecycles to ensure alignment with societal well-being. Furthermore, the urgency of addressing the societal implications of AI is underscored by the need for healthcare workforce upskilling and ethical considerations in the era of AI-assisted medicine, emphasizing the critical importance of integrating societal well-being into the development and deployment of AI technologies. Our study entails an examination of the ethical quandaries and obstacles presented when developing methods to evaluate and predict the broader societal impacts of AI on decision-making processes involving the generation of images, videos, and textual content. Yohn Jairo Parra Bautista Carlos Theran Richard Aló Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 391 398 10.1609/aaaiss.v3i1.31243 Generative AI Applications in Helping Children with Speech Language Issues https://ojs.aaai.org/index.php/AAAI-SS/article/view/31244 This paper reports on how generative AI can help children with specific language impairment (SLI) issues through the development of an AI-assisted tool to support children with challenges in phonological development in English, especially children with English as a second language in the United States. Children from bilingual families often experience challenges in developing proficiency in English pronunciation and communication, which have been exacerbated by remote learning during the pandemic and have led to learning loss. School-aged children with speech problems require timely intervention because children with language disorders find it difficult to communicate with others, leading to social isolation and academic difficulties. The needed intervention is often delayed due to the high cost of speech services and the shortage of Speech and Language Pathologists (SLPs). Individuals with a history of SLI have an increased risk of unemployment. An AI-assisted Phonological Development (AI-PD) tool was prototyped, aiming to alleviate these challenges by assisting caregivers in evaluating children's phonological development, assisting SLPs in lesson preparation, and mitigating the severe shortage of SLPs. Helen Qin Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 399 400 10.1609/aaaiss.v3i1.31244 How Can Large Language Models Enable Better Socially Assistive Human-Robot Interaction: A Brief Survey https://ojs.aaai.org/index.php/AAAI-SS/article/view/31245 Socially assistive robots (SARs) have shown great success in providing personalized cognitive-affective support for user populations with special needs such as older adults, children with autism spectrum disorder (ASD), and individuals with mental health challenges.
The large body of work on SAR demonstrates its potential to provide at-home support that complements clinic-based interventions delivered by mental health professionals, making these interventions more effective and accessible. However, there are still several major technical challenges that hinder SAR-mediated interactions and interventions from reaching human-level social intelligence and efficacy. With the recent advances in large language models (LLMs), there is an increased potential for novel applications within the field of SAR that can significantly expand the current capabilities of SARs. However, incorporating LLMs introduces new risks and ethical concerns that have not yet been encountered and must be carefully addressed to safely deploy these more advanced systems. In this work, we aim to conduct a brief survey on the use of LLMs in SAR technologies, and discuss the potential and risks of applying LLMs to the following three major technical challenges of SAR: 1) natural language dialog; 2) multimodal understanding; 3) LLMs as robot policies. Zhonghao Shi Ellen Landrum Amy O'Connell Mina Kian Leticia Pinto-Alva Kaleen Shrestha Xiaoyuan Zhu Maja J Matarić Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 401 404 10.1609/aaaiss.v3i1.31245 NREM3 Sleep Stage Estimation Based on Accelerometer by Body Movement Count and Biological Rhythms https://ojs.aaai.org/index.php/AAAI-SS/article/view/31246 This paper proposes a method based on physiological knowledge to improve the performance of NREM3 sleep estimation from a waist-attached accelerometer. Specifically, it proposes a hybrid method that combines a body-movement-count approach with one based on the biological rhythms of sleep. Through the human subject experiment, the following implications were revealed: (1) the proposed method can outperform well-known machine learning models (Random Forest and LSTM) trained with automatically generated features that do not sufficiently incorporate domain knowledge; (2) when the input features are based on domain knowledge, an estimator explicitly designed by humans can outperform the machine learning method; and (3) combining the body movement counting method and the biological rhythm-based method can suppress the error of the body movement counting method and reduce false positives. Daiki Shintani Iko Nakari Satomi Washizaki Keiki Takadama Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 405 411 10.1609/aaaiss.v3i1.31246 Modes of Tracking Mal-Info in Social Media with AI/ML Tools to Help Mitigate Harmful GenAI for Improved Societal Well Being https://ojs.aaai.org/index.php/AAAI-SS/article/view/31247 A rapidly developing threat to societal well-being comes from misinformation widely spread on social media. Even more concerning is "mal-info" (malicious information), which is amplified on certain social networks. Now there is an additional dimension to that threat, which is the use of Generative AI to deliberately augment the mis-info and mal-info. This paper highlights some of the "fringe" social media channels which have a high level of mal-info as characterized by our AI/ML algorithms. We discuss various channels and focus on one in particular, "Gab", as representative of the potential negative impacts. We outline examples of current mal-info, capture key elements, and observe the trends in time.
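A minimal sketch of the hybrid idea in the Shintani et al. abstract above, assuming a simple per-epoch body-movement count combined with a cosine prior over a roughly 90-minute ultradian cycle; the thresholds, the prior shape, and all names are assumptions for illustration rather than the authors' method.

import numpy as np

def body_movement_counts(accel_mag, fs, epoch_sec=30, thresh=0.05):
    """Count above-threshold accelerometer deflections per 30-second epoch."""
    n = len(accel_mag) // int(fs * epoch_sec)
    epochs = accel_mag[: n * int(fs * epoch_sec)].reshape(n, -1)
    deviation = np.abs(epochs - epochs.mean(axis=1, keepdims=True))
    return (deviation > thresh).sum(axis=1)

def nrem3_score(counts, epoch_sec=30, cycle_min=90.0):
    """Hybrid NREM3 score: low body movement, weighted by a rhythm prior.

    Deep sleep tends to occur early within each ~90-minute cycle, so a
    cosine prior over cycle phase down-weights movement-based detections
    at implausible phases (an assumed, simplified rhythm model).
    """
    t_min = np.arange(len(counts)) * epoch_sec / 60.0
    phase = 2 * np.pi * (t_min % cycle_min) / cycle_min
    rhythm_prior = 0.5 * (1 + np.cos(phase))   # peaks at cycle start
    movement_score = 1.0 / (1.0 + counts)      # low movement -> high score
    return movement_score * rhythm_prior       # combine both cues

# Toy usage on a synthetic 50 Hz signal one hour long.
accel = np.abs(np.random.default_rng(1).normal(0.0, 0.03, size=50 * 3600))
print(nrem3_score(body_movement_counts(accel, fs=50))[:5])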
We provide a set of AI/ML modes which can characterize the mal-info and allow for capture, tracking, and potentially for responding or for mitigation. We highlight the concern about malicious agents using GenAI for deliberate mal-info messaging specifically to disrupt societal well-being. We suggest the characterizations presented as a methodology for initiating a more deliberate and quantitative approach to addressing these harmful aspects of social media which would adversely impact societal well-being. The article highlights the potential for "mal-info," including disinfo, cyberbullying, and hate speech, to disrupt segments of society. The amplification of mal-info can result in serious real-world consequences such as mass shootings. Despite attempts to introduce moderation on major platforms like Facebook and to some extent on X/Twitter, there are now growing social networks such as Gab, Gettr, and Bitchute that offer completely unmoderated spaces. This paper presents an introduction to these platforms and the initial results of a semiquantitative analysis of Gab's posts. The paper examines several characterization modes using text analysis. The paper emphasizes the growing, dangerous use of generative AI algorithms by Gab and other fringe platforms, highlighting the risks to societal well-being. This article aims to lay the foundation for capturing, monitoring, and mitigating these risks. Andy Skumanich Han Kyul Kim Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 412 417 10.1609/aaaiss.v3i1.31247 Toward Application to General Conversation Detection of Dementia Tendency from Conversation Based on Linguistic and Time Features of Speech https://ojs.aaai.org/index.php/AAAI-SS/article/view/31248 Currently, MRI examinations and neuropsychological tests by physicians and clinical psychologists are used to screen for dementia, but they are problematic because they overwhelm medical resources and are highly invasive to patients. If automatic detection of dementia from conversations becomes feasible, it will reduce the burden on medical institutions and realize a less invasive screening method. In this paper, we constructed a machine learning model to identify dementia by extracting linguistic features and time features from an elderly-speech corpus that includes a control group. Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression (LR) were used in the model. We compared the AUC of the single topic model and the general topic model in three cases: (I) All Features, (II) Gini Impurity, and (III) PCA + Gini Impurity. The AUC of the model constructed using RF in (III) for a single topic was 0.91, higher than in the previous study. Furthermore, topic analysis showed that topics with high similarity in utterance content are effective in identifying mild cognitive impairment (MCI). In the case of the general topic, the model achieved an AUC of 0.8 for unknown topics under topic-by-topic cross-validation, indicating that the general topic model developed in this study can be applied to general conversation.
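A small sketch of case (III) from the Sogabe and Numao abstract above, assuming scikit-learn: PCA, then Gini-importance-based feature selection, then a Random Forest, evaluated with topic-by-topic cross-validation. The feature matrix, labels, topic assignments, and all parameters here are placeholder assumptions, not the paper's data or settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import Pipeline

# X: linguistic + time features per conversation; y: dementia-tendency label;
# topics: topic id per conversation (all synthetic placeholders here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)
topics = rng.integers(0, 5, size=200)

# Case (III): PCA, then Gini-impurity-based selection, then Random Forest.
model = Pipeline([
    ("pca", PCA(n_components=20)),
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0))),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Topic-by-topic cross-validation: hold out one topic at a time.
aucs = []
for train, test in LeaveOneGroupOut().split(X, y, groups=topics):
    model.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
print(f"mean AUC over held-out topics: {np.mean(aucs):.2f}")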
Hiroshi Sogabe Masayuki Numao Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 418 425 10.1609/aaaiss.v3i1.31248 AI Health Agents: Pathway2vec, ReflectE, Category Theory, and Longevity https://ojs.aaai.org/index.php/AAAI-SS/article/view/31249 Health Agents are introduced as the concept of a personalized AI health advisor overlay for continuous health monitoring (e.g., 1000x/minute) via medical-grade smartwatches and wearables, for “healthcare by app” instead of “sickcare by appointment.” Individuals can customize the level of detail in the information they view. Health Agents “speak” natural language to humans and formal language to the computational infrastructure, possibly outputting the mathematics of personalized homeostatic health as part of their reinforcement learning agent behavior. As an AI health interface, the agent facilitates the management of precision medicine as a service. Healthy longevity is a high-profile area characterized by the increasing acceptance of medical intervention, longevity biotech venture capital investment, and global priority, as 2 billion people will be over 65 in 2050. Aging hallmarks, biomarkers, and clocks provide a quantitative measure for intervention. Some of the leading interventions include metformin, rapamycin, spermidine, NAD+/sirtuins, alpha-ketoglutarate, and taurine. AI-driven digital biology, longevity medicine, and Web3 personalized healthcare come together in the idea of Health Agents. This Web3 genAI tool for automated health management, specifically via digital-biological twins and pathway2vec approaches, demonstrates human-AI intelligence amplification and works towards healthy longevity for global well-being. Melanie Swan Takashi Kido Eric Roland Renato P. dos Santos Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 426 433 10.1609/aaaiss.v3i1.31249 What Is a Correct Output by Generative AI From the Viewpoint of Well-Being? – Perspective From Sleep Stage Estimation – https://ojs.aaai.org/index.php/AAAI-SS/article/view/31250 This paper explores an answer to the question of “what is a correct output by generative AI from the viewpoint of well-being?” and discusses the effectiveness of taking a biological rhythm into account for this issue. Concretely, this paper focuses on estimating the REM sleep stage and compares estimates based on random forest, a machine learning method, with estimates based on the ultradian rhythm, a biological rhythm. From the human subject experiment, the following implications have been revealed: (1) the REM sleep stage is wrongly estimated in many areas by random forest; and (2) integrating the REM sleep stage estimation based on the biological rhythm with that based on random forest improves the F-score of the estimated REM sleep stage. Keiki Takadama Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 434 439 10.1609/aaaiss.v3i1.31250 The Psychosocial Impacts of Generative AI Harms https://ojs.aaai.org/index.php/AAAI-SS/article/view/31251 The rapid emergence of generative Language Models (LMs) has led to growing concern about the impacts that their unexamined adoption may have on the social well-being of diverse user groups. Meanwhile, LMs are increasingly being adopted in K-20 schools and one-on-one student settings with minimal investigation of the potential harms associated with their deployment.
Motivated in part by real-world/everyday use cases (e.g., an AI writing assistant), this paper explores the potential psychosocial harms of stories generated by five leading LMs in response to open-ended prompting. We extend findings of stereotyping harms by analyzing a total of 150K 100-word stories related to student classroom interactions. Examining patterns in LM-generated character demographics and representational harms (i.e., erasure, subordination, and stereotyping), we highlight particularly egregious vignettes, illustrating the ways LM-generated outputs may influence the experiences of users with marginalized and minoritized identities, and emphasizing the need for a critical understanding of the psychosocial impacts of generative AI tools when deployed and utilized in diverse social contexts. Faye-Marie Vassel Evan Shieh Cassidy R. Sugimoto Thema Monroe-White Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 440 447 10.1609/aaaiss.v3i1.31251 AI-Assisted Talk: A Narrative Review on the New Social and Conversational Landscape https://ojs.aaai.org/index.php/AAAI-SS/article/view/31252 In this ongoing narrative review, I summarize the existing body of literature on the role of artificial intelligence in mediating human communication, focusing on how it is currently transforming our communication patterns. Moreover, this review uniquely contributes by critically analyzing potential future shifts in these patterns, particularly in light of the advancing capabilities of artificial intelligence. Special emphasis is placed on the implications of emerging generative AI technologies, projecting how they might redefine the landscape of human interaction. Kevin Vo Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 448 449 10.1609/aaaiss.v3i1.31252 Social Smarts with Tech Sparks: Harnessing LLMs for Youth Socioemotional Growth https://ojs.aaai.org/index.php/AAAI-SS/article/view/31253 This study proposal combines the transformative potential of GPT-4 with an innovative approach to learning social and emotional skills, offering a novel conversational aid designed to enhance adolescents' social competence, and ultimately combat social disconnection in the digital era. Kevin Vo Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 450 451 10.1609/aaaiss.v3i1.31253 Evaluating Large Language Models with RAG Capability: A Perspective from Robot Behavior Planning and Execution https://ojs.aaai.org/index.php/AAAI-SS/article/view/31254 After the significant performance of Large Language Models (LLMs) was revealed, their capabilities were rapidly expanded with techniques such as Retrieval Augmented Generation (RAG). Given their broad applicability and fast development, it is crucial to consider their impact on social systems. On the other hand, assessing these advanced LLMs poses challenges due to their extensive capabilities and the complex nature of social systems. In this study, we pay attention to the similarity between LLMs in social systems and humanoid robots in open environments. We enumerate the essential components required for controlling humanoids in problem solving, which help us explore the core capabilities of LLMs and assess the effects of any deficiencies within these components. This approach is justified because the effectiveness of humanoid systems has been thoroughly proven and acknowledged.
To identify the components needed for humanoids in problem-solving tasks, we create an extensive component framework for planning and controlling humanoid robots in an open environment. We then assess the impacts and risks of LLMs for each component, referencing the latest benchmarks to evaluate their current strengths and weaknesses. Following the assessment guided by our framework, we identify capabilities that LLMs lack and corresponding concerns for social systems. Jin Yamanaka Takashi Kido Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 452 456 10.1609/aaaiss.v3i1.31254 Fair Machine Guidance to Enhance Fair Decision Making https://ojs.aaai.org/index.php/AAAI-SS/article/view/31255 Human judgment is often subject to bias, leading to unfair decisions. This is particularly problematic when assessments have significant consequences, underscoring the importance of guiding humans towards fairness. Although recent advancements in AI have facilitated decision support, it is not always feasible to employ AI assistance in real-world scenarios. Therefore, this study focuses on developing and evaluating a method to guide humans in making fair judgments. Our experimental results confirmed that our approach effectively promotes fairness in human decision-making. Mingzhe Yang Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 457 458 10.1609/aaaiss.v3i1.31255 The Impacts of Text-to-Image Generative AI on Creative Professionals According to Prospective Generative AI Researchers: Insights from Japan https://ojs.aaai.org/index.php/AAAI-SS/article/view/31256 The growing interest in Japan in implementing text-to-image (T2I) generative artificial intelligence (GenAI) technologies in creative workflows has raised concern over what ethical and social implications these technologies will have on creative professionals. Our pilot study is the first to discuss what social and ethical oversights may emerge regarding such issues from prospective Japanese researchers – computer science (CS) graduate students studying in Japan. Given that these students are the primary demographic hired to work at research and development (R&D) labs at the forefront of such innovations in Japan, any social and ethical oversight on such issues may leave them unequipped as future knowledge experts who will play a pivotal role in helping shape Japan's policies regarding image-generating AI technologies. Sharon Chee Yin Ho Arisa Ema Tanja Tajmel Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 459 463 10.1609/aaaiss.v3i1.31256 An Analysis Method for the Impact of GenAI Code Suggestions on Software Engineers' Thought Processes https://ojs.aaai.org/index.php/AAAI-SS/article/view/31257 Interactive generative AI can be used in software programming to generate code of sufficient quality. Software developers can utilize the output code of generative AI as well as website resources from search engine results. In this research, we present a framework for defining states of programming activity and for capturing the actions of developers in a time series. We also describe a scheme for analyzing the thought process of software developers by using a graph structure to describe state transitions. By applying these means, we showed that it is feasible to analyze the effects of changes in the development environment on programming activities.
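A minimal sketch of the state-transition-graph idea in the Yonekawa, Yamano, and Sakata abstract above: developer actions are treated as a time series of states, and pairwise transitions are counted into a weighted graph. The state names here are hypothetical; the paper's actual state definitions are not given in the abstract.

from collections import Counter

# Hypothetical programming-activity states recorded as a time series.
events = ["edit", "run", "read_genai", "edit", "run", "search_web",
          "read_genai", "edit", "run"]

# Build a weighted state-transition graph from consecutive state pairs.
transitions = Counter(zip(events, events[1:]))

# Normalize outgoing edges into transition probabilities per state.
totals = Counter(src for src, _ in transitions.elements())
graph = {
    (src, dst): count / totals[src]
    for (src, dst), count in transitions.items()
}
for (src, dst), p in sorted(graph.items()):
    print(f"{src} -> {dst}: {p:.2f}")

Comparing such graphs before and after a change in the development environment (e.g., enabling GenAI code suggestions) would be one way to quantify shifts in developers' working patterns, in the spirit of the analysis the abstract describes.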
Takahiro Yonekawa Hiroko Yamano Ichiro Sakata Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 464 465 10.1609/aaaiss.v3i1.31257 Enhancing AI Education at an MSI: A Design-Based Research Approach https://ojs.aaai.org/index.php/AAAI-SS/article/view/31258 While students are often passionate about their chosen fields, they frequently have limited awareness of the profound impact of AI technologies on their professions. In order to advance efforts to build subject-relevant AI literacy among undergraduate students studying Computer Science and non-Computer Science fields (Criminal Justice and Forensic Science), it is imperative to engage in rigorous efforts to develop and study the curricular infusion of Artificial Intelligence topics. Using a Design-Based Research model, the project team and the external evaluators studied the first iteration of the module development and implementation. Using data collected through surveys, focus groups, critical review, and reflection exercises, the external evaluation team produced findings that informed the project team in revising and improving their materials and approach for the second iteration. These efforts can help educators and the AI module developers tailor their AI curriculum to address these specific areas, ensuring that students develop a more accurate understanding of applications of AI in their future career field. Sambit Bhattacharya Bogdan Czejdo Rebecca A. Zulli Adrienne A. Smith Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 467 472 10.1609/aaaiss.v3i1.31258 AI for Social Good Education at Hispanic Serving Institutions https://ojs.aaai.org/index.php/AAAI-SS/article/view/31259 This project aims to broaden AI education by developing and studying the efficacy of innovative learning practices and resources for AI education for social good. We have developed three AI learning modules for students to: 1) identify social issues that align with the SDGs in their community (e.g., poverty, hunger, quality education); 2) learn AI through hands-on labs and business applications; and 3) create AI-powered solutions in teams to address the social issues they have identified. Student teams are expected to situate AI learning in their communities and contribute to their communities. Students then use the modules to engage in an interdisciplinary approach, facilitating AI learning for social good in informational sciences and technology, geography, and computer science at three CSU HSIs (San Jose State University, Cal Poly Pomona and CSU San Bernardino). Finally, we aim to evaluate the efficacy and impact of the proposed AI teaching methods and activities in terms of learning outcomes, student experience, student engagement, and equity. Yu Chen Gabriel Granco Yunfei Hou Heather Macias Frank A. Gomez Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 473 473 10.1609/aaaiss.v3i1.31259 Bridging the Gap: Diversity Initiatives in AI Education https://ojs.aaai.org/index.php/AAAI-SS/article/view/31260 This position paper highlights the critical need to enhance diversity in artificial intelligence (AI) education, focusing on K-8 students. As AI increasingly shapes our societal landscape, ensuring equitable access and participation in AI-related fields is essential.
However, the current AI education landscape lacks inclusivity, resulting in underrepresentation and limited opportunities for marginalized groups such as racial and ethnic minorities, women, individuals with disabilities, and those from economically disadvantaged backgrounds. The paper advocates for a comprehensive approach to address diversity gaps in AI education. This involves revising curricula to include diverse perspectives, integrating AI knowledge into core subject areas, and utilizing machine learning (ML) to enhance learning across disciplines. Educators can create inclusive learning environments by incorporating culturally relevant examples and interactive activities showcasing AI's positive impact on diverse communities. Furthermore, promoting diversity in AI education requires investment in teacher training and resources. Educators need support to implement inclusive teaching methods, understand cultural nuances, and address implicit biases. Bridging the digital gap is also crucial, as access to technology and hands-on AI experience ensures equal opportunities for all students regardless of socioeconomic background. By embracing diversity and inclusivity in AI education at the K-8 level, we can cultivate a future generation of AI professionals and informed citizens who leverage technology to address diverse community needs. Ryan Evans Neelu Sinha Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 474 477 10.1609/aaaiss.v3i1.31260 Remote Possibilities: Where There Is a WIL, Is There a Way? AI Education for Remote Learners in a New Era of Work-Integrated-Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31261 Increasing diversity in educational settings is challenging in part due to the lack of access to resources for non-traditional learners in remote communities. Post-pandemic platforms designed specifically for remote and hybrid learning---supporting team-based collaboration online---are positioned to bridge this gap. Our work combines the use of these new platforms with co-creation and collaboration tools for AI-assisted remote Work-Integrated-Learning (WIL) opportunities, including efforts in the community and with the public library system. This paper outlines some of our experiences to date, and proposes methods to further integrate AI education into community-driven applications for remote WIL. Derek Jacoby Saiph Savage Yvonne Coady Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 478 485 10.1609/aaaiss.v3i1.31261 Leveraging Generative Artificial Intelligence to Broaden Participation in Computer Science https://ojs.aaai.org/index.php/AAAI-SS/article/view/31262 Generative Artificial Intelligence (AI) was incorporated into a competitive programming event that targeted undergraduate students, including those with little programming experience. The competition incorporated a range of challenge design approaches that promoted meaningful interaction with generative AI systems while keeping the challenge difficulty at an appropriate level. An analysis of survey responses and competition data showed that this format lowered barriers to participation, successfully engaged students throughout the competition, and increased the likelihood that they would participate in a similar event. In an extension of this work, a professional development workshop for high school teachers is being developed, along with a contest for high school students.
Participant surveys and logs of interaction with the contest and generative AI systems will be analyzed to measure the effect of generative AI on student self-efficacy and suggest ways to integrate generative AI instruction into the computer science curriculum. Devang Jayachandran Pranit Maldikar Tyler S. Love Jeremy J. Blum Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 486 492 10.1609/aaaiss.v3i1.31262 Increasing Diversity in Lifelong AI Education: Workshop Report https://ojs.aaai.org/index.php/AAAI-SS/article/view/31263 AI is rapidly emerging as a tool that can be used by everyone, increasing its impact on our lives, society, and the economy. There is a need to develop educational programs and curricula that can increase capacity and diversity in AI as well as awareness of the implications of using AI-driven technologies. This paper reports on a workshop whose goals include developing guidelines for ensuring that we expand the diversity of people engaged in AI while expanding the capacity for AI curricula with a scope of content that will reflect the competencies and needs of the workforce. The scope for AI education included K-Gray and considered AI knowledge and competencies as well as AI literacy (including responsible use and ethical issues). Participants discussed recommendations for metrics measuring capacity and diversity as well as strategies for increasing capacity and diversity at different levels of education: K-12, undergraduate and graduate Computer Science (CS) majors and non-CS majors, the workforce, and the public. Mary Lou Maher Sri Yash Tadimalla Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 493 500 10.1609/aaaiss.v3i1.31263 A Human-Centric Approach towards Equity and Inclusion in AI Education https://ojs.aaai.org/index.php/AAAI-SS/article/view/31264 Artificial Intelligence (AI) has become pervasive in modern lives, with AI generative tools driving further transformation. However, a notable issue persists: the underrepresentation of females and individuals from ethnic and racial minorities in the tech industry. Despite generally positive attitudes toward technology among young students, this enthusiasm often does not extend to aspirations for careers in the field. To address this disparity, many schools in the United States are now offering computer science and AI courses at the high school level. Nevertheless, students from underrepresented groups often feel disconnected from these subjects, leading to low enrollment rates. Research underscores that students' career aspirations are solidified between the ages of 10 and 14, highlighting the importance of engaging them with computer science and computing skills during this formative period. Leveraging the Bourdieusian concept of social capital, this paper proposes educational interventions tailored for elementary schools. By nurturing students' technical social capital, these interventions aim to foster an inclusive ecosystem from an early age, when aspirations are taking shape. Ultimately, the goal is to enhance the accessibility of computer science education and related skills, empowering young students from underrepresented groups to pursue higher studies and careers in computer science and AI fields.
Swati Mehrotra Neelu Sinha Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 501 507 10.1609/aaaiss.v3i1.31264 TinyML4D: Scaling Embedded Machine Learning Education in the Developing World https://ojs.aaai.org/index.php/AAAI-SS/article/view/31265 Embedded machine learning (ML) on low-power devices, also known as "TinyML," enables intelligent applications on accessible hardware and fosters collaboration across disciplines to solve real-world problems. Its interdisciplinary and practical nature makes embedded ML education appealing, but barriers remain that limit its accessibility, especially in developing countries. Challenges include limited open-source software, courseware, models, and datasets that can be used with globally accessible heterogeneous hardware. Our vision is that with concerted effort and partnerships between industry and academia, we can overcome such challenges and enable embedded ML education to empower developers and researchers worldwide to build locally relevant AI solutions on low-cost hardware, increasing diversity and sustainability in the field. Towards this aim, we document efforts made by the TinyML4D community to scale embedded ML education globally through open-source curricula and introductory workshops co-created by international educators. We conclude with calls to action to further develop modular and inclusive resources and transform embedded ML into a truly global gateway to embedded AI skills development. Brian Plancher Sebastian Buttrich Jeremy Ellis Neena Goveas Laila Kazimierski Jesus Lopez Sotelo Milan Lukic Diego Mendez Rosdiadee Nordin Andres Oliva Trevisan Massimo Pavan Manuel Roveri Marcus Rüb Jackline Tum Marian Verhelst Salah Abdeljabar Segun Adebayo Thomas Amberg Halleluyah Aworinde José Bagur Gregg Barrett Nabil Benamar Bharat Chaudhari Ronald Criollo David Cuartielles Jose Alberto Ferreira Filho Solomon Gizaw Evgeni Gousev Alessandro Grande Shawn Hymel Peter Ing Prashant Manandhar Pietro Manzoni Boris Murmann Eric Pan Rytis Paskauskas Ermanno Pietrosemoli Tales Pimenta Marcelo Rovai Marco Zennaro Vijay Janapa Reddi Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 508 515 10.1609/aaaiss.v3i1.31265 Inclusion Ethics in AI: Use Cases in African Fashion https://ojs.aaai.org/index.php/AAAI-SS/article/view/31266 This paper addresses the ethics of inclusion in artificial intelligence in the context of African fashion. Despite the proliferation of fashion-related AI applications and datasets, global diversity remains limited, and African fashion is significantly underrepresented. This paper documents two use-cases that enhance AI's inclusivity by incorporating sub-Saharan fashion elements. The first case details the creation of a Senegalese fashion dataset and a model for classifying traditional apparel using transfer learning. The second case investigates African wax textile patterns generated through generative adversarial networks (GANs), specifically StyleGAN architectures, and machine learning diffusion models. Alongside the practical, technological advances, theoretical ethical progress is made in two directions. First, the cases are used to elaborate and define the ethics of inclusion, while also contributing to current debates about how inclusion differs from ethical fairness. Second, the cases engage with the ethical debate on whether AI innovation should be slowed to prevent ethical imbalances or accelerated to solve them.
Christelle Scharff James Brusseau Krishna Mohan Bathula Kaleemunnisa Fnu Samyak Rakesh Meshram Om Gaikhe Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 516 521 10.1609/aaaiss.v3i1.31266 AI Literacy for Hispanic-Serving Institution (HSI) Students https://ojs.aaai.org/index.php/AAAI-SS/article/view/31267 Degree completion rates for Hispanic students lag far behind those of their white non-Hispanic peers. To close this gap and accelerate degree completion for Hispanic students at Hispanic-Serving Institutions (HSIs), we offer a pedagogical framework to incorporate AI Literacy into existing programs and encourage faculty-mentored undergraduate research initiatives to solve real-world problems using AI. Using a holistic perspective that includes experience, perception, cognition, and behavior, we describe the ideal process of learning based on a four-step cycle of experiencing, reflecting, thinking, and acting. Additionally, we emphasize the role of social interaction and community in developing mental abilities, and consider how cognitive development is influenced by cultural and social factors. Tailoring the content to be culturally relevant, accessible, and engaging to our Hispanic students, and employing project-based learning, we offer hands-on activities based on social justice, inclusion, and equity to incorporate AI Literacy. Furthermore, combining the pedagogical framework with faculty-mentored undergraduate research (which has been shown to have numerous benefits) will enable our Hispanic students to develop competencies to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool anywhere, preparing them for the future and encouraging them to use AI ethically. Neelu Sinha Rama Madhavarao Robert Freeman Irene Oujo Janet Boyd Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 522 527 10.1609/aaaiss.v3i1.31267 Implications of Identity in AI: Creators, Creations, and Consequences https://ojs.aaai.org/index.php/AAAI-SS/article/view/31268 The field of Artificial Intelligence (AI) is rapidly advancing, with significant potential to transform society. However, it faces a notable challenge: lack of diversity, a longstanding issue in STEM fields. In this context, this position paper examines the intersection of AI and identity as a pathway to understanding biases, inequalities, and ethical considerations in AI development and deployment. We present a multifaceted definition of AI identity, which encompasses its creators, applications, and their broader impacts. Understanding AI's identity involves analyzing the diverse individuals involved in AI's development, the technologies produced, and the social, ethical, and psychological implications. After exploring the AI identity ecosystem and its societal dynamics, we propose a research framework that highlights the need for diversity in AI across three dimensions---Creators, Creations, and Consequences---and examines the implications and changes needed to foster a more inclusive and responsible AI ecosystem through the lens of identity.
Sri Yash Tadimalla Mary Lou Maher Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 528 535 10.1609/aaaiss.v3i1.31268 Designing Inclusive AI Certifications https://ojs.aaai.org/index.php/AAAI-SS/article/view/31269 For decades, the route to familiarity in AI was through technical studies such as computer science. Yet AI has infiltrated many areas of our society. Many fields now rightfully demand at least a passing familiarity with machine learning: understanding the standard architectures, knowing how to use them, and addressing common concerns. A few such fields look at the standard ethical issues such as fairness, accountability, and transparency. Very few fields situate AI technologies in sociotechnical system analysis or give a rigorous foundation in ethical analysis applied to the design, development, and use of the technologies. We have proposed an undergraduate certificate in AI that gives equal weight to social and ethical issues and to technical matters of AI system design and use, aimed at students outside of the traditional AI-related disciplines. By including social and ethical issues in our AI certificate requirements, we expect to attract a broader population of students. By creating an accessible AI certification, we create an opportunity for individuals from diverse experiences to contribute to the discussion of what AI is, what its impact is, and where it should go in the future. Kathleen Timmerman Judy Goldsmith Brent Harrison Zongming Fei Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 536 543 10.1609/aaaiss.v3i1.31269 Toward Autonomy: Metacognitive Learning for Enhanced AI Performance https://ojs.aaai.org/index.php/AAAI-SS/article/view/31270 Large Language Models (LLMs) lack robust metacognitive learning abilities and depend on human-provided algorithms and prompts for learning and output generation. Metacognition involves processes that monitor and enhance cognition. Learning how to learn - metacognitive learning - is crucial for adapting and optimizing learning strategies over time. Although LLMs possess limited metacognitive abilities, they cannot autonomously refine or optimize these strategies. Humans possess innate mechanisms for metacognitive learning that enable at least two unique abilities: discerning which metacognitive strategies are best and automatizing learning strategies. These processes have been effectively modeled in the ACT-R cognitive architecture, providing insights on a path toward greater learning autonomy in AI. Incorporating human-like metacognitive learning abilities into AI could potentially lead to the development of more autonomous and versatile learning mechanisms, as well as improved problem-solving capabilities and performance across diverse tasks. Brendan Conway-Smith Robert L. West Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 545 546 10.1609/aaaiss.v3i1.31270 Turing-like Experiment in a Cyber Defense Game https://ojs.aaai.org/index.php/AAAI-SS/article/view/31271 During the past decade, researchers in behavioral cyber security have created cognitive agents that are able to learn and make decisions in dynamic environments in ways that assimilate human decision processes.
However, many of these efforts have been limited to simple detection tasks and represent basic cognitive functions rather than the whole set of cognitive capabilities required in dynamic cyber defense scenarios. Our current work aims at advancing the development of cognitive agents that learn and make dynamic defense decisions during cyber attacks by intelligent attack agents. We also aim to evaluate the capability of these cognitive models in "Turing-like" experiments, comparing the decisions and performance of these agents against human cyber defenders. In this paper, we present an initial demonstration of a cognitive model of the defender that relies on a cognitive theory of dynamic decision-making, Instance-Based Learning Theory (IBLT); we also demonstrate the execution of the same defense task by human defenders. We rely on OpenAI Gym and CybORG and adapt an existing CAGE scenario to generate a simulation experiment using an IBL defender. We also offer a new Interactive Defense Game (IDG), where human defenders can perform the same CAGE scenario simulated with the IBL model. Our results suggest that, against two intelligent attack agents, the IBL model makes decisions similar to those observed in a subsequent human experiment. We conclude with a description of the cognitive foundations required to build autonomous intelligent cyber defense agents that can collaborate with humans in autonomous cyber defense teams. Yinuo Du Baptiste Prebot Cleotilde Gonzalez Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 547 550 10.1609/aaaiss.v3i1.31271 Analogy as the Swiss Army Knife of Human-like Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31272 There is ample psychological evidence that analogy is ubiquitous in human learning, suggesting that computational models of analogy can play important roles in AI systems that learn in human-like ways. This talk will provide evidence for this, focusing mostly on recent advances in hierarchical analogical learning and working-memory analogical generalizations. Kenneth D. Forbus Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 551 552 10.1609/aaaiss.v3i1.31272 Human-like Learning in Temporally Structured Environments https://ojs.aaai.org/index.php/AAAI-SS/article/view/31273 Natural environments have correlations at a wide range of timescales. Human cognition is tuned to this temporal structure, as seen by power laws of learning and memory, and by spacing effects whereby the intervals between repeated training data affect how long knowledge is retained. Machine learning is instead dominated by batch iid training or else relatively simple nonstationarity assumptions such as random walks or discrete task sequences. The main contributions of our work are: (1) We develop a Bayesian model formalizing the brain's inductive bias for temporal structure and show our model accounts for key features of human learning and memory. (2) We translate the model into a new gradient-based optimization technique for neural networks that endows them with human-like temporal inductive bias and improves their performance in realistic nonstationary tasks. Our technical approach is founded on Bayesian inference over 1/f noise, a statistical signature of many natural environments with long-range, power law correlations.
We derive a new closed-form solution to this problem by treating the state of the environment as a sum of processes on different timescales and applying an extended Kalman filter to learn all timescales jointly. We then derive a variational approximation of this model for training neural networks, which can be used as a drop-in replacement for standard optimizers in arbitrary architectures. Our optimizer decomposes each weight in the network as a sum of subweights with different learning and decay rates and tracks their joint uncertainty. Thus knowledge becomes distributed across timescales, enabling rapid adaptation to task changes while retaining long-term knowledge and avoiding catastrophic interference. Simulations show improved performance in environments with realistic multiscale nonstationarity. Finally, we present simulations showing our model gives essentially parameter-free fits of learning, forgetting, and spacing effects in human data. We then explore the analogue of human spacing effects in a deep net trained in a structured environment where tasks recur at different rates and compare the model's behavioral properties to those of people. Matt Jones Tyler R. Scott Michael C. Mozer Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 553 553 10.1609/aaaiss.v3i1.31273 Toward Human-Like Representation Learning for Cognitive Architectures https://ojs.aaai.org/index.php/AAAI-SS/article/view/31274 Human-like learning includes an ability to learn concepts from a stream of embodiment sensor data. Echoing previous thoughts, such as those from Barsalou, that cognition and perception share a common representation system, we suggest an addendum to the common model of cognition. This addendum posits simultaneous semantic-memory and perception learning that bypasses working memory and uses parallel processing to learn concepts apart from deliberate reasoning. The goal is to provide a general outline for how to extend a class of cognitive architectures to implement a more human-like interface between cognition and the embodiment of an agent, where a critical aspect of that interface is that it is dynamic because of learning. Steven Jones Peter Lindes Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 554 555 10.1609/aaaiss.v3i1.31274 Modeling Human-Like Acquisition of Language and Concepts https://ojs.aaai.org/index.php/AAAI-SS/article/view/31275 Humans acquire language and related concepts in a trajectory over a lifetime. Concepts for simple interaction with the world are learned before language. Later, words are learned to name these concepts along with structures needed to represent larger meanings. Eventually, language advances to where it can drive the learning of new concepts. Throughout this trajectory a language processing capability uses architectural mechanisms to process language using the knowledge already acquired. We assume that this growing body of knowledge is made up of small units of form-meaning mapping that can be composed in many ways, suggesting that these units are learned incrementally from experience. In prior work we have built a system to comprehend human language within an autonomous robot using knowledge in such units developed by hand. Here we propose a research program to develop the ability of an artificial agent to acquire this knowledge incrementally and autonomously from its experience in a similar trajectory.
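A minimal sketch of the weight-decomposition idea from the Jones, Scott, and Mozer abstract above ("Human-like Learning in Temporally Structured Environments"): each parameter is a sum of subweights with geometrically spaced learning and decay rates. This is only the deterministic skeleton; the authors' optimizer is Bayesian and tracks joint uncertainty via a Kalman-filter derivation not reproduced here, and all rates and names below are illustrative assumptions.

import numpy as np

class MultiTimescaleSGD:
    """Sketch: each parameter is a sum of subweights with geometrically
    spaced learning and decay rates, so knowledge is stored at multiple
    timescales. (Illustrative only; the full method also tracks joint
    uncertainty across subweights.)"""

    def __init__(self, shape, n_scales=4, base_lr=0.1, base_decay=0.001):
        self.sub = np.zeros((n_scales,) + shape)
        # Fast scales learn and forget quickly; slow scales are stable.
        self.lr = base_lr * 0.5 ** np.arange(n_scales)
        self.decay = base_decay * 0.5 ** np.arange(n_scales)

    @property
    def weights(self):
        return self.sub.sum(axis=0)  # effective weight = sum of subweights

    def step(self, grad):
        for k in range(len(self.sub)):
            self.sub[k] *= 1.0 - self.decay[k]  # per-scale decay toward 0
            self.sub[k] -= self.lr[k] * grad    # shared gradient signal

# Toy usage on the quadratic loss 0.5 * ||w - target||^2.
target = np.array([1.0, -2.0, 0.5])
opt = MultiTimescaleSGD(shape=(3,))
for _ in range(200):
    opt.step(opt.weights - target)  # gradient of the quadratic loss
print(opt.weights)  # approaches target, with slow scales retaining it

Under nonstationarity, the fast subweights would absorb recent task changes while the slow subweights preserve long-term structure, which is the intuition behind the spacing and retention effects the abstract reports.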
We then propose a strategy for evaluating this human-like learning system using a large benchmark created as a tool for training deep learning systems. We expect that our human-like learning system will produce better task performance from training on only a small subset of this benchmark. Peter Lindes Steven Jones Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 556 558 10.1609/aaaiss.v3i1.31275 Pushing the Limits of Learning from Limited Data https://ojs.aaai.org/index.php/AAAI-SS/article/view/31276 What is the mechanism behind people's remarkable ability to learn from very little data, and what are its limits? Preliminary evidence suggests people can infer categories from extremely sparse data, even when they have fewer labeled examples than categories. However, the mechanisms behind this learning process are unclear. In our experiment, people learned 8 categories defined over a 2D manifold from just 4 labeled examples. Our results suggest that people form rich representations of the underlying categories despite this limited information. These results push the limits of how little information people need to build strong and systematic category representations. Maya Malaviya Ilia Sucholutsky Thomas L. Griffiths Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 559 561 10.1609/aaaiss.v3i1.31276 Teaching Functions with Gaussian Process Regression https://ojs.aaai.org/index.php/AAAI-SS/article/view/31277 Humans are remarkably adaptive instructors who adjust advice based on their estimates of a learner's prior knowledge and current goals. Many topics that people teach, like goal-directed behaviors, causal systems, categorization, and time-series patterns, have an underlying commonality: they map inputs to outputs through an unknown function. This project builds upon a Gaussian process (GP) regression model that describes learners' behavior as they search the hypothesis space of possible underlying functions to find the one that best fits their current data. We extend this work by implementing a teacher model that reasons about a learner's GP regression in order to provide specific information that will help them form an accurate estimate of the function. Maya Malaviya Mark K. Ho Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 562 564 10.1609/aaaiss.v3i1.31277 Embodying Human-Like Modes of Balance Control Through Human-In-the-Loop Dyadic Learning https://ojs.aaai.org/index.php/AAAI-SS/article/view/31278 In this paper, we explore how humans and AIs trained to perform a virtual inverted pendulum (VIP) balancing task converge and differ in their learning and performance strategies. We create a visual analogue of disoriented IP balancing, as may be experienced by pilots suffering from spatial disorientation, and train AI models on data from human subjects performing a real-world disoriented balancing task. We then place the trained AI models in a dyadic human-in-the-loop (HITL) training setting. Episodes in which human subjects disagreed with AI actions were logged and used to fine-tune the AI model. Human subjects then performed the task while being given guidance from pretrained and dyadically fine-tuned versions of an AI model. We examine the effects of HITL training on AI performance, of AI guidance on human performance, and the behavior patterns of human subjects and AI models during task performance.
We find that in many cases, HITL training improves AI performance, AI guidance improves human performance, and after dyadic training the two converge on similar behavior patterns. Sheikh Mannan Vivekanand Pandey Vimal Paul DiZio Nikhil Krishnaswamy Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 565 569 10.1609/aaaiss.v3i1.31278 Learning Fast and Slow: A Redux of Levels of Learning in General Autonomous Intelligent Agents https://ojs.aaai.org/index.php/AAAI-SS/article/view/31279 Autonomous intelligent agents, including humans, operate in a complex, dynamic environment that necessitates continuous learning. We revisit our thesis that proposes that learning in human-like agents can be categorized into two levels: Level 1 (L1) involves innate and automatic learning mechanisms, while Level 2 (L2) comprises deliberate strategies controlled by the agent. Our thesis draws from our experiences in building artificial agents with complex learning behaviors, such as interactive task learning and open-world learning. Shiwali Mohan John E. Laird Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 570 571 10.1609/aaaiss.v3i1.31279 Learning Decision-Making Functions Given Cardinal and Ordinal Consensus Data https://ojs.aaai.org/index.php/AAAI-SS/article/view/31280 Decision-making and reaching consensus are an integral part of everyday life, and studying how individuals reach these decisions is an important problem in psychology, economics, and social choice theory. Our work develops methods and theory for learning the nature of decisions reached by individual decision makers or groups of individuals using data. We consider two tasks, where we have access to data on: 1) Cardinal utilities for d individuals with cardinal consensus values that the group or decision maker arrives at, 2) Cardinal utilities for d individuals for pairs of actions, with ordinal information about the consensus, i.e., which action is better according to the consensus. Under some axioms of social choice theory, the set of possible decision functions reduces to the set of weighted power means, M(u, w, p) = (∑ᵢ₌₁ᵈ wᵢ uᵢᵖ)¹ᐟᵖ, where uᵢ indicate the d utilities, w ∈ ∆_{d - 1} denotes the weights assigned to the d individuals, and p ∈ ℝ (Cousins 2023). For instance, p = 1 corresponds to a weighted utilitarian function, and p = -∞ is the egalitarian welfare function. Our goal is to learn w ∈ ∆_{d - 1} and p ∈ ℝ for the two tasks given data. The first task is analogous to regression, and we show that owing to the monotonicity in w and p (Qi 2000), learning these parameters given cardinal utilities and social welfare values is a PAC-learnable task. For the second task, we wish to learn w, p such that, given pairs of actions u, v ∈ ℝ₊ᵈ, the preference is given as C((u, v), w, p) = sign(ln(M(u, w, p)) - ln(M(v, w, p))). This is analogous to classification; however, convexity of the loss function in w and p is not guaranteed. We analyze two related cases - one in which the weights w are known, and another in which the weights are unknown. We prove that both cases are PAC-learnable given positive u, v by giving an O(log d) bound on the VC dimension for the known weights case, and an O(d log d) bound for the unknown weights case. We also establish PAC-learnability for noisy data under IID (Natarajan 2013) and logistic noise models for this task.
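A small sketch of the weighted power mean and the ordinal comparator defined in the Pardeshi, Shapira, Procaccia, and Singh abstract above; the p = 0 geometric-mean limit is an added convention for numerical completeness, not something the abstract specifies.

import numpy as np

def power_mean(u, w, p):
    """Weighted power mean M(u, w, p) = (sum_i w_i * u_i**p)**(1/p).

    u: positive utilities; w: weights on the simplex; p: real exponent.
    p = 1 is weighted utilitarian; p -> -inf approaches egalitarian (min).
    """
    u, w = np.asarray(u, dtype=float), np.asarray(w, dtype=float)
    if np.isneginf(p):
        return u.min()
    if p == 0:  # limit case: weighted geometric mean (assumed convention)
        return np.exp(np.sum(w * np.log(u)))
    return np.sum(w * u**p) ** (1.0 / p)

def ordinal_consensus(u, v, w, p):
    """C((u, v), w, p): +1 if u is preferred to v under M, else -1."""
    return np.sign(np.log(power_mean(u, w, p)) - np.log(power_mean(v, w, p)))

# Example: utilitarian (p=1) and near-egalitarian (p=-20) views disagree.
u, v, w = [0.9, 0.1], [0.5, 0.4], [0.5, 0.5]
print(ordinal_consensus(u, v, w, p=1))    #  1.0: u has the higher average
print(ordinal_consensus(u, v, w, p=-20))  # -1.0: v protects the worst-off

Learning then amounts to fitting w and p so that these outputs match observed consensus data, which is the regression/classification framing the abstract describes.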
Finally, we demonstrate how simple algorithms can be useful for learning w and p up to moderately high d through experiments on simulated data. Kanad Pardeshi Itai Shapira Ariel Procaccia Aarti Singh Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 572 572 10.1609/aaaiss.v3i1.31280 Task-driven Risk-bounded Hierarchical Reinforcement Learning Based on Iterative Refinement https://ojs.aaai.org/index.php/AAAI-SS/article/view/31281 Deep Reinforcement Learning (DRL) has garnered substantial acclaim for its versatility and widespread applications across diverse domains. Aligned with human-like learning, DRL is grounded in the fundamental principle of learning from interaction, wherein agents dynamically adjust behavior based on environmental feedback in the form of rewards. This iterative trial-and-error process, mirroring human learning, underscores the importance of observation, experimentation, and feedback in shaping understanding and behavior. DRL agents, trained to navigate complex surroundings, refine their knowledge through hierarchical and abstract representations, empowered by deep neural networks. These representations enable efficient handling of long-horizon tasks and flexible adaptation to novel situations, akin to the human ability to construct mental models for comprehending complex concepts and predicting outcomes. Hence, abstract representation building emerges as a critical aspect of the learning processes of both artificial agents and human learners, particularly in long-horizon tasks. Furthermore, human decision-making, deeply rooted in evolutionary history, exhibits a remarkable capacity to balance the tradeoff between risk and cost across various domains. This cognitive process involves assessing potential negative consequences, evaluating factors such as the likelihood of adverse outcomes, severity of potential harm, and overall uncertainty. Humans intuitively gauge inherent risks and adeptly weigh associated costs, extending beyond monetary expenses to include time, effort, and opportunity costs. The nuanced ability of humans to consider the tradeoff between risk and cost highlights the complexity and adaptability of human decision-making, a skill lacking in typical DRL agents. Principles like these derived from human-like learning present an avenue for inspiring advancements in DRL, fostering the development of more adaptive and intelligent artificial agents. Motivated by these observations and focusing on practical challenges in robotics, our efforts target the risk-aware stochastic sequential decision-making problem, which is crucial for tasks with extended time frames and varied strategies. A novel integration of model-based conditional planning with DRL is proposed, inspired by hierarchical techniques. This approach breaks down complex tasks into manageable subtasks (motion primitives), ensuring safety constraints and informed decision-making. Unlike existing methods, our approach addresses motion primitive improvement iteratively, employing diverse prioritization functions to guide the search process effectively. This risk-bounded planning algorithm seamlessly integrates conditional planning and motion primitive learning, prioritizing computational efforts for enhanced efficiency within specified time limits.
Viraj Parimi Sungkweon Hong Brian Williams Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 573 575 10.1609/aaaiss.v3i1.31281 A Model of Cognizing Supporting the Origination of Cognizing in Nature https://ojs.aaai.org/index.php/AAAI-SS/article/view/31282 Our model of cognizing is rooted in the developmental psychology of Jean Piaget; it follows researchers in modeling cognizing by solvers of combinatorial games and enriches object-oriented representations of realities with input classifiers and relationships in English, while remaining consistent with inquiry into the origination of cognizing in nature. We introduce the basics of the model and provide arguments for its adequacy, followed by arguments supporting the origination of cognizing. Edward M. Pogossian Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 576 578 10.1609/aaaiss.v3i1.31282 Exploring the Gap: The Challenge of Achieving Human-like Generalization for Concept-based Translation Instruction Using Large Language Models https://ojs.aaai.org/index.php/AAAI-SS/article/view/31283 Our study utilizes concept description instructions and few-shot learning examples to examine the effectiveness of a large language model (GPT-4) in generating Chinese-to-English translations that embody related translation concepts. We discovered that human language experts possess superior abductive reasoning skills compared to GPT-4. Therefore, it is crucial for humans to employ abductive reasoning to craft more detailed instructions and infuse additional logic into exemplary prompts, a step essential for guiding a large language model effectively, in contrast to the more intuitive understanding a human expert might have. This approach would make the prompt engineering process more complicated and less human-like. Emphasizing domain-specific abductive reasoning stands out as a crucial aspect of human-like learning that AI/ML systems based on large language models should aim to replicate. Ming Qian Chuiqing Kong Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 579 581 10.1609/aaaiss.v3i1.31283 Human-Like Learning of Social Reasoning via Analogy https://ojs.aaai.org/index.php/AAAI-SS/article/view/31284 Neurotypical adult humans are impeccably good social reasoners. Despite the occasional faux pas, we know how to interact in most social settings and how to consider others' points of view. Young children, on the other hand, do not. Social reasoning, like many of our most important skills, is learned. Much like human children, AI agents are not good social reasoners. While some algorithms can perform some aspects of social reasoning, we are a ways off from AI that can interact naturally and appropriately in the broad range of settings that people can. In this talk, I will argue that learning social reasoning via the same processes used by people will help AI agents reason--and interact--more like people do. Specifically, I will argue that children learn social reasoning via analogy, and that AI agents should, too. I will present evidence from cognitive modeling experiments demonstrating the former and AI experiments demonstrating the latter. I will also propose future directions for social reasoning research that both demonstrate the need for robust, human-like social reasoning in AI and test the utility of common approaches.
Irina Rabkina Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 582 582 10.1609/aaaiss.v3i1.31284 Algorithmic Decision-Making in Difficult Scenarios https://ojs.aaai.org/index.php/AAAI-SS/article/view/31285 We present an approach to algorithmic decision-making that emulates key facets of human decision-making, particularly in scenarios marked by expert disagreement and ambiguity. Our system employs a case-based reasoning framework, integrating learned experiences, contextual factors, probabilistic reasoning, domain-specific knowledge, and the personal traits of decision-makers. A primary aim of the system is to articulate algorithmic decision-making as a human-comprehensible reasoning process, complete with justifications for selected actions. Christopher B. Rauch Ursula Addison Michael Floyd Prateek Goel Justin Karneeb Ray Kulhanek Othalia Larue David Ménager Mallika Mainali Matthew Molineaux Adam Pease Anik Sen Jt Turner Rosina Weber Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 583 585 10.1609/aaaiss.v3i1.31285 Turtle-like Geometry Learning: How Humans and Machines Differ in Learning Turtle Geometry https://ojs.aaai.org/index.php/AAAI-SS/article/view/31286 While object recognition is one of the prevalent affordances of humans' perceptual systems, even human infants can prioritize a place system, which is used when navigating, over the object recognition system. This ability, combined with active learning strategies, can make humans fast learners of Turtle Geometry, a notion introduced about four decades ago. We contrast humans' performance and learning strategies with those of large visual language models (LVLMs) and, as we show, LVLMs fall short of humans in solving Turtle Geometry tasks. We outline different characteristics of human-like learning in the domain of Turtle Geometry that are fundamentally unparalleled in state-of-the-art deep neural networks and can inform future research directions in the field of artificial intelligence. Sina Rismanchian Shayan Doroudi Yasaman Razeghi Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 586 587 10.1609/aaaiss.v3i1.31286 Do Large Language Models Learn to Human-Like Learn? https://ojs.aaai.org/index.php/AAAI-SS/article/view/31287 Human-like learning refers to the learning done within the lifetime of the individual. However, the architecture of the human brain has been developed over millennia and represents a long process of evolutionary learning, which could be viewed as a form of pre-training. Large language models (LLMs), after pre-training on large amounts of data, exhibit a form of learning referred to as in-context learning (ICL). Consistent with human-like learning, LLMs are able to use ICL to perform novel tasks with few examples and to interpret the examples through the lens of their prior experience. I examine the constraints which typify human-like learning and propose that LLMs may learn to exhibit human-like learning simply by training on human-generated text. Jesse Roberts Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 588 591 10.1609/aaaiss.v3i1.31287 An Exploring Study on Building Affective Artificial Intelligence by Neural-Symbolic Computing (Extended Abstract) https://ojs.aaai.org/index.php/AAAI-SS/article/view/31288 This short paper is a status report on a project in progress.
We aim to model human-like agents' decision-making behaviors under risk with a neural-symbolic approach. Our model integrates the learning, reasoning, and emotional aspects of an agent and takes dual-process thinking into consideration when the agent is making a decision. The model construction is based on real behavioral and brain imaging data collected in a lottery gambling experiment. We present the model architecture, including its main modules and the interactions between them. Jonathan C.H. Tong Yung-Fong Hsu Churn-Jung Liau Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 592 593 10.1609/aaaiss.v3i1.31288 Decomposed Inductive Procedure Learning: Learning Academic Tasks with Human-Like Data Efficiency https://ojs.aaai.org/index.php/AAAI-SS/article/view/31289 Human brains have many differently functioning regions which play specialized roles in learning. By contrast, methods for training artificial neural networks, such as reinforcement learning, typically learn exclusively via a single mechanism: gradient descent. This raises the question: might human learners' advantage in learning efficiency over deep learning be attributed to the interplay between multiple specialized mechanisms of learning? In this work we review a series of simulated learner systems which have been built with the aim of modeling human students' inductive learning as they practice STEM procedural tasks. By comparison to modern deep-learning-based methods, which train on thousands to millions of examples to acquire passing performance capabilities, these simulated learners match human performance curves---achieving passing levels of performance within about a dozen practice opportunities. We investigate this impressive learning efficiency via an ablation analysis. Beginning with end-to-end reinforcement learning (1-mechanism), we decompose learning systems incrementally to construct the 3-mechanism inductive learning characteristic of prior simulated learners such as Sierra, SimStudent and the Apprentice Learner Architecture. Our analysis shows that learning decomposition plays a significant role in achieving data-efficient learning on par with human learners---a greater role even than simple distinctions between symbolic and subsymbolic learning. Finally, we highlight how this breakdown in learning mechanisms can flexibly incorporate diverse forms of natural language and interface-grounded instruction, and discuss opportunities for using these flexible learning capabilities in interactive task learning systems that learn directly from a user's natural instruction. Daniel Weitekamp Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 594 594 10.1609/aaaiss.v3i1.31289 FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design https://ojs.aaai.org/index.php/AAAI-SS/article/view/31290 Recent advancements in Large Language Models (LLMs) have exhibited notable efficacy in question-answering (QA) tasks across diverse domains. Their prowess in integrating extensive web knowledge has fueled interest in developing LLM-based autonomous agents. While LLMs are efficient in decoding human instructions and deriving solutions by holistically processing historical inputs, transitioning to purpose-driven agents requires a supplementary rational architecture to process multi-source information, establish reasoning chains, and prioritize critical tasks.
Addressing this, we introduce FinMem, a novel LLM-based agent framework devised for financial decision-making. It encompasses three core modules: Profiling, to customize the agent's characteristics; Memory, with layered message processing, to aid the agent in assimilating hierarchical financial data; and Decision-making, to convert insights gained from memories into investment decisions. Notably, FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability and real-time tuning. Its adjustable cognitive span allows for the retention of critical information beyond human perceptual limits, thereby enhancing trading outcomes. This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions in the volatile financial environment. We first compare FinMem with various algorithmic agents on a scalable real-world financial dataset, underscoring its leading trading performance in stocks. We then fine-tune the agent's perceptual span and character setting to achieve significantly enhanced trading performance. Collectively, FinMem presents a cutting-edge LLM agent framework for automated trading, boosting cumulative investment returns. Yangyang Yu Haohang Li Zhi Chen Yuechen Jiang Yang Li Denghui Zhang Rong Liu Jordan W. Suchow Khaldoun Khashanah Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 595 597 10.1609/aaaiss.v3i1.31290 Comparing Human Behavior to an Optimal Policy for Innovation https://ojs.aaai.org/index.php/AAAI-SS/article/view/31291 Human learning does not stop at solving a single problem. Instead, we seek new challenges, define new goals, and come up with new ideas. Unlike the classic explore-exploit trade-off between known and unknown options, making new tools or generating new ideas is not about collecting data from existing unknown options, but rather about creating new options out of what is currently available. We introduce a discovery game designed to study how rational agents make decisions about pursuing innovations, where discovering new ideas is a process of combining existing ideas in an open-ended compositional space. We derive optimal policies for this decision problem, formalized as a Markov decision process, and compare people's behavior to the model predictions in an online behavioral experiment. We found evidence that people both innovate rationally, guided by potential returns in this discovery game, and systematically under- and over-explore in different settings. Bonan Zhao Natalia Vélez Thomas L. Griffiths Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 598 599 10.1609/aaaiss.v3i1.31291 Constructing Deep Concepts through Shallow Search https://ojs.aaai.org/index.php/AAAI-SS/article/view/31292 We propose bootstrap learning as a computational account of why human learning is modular and incremental, and identify key components of bootstrap learning that allow artificial systems to learn more like people. Originating in developmental psychology, bootstrap learning refers to people's ability to extend and repurpose existing knowledge to create new and more powerful ideas. We view bootstrap learning as a solution to the problem of how cognitively bounded reasoners grasp complex environmental dynamics that are far beyond their initial capacity: by searching ‘locally’ and recursively to extend their existing knowledge.
Drawing on techniques from Bayesian library learning and resource-rational analysis, we propose a computational modeling framework that achieves human-like bootstrap learning performance in inductive conceptual inference. In addition, we present modeling and behavioral evidence highlighting the double-edged sword of bootstrap learning: people processing the same information in different batch orders can reach drastically different causal conclusions and generalizations, as a result of the different sub-concepts they construct in earlier stages of learning. Bonan Zhao Christopher G. Lucas Neil R. Bramley Copyright (c) 2024 Association for the Advancement of Artificial Intelligence 2024-05-20 2024-05-20 3 1 600 602 10.1609/aaaiss.v3i1.31292
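The discovery game in the Zhao, Vélez, and Griffiths abstract above lends itself to exact dynamic programming once cast as a Markov decision process over sets of discovered ideas. The toy Python sketch below invents a combination rule, costs, and returns purely for illustration; it should not be read as the authors' actual game.

```python
from itertools import combinations
from functools import lru_cache

# Toy discovery game: ideas are integers; combining ideas i and j yields the
# new idea i + j. Each combination costs 1; discovering idea k pays k.
MAX_IDEA = 6   # ideas above this value cannot be discovered
GAMMA = 0.9    # discount factor

def actions(state):
    """From a state (a frozenset of ideas), either stop or try a combination."""
    yield "stop"
    for i, j in combinations(sorted(state), 2):
        if i + j <= MAX_IDEA and i + j not in state:
            yield (i, j)

@lru_cache(maxsize=None)
def value(state):
    """Optimal value via memoized recursion; terminates because each
    combination strictly grows the bounded set of discovered ideas."""
    best = 0.0  # value of stopping
    for a in actions(state):
        if a == "stop":
            continue
        new = a[0] + a[1]
        best = max(best, -1 + new + GAMMA * value(state | {new}))
    return best

def optimal_action(state):
    """One-step lookahead against the optimal value function."""
    choices = [(a, -1 + (a[0] + a[1]) + GAMMA * value(state | {a[0] + a[1]}))
               for a in actions(state) if a != "stop"]
    best_a, best_v = max(choices, key=lambda av: av[1], default=("stop", 0.0))
    return best_a if best_v > 0 else "stop"

start = frozenset({1, 2})
print(optimal_action(start), value(start))
```

Comparing participants' choices against the argmax of such a value function is one way to operationalize comparing human behavior to an optimal policy for innovation.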
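Similarly, the shallow, recursive search of the bootstrap-learning abstract can be conveyed with a short sketch that grows a concept library one composition at a time; again, all names and the toy task here are hypothetical stand-ins, not the paper's code.

```python
import itertools

def fit(concept, data):
    """Hypothetical scoring function: how well a concept explains the data.
    A concept is a unary function; fit is its accuracy on (x, y) pairs."""
    return sum(concept(x) == y for x, y in data) / len(data)

def compose(f, g):
    """New concepts are built locally, by composing two existing ones."""
    h = lambda x: f(g(x))
    h.__name__ = f"{f.__name__}∘{g.__name__}"
    return h

def bootstrap(library, data, rounds=3):
    """Recursively extend the library: each round searches only one
    composition step beyond current knowledge (a 'shallow' search)."""
    for _ in range(rounds):
        candidates = [compose(f, g)
                      for f, g in itertools.product(library, repeat=2)]
        best = max(candidates, key=lambda c: fit(c, data))
        library = library + [best]  # the new concept becomes a building block
    return max(library, key=lambda c: fit(c, data))

# Toy usage: learn x -> 2x + 2 from the primitives 'inc' and 'double'.
def inc(x): return x + 1
def double(x): return 2 * x

data = [(x, 2 * x + 2) for x in range(5)]
best = bootstrap([inc, double], data)
print(best.__name__, fit(best, data))
```

Because each round can only compose what earlier rounds produced, the library's growth is path-dependent, which is one way to see how different presentation orders of the same information can end in different sub-concepts and different generalizations.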