Advances in AI for Safety, Equity, and Well-Being on Web and Social Media: Detection, Robustness, Attribution, and Mitigation
Keywords: New Faculty Highlights
Abstract
In this talk, I shall describe my lab's recent advances in AI, applied machine learning, and data mining to combat malicious actors (sockpuppets, ban evaders, etc.) and dangerous content (misinformation, hate, etc.) on web and social media platforms. My vision is to create a trustworthy online ecosystem for everyone and to build the next generation of socially aware methods that promote health, equity, and safety. Broadly, in my research, I have created novel graph, content (NLP, multimodality), and adversarial machine learning methods leveraging terabytes of data to detect, predict, and mitigate online threats. I shall describe the advancements made in my group across four key thrusts: (1) detection of harmful content and malicious actors across platforms, languages, and modalities; (2) robustifying detection models against adversarial actors by predicting future malicious activities; (3) attributing the impact of harmful content and the role of recommender systems; and (4) developing mitigation techniques to counter misinformation by professionals and the crowd.
How to Cite
Kumar, S. (2023). Advances in AI for Safety, Equity, and Well-Being on Web and Social Media: Detection, Robustness, Attribution, and Mitigation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15444-15444. https://doi.org/10.1609/aaai.v37i13.26811