| Working Group | Organizer | Description |
| --- | --- | --- |
| Neurosymbolic Reasoning and the Foundations of Intelligence | Jeffrey Mu, Dept. of Cognitive and Psychological Sciences, Applied Math-Computer Science (jeffrey_mu@brown.edu) | Visual and linguistic artifacts are intensely productive as a communicative form. A prominent question in neurosymbolic computer science and cognitive science is why newer tools like machine learning perform remarkably well, yet do not build or externalize reasoning the way humans learn to do so effectively from a young age. Large language and vision-language models have demonstrated strong few-shot learning abilities, but often struggle with abstraction and reasoning. This working group explores how mathematical structure, symbolic reasoning, logic, and learning-based methods can offer strong frameworks for advancing state-of-the-art (SOTA) methods and models.
We plan to read foundational papers (e.g., circuit-style transformers, neurosymbolic systems), replicate at least one small interpretability or neurosymbolic experiment using open-source tooling, and present concepts from algebra, category theory, logic, optimization, and theoretical economics and their connection to modern training objectives. We will maintain a set of worked examples across our different disciplines.
We will also run open discussions/seminars on philosophical and historical perspectives on cognition, for example using musical/classical structure as a complementary lens to language/vision compositionality. We hope to identify a small number of concrete research directions/relevant mathematical formalisms that can serve as the foundation of neurosymbolic systems. |
| Physics-Informed Statistical and AI Modeling for Global Healthcare Systems | Daniela Grijalva Moreno, School of Public Health (daniela_grijalva_moreno@brown.edu) | Statistical techniques and AI tools have developed tremendously over the past decade, reshaping and redefining many research fields, including physics and other data-intensive sciences. Because high-dimensional and complex real-world data often contain information that has not yet been fully understood, advanced statistical approaches and AI methods provide powerful tools to extract patterns, model uncertainty, and generate deeper insight into underlying mechanisms.
In particular, statistical frameworks such as probabilistic modeling, alongside modern AI systems, enable us to better interpret complex datasets and understand what is happening beneath observable phenomena. These capabilities are especially critical in noisy, heterogeneous, and large-scale data environments.
Healthcare represents one of the most important domains where such integration is urgently needed. While healthcare systems are rich in data — including population health statistics, mental health surveys, and disease records related to conditions such as cancer and HIV — actionable insight remains limited. Many of these diseases continue to affect millions of people worldwide, and the complexity of their causes, progression, and treatment responses demands more sophisticated analytical tools.
By leveraging advances in statistical modeling and AI, we aim to transform healthcare data into actionable insights that support early detection, improve patient outcomes, and ultimately enhance human health and well-being. |
| Defining Opportunities for Data Science and Machine Learning in Experimental Chemical Sciences | Benjamin McDonald, Chemistry (benjamin_mcdonald1@brown.edu) | There is significant excitement about AI “revolutionizing” science. Yet from the perspective of experimental chemical scientists, a gap often remains between the promise of these tools and day-to-day laboratory reality. This Department of Chemistry working group aims to clarify what data science (DS) and machine learning (ML) can currently do in experimental chemistry, what is emerging, and where the most actionable opportunities lie at Brown. |
| Scientific ML and Interdisciplinary AI/ML Community | Karianne Bergen, Data Science & Earth, Environmental, and Planetary Sciences (karianne_bergen@brown.edu) | The motivation of this group is to bring together graduate students from multiple research groups and subfields in the physical sciences who are developing and/or applying machine learning (ML) / AI tools to solve challenging technical problems in their scientific domain as part of their research. Many graduate students are using advanced ML tools to tackle domain-specific problems, but mostly interact with graduate students in their own discipline, who are often unfamiliar with their research tools and methodology. This group provides a venue for students to regularly interact with students in other research groups, fields, or departments who are using similar ML/AI tools and facing similar technical challenges when working with scientific data. The group is also an opportunity for students to practice scientific communication across domains, and to discuss developments and trends in AI for Science.
We meet weekly, alternating between sessions focused on student research updates and brainstorming, and sessions focused on discussing topics of shared interest in AI for Science (see examples below).
The focus is on the use of ML/AI as tools for analyzing and extracting insights from scientific data (rather than LLMs and AI agents). Students at all phases of their Ph.D. research, including the early stages (e.g., exploring novel directions in their research with ML), are welcome to participate in the group. The group is mostly students working on ML/AI in the physical sciences, but students working in other related application domains are also welcome to participate. |
| Computational methods to understand gene regulatory networks & neuronal circuits in relation to memory formation | Esra Taner, Therapeutic Sciences (esra_taner@brown.edu) | This working group is motivated by a cross-departmental collaboration between computational methodology development and experimental neuroscience. As a PhD student in Therapeutic Sciences with a doctoral certificate in Data Science, my research focuses on statistical and computational tool development. Members of the Kaun Lab, whose work centers on understanding the circuit and molecular mechanisms underlying memory for natural and drug reward, have expressed interest in deepening their understanding of computational approaches and meaningfully integrating these tools into their research. The goal of this group is to create a focused space to examine how modern computational methods can be applied to gene regulatory networks and circuit-level datasets. By combining computational methodology expertise with domain knowledge in neuronal circuits and high-throughput sequencing data, this working group aims to foster a "computational-first" perspective in which experimental design, data analysis, and model development inform one another. Emphasis will be placed not only on using existing tools, but on understanding their assumptions, limitations, and adaptability to specific biological questions. Ultimately, this collaboration seeks to translate complex gene regulatory networks and circuit data into mechanistic, biologically grounded insights. |
| Equitable AI Implementation in Healthcare Delivery | Nick Seymour, Warren Alpert Medical School (nicholas_seymour@brown.edu) | As the consequences of the AI revolution for human well-being, meaning, and employment are being anxiously debated, healthcare is distinctively positioned as a field with enormous potential for both AI-driven innovation and sustained workforce growth. While industry is particularly bullish about the potential for AI to expand access to and quality of care, many voices in academia and the AI Safety and Ethics communities have expressed significant concerns about the adverse effects of this technology on compassionate care, patient privacy, sustainability, and more.
The purpose of this group is to explore, critically evaluate, and debate the potential harms and opportunities of AI’s growing role in healthcare using current empirical evidence and frameworks from history of science, philosophy, and the social sciences. |
| Humanoid Robotics Beyond the Lab: Data, AI, Emotion, and Interaction | Evangelos Adrianos Botsios (evangelos_botsios@brown.edu) | Our working group is motivated by a shared interest in understanding how humanoid robotics can move from preliminary prototypes to mainstream, real-world adoption, primarily in everyday home environments. We initially developed this idea in collaboration with Professor Rick Fleeter as part of the course ENGN 120 (Crossing the Chasm), and we aim to study what it takes to carry this technology from early adopters to a mainstream technology that addresses everyday human needs. In this process, we are particularly interested in how data science and machine learning can be used to adapt to user behavior and make context-aware decisions in dynamic home settings.
Our group possesses interdisciplinary interests and skills well suited to investigating humanoid robotics. We bring together students and researchers across engineering, computer science, linguistics, cognitive science, design, and the humanities. The robotics industry is experiencing a major global boom, with billions of dollars being invested not only in humanoid robots but also in assistive technologies. As robots move from industrial environments like Amazon warehouses into homes and social spaces, it is crucial to take a more holistic approach to their development and implementation. Humanoid robotics is not only a technical challenge, but also a problem of interaction, language, emotion, and daily human experience. However, current conversations remain fragmented across these various disciplines. Through this working group, we aim to create a shared space for interdisciplinary work and an entry point for students across disciplines to become interested in robotics.
A central focus of the group will be examining the intersection between the data required to train intelligent robotic systems and the broader disciplines that shape how these systems are built and deployed. We will explore how data science, artificial intelligence, and machine learning intersect with engineering, linguistics, cognitive science, and design to determine what kinds of data are collected and how models should be trained and evaluated. Unlike purely digital AI systems, robots must interpret multimodal inputs: language and visual cues, physical context, and emotional signals. We will discuss the challenges of constructing diverse and representative datasets that reflect the dynamic nature of home environments. By grounding these discussions in both computational and human-centered perspectives, this group aims to better understand how data practices can shape the capabilities and limitations of future humanoid systems. |
| Frontiers of AI, Data science, and Machine Learning in HIV/Infectious Disease Research and Clinical Care | Jana Jarolimova, Department of Medicine (jana_jarolimova@brown.edu) | We propose a working group focused on an exploration of boundary-pushing applications of AI, data science, and machine learning for research- and clinical-based purposes in the fields of HIV and infectious diseases (ID). The rapid growth of large clinical, surveillance, cohort, and implementation datasets in HIV and ID presents a major opportunity to improve inference, prediction, and translation—if we develop shared expertise in AI, data science, and machine learning (ML). A focused working group would create a space for physicians and researchers to critically evaluate how ML methods (such as predictive modeling and natural language processing of EHRs) can generate new insights from existing datasets, while preserving methodological rigor. At the same time, AI-driven tools are increasingly entering clinical workflows throughout all fields of medicine; however, the field of HIV and ID raises its own particular concerns. These include ethical questions about bias (particularly for marginalized populations disproportionately affected by HIV), data provenance and consent, privacy in high-stigma conditions, algorithmic transparency, automation bias, and the potential for predictive models to exacerbate disparities in access to testing, treatment, and prevention. A small interdisciplinary working group would allow us to build shared standards for responsible model development and evaluation, align analytic approaches with clinically meaningful questions, and proactively address the ethical and equity implications of AI-enabled clinical decision support in HIV and infectious disease care. This group will benefit from cross-disciplinary input, including from biostatistics and bioethics. |
| AI Otherwise: Power, Peril, and Possibility for Social Movements and “Rest of World” | Geri Augusto, Watson School of International and Public Affairs (geri_augusto@brown.edu) | The impact of AI and other digital technologies on social movements in the USA and on public policy in Global South countries, whether harmful, disruptive, or potentially transformative, is an increasingly relevant question in public affairs. What does the balancing act look like between reach and agility, between positively influencing the public discourse and surveillance, misinformation, and a drain on local labor, water, and energy? How might we think about some of the challenges and critical issues involved, beyond the economic, business, or data science aspects of AI usage, from the vantage points of marginalized groups in the U.S. and of the less powerful in the “Rest of World”?
This study group will explore the conceptualization and use of digital technologies, and some of the social, cultural, ethical, and policymaking questions involved, in two moments (cases). The first draws on over a decade of work by a group of U.S. civil rights movement veterans (including Prof. Augusto) to successfully build their own living digital history archive, in conjunction with a university library. One continuing challenge has been to think about how emerging technologies, and especially AI, might (or should not) be used to transcribe, archive, and transmit such histories and organizing traditions, much evidence for which is oral, or sits in multiple scattered personal libraries or “down in the basements and up in the attic.” While the goal is to make them more accessible to broader publics and new generations, there is also a powerful new drive to erase these histories.
After we broadly delve into some of the current dilemmas facing Global South countries, such as Brazil, with respect to AI use, the study group’s second case will discuss an earlier example of digital technologies’ use (in particular, online social media) in activism in the Middle East, during the early 2000s. Readings will provide critical but not excessively technical analyses of data science and AI’s differential impacts on communities and groups, and raise deeper questions about judgment, social equity, and cultural values with respect to AI, primarily from the perspective of marginalized or exploited communities in the U.S. and in the Global South. We will end by drawing out a few policy lessons from the juxtaposition of these real-world experiences. |
| AI Auditing in Practice: Machinery, Ecosystems, and Futures | Anish Mitagar, Department of Computer Science (anish_mitagar@brown.edu) | AI auditing comes up constantly in AI accountability conversations, but almost always in vague, conceptual terms. We rarely get into how specific auditing systems actually work: what they technically check, what they miss, who pays for them, and what happens when they're deployed in contexts very different from where they were designed. Meanwhile, real auditing regimes are being built across radically different settings: the EU's conformity assessments, China's algorithm registry, India's voluntary frameworks layered onto Aadhaar infrastructure, Brazil's debates over algorithmic accountability in public services. And in much of the Global South, where AI systems are increasingly deployed in welfare, credit scoring, and surveillance, there's often no audit infrastructure at all.
This group grows partly out of our own research: some of us have been exploring algorithmic fairness and labor rights in the platform economy, investigating how algorithmic management affects gig worker well-being and what auditing frameworks centered on worker wellness might look like. That work made clear how much any audit's effectiveness depends on the specific technical, institutional, and political conditions where it operates. So this working group takes that question seriously across jurisdictions and domains, drawing on CS, law, policy, and STS to examine AI auditing not as an abstract ideal but as a set of concrete, evolving, and geographically uneven practices with real constraints and real stakes. |
| Artificial intelligence in Humanitarian Response | Theo Illarionov, School of Public Health & Watson School (theo_illarionov@brown.edu) | As humanitarian crises grow in scale and complexity, AI has the potential to improve disaster prediction, damage assessment, resource allocation, and aid delivery. This working group will explore how AI can strengthen humanitarian response while critically examining challenges such as bias, data privacy, and equity. By bringing together participants from the School of Public Health and Watson, we aim to test AI on specific tasks, including mission assignment and task management, public information sharing, and locating emergency logistics. |
| BEACON: Brown's Efficient AI Compression Network | Matt LeBlanc, Physics (matt_leblanc@brown.edu) | Science has reached a computational inflection point: modern datasets are growing at a rate that is fundamentally challenging our ability to efficiently process them. Without aggressive R&D, national scientific strategic plans and major scientific collaborations forecast shortfalls in the computational and storage resources required to make discoveries with this abundance of data.
We aim to bring together researchers from diverse scientific disciplines and experiments to discuss a common need: the delivery of efficient, reduced data representations that substantially reduce data storage overheads and improve processing speeds. |
| ML for pathogen genomics | Aley Abdel-Ghaffar (aley_abdel-ghaffar@brown.edu) and Jacob Marglous (jacob_marglous@brown.edu), Center for Computational Molecular Biology | We are a diverse group of researchers at Brown broadly connected by our interest in the evolution of humans and pathogens. Our working group will explore how recent advances in AI and machine learning can transform our understanding of viral evolution and pathogenesis, specifically focusing on LASV RNA, protein structure analysis, and malaria drug- and diagnostic-resistance surveillance. We hope to leverage these complementary backgrounds to understand the current state of the art in deep learning and artificial intelligence in genomics, and to explore how these methods can be applied to solve challenges we face in our own research. |
| Causal Inference Working Group (CIWG) | Nina Joyce, Epidemiology (nina_joyce@brown.edu) | Causal inference is an inherently interdisciplinary field. The causal inference working group (CIWG) will capitalize on this by bringing together researchers who rarely share a single scientific space but whose combined expertise is necessary for addressing current and emerging public health challenges. |