Surya Ganguli

Surya Ganguli is an associate professor of applied physics at Stanford University and a senior fellow and associate director at Stanford HAI. His research spans AI, physics, and neuroscience, with a focus on understanding and improving how biological and artificial neural networks learn.

AI Is Pushing Science Beyond the Paper as Its Core Artifact

In closing remarks from an AI and science meeting, Risa Wechsler argued that AI is reshaping scientific fields unevenly, depending on their data, theory and modes of inquiry, and that scientists should use the moment to choose structures aligned with human values. Surya Ganguli pushed the question toward scientific communication itself, suggesting that papers may be too narrow an artifact for AI-assisted science and that richer institutional records of research could better transfer knowledge. Both framed AI for science as a design problem around human purposes, not just faster automation.

Stanford HAI · May 15, 2026 · 5 min read

Stanford Merges AI and Data Science Institutes Around Open Scientific Discovery

Stanford’s AI+Science Conference opened with James Landay announcing that the university is merging the Human-Centered AI Institute and Stanford Data Science into a single institute for AI and data science across Stanford. Landay, president Jonathan Levin, Surya Ganguli and Risa Wechsler framed the move around a common argument: AI is becoming a scientific instrument, but one that will require open research, domain-specific rigor, uncertainty-aware methods and human judgment about which questions matter.

Stanford HAI · May 15, 2026 · 12 min read

AI-for-Science Advances Depend on Evaluation, Not Just Generation

In a Stanford AI+Science lightning-talk session introduced by Surya Ganguli, four early-career researchers made a common case: AI-for-science is useful only when paired with rigorous evaluation. Aishwarya Mandyam, Amar Venugopal, Steven Dillmann and Alda Elfarsdóttir each treated AI systems or outputs as claims to be tested — through uncertainty estimates for clinical policies, causal checks on generated text, executable benchmarks for scientific agents, and empirical links between corporate climate language and later emissions.

Stanford HAI · May 15, 2026 · 7 min read