
Steven Feng
Steven Feng is a Stanford Computer Science PhD student and NSERC PGS-D scholar working with the Stanford AI Lab and Stanford NLP Group. He is a lead/co-instructor for Stanford CS25: Transformers United, and his research focuses on foundation models, language models, reasoning, generalization, efficiency, and cognitively inspired AI methods.
Reasoning Gains Persist When Models Learn Them During Pretraining
Shrimai Prabhumoye of Mistral AI used a Stanford CS25 seminar to argue that large-language-model pretraining is becoming less a matter of adding tokens and more a question of training strategy. Drawing on studies of curriculum ordering, early reasoning data, and reinforcement as a pretraining objective, she said base models improve when they see broad data before high-quality data, encounter reasoning traces during pretraining rather than only post-training, and are rewarded for intermediate thoughts that improve prediction.
Ultra-Scale Training Depends on Memory Sharding and Communication Overlap
Nouamane Tazi of Hugging Face used a Stanford CS25 seminar to argue that ultra-scale model training is less a question of adding GPUs than of managing memory, communication, batch size, and hardware topology. His central case is that 5D parallelism—data, tensor, pipeline, context, and expert parallelism—lets training runs span massive clusters only when each axis is chosen for a specific bottleneck. The practical rule, he said, is conservative: shard only as much as the workload requires, because every added parallelism dimension buys scale by spending communication, complexity, or both.
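The "shard only as much as the workload requires" rule can be made concrete with a back-of-envelope memory estimate. The sketch below is illustrative, not from the talk: it assumes Adam in mixed precision (roughly 16 bytes per parameter for weights, gradients, and optimizer states combined) and ZeRO-3-style sharding, where all of that state is split evenly across the data-parallel group; the function names and the 16-byte figure are assumptions for the example.

```python
import math

BYTES_PER_PARAM = 16  # assumed: bf16 weights (2) + bf16 grads (2) + fp32 Adam states and master weights (12)

def per_gpu_memory_gb(num_params: int, shard_degree: int,
                      bytes_per_param: int = BYTES_PER_PARAM) -> float:
    """Model-state memory per GPU when params, grads, and optimizer
    states are sharded evenly across `shard_degree` devices."""
    return num_params * bytes_per_param / shard_degree / 1e9

def min_shard_degree(num_params: int, gpu_mem_gb: float,
                     bytes_per_param: int = BYTES_PER_PARAM) -> int:
    """Smallest sharding degree whose per-GPU model state fits the
    budget -- the conservative choice, since higher degrees add
    communication volume without a memory need."""
    return math.ceil(num_params * bytes_per_param / (gpu_mem_gb * 1e9))

# A 70B-parameter model: unsharded model state is ~1120 GB, far beyond
# one accelerator, but a modest sharding degree already fits an 80 GB GPU.
print(per_gpu_memory_gb(70e9, shard_degree=1))   # ~1120 GB on a single GPU
print(min_shard_degree(70e9, gpu_mem_gb=80.0))   # 14
```

With these assumptions, a 70B model needs a sharding degree of at least 14 to fit 80 GB devices; picking 64 "just in case" would work too, but every extra shard adds gather/scatter traffic, which is the communication cost the rule warns about. Activation memory, which scales with batch and sequence length, is omitted here for simplicity.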