Speakers and Talk Overview:
宁雪妃, Research-Track Assistant Professor (Opening): Introduction to the NICS-EFC Lab and the EffAlg Group
袁之航, Researcher: (NeurIPS'24) DiTFastAttn: Attention Compression for Diffusion Transformer Models
赵天辰, Ph.D. Student: ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
刘恩庶, Master Student:
Distilling Autoregressive Models into Few Steps for Image Generation
Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better
Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models
傅天予, Ph.D. Student: MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression
李师尧, Ph.D. Student: (NeurIPS'24) Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study
郭立栋, Ph.D. Student: (NeurIPS'24) Rad-NeRF: Ray-decoupled Training of Neural Radiance Field