From Representation to Workflow: Computational Tools for Human Visual Creativity

Chenxi Liu - University of Toronto

Oct. 10, 2025, 2:30 p.m. - 3:30 p.m.

ENGMD 279

Hosted by: Paul Kry


Tools for visual creation have progressed from traditional media to digital tablets and, more recently, to text-to-image generation. These advances broaden access, yet they remain limited for artistic and stylized imagery, which differs from photorealistic content and challenges standard graphics methods. Professional use of generative systems also faces obstacles: creators need interactive workflows rather than one-click outputs, and the lack of attribution for training data has sparked protests and lawsuits. How can we design representations that capture the richness of visual creation? How can we make generative tools both accessible and accountable? In this talk, I will present research on precise visual representations, from geometric vector formats to generative model weights, and on workflow-aware systems that align with human creative practice. I will conclude with future directions toward AI systems that empower human visual creativity.

Chenxi Liu is a postdoctoral researcher in the Dynamic Graphics Project at the University of Toronto, working with Alec Jacobson. Chenxi's research focuses on computational methods for understanding and assisting visual creation, including recent work on LoRA-based style analysis, 2D neural fields, and sketch processing. Chenxi earned a Ph.D. from the University of British Columbia under the supervision of Alla Sheffer, and has interned at Adobe Research and Disney Research.