Understanding Design Ideation with Vision-Language Models and Video-Based Design 


Abstract

This study explores the integration of Large Language Models (LLMs) and Vision-Language Models (VLMs) into industrial design ideation. Conducted at TU Delft, the research employed brainwriting and video-based design (VBD) methodologies. The primary aim was to legitimize and validate context-injected LLMs and VLMs in supporting designers' search for inspiration through the development of an experimental framework. The study measured workload, user experience, technology acceptance, divergent thinking capabilities, and attitudes towards AI. It also provides a preliminary analysis of the results, focusing on qualitative insights.

Data were collected through surveys and interviews; eye-tracking data were also gathered but excluded from the analysis. The study found that while AI tools support ideation by generating diverse ideas and handling repetitive tasks, they need improvement in providing contextually relevant and accurate information. Designers expressed cautious optimism about AI's potential, emphasizing the need for human oversight to retain creativity and ensure context-aware assistance.

The research highlighted optimistic-leaning opinions on AI integration, while noting that current AI capabilities do not yet meet the demands of design ideation. It emphasized the necessity for AI to act as a collaborative partner, preserving the designer's critical role in the creative process.