What is AI?
AI is the ability of a machine to perform cognitive functions that we associate with humans: perceiving, reasoning, learning, interacting with the environment, problem solving, decision-making or demonstrating creativity.
What are the different disciplines of AI?
Natural language processing → communicate successfully in a human language
Knowledge representation → store what it knows or hears
Automated reasoning → answer questions and draw new conclusions
Machine learning → adapt to new circumstances and detect patterns
Computer vision and speech recognition → perceive the world
Robotics → manipulate objects and move about
What are LLMs? How are they trained?
LLMs are transformer-based neural networks that predict the next token in a sequence based on large training corpora.
Training steps:
Pre-Training: self-supervised on massive text corpora → they learn broad language patterns
Instruction Fine-Tuning: they are then fine-tuned on smaller, curated datasets with inputs and desired responses → improves prompt following and helpfulness on user tasks.
Reinforcement Learning from Human Feedback: humans rate and rank model outputs; this trains a reward model, and the model's policy is then optimized toward the preferred behavior.
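The pre-training objective above (predict the next token) can be illustrated with a deliberately minimal sketch: a bigram model built from counts over a made-up corpus, not a transformer.

```python
from collections import Counter, defaultdict

# Toy illustration of the next-token-prediction objective:
# count bigrams in a tiny corpus, then predict the most likely next token.
corpus = "the cat sat on the mat . the cat ate".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen during 'training'."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Real LLMs replace the count table with a neural network that outputs a probability distribution over the whole vocabulary, but the training signal is the same: predict what comes next.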
What is in-context learning/prompting? What methods are there?
Fine-tuning is about optimizing the weights of a pre-trained model for a specific task by training on thousands of supervised labels.
It’s a long-term change to the model.
And it needs large datasets with many input/output pairs to train.
In-context learning/prompting is about providing instructions or examples to improve the model's results (short-term). Methods:
Few-Shot: give the model k examples/demonstrations of a task (each with context and desired outcome). The last example has the context but no outcome; the model is expected to complete it.
One-Shot: same as few-shot but with k = 1.
Zero-Shot: give the model a natural language description of the task, instead of giving it examples.
Chain-of-thought prompting: ask the model to provide step-by-step reasoning examples. Zero-Shot CoT can work as well with a simple “let’s think step by step”. CoT is especially helpful for analytical tasks.
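The few-shot and zero-shot CoT patterns above can be sketched as plain string assembly; the sentiment task and example reviews here are made up for illustration.

```python
# Few-shot prompt: k worked examples, then the query with no answer,
# which the model is expected to complete.
examples = [
    ("great movie, loved it", "positive"),
    ("boring and too long", "negative"),
]
query = "surprisingly fun"

lines = ["Classify the sentiment of each review."]
for text, label in examples:
    lines.append(f"Review: {text}\nSentiment: {label}")
lines.append(f"Review: {query}\nSentiment:")  # left open for the model
prompt = "\n\n".join(lines)

# Zero-shot CoT variant: no examples, just a reasoning trigger.
cot_prompt = f"Review: {query}\nLet's think step by step."
```

With k = 1 example the same construction is one-shot; dropping the examples entirely and keeping only the task description makes it zero-shot.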
What is a token? What is an LLM’s context window?
LLMs process text by slicing it into tokens that are mapped to numbers via a tokenizer/dictionary (e.g. o200k_harmony).
→ Tokens are therefore the basic text unit that LLMs process and can be words, parts of words, punctuation…
The context window is how many tokens the model can consider/remember at once. It usually includes the user prompts, the system messages, and prior dialogue turns.
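Both ideas can be illustrated with a toy greedy tokenizer over a hand-made vocabulary; real tokenizers (e.g. BPE encodings like o200k) learn their vocabulary from data, so everything below is an assumption for illustration only.

```python
# Toy greedy longest-match tokenizer plus context-window truncation.
# The vocabulary is hand-made: tokens can be sub-words or punctuation.
vocab = {"un": 0, "believ": 1, "able": 2, "!": 3, " ": 4}

def tokenize(text: str) -> list[int]:
    ids, i = [], 0
    while i < len(text):
        # Greedily take the longest vocabulary entry starting at i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i : i + length]
            if piece in vocab:
                ids.append(vocab[piece])
                i += length
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

ids = tokenize("unbelievable!")   # one word -> 4 tokens: un/believ/able/!
context_window = 3
kept = ids[-context_window:]      # oldest tokens fall out of the window
```

The last two lines show why long conversations "forget" their beginning: once the token count exceeds the context window, the earliest tokens are no longer visible to the model.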
What is Retrieval Augmented Generation (RAG)? What is the problem they are trying to solve?
Problem: LLMs store a lot of implicit knowledge, but they can’t expand their memory, provide insights into their predictions, and they can hallucinate.
RAG is an LLM architecture that combines a parametric-memory LLM with a non-parametric memory.
It uses the prompt/query to retrieve relevant documents from an external corpus (the non-parametric memory) and then feeds the retrieved text into the generative model.
External sources can be a company’s database or design guidelines or a wiki/intranet…
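The retrieve-then-generate flow can be sketched with simple word-overlap scoring; production systems use embeddings and vector search, and the design-guideline corpus here is invented for illustration.

```python
# Minimal RAG-style retrieval sketch: score documents by word overlap
# with the query, then splice the best match into the prompt.
corpus = [
    "Primary buttons use the brand color and 8px rounded corners.",
    "Error messages appear below the input field in red.",
    "The sidebar collapses on screens narrower than 768px.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "what color should primary buttons use"
context = retrieve(query, corpus)
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
```

Because the answer is grounded in retrieved text rather than only in the model's weights, the knowledge base can be updated without retraining, and the retrieved passage makes the prediction easier to inspect.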
What methods can be used to improve AI-supported design and code generation outcomes?
Use zero-shot when we can describe the task clearly in natural language (fast, no examples).
Use one-/few-shot when we want the model to match a format/style/pattern (provide k examples).
Use CoT for analytical tasks. Add step-by-step reasoning examples or trigger zero-shot CoT (“let’s think step by step”).
Use RAG when the model needs up-to-date, project-specific, or factual grounding (e.g. design guidelines, GUI datasets…). Be careful: more examples do not always lead to better results in GUI prototyping (the relationship may not be linear).
→ A good structure and iterations can outperform throwing more data at the model:
Use prompt decomposition (features → components → layout) for better structure/clarity.
Use self-critique loops (generate → critique → improve) to improve features, visuals or overall satisfaction and results.
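The generate → critique → improve loop can be sketched with deterministic stubs; in a real workflow the three functions below would be LLM calls, so their names, the accessibility check, and the HTML snippet are all assumptions made for illustration.

```python
# Sketch of a bounded self-critique loop: generate a draft, collect
# issues, and revise until the critique passes or attempts run out.
def generate(spec: str) -> str:
    return f"<button>{spec}</button>"           # stub "draft" output

def critique(draft: str) -> list[str]:
    issues = []
    if "aria-label" not in draft:               # toy accessibility check
        issues.append("missing aria-label")
    return issues

def improve(draft: str, issues: list[str]) -> str:
    if "missing aria-label" in issues:
        draft = draft.replace("<button>", '<button aria-label="submit">')
    return draft

draft = generate("Submit")
for _ in range(3):                              # cap the loop iterations
    issues = critique(draft)
    if not issues:
        break
    draft = improve(draft, issues)
```

The cap on iterations matters in practice: without it, a critique step that never fully passes would loop (and bill) forever.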
How can AI support and augment human creativity during the design process?
Co-creation (ideation + exploration with humans “in the loop”)
Persona generation (careful with stereotypes)
Exploring alternatives / avoiding anchoring: track design decisions and make implicit decisions visible to support exploring alternative solutions.
Prototyping (speed + iteration)
The step from requirements to mockups is traditionally slow
AI-augmented workflows enable rapid generation and faster iteration cycles.
Testing and usability tools (feedback loops + evaluation assistance)
AI-supported evaluation can provide real-time feedback and help designers see what attracts attention (heatmaps).
Style-guide support: LLM-based plugins can turn guideline checks into constructive suggestions; it can catch subtle issues but “won’t replace human judgment.”
What are the strengths and limitations of AI-generated code?
Strengths:
Productivity gains: faster iteration cycles, rapid generation from high-level input, lower entry barriers.
Quality gains: tools and plugins that integrate AI can increase output quality as well as productivity (e.g. the GUIDE Figma plugin).
Limitations/risks:
Hidden assumptions & implicit decisions: AI can introduce assumptions that must be made visible and managed.
Control/steering problems: it’s challenging to steer the AI toward the intended goals and to interpret its outputs. The ownership of design decisions is often unclear.
Quality is not guaranteed: Even with RAG more examples don’t necessarily yield better results; you often need structure + feedback loops.
Dependence on data + prompting setup: results depend on the quality and biases of the pre-training data. Demonstrations help, but behavior isn’t perfectly predictable.
What are examples of AI-based design and development tools?
GitHub Copilot
Cursor
Figma Make
What works well with AI? What not so much?
Works well:
Writing independent components, functions, or starting a new project
Creating mockups for inspiration
Doesn’t work well:
Output quality often not good enough: generated code may not be functional
Integration of AI-based code into old, big projects
What is vibe coding? What did it change in programming?
Vibe coding is a new software development method where humans collaborate with genAI to co-create software through natural language dialogue.
Changes:
Describe what, not how
Conversational feedback loops
Working code without full understanding
Requirements evolve during interaction
Co-creative flow states → human & AI “in sync”
What is intent mediation?
The process of translating human goals into representations a computer can execute.
This is the connection from traditional coding, declarative UIs, DSLs/low-code, to now prompt-based generation — each raises the abstraction level further.
What is GUIDE?
GUIDE is a requirements-driven pipeline for GUI creation that solves the problem that high-level descriptions are underspecified for direct UI generation → they need systematic decomposition.
GUIDE decomposes a high-level GUI/app description into fine-grained requirements and then generates UI artifacts stepwise, rather than “one-shot”.
What prompting strategies are there for GUI Generation?
Prompt Decomposition (PDGG): introduce intermediate representations, then generate code (GUIDE)
Retrieval-Augmented GUI Generation (RAGG): retrieve similar existing GUIs, then condition generation on retrieved examples
Self-Critique (SCGG): generate → critique → revise in a loop (explicitly foregrounded as a feedback mechanism)
Reported as the most effective of the three (it also yields insights into defect types in generated GUIs).
It can be helpful to structure prompts with context (platform, design system…), functional requirements, non-functional constraints (accessibility, consistency…), and acceptance criteria.
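A structured GUI-generation prompt following that four-part layout might look like the sketch below; every concrete value (Material Design 3, 48dp targets, the login screen) is a made-up placeholder, not a prescription.

```python
# Sketch of a structured prompt for GUI generation:
# context, functional requirements, non-functional constraints,
# and acceptance criteria as clearly labeled sections.
prompt = """\
Context: mobile app, Material Design 3, dark mode supported.
Functional requirements:
- Login screen with email and password fields
- "Forgot password" link below the form
Non-functional constraints:
- WCAG AA contrast, touch targets >= 48dp
Acceptance criteria:
- Submitting with empty fields shows inline validation errors
"""
```

Keeping the sections explicit makes drafts easier to critique in a self-critique loop, because each section gives the critic a checklist to test the generated GUI against.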
What are opportunities of vibe coding?
Lowers entry barriers by enabling natural-language intent expression
Shifts human effort from low-level implementation to problem framing and design and therefore frees cognitive resources for strategic, creative, and conceptual work
Enables rapid prototyping through dialogue-based iteration
Positions developers as orchestrators rather than sole implementers
Small teams can achieve outputs previously requiring large engineering teams
What are threats of vibe coding?
Deskilling: Reduced programming expertise through lower engagement with syntax, algorithms, and architecture → risk of shallow understanding and overreliance on AI
Lower Code Quality & Maintainability Risks: Functional correctness does not guarantee security or robustness
Responsibility Gaps: Blurred authorship between human and AI → difficulty assigning responsibility for errors or failures
Black-Box Effects: Developers may not fully understand system behavior → challenges for explainability, compliance, and verification
Strategic & Organizational Vulnerabilities: Dependence on proprietary AI tools and ecosystems
What is responsibility? What is responsible design? Why is it important?
Responsibility is the state of being responsible, answerable or accountable for something within one’s power, control or management.
E.g. parental responsibility, ethical responsibility, social responsibility, environmental responsibility…
Responsible design = responsibility + design
responsibility here is the ability to respond to the needs and challenges faced by society
design is both the process (designing) and the outcome of that process (the designed artifact)