More Ways to Scale the Building
Figma Make
“I am creating a kiosk style app experience for a museum. It is going to run on an iPad - full screen. The museum is based in the UAE. The app will specifically focus on the history of Pearl merchants and Gulf Trade Networks. It will have a minimum of 6 screens that can be navigated around using custom navigation in the UI.”
When I introduce new tools to my students, I often describe them as additions to a creative “utility belt.” Designers today carry far more than a sketchbook and a few familiar applications. There are research tools, generative image systems, coding assistants, prototyping platforms, and an expanding set of AI systems that help explore ideas from different angles. I sometimes ask students to picture Batman’s belt. Most of those gadgets sit unused most of the time. A grappling hook is not something you normally need during an ordinary day. But when the moment comes when you need to scale a building, you are very glad it is there.
In my New Media Design course, students are currently developing an interactive kiosk application intended for a museum environment. The interface is designed for an iPad placed in a gallery or exhibition space and introduces visitors to aspects of UAE cultural heritage. The assignment asks students to research a specific subject drawn from the history, traditions, environment, or living culture of the Emirates and translate that material into a clear interactive experience that a visitor can understand immediately. The work begins with research and concept development and eventually becomes a fully interactive prototype built in Figma.
Early in the design process, before typography becomes precious and before layouts start to feel fixed, I encourage students to experiment with ways of generating structure. Designers often spend a surprising amount of time staring at a blank frame trying to decide where to begin. New tools are beginning to change that moment. Instead of starting from nothing, designers can now prompt systems to generate possible interface directions and then respond to them. The designer is still making the decisions. The difference is that the first spark does not have to come only from an empty page.
To illustrate this shift, I showed the class two tools that approach the same problem from very different directions. The first was Figma Make, which works directly inside the Figma design environment. The second was Claude Code, which generates functioning interface code from a written prompt. Both systems were given exactly the same description of a simple museum kiosk experience.
The outcomes were noticeably different.
Figma Make behaves almost like a design assistant sitting on the canvas with you. You describe the interface and the system proposes layout structures, interface components, and navigation patterns that can be refined immediately. It is particularly useful in the early ideation stage where designers want to explore multiple structural possibilities quickly. Some of the suggestions are awkward. Some are surprisingly strong. What matters most is that students can react to something visible rather than wrestling with a completely blank frame.
Claude Code
Claude Code approaches the problem from the opposite side. Instead of generating design elements in a layout tool, it produces working interface logic. From a short prompt it begins writing code that creates a functioning interface. Buttons appear. Screens change. Navigation starts to work. What appears on screen behaves more like a small application than a static design prototype.
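To make that idea concrete, here is a minimal sketch of the kind of screen-navigation logic a generated kiosk prototype might contain. The screen names and the `KioskNavigator` class are invented for illustration; they are not actual output from Claude Code.

```typescript
// Hypothetical sketch of kiosk navigation state.
// Screen names are invented; a real prototype would define its own.
type ScreenId =
  | "home"
  | "pearlDiving"
  | "merchants"
  | "tradeRoutes"
  | "timeline"
  | "gallery";

class KioskNavigator {
  // A simple history stack; the kiosk always starts on the home screen.
  private history: ScreenId[] = ["home"];

  get current(): ScreenId {
    return this.history[this.history.length - 1];
  }

  // Push a new screen, as a tap on a custom navigation button might.
  goTo(screen: ScreenId): void {
    if (screen !== this.current) {
      this.history.push(screen);
    }
  }

  // Step back one screen and return the screen now showing.
  back(): ScreenId {
    if (this.history.length > 1) {
      this.history.pop();
    }
    return this.current;
  }

  // Idle-timeout reset: museum kiosks typically return to an attract screen.
  reset(): void {
    this.history = ["home"];
  }
}
```

Even a toy sketch like this shows why the generated result behaves like an application rather than a static mock-up: the screens are states, and the navigation buttons are transitions between them.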
For students watching this unfold in real time, it is striking. They can see the system reason through the problem, generate code, check language choices, produce icons and imagery, and assemble interface components step by step. Small micro-animations begin to appear. Transitions between screens start to take shape. All of this emerges from a prompt that is intentionally minimal.
That detail matters.
This demonstration was conducted live in class, and the prompt itself was deliberately sparse. The goal was not to craft a perfect instruction but to see what the system would fill in on its own. A few keywords about a heritage subject, a rough description of a kiosk interface, and the AI begins constructing a solution. Watching that process happen in front of them helps students understand both the strengths and the limits of the technology.
What they see is not perfection. The layout decisions are sometimes questionable. The visual hierarchy may need refinement. The historical details still require careful verification. But even with those limitations, the result is a working guide. A starting point. Something tangible that designers can react to, improve, and reshape.
For students entering the field today, this changes an important part of the creative process. Designers have always had to confront the blank page. Now there are tools that help push past that moment of hesitation. They can generate structures, propose flows, and even assemble rough working interfaces in seconds.
The designer’s role does not disappear in that shift. If anything, it becomes more important. Someone still has to evaluate what works, what does not, and what respects the cultural material being presented. Someone still needs to shape the experience so that a museum visitor can understand it quickly and intuitively.
What these tools offer is momentum.
They give students a glimpse of what an AI system can produce from very little input, and they begin to understand how expectations for creative workflows are changing. Not perfect in any sense. But undeniably useful.
In other words, we no longer have to remain stuck when confronted with a blank piece of paper. There are now more ways to scale the building.