Generative UI in Tourism: When Interfaces Start Listening

Sandra Costa


Most digital experiences are still built around fixed layouts and predefined flows. You open them, navigate them, and try to find what you need.

Generative UI (Gen UI) changes that. It enables interfaces that adapt in real time to what users say, need, and discover, shifting the interaction from navigation to conversation.

In this article, we explore what this shift really means in practice. You’ll see how Generative UI works, how it changes the way we design digital products, and what it takes to build an experience in which the interface is no longer predefined but generated.

WayFinder, a Gen UI tourism kiosk, is a concrete example of this approach. It combines conversational AI, real-time data, and a self-evolving interface to turn a simple kiosk into a voice-driven concierge that people can interact with naturally.


From static kiosks to conversational guides

Traditional tourism kiosks are usually built around menus, buttons, and a fixed navigation tree. Visitors must understand the interface before the interface understands them. That becomes a problem when:

  • Audiences include a wide range of ages and levels of digital literacy
  • Tourism information changes constantly (events, schedules, weather)
  • Visitors arrive with vague intentions: “We like nature and food, what should we do?”

WayFinder flips this model. Instead of forcing the user to adapt to the UI, the kiosk listens first. Visitors simply walk up and talk. The system greets them, asks a few focused questions, and starts building an itinerary in real time—on the screen, in their language, and in a format they can take home.



What makes Gen UI different?

Gen UI is more than a typical AI chatbot layered onto an interface. It creates a continuous interaction in which the system doesn’t just respond but actively shapes the experience in real time.

In WayFinder, this idea shows up in three key ways:

  • The layout is not predetermined for every step of the conversation. Cards, maps, galleries, and routes appear when the AI decides they are relevant.
  • The UI behaves like a visual memory of the dialogue, turning each answer into something the visitor can see and scan at a glance.
  • No two sessions look the same, because no two conversations are the same.

Instead of designing a single, linear flow, the team designed a system of components and rules that the AI can orchestrate in real time. The result is an interface that feels less like a website and more like a visual narrative that grows with the visitor’s curiosity.

And this is not just a conceptual shift; it's how the experience works in practice.



Designing for conversation, not clicks

The first step was not technical; it was about conversation design.

The team designed the interaction model around how a real local expert would talk to a visitor: short openers, gentle probing questions, and a gradual narrowing toward a clear plan. The kiosk senses when someone approaches, initiates the dialogue with simple prompts, and quickly guides the conversation toward meaningful categories like culture, nature, gastronomy, or architecture.

At the end of each session, the AI generates a tailored itinerary that goes beyond the exact words exchanged. It adds context, recommendations, and explanations, turning a few minutes of conversation into a multi‑stop plan that actually makes sense for that visitor.

From a usability perspective, the experience is designed to be voice-first from beginning to end. There are no complex menus to navigate, no need for prior digital experience, and the flow adapts naturally, whether the visitor is decisive or simply exploring options.



Under the hood: AI integration and knowledge design

Behind this seemingly simple experience is a stack designed for real‑time, multimodal interaction.

  • A live conversational model processes audio directly, without needing intermediate text transcription.
  • A knowledge layer blends two streams: a curated RAG dataset of local information and live web search for dynamic data such as events or weather.
  • A set of tools handles actions like retrieving points of interest, fetching events, rendering maps, checking conditions, and plotting routes.
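To make the tool layer concrete, here is a minimal sketch of how a set of tools might be exposed to a conversational model. All names, schemas, and data are hypothetical illustrations, not WayFinder's actual API.

```python
# Hypothetical sketch of WayFinder-style tools exposed to a conversational
# model. Function names, arguments, and data are illustrative assumptions.

def get_points_of_interest(category: str, near: str) -> list[dict]:
    """Retrieve curated POIs from the RAG dataset (stubbed here)."""
    dataset = {
        ("nature", "Lisbon"): [{"name": "Monsanto Forest Park", "type": "park"}],
    }
    return dataset.get((category, near), [])

def fetch_events(date: str) -> list[dict]:
    """Live lookup for dynamic data such as events (stubbed here)."""
    return [{"title": "Food Market", "date": date}]

# A registry the model can call by tool name with JSON arguments.
TOOLS = {
    "get_points_of_interest": get_points_of_interest,
    "fetch_events": fetch_events,
}

def dispatch(tool_name: str, args: dict):
    """Route a model-issued tool call to the matching function."""
    return TOOLS[tool_name](**args)
```

The key design choice is that the model never touches data sources directly: it emits named tool calls, and the application decides how to execute them and what to render.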

A crucial part of the project was the knowledge curation itself. Instead of throwing raw data at the model, the team used multiple models to clean, normalise, and refine the dataset, discarding around 80% of the initial material to keep only what was accurate, structured, and truly useful. That discipline paid off: a leaner knowledge base resulted in more precise, reliable, and “local‑sounding” responses.
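The curation idea can be sketched as a simple score-and-filter pass: rate each raw record against quality criteria and keep only those that clear a threshold. The fields, weights, and threshold below are assumptions for illustration; the article only tells us that roughly 80% of the raw material was discarded.

```python
# Illustrative score-and-filter pass over raw records. Weights, fields,
# and the 0.8 threshold are assumptions, not the project's actual rules.

def quality_score(record: dict) -> float:
    """Reward records that are named, described, and geolocated."""
    score = 0.0
    if record.get("name"):
        score += 0.4
    if len(record.get("description", "")) > 40:
        score += 0.3
    if record.get("location"):
        score += 0.3
    return score

def curate(raw_records: list[dict], threshold: float = 0.8) -> list[dict]:
    """Keep only records that clear the quality threshold."""
    return [r for r in raw_records if quality_score(r) >= threshold]
```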



Gen UI: when AI drives the screen

The most distinctive layer of WayFinder is the Gen UI itself.

As the model listens and reasons, it dynamically determines when to:

  • Show a destination card with images and highlights
  • Pull up a gallery of nearby points of interest
  • Render a map with a suggested route
  • Introduce event cards or logistics details
  • Close the session and wrap everything into a final itinerary

Each of these decisions triggers tool calls that dynamically update the interface in parallel with the conversation. It is a live consequence of what has been said and what the model has inferred. The interface evolves based on the continuous state of the conversation, functioning as a visual timeline that users can follow naturally.
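The mechanics above can be sketched as a small dispatch loop: each model decision arrives as a named tool call, and the handler appends a typed component to the screen state, which acts as the visual timeline of the session. Component and tool names here are hypothetical.

```python
# Minimal sketch of Gen UI dispatch: each model decision becomes a tool
# call that appends a typed component to the screen state. Tool and
# component names are illustrative assumptions.

screen_state: list[dict] = []  # the session's visual timeline

def render(component: str, props: dict) -> None:
    """Append a UI component; a real client would re-render reactively."""
    screen_state.append({"component": component, "props": props})

def handle_tool_call(call: dict) -> None:
    """Map a model-issued tool call to a concrete UI component."""
    handlers = {
        "show_destination_card": lambda p: render("DestinationCard", p),
        "render_map": lambda p: render("RouteMap", p),
        "show_event_cards": lambda p: render("EventList", p),
    }
    handlers[call["name"]](call["arguments"])

# The model decides a route map is relevant at this point in the dialogue:
handle_tool_call({"name": "render_map", "arguments": {"stops": ["Belém", "Alfama"]}})
```

Because the screen state is append-only, earlier cards stay visible as the conversation moves on, which is what makes the interface read as a visual memory of the dialogue.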

This approach also meant that some of the most interesting UI patterns were not part of the original storyboard. They emerged as a natural extension of the project’s core idea: if the AI is leading the experience, the interface should be free to reshape itself around that leadership.



From kiosk to takeaway: closing the loop

The experience does not end at the screen. Throughout the conversation, visitors can scan a QR code that opens a short form. With just a few details, they receive their personalised itinerary by email: routes, stops, and contextual information they can use on their phone as they explore the region.

This is where the same Generative UI approach extends beyond the kiosk. During the interaction, Gen UI helps illustrate the conversation in real time, turning dialogue into visual elements that guide the user. After the interaction, that same logic is applied to generate a personalised itinerary: a dynamic, visually structured page built specifically for each visitor.

Instead of a static summary, each user receives an interface shaped by their own conversation: relevant, contextual, and ready to use.

This final step is crucial for impact. The kiosk becomes more than an on-site interaction; it becomes the starting point for a journey that continues in the visitor’s pocket, anchored in the same conversation they had minutes earlier.



Why this matters for the future of UX

WayFinder offers a glimpse of what Gen UI can mean for physical and digital experiences:

  • Interfaces that listen first, instead of forcing users to learn them
  • Knowledge bases treated as products in their own right, carefully curated rather than unthinkingly aggregated
  • Visual layers that are no longer fixed canvases, but outcomes of real‑time reasoning

For product teams, this implies a shift. It is not enough to design screens; we now design systems that can compose screens dynamically, guided by AI. It is not enough to wireframe flows; we must model conversations, contexts, and decision trees.

Gen UI does not replace UX design; it raises the bar. It asks designers, developers, and AI specialists to build together, starting from a clear problem and an even clearer ambition: to make technology feel more like a dialogue, and less like a manual.

WayFinder is one example of that ambition in action: a tourism kiosk that stopped being just a screen in a lobby and became a system that listens, responds, and evolves, turning interaction into something closer to a real conversation.

👉 Explore the full case study


FAQ

What is Generative UI?

Generative UI refers to interfaces that adapt in real time based on user input, context, and AI reasoning. Instead of fixed layouts, the interface is dynamically composed to match each interaction.

How is Generative UI different from a chatbot?

A chatbot only generates text. Generative UI goes further: it generates the interface itself. Layouts, components, and visual elements change dynamically based on the conversation.

Where can Generative UI be applied?

Generative UI can be applied to:

  • Onboarding flows
  • Internal tools with complex workflows
  • Customer support experiences
  • Multi-location or service-heavy platforms

Anywhere users face complexity, Gen UI can simplify the interaction.

Do users need to learn how to use a Generative UI?

No. That’s the point. Generative UI reduces the need to learn interfaces by adapting to natural language and user intent.

What does it take to build Generative UI?

It requires a combination of:

  • Conversational AI
  • Structured knowledge systems
  • Dynamic UI architecture

But when designed properly, it significantly reduces complexity for end users.