Today, we're announcing Contextual View, a new low-code component of our Google Maps AI Kit, a collection of familiar and trusted UI components from Google Maps Platform designed to bring your AI responses to life. Contextual View enriches your generative AI applications with trusted, interactive Google Maps information, creating more engaging, informative, and helpful experiences.
For developers building the next generation of generative AI agents, providing users with clear, useful, and engaging information is more important than ever. But text-based responses often fall short. Users experience "text wall fatigue" and lack the visual context needed to understand what places are really like or how they relate to one another.
Contextual View is a low-code, out-of-the-box solution that can be implemented with just a few lines of code. It allows a large language model (LLM), such as Gemini via Vertex AI, to dynamically surface specific Google Maps data and render key UI elements directly within your AI chat experience, all based on user queries and context.
Enable LLMs to display interactive maps and Places data directly within AI chat using a few lines of code with Contextual View, a Google Maps AI Kit component.
Build richer generative AI experiences
With the Contextual View component, you can ground your AI chat experiences in rich, real-world information from Google Maps. The component gives your AI agent tools to access Google’s trusted data and visualize it for your users. Based on user intent, the LLM can display rich Places data, interactive 2D and 3D maps with markers, and contextual information like photos and user-generated content (e.g. reviews).
Transform conversations into confident decisions
You can move beyond frustrating text walls and allow users to see what a place is really like. By providing rich, visual context and intuitive map-based interactions, you can minimize ambiguity and enable users to make faster, more confident decisions, all within the natural flow of a conversation.
For example, a travel company can use Contextual View to let its AI surface relevant visuals and information about places, helping travelers with questions about an area instead of just providing a text response.
Develop faster with more predictability
The Contextual View component helps you go from concept to production quickly with just a few lines of code, significantly reducing development and maintenance overhead. This also allows you to focus on other parts of the AI chat experience without having to worry about training or building an agent to handle maps-related conversations.
Here's how the component works: your application makes a call to the Vertex AI API, which returns a Google Maps token alongside the model's response. Once your app passes that token to the contextual widget, the widget renders the associated Maps content and displays it to the end user in the chat.
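As a rough sketch of that flow: the snippet below pulls a Maps token out of a mocked model response and builds the widget markup the chat UI would insert next to the text reply. The field names (`groundingMetadata`, `googleMapsWidgetContextToken`) and the element name (`gmp-place-contextual`) are illustrative assumptions here, not the published API; consult the documentation for the actual response shape and component tag.

```javascript
// Hypothetical sketch of the Contextual View flow. The response field
// names and the custom-element tag below are assumptions for
// illustration only, not the published API surface.

// Extract the Google Maps token from a (mocked) Vertex AI response.
function extractMapsContextToken(response) {
  const candidate = response.candidates?.[0];
  return candidate?.groundingMetadata?.googleMapsWidgetContextToken ?? null;
}

// Build the widget markup from the token so the chat UI can render it
// alongside the model's text reply.
function renderContextualWidget(token) {
  if (!token) return "";
  return `<gmp-place-contextual context-token="${token}"></gmp-place-contextual>`;
}

// Example with a mocked model response:
const mockResponse = {
  candidates: [{
    content: { parts: [{ text: "Here are some cafes near Millennium Park..." }] },
    groundingMetadata: { googleMapsWidgetContextToken: "token-abc123" },
  }],
};

const token = extractMapsContextToken(mockResponse);
console.log(renderContextualWidget(token));
// -> <gmp-place-contextual context-token="token-abc123"></gmp-place-contextual>
```

In a real application the response would come from a live Vertex AI call, and the widget element would be appended to the chat DOM rather than returned as a string.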
Showcase a variety of places for a Chicago trip with Contextual View, Grounding with Google Maps, 3D Maps, and Gemini.
Get started
Contextual View is available today as an Experimental release, accessible through the Gemini API or the Vertex AI API. To get started, check out the documentation and demo. We can't wait to see what you build!