Anna Demianenko, Design Lead
15 min read

AI innovation process in design: designing use scenarios, not mockups

Summary

Discover our AI innovation process in product design: how user research and technical discovery create dynamic, personalized interfaces based on contextual user data.

Step One: User Research

The user research phase at Lazarev follows a systematic approach akin to what many design agencies and product companies employ. We begin with the standard procedures of product discovery and requirement analysis, focusing on answering several crucial questions:

  • Who is our customer?
  • What are we trying to build for them?
  • What metric are we aiming to affect?
  • What changes do we expect post-innovation?

Answering these questions establishes the foundation for successful AI innovation, ensuring we know what we are doing, why we are doing it, and for whom.

Next, we conduct thorough user research, shaping and outlining job stories, which we may then place on a customer journey map. This helps us understand how customer needs evolve over time, how customers navigate their daily routines, and which parts of their day, journey or lifecycle could be enhanced by our product.

Research deliverable example
It is crucial to note that at this stage, we do not discuss features, solutions, or any specific AI applications. Approaching innovation with a vague intent to "implement AI" without a clear user need will lead to failure. Thus, our focus here is on identifying customer problems, areas for improvement in their lives and within existing products, and how our product can influence their journey.

With a clear understanding of product requirements and customers, we conduct competitor analysis and reference research to understand the market, identify solutions to customer problems, and assess current AI solutions. This lays the groundwork for our AI innovation process.

Referencing Don Norman's pillars of product design — technology, user, and business — is crucial here. We ensure the user and business aspects are thoroughly covered before exploring the technology aspect.

Step Two: Technical Discovery & Ideation

While the first step may span from two to four weeks, the second step, technical discovery, is more concise but equally essential. This stage involves answering fundamental questions about the technical possibilities for development:

  • What is the expertise of our development team?
  • What budget is allocated for development and data acquisition?
  • Will we train a language model in-house or use an existing solution?
  • Do suitable solutions exist?
  • Do we have the data to train our own model?
  • Can we acquire or generate the necessary data?
  • What frameworks are accessible and affordable for us to use (e.g., machine learning, NLP, computer vision)?

During the technical discovery stage, we also address questions about the capabilities of natural language processing (NLP), continuous improvement of language and machine learning models, data security, and gathering user feedback on AI performance.

This is where we can finally start thinking about solutions. Together with the development team, we lock ourselves away for a few sessions of intense brainstorming to find the best way to address customer needs while staying within what is technically possible and within budget.

Design and development teams in discussion of AI integration

For instance, through thorough technical discovery and ideation, we developed this news aggregator. We identified our technical strength in collecting, analyzing, and summarizing news from multiple sources, turning a single news post into a distillation of multiple articles and discussions online that leverages diverse opinions and sources. This AI-driven solution addresses the user need for reliable, fact-checked information online, demonstrating how technical discovery aligns with user research to create impactful products.

That's how AI reinvented transparent journalism.
AI-powered news aggregator interface designed by Lazarev.
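To picture the aggregation step, here is a minimal sketch, assuming a generic summarization function rather than any specific model; the type and function names are illustrative, not the product's actual pipeline:

```typescript
// Hedged sketch of the flow described above: several source articles covering
// one story are merged into a single synthesized post, with sources kept.
interface SourceArticle {
  url: string;
  outlet: string;
  text: string;
}

interface SynthesizedPost {
  summary: string;        // the distillation of all articles on the story
  perspectives: string[]; // one line per outlet, preserving diverse opinions
  sources: string[];      // surfaced in the UI to back the story up
}

// `summarize` is assumed to wrap whatever LLM the product actually uses.
function synthesize(
  articles: SourceArticle[],
  summarize: (texts: string[]) => string
): SynthesizedPost {
  return {
    summary: summarize(articles.map((a) => a.text)),
    perspectives: articles.map((a) => `${a.outlet}: ${summarize([a.text])}`),
    sources: articles.map((a) => a.url),
  };
}
```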

For Rhea, we discovered that its language model could be tailored for financial analysis and trained on ten years of historical data that Accern already owned. This enables market analysis, trend prediction, and informed decision-making based on both historical and open-source data.

The interface not only collects relevant information, but also automatically generates charts and widgets, accelerating analysts’ work. Our team developed a strategy for the interface to generate itself from a design system based on user queries, demonstrating effective use of data and visualizations to enhance analytical processes.

Our DragonGC project has a machine learning algorithm that can analyze company profiles and generate legally accurate reports because the language model has been trained on a vast amount of data and well-crafted reports.

It offers not only intelligent search and summarization but also the generation of legally accurate report templates based on the user's company profile and peer-group examples.

Legal research assistant conversational interface

Step Three: Use Scenarios & Testing Feasibility

The next step in our project is to design possible use scenarios and user flows for the features within our interface. This approach differs significantly from the standard design process, in which designers work with a predefined layout and static features. In contrast, our method involves anticipating the needs users might encounter within the context or industry in which our product operates, and it encompasses all the potential use cases we uncovered through user interviews. By combining various contexts and using a decision tree, AI can generate multiple variations of how the interface constructs itself, utilizing a modular design system with a set of fairly simple predefined rules.

Contextual information, such as time of day, location, and previous searches, helps machine learning interpret user intent, analyze behavior, and provide better-personalized outputs.
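As a rough illustration of how such signals might be packaged, here is a minimal sketch; the field names and the toy heuristic are our assumptions, standing in for a trained model:

```typescript
// A context envelope that could travel with each query so the model can
// disambiguate intent. Not a real schema.
interface UserContext {
  localTime: string;       // e.g. "19:30"; evening queries skew toward leisure
  location?: { lat: number; lon: number };
  recentQueries: string[]; // short history helps resolve ambiguous follow-ups
}

// Toy heuristic standing in for the ML model: combine the query text with
// context signals to pick a branch of the use-scenario tree.
function interpretIntent(query: string, ctx: UserContext): string {
  const hour = parseInt(ctx.localTime.split(":")[0], 10);
  if (/eat|restaurant|dinner/i.test(query) && hour >= 18) {
    return "planning/spending-time-outside/my-location";
  }
  return "general-search";
}
```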

Then 😤

Consider a traditional search engine: it processes input based on:

  • keywords, 
  • semantics, 
  • and query expansion. 

It then delivers a rich output of links, images, and advertisements sorted by SEO metrics and keyword relevance. This method lacks an understanding of underlying user needs: it throws tons of pages at the user like an evil genie granting literal wishes.


Now 🔥

With AI, it becomes possible to understand user intention by classifying inquiries against the use cases and scenarios pre-populated in our machine learning model. By analyzing natural language, sentiment, and user context, we can classify and refine user inquiries to provide highly personalized output as a ready-made solution rather than a pile of search results.

Simplified visualization of use scenario tree of variations

In one of our projects, Pika AI, if a user types “best places to eat near me,” the system can classify this as a planning scenario. Within planning, we have various subcategories, such as hobbies or spending time outside. Under the spending-time-outside category, we might have subcategories such as traveling or staying in my location. For evening plans, we consider factors like weather, attire, place ratings, pricing, and trending spots to generate a tailored evening date planner.

Another use scenario might involve getting directions for business logistics. Here, the interface would consider the best route, weather, time of day, and destination context (leisure vs. business). This information allows us to provide users with more relevant suggestions, making the interface look entirely different from one designed for a date planner.

Hyper personalized AI search engine
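The tree of variations itself can be modeled quite simply. Below is a minimal sketch using the node names from the Pika AI example above; the structure and traversal are our illustration, not the product's implementation:

```typescript
// A use-scenario tree: leaves carry the context factors the interface should
// gather before constructing itself for that scenario.
interface ScenarioNode {
  name: string;
  children?: ScenarioNode[];
  contextFactors?: string[];
}

const scenarioTree: ScenarioNode = {
  name: "planning",
  children: [
    { name: "hobbies" },
    {
      name: "spending-time-outside",
      children: [
        { name: "traveling" },
        {
          name: "my-location",
          contextFactors: ["weather", "attire", "ratings", "pricing", "trending"],
        },
      ],
    },
  ],
};

// Walk a classified path such as
// ["planning", "spending-time-outside", "my-location"] down to its leaf.
function resolveScenario(root: ScenarioNode, path: string[]): ScenarioNode | undefined {
  let node: ScenarioNode | undefined = root;
  for (const step of path.slice(1)) {
    node = node?.children?.find((c) => c.name === step);
  }
  return node;
}
```

The leaf's context factors then decide which widgets the interface assembles, which is why a date planner and a logistics route screen can come out of the same system looking entirely different.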

For Rhea, we designed use scenarios based on tasks that our main persona performs as part of their finance research job. Each task translates into an interface element (a widget) and is also covered by the ML and conversational parts of the AI-powered interface:

  • Evaluate industry
  • Evaluate region
  • Compare peers
  • Analyze trends
  • Monitor product reviews
  • Monitor news, forums, discussions
  • Assess startup potential
  • Make informed decisions
  • Communicate findings to the team
Left: Rhea's main workflow; right: set of appropriate widgets
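One way to picture this mapping is a simple binding from each job task to a widget and the AI capability behind it. This is a hedged sketch; the component names are our assumptions, not Rhea's actual design system:

```typescript
// Each persona task resolves to a design-system widget plus the part of the
// AI-powered interface (ML analysis, conversational, or both) that covers it.
interface TaskBinding {
  task: string;
  widget: string;
  capability: "ml-analysis" | "conversational" | "both";
}

const rheaTaskBindings: TaskBinding[] = [
  { task: "Evaluate industry", widget: "IndustryOverviewCard", capability: "both" },
  { task: "Compare peers", widget: "PeerComparisonTable", capability: "ml-analysis" },
  { task: "Analyze trends", widget: "TimeSeriesChart", capability: "ml-analysis" },
  { task: "Monitor news, forums, discussions", widget: "NewsFeed", capability: "both" },
  { task: "Communicate findings to the team", widget: "ReportComposer", capability: "conversational" },
];
```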

Additionally, in V2, Rhea will have diversified “bundles” of use cases (called “Lenses” in Rhea’s branding) tailored for different jobs. Each “Lens” is equipped with a corresponding set of use cases, rules, constraints, and a design system, and is backed by an LLM built on the corresponding data and capabilities:

  • Venture Capital
  • Equity Research
  • Product Research
  • and more on the way
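To make the “Lens” idea concrete, here is a minimal configuration sketch, assuming each Lens bundles use cases, rules, and a design-system subset around a domain-tuned model; all names are hypothetical:

```typescript
interface Lens {
  name: string;
  useCases: string[];            // scenarios this Lens covers
  rules: string[];               // layout and behavior constraints
  designSystemModules: string[]; // widgets available to this Lens
  modelId: string;               // the domain-tuned LLM behind it (hypothetical id)
}

const equityResearchLens: Lens = {
  name: "Equity Research",
  useCases: ["compare-peers", "analyze-trends", "monitor-news"],
  rules: ["charts-before-tables", "max-four-widgets-per-row"],
  designSystemModules: ["TimeSeriesChart", "PeerComparisonTable", "NewsFeed"],
  modelId: "rhea-equity-v2",
};
```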

In the same way, the virtual assistants of Suits AI had different language models trained on datasets with predefined sets of rules and use cases for jobs such as:

  • Business Assistant
  • Design
  • Marketing Research
  • Sales
  • and more
AI assistant app designed before it was cool

While mapping out scenarios and ideating solutions, the development team runs a series of feasibility tests and proofs of concept to understand whether the planned scenario can be covered by AI, whether the AI provides the expected outcome, and whether it is possible to scale and launch within budget.

Step Four: Discovering challenges users may face when interacting with the AI-powered interface

At this stage, we focus on making sure that users feel comfortable, safe, and confident in using AI within our products. Our goal is to maximize the usability and usefulness of artificial intelligence for our users. Here are some common challenges we address in our products, along with specific examples of how we resolve them:

  • How can we communicate the model’s level of confidence?
  • Can we provide the output resource and reference?
  • How might we overcome the Articulation Barrier?
  • How might we make better use of context?
  • How might we minimize the risk of mismatching output?
  • How can we increase the accuracy of output?
Keep in mind: focus on the user's final goal, not on the process. Ideate ways to remove all the steps in the user's journey that can be automated.

One of the primary challenges is overcoming the articulation barrier. In our product Rhea AI, the interface helps users by asking clarifying questions using contextual clues and tags. This approach not only provides more context but also hints at advanced functionalities within the product, such as advanced search and contextual actions within a conversational interface.

Rhea clarifies context important for efficient output
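A simple way to think about this behavior: when the model's confidence in its intent guess is low, the interface offers clarifying tags instead of answering. This is a sketch under assumed thresholds, not Rhea's actual logic:

```typescript
interface IntentGuess {
  scenario: string;
  confidence: number;       // 0..1, from the classifier
  clarifyingTags: string[]; // e.g. ["this quarter", "US market", "vs. peers"]
}

type UiStep = { kind: "answer" } | { kind: "clarify"; tags: string[] };

// Below an assumed confidence threshold, surface tags that both gather context
// and hint at advanced functionality, rather than risking a wrong answer.
function nextUiStep(guess: IntentGuess, threshold = 0.7): UiStep {
  return guess.confidence >= threshold
    ? { kind: "answer" }
    : { kind: "clarify", tags: guess.clarifyingTags };
}
```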

Another challenge is showing the confidence level of the model and the reliability of the information provided. We tackle this by displaying the number of sources from which the information was gathered. This transparency ensures that users understand the trustworthiness of the information and see the actual sources utilized.

Interface displays the number of sources backing up the news

In the next AI integration example, we show snippets of documents used in chatbot replies, indicating which ones are currently held in the “memory buffer”, or context, of the conversation.

We also address the need for user control and flexibility. By allowing users to change the “memory buffer” of the AI model, we enable them to adjust the context of their conversation. This feature gives users more freedom to shape the interaction according to their needs.

The interface shows data sources and snippets and allows adjustments
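One plausible model of such a user-adjustable “memory buffer” is sketched below; the class and its eviction rule are our illustration of the idea, not the product's implementation:

```typescript
interface Snippet {
  docId: string;
  excerpt: string; // shown in the UI so users see what is "in context"
  pinned: boolean; // user-pinned snippets survive eviction
}

class MemoryBuffer {
  private snippets: Snippet[] = [];
  constructor(private capacity: number) {}

  add(snippet: Snippet): void {
    this.snippets.push(snippet);
    // Evict the oldest unpinned snippets once over capacity.
    while (this.snippets.length > this.capacity) {
      const i = this.snippets.findIndex((s) => !s.pinned);
      if (i === -1) break; // everything pinned: leave the decision to the user
      this.snippets.splice(i, 1);
    }
  }

  // Exposed to the UI so users can reshape the conversation's context directly.
  remove(docId: string): void {
    this.snippets = this.snippets.filter((s) => s.docId !== docId);
  }

  list(): readonly Snippet[] {
    return this.snippets;
  }
}
```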

Step Five: UI Design, Modular System and Usability Testing

We return to a traditional approach to interface design, but with a modern twist. This phase involves more scripting as well as prototyping the general layout of the product. We carefully plan the information architecture and conduct usability tests to ensure overall usability and address core scenarios.

The most crucial part of this step is creating a design system with a set of rules and a modular grid. The ML is trained to understand how to use widgets and populate the grid from the design system based on the current user scenario. 
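As a rough illustration of the rule set (not the trained model's actual policy), grid population can be thought of as priority-based packing of scenario widgets onto a 12-column modular grid:

```typescript
interface PlacedWidget {
  component: string;
  priority: number;    // 1 = most important for the current scenario
  columns: 4 | 6 | 12; // widths the modular grid allows in this sketch
}

// Fill rows left to right in priority order, wrapping when a row would exceed
// 12 columns. Real rules would also encode pairing and ordering constraints.
function layoutGrid(widgets: PlacedWidget[]): PlacedWidget[][] {
  const sorted = [...widgets].sort((a, b) => a.priority - b.priority);
  const rows: PlacedWidget[][] = [];
  let row: PlacedWidget[] = [];
  let used = 0;
  for (const w of sorted) {
    if (used + w.columns > 12) {
      rows.push(row);
      row = [];
      used = 0;
    }
    row.push(w);
    used += w.columns;
  }
  if (row.length > 0) rows.push(row);
  return rows;
}
```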

Pre-made interface layouts will become a thing of the past. With the power of Generative AI and Machine Learning, we can make apps rebuild themselves for each specific task.

The designer’s job is to teach the AI what the user’s needs are for each product and how widgets map to the user’s prompts in the most usable way.
Left: examples of design system elements; right: modular grid column priority breakdown

Step Six: Polishing User Onboarding, More Tests and Handoff

In the final stage, we ensure that all features are accessible and easily discoverable. We make sure users always have access to help, support, and documentation. We evaluate and test to confirm that users understand how to use the AI, are comfortable with its output, and comprehend how it works. This thorough preparation readies us for the design handoff. What follows after that is another chapter entirely.
