Industry

AI Startup / Data Analytics / Business Operations

Client

Logic AI

Designing Human-Centered AI for Operational Scale

Overview

In 2024, I partnered with Logic to help define and design an MVP aligned with their mission to democratize AI — enabling people across all levels of an organization to harness generative AI through natural language, without requiring technical expertise. The product needed to feel approachable on the surface, while handling significant complexity behind the scenes. At its core, Logic aims to increase operational efficiency by leveraging what AI does best: consuming, reviewing, and acting on large volumes of information at scale. The system operates on behalf of users using predefined rules, with human oversight built in. This required designing for repeatability and reliability—ensuring that automated decisions could be trusted over time through regression testing, historical evaluations, and clearly defined input and output constraints.
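
As a rough illustration of the repeatability idea, the sketch below replays previously approved decisions against the current version of a rule and reports how often the two still agree. It is a hypothetical Python example; the names, data, and logic are assumptions for illustration, not Logic’s implementation.

```python
# Hypothetical regression check: replay historical, human-approved decisions
# against the current rule before trusting it to run unattended.
from dataclasses import dataclass
from typing import Callable


@dataclass
class HistoricalCase:
    message: str          # raw input the system handled in the past
    expected_label: str   # the outcome a human previously approved


def classify(message: str) -> str:
    """Stand-in for the AI-backed rule under test."""
    return "flagged" if "idiot" in message.lower() else "allowed"


def regression_check(rule: Callable[[str], str], cases: list[HistoricalCase]) -> float:
    """Return the share of historical cases the current rule still gets right."""
    hits = sum(rule(c.message) == c.expected_label for c in cases)
    return hits / len(cases)


cases = [
    HistoricalCase("You absolute idiot, where is my food?", "flagged"),
    HistoricalCase("Thanks, the driver was great!", "allowed"),
]
print(f"Agreement with past decisions: {regression_check(classify, cases):.0%}")
```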

Large Project Gallery Image #2

The Challenge

A key design challenge was balancing this invisible infrastructure with a generative AI interface that felt intuitive and flexible. While non-technical users interacted with the system in natural language on the front end, the underlying product needed rigorous safeguards to prevent over-reliance on automation. This meant designing a system where human judgment remained essential—reinforced through regression testing, historical evaluations, and clearly defined constraints. The result was a human-in-the-loop system where AI enhanced decision-making without obscuring accountability—supporting scale, consistency, and trust across evolving organizational workflows.
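
The case study does not detail the safeguard mechanics, but one common pattern for keeping a person in the loop is to auto-apply only confident decisions and route the rest to review. The sketch below assumes a confidence score and threshold purely for illustration.

```python
# Illustrative human-in-the-loop routing: confident decisions are applied
# automatically, uncertain ones are queued for a person to decide.
from dataclasses import dataclass


@dataclass
class Decision:
    message: str
    label: str         # e.g. "flagged" or "allowed"
    confidence: float  # model-reported confidence between 0.0 and 1.0


REVIEW_THRESHOLD = 0.85  # assumed cut-off below which a human decides


def route(decision: Decision) -> str:
    """Auto-apply confident decisions; send uncertain ones to human review."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto-applied"
    return "queued for human review"


print(route(Decision("Where is my order?", "allowed", 0.97)))
print(route(Decision("You'll regret this.", "flagged", 0.62)))
```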

How might we design AI systems that operate reliably at scale while preserving human judgment and accountability?

Defining the MVP

To move from broad ambition to a focused MVP, I facilitated a collaborative workshop with the Logic team. The goal was to clarify user needs, identify opportunities, and prioritize features that balanced feasibility with market impact. Through this process, we narrowed the initial scope to a single, high-impact use case: moderating abusive messages between delivery drivers and customers, using DoorDash as an example. This allowed the team to ground abstract ideas like schema definition, regression testing, and historical evaluations in a concrete, real-world workflow.

With insights from the workshop, I began wireframing despite limited clarity around how the system would ultimately function. Rather than waiting for perfect information, we explored two distinct directions for the document flow. These early explorations helped the team reason through tradeoffs in information architecture and interaction design, and surfaced downstream implications that might otherwise have been missed.

A simple conceptual map emerged as a key thinking tool—one that treated the system like a continuous loop, where different types of tests and evaluations needed to be incorporated as new information became available.

Large Project Gallery Image #4

Information Architecture

At the core of Logic’s product was a set of deeply interrelated concepts that needed to be understandable to non-technical users:

• Schemas defined the input and output constraints—what the system should look for and how results should be structured.
• Tests allowed users to enter sample inputs and outputs in context, validating assumptions before deployment.
• Historical evaluations reflected real-world data running against live documents, showing how the system performed over time.

Because these elements were so tightly connected, treating them as separate destinations created unnecessary cognitive overhead. I instead combined schemas, tests, and historical evaluations into a guided, step-by-step linear path. This approach provided clarity and momentum, helping users understand not just what to do, but why each step mattered.
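
To show how tightly these concepts interlock, here is a minimal, hypothetical data model in which a single rule carries its schema, its tests, and its historical evaluations, mirroring the guided path. Field names and values are illustrative assumptions rather than Logic’s actual schema language.

```python
# Hypothetical sketch of how schemas, tests, and historical evaluations
# could hang together as a single rule the guided flow walks through.
from dataclasses import dataclass, field


@dataclass
class Schema:
    inputs: list[str]         # what the system should look at, e.g. ["message_text"]
    output_labels: list[str]  # how results must be structured, e.g. ["flagged", "allowed"]


@dataclass
class TestCase:
    sample_input: str     # a sample input entered in context
    expected_output: str  # the output the user expects before deployment


@dataclass
class HistoricalEvaluation:
    period: str            # e.g. "2024-05"
    agreement_rate: float  # how often live outputs matched human judgment


@dataclass
class Rule:
    name: str
    schema: Schema
    tests: list[TestCase] = field(default_factory=list)
    evaluations: list[HistoricalEvaluation] = field(default_factory=list)


rule = Rule(
    name="driver-customer abuse moderation",
    schema=Schema(inputs=["message_text"], output_labels=["flagged", "allowed"]),
    tests=[TestCase("You absolute idiot.", "flagged")],
    evaluations=[HistoricalEvaluation("2024-05", 0.94)],
)
print(rule.name, rule.schema.output_labels)
```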

Designing for Change and Accountability

As Logic’s engineering team continued development in parallel, new requirements surfaced—most notably the need to review call logs and analytics between published changes. This introduced a new layer of complexity around iteration, accountability, and trust. In response, we designed a view-and-edit environment with version control. This allowed users to compare changes over time, evaluate performance before and after edits, and maintain confidence that AI-driven decisions were grounded in measurable outcomes—not guesswork.

A recurring insight throughout the project was that context-aware responses significantly improved relevance and trust. We explored patterns such as side panels that surfaced suggested edits versus fully integrated, in-line edits that appeared directly within the user’s workflow. Rather than forcing users to interpret AI outputs in isolation, the product surfaced suggestions where decisions were actually being made—reinforcing the idea that AI was a collaborator, not an authority.
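
The version control idea can be sketched as a small, assumed data model: each published revision of a rule keeps its measured performance, so an edit can be compared before and after. This is illustrative only, not the shipped implementation.

```python
# Assumed versioning sketch: each published revision keeps its measured
# performance so edits can be compared before and after publishing.
from dataclasses import dataclass


@dataclass
class RuleVersion:
    version: int
    rule_text: str         # the natural-language rule at this revision
    agreement_rate: float  # measured against historical evaluations


def compare(before: RuleVersion, after: RuleVersion) -> str:
    """Summarize whether an edit improved measured performance."""
    delta = after.agreement_rate - before.agreement_rate
    direction = "improved" if delta > 0 else "regressed" if delta < 0 else "held steady"
    return f"v{before.version} to v{after.version}: agreement {direction} by {abs(delta):.1%}"


v1 = RuleVersion(1, "Flag messages containing insults or threats.", 0.91)
v2 = RuleVersion(2, "Flag insults, threats, or harassment; ignore sarcasm.", 0.94)
print(compare(v1, v2))
```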

Outcome

This engagement helped define and shape Logic’s product from initial concept through a functional MVP, establishing a foundation that could scale with future use cases. Beyond the product itself, we also created a distinctive brand presence within the AI space—one that emphasized clarity, future-forward thinking, and human-centric design.

Within the limited timeframe, we:

• Defined the product vision and MVP scope
• Designed the core information architecture and interaction model
• Created a robust, extensible product experience from the ground up

More importantly, the work established a clear design philosophy for Logic: automation should amplify human judgment, not replace it, ensuring people remain accountable, informed, and in control.