Designing in Dialogue
How the Logic team uses semantic design tokens to build features at scale
At Logic, we’re a small team—just a handful of engineers building a complex product. We don’t have dedicated designers, design systems specialists, or a team to create pixel-perfect mockups. What we do have is a clear vision for what we want to build and a constant stream of feature ideas.
But that raises an interesting question: without a design team, how do you maintain visual consistency? How do you ensure that every new feature feels like it belongs in the same product? How do you build functional, intentional, and useful design patterns at scale when you’re moving fast?
Designing at the speed of thought
Over the past week, while refining our design token system and component library, I’ve started to notice some fun results while working with coding agents like Claude and Codex. With a set of well-defined tokens and UI components, I can now toss an ASCII layout to an AI like Claude, and it just 👋 gets it 👋.
Ok, there’s a little bit more to it than that, but not that much more.
Here’s my current workflow:
Open Claude CLI (usually Opus for specs).
Run a custom command to co-write a Product Requirements Document (PRD) that describes multiple layout variations, along with architectural considerations, data flow, error states, edge cases, etc.
Revise that doc with a new instance of Opus.
Pass the doc to a few instances of Claude and Codex to implement across different git worktrees.
In about half an hour, I’m examining working prototypes populated with real data that are semantically precise and fit into our design language with ease.
No more divs to organize. No more boxes to push around in Figma. Just a structured conversation that produces functional code built on top of a foundation of atomic components.
This creates an interesting constraint: our capacity for feature development is only limited by our ability to conceive of features, not by our ability to implement them. The bottleneck isn’t “can we build this?”—it’s “what should we build next?”
The traditional answer is to slow down, create detailed specs, hire designers, or compromise on quality. We chose a different path: invest heavily in a robust design token system that encodes our design decisions once, then leverage AI to apply those decisions consistently across every feature.
This approach transforms a constraint (no design team) into an advantage. Instead of waiting for mockups, we can have conversations about intent. Instead of debating pixel values, we reference semantic tokens. Instead of one person designing in isolation, the entire team can contribute to the product’s visual language through structured dialogue.
The result? We ship features at the speed of thought, and every feature inherits the same thoughtful design patterns. Our small team punches well above its weight class: not by working harder, but by working at the right layer of abstraction.
What This Looks Like in Practice
While designing a new sidebar navigation, I gave Claude this layout:
[logic logo](/documents)- - -[close button](:onClose)- - -[new doc button](/new)
[filter](?filter=${filter})
[search](/documents_search)
[new simplified autodoc form](:createWithAutoDoc)
[documents list](/documents)
  [[doc title]- - -[<kebab_menu/>](:onOpenKebab)]
This ASCII-esque syntax communicates everything the LLM needs to know:
Spatial relationships: Horizontal rows, vertical stacking, and indentation for hierarchy.
Component references: `<kebab_menu/>`, `filter`, and `search`.
Actions and routing: `:onClose`, `:createWithAutoDoc`, `/documents`, and `?filter=`.
Data flow: `${filter}` and title binding.
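To make that concrete, here’s a rough sketch of the kind of scaffold an agent can produce from a layout like this. The component names mirror the ASCII spec; the import path, props, and types are illustrative assumptions, not our actual code.

```tsx
// Sketch only: component names mirror the ASCII layout above; the
// import path, props, and types are assumptions, not our real API.
import {
  Sidebar, SidebarHeader, LogicLogo, CloseButton, NewDocButton,
  FilterBar, Search, AutoDocForm, DocumentsList, DocumentRow, KebabMenu,
} from "@/components"; // hypothetical barrel file

type Doc = { id: string; title: string };

export function DocumentsSidebar({
  filter,
  onClose,
  onOpenKebab,
}: {
  filter: string;
  onClose: () => void;
  onOpenKebab: (doc: Doc) => void;
}) {
  return (
    <Sidebar>
      {/* [logic logo]- - -[close button]- - -[new doc button] */}
      <SidebarHeader>
        <LogicLogo href="/documents" />
        <CloseButton onClick={onClose} />
        <NewDocButton href="/new" />
      </SidebarHeader>
      <FilterBar value={filter} /> {/* syncs to ?filter=${filter} */}
      <Search route="/documents_search" />
      <AutoDocForm /> {/* :createWithAutoDoc */}
      <DocumentsList route="/documents">
        {(doc: Doc) => (
          <DocumentRow key={doc.id}>
            {doc.title}
            <KebabMenu onOpen={() => onOpenKebab(doc)} />
          </DocumentRow>
        )}
      </DocumentsList>
    </Sidebar>
  );
}
```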
Claude understood the intent, but that was just the beginning of our conversation. From there, we probed further:
“What should happen when the filter is cleared?”
“Should the search be debounced?”
“What’s the loading state for the documents list?”
“How should keyboard navigation work?”
When I include instructions for the LLM to research and consider architectural impacts between questions, the output improves dramatically. Through this dialogue, we wrote a complete PRD covering multiple layout variations, interaction patterns, error states, and edge cases, then implemented them to test their viability.
The Secret Sauce: Tokens and Components
This workflow is only possible because we invested in building a semantically precise design token system. I learned that the quality of your design tokens (much like the quality of your platform code) directly impacts how well an LLM understands your intent.
Consider these two ways to specify a border:
Option 1: Arbitrary Value
className="border-primary-200"
Option 2: Semantic Token
className="border-subtle"
The second option is better for at least three reasons:
Human Comprehension: I know what “subtle” means within our design language.
AI Comprehension: An LLM knows when to use `border-subtle` versus `border-strong`.
Constraint-Based Development: Tokens limit the available options, which reduces LLM flights of fancy.
Now, when I casually tell an LLM to “use a subtle border with small icons on an elevated surface,” it knows exactly what that means: `border-subtle`, `icon-slot-sm`, and `bg-surface-elevated`.
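Under the hood, the semantic layer is just a thin mapping over primitives. As a minimal sketch (assuming a Tailwind setup; the CSS variables and values are placeholders, not our actual scale), it might look like this:

```ts
// tailwind.config.ts — a minimal sketch of semantic tokens as a
// mapping layer over primitives. Variables and values are placeholders.
import type { Config } from "tailwindcss";

export default {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      borderColor: {
        subtle: "var(--border-subtle)", // e.g. a light neutral
        strong: "var(--border-strong)", // e.g. a darker neutral
      },
      backgroundColor: {
        surface: "var(--surface)",
        "surface-elevated": "var(--surface-elevated)",
      },
      // Size tokens surface as w-icon-slot-sm / h-icon-slot-sm; a bare
      // `icon-slot-sm` utility would need a plugin or a component wrapper.
      width: { "icon-slot-sm": "1rem" },
      height: { "icon-slot-sm": "1rem" },
    },
  },
} satisfies Config;
```

Decide each value once, here, and every feature that references the token inherits the decision.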
Fewer Guardrails, More Constraints
Semantic design tokens enable constraint-based development rather than guardrail-heavy development.
Without semantic tokens, you’re stuck with overly specific examples in your CLAUDE.md or AGENTS.md files:
“Use exactly #ababab for borders.”
“Icons must be 16px.”
“The background should be rgb(247, 245, 242).”
With semantic tokens, you provide flexible constraints in your prompts (or bake them into your components), freeing up context to do more with less:
“Use subtle borders.”
“Use small icons.”
“Use an elevated surface.”
The LLM can operate within a well-defined design language, producing predictable results without needing to be micromanaged. The tokens act as a shared vocabulary between the two of us.
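Baking the constraints into components takes this a step further. Here’s a hedged sketch, built around a hypothetical `Surface` component that isn’t from our codebase, of what it looks like when only semantic variants are accepted:

```tsx
import type { ReactNode } from "react";

// Map semantic variants to token-backed classes. Because the prop type
// is derived from this object, anything outside the design language
// is a compile error.
const borders = {
  subtle: "border border-subtle",
  strong: "border border-strong",
  none: "",
} as const;

type SurfaceProps = {
  border?: keyof typeof borders; // "subtle" | "strong" | "none"
  elevated?: boolean;
  children: ReactNode;
};

export function Surface({ border = "subtle", elevated = false, children }: SurfaceProps) {
  // bg-surface / bg-surface-elevated are assumed token-backed classes
  const bg = elevated ? "bg-surface-elevated" : "bg-surface";
  return <div className={`${bg} ${borders[border]}`}>{children}</div>;
}

// Usage: <Surface elevated border="subtle">…</Surface>
// border="arbitrary" fails to type-check, for humans and LLMs alike.
```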
The Dialogue Layer of Abstraction
Software developers love a good argument, so why not make that the primary way we build features? This is what I mean by working at the dialogue layer of abstraction.
Traditional Workflow:
Think → Design → Specify → Implement → Debug
Dialogue Workflow:
Converse → Clarify → Implement
The conversation *is* the specification. The back-and-forth naturally surfaces edge cases, accessibility concerns, and interaction patterns that might be missed in a static mockup.
This works best with strong constraints, not weak ones. Well-defined components and semantic design tokens give the dialogue structure. Without them, you’re stuck rehashing the same basic questions: “What color should this be?” or “How much spacing?” With them, you can focus on more interesting problems, like graceful degradation, keyboard navigation flows, and mobile responsiveness.
The PRD as Executable Documentation
Because the PRD describes layouts using real component names, design tokens, and route patterns, it stays in sync with the codebase. It’s a living document, not a stale artifact.
This isn’t just about making LLMs better at generating code. It’s about finding the right layer of abstraction for collaboration in product development, no matter with whom you collaborate.
The dialogue layer works because it mirrors how we naturally think about design problems. We don’t think in hex codes and div positions; we think in components and their purpose. Semantic design tokens and well-composed components give structure to that natural language, turning our words into working software.