Pages and components

A Houston app is built out of pages. Each page is a list of components plus a small amount of text that tells the AI what the page is for.

The atomic unit is a page

Every Houston app is a set of pages. Each page does one job.

The client detail page shows a client. The invoices page shows invoices. The dashboard shows a summary.

A page is how a user thinks about part of the app. It's also how the AI thinks about it.

What a page actually is

A page in Houston is three files:

pages/client-detail/
  page.json        # the list of components on the page
  prompt.seed.md   # what the page is for (you write this)
  prompt.local.md  # rules the user has taught (starts empty)

That's it. No routing framework. No template syntax. Just JSON and markdown.
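The seed prompt is ordinary markdown. As a purely hypothetical example (not from a real Houston app), a seed for the client detail page might be only a few lines:

```markdown
This page shows a single client in detail: their profile card and a
table of their invoices. Help the user review this client's invoices
and answer questions about their status.
```

That short statement of intent is all the page-level prose you write; the learned rules in prompt.local.md accumulate on their own.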

The page.json lists components and what data they're bound to. It looks like this:

{
  "components": [
    { "type": "Header", "props": { "title": "Client" } },
    { "type": "Card",   "bind": "clients/{id}.json" },
    { "type": "Table",  "bind": "invoices/",
      "filter": "client = {id}",
      "columns": ["number", "amount", "status"] }
  ]
}

The page renderer reads the file, mounts the components, and shows them. You don't write routes. You don't write a layout. You list what you want, and the framework renders it.
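To make that concrete, here is an illustrative sketch of the renderer's job — not Houston's actual implementation, and the names (`planPage`, `resolveBind`) are invented for this example. It reads a page.json, substitutes route parameters like `{id}` into bindings, and produces a list of components to mount:

```typescript
// Sketch only: turn a declarative page.json into a concrete mount plan.
// Houston's real renderer does this internally; this shows the shape.

type ComponentSpec = {
  type: string;
  props?: Record<string, unknown>;
  bind?: string;
  filter?: string;
  columns?: string[];
};

type PageJson = { components: ComponentSpec[] };

// Substitute route params like {id} into a binding path.
function resolveBind(bind: string, params: Record<string, string>): string {
  return bind.replace(/\{(\w+)\}/g, (_, key) => params[key] ?? `{${key}}`);
}

// Map each declared component to the thing the UI layer will mount.
function planPage(page: PageJson, params: Record<string, string>) {
  return page.components.map((c) => ({
    type: c.type,
    props: c.props ?? {},
    dataSource: c.bind ? resolveBind(c.bind, params) : null,
  }));
}

const page: PageJson = {
  components: [
    { type: "Header", props: { title: "Client" } },
    { type: "Card", bind: "clients/{id}.json" },
  ],
};

const mounts = planPage(page, { id: "acme" });
// mounts[1].dataSource === "clients/acme.json"
```

The point of the sketch is that the page file is data, not code: a route parameter and a binding template are enough to say what the page shows.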

Components come from a registry

You can't drop any React component onto a page. Components have to be registered.

Houston ships with a base set of primitives that covers most assistant-style apps, including the Header, Card, and Table used in the example above.

If you need something these don't cover, you build it. You write a normal React component and register it. Chapter 07 walks through exactly how.

Closed vocabulary by design

The AI can only use components that are in the registry. It can't invent a component at runtime. This is the boundary that keeps Houston apps predictable and secure.
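A minimal sketch of what that boundary looks like in code — this is not Houston's actual registry API, just an illustration of the closed-vocabulary idea. Every component type on a page resolves through one lookup, and an unregistered name fails loudly:

```typescript
// Illustrative sketch: a registry mapping component names to
// implementations. Plain functions stand in for React components here.

type Renderer = (props: Record<string, unknown>) => string;

const registry = new Map<string, Renderer>();

function registerComponent(name: string, render: Renderer): void {
  registry.set(name, render);
}

// The page renderer resolves every "type" through this lookup, so
// neither a page file nor the AI can reference a component that was
// never registered.
function resolveComponent(name: string): Renderer {
  const render = registry.get(name);
  if (!render) {
    throw new Error(`Unknown component "${name}": not in the registry`);
  }
  return render;
}

registerComponent("Header", (props) => `<h1>${props.title}</h1>`);

const html = resolveComponent("Header")({ title: "Client" });
// resolveComponent("Chart") would throw: "Chart" was never registered.
```

The enforcement point is a single function, which is what makes the vocabulary genuinely closed rather than closed by convention.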

How the AI sees the page

Here's the part that's easy to miss: the AI doesn't read your component source code to understand what's on screen.

Every registered component self-reports. When it renders, it tells Houston what it looks like and what data it's showing right now.

Houston bundles these self-reports into a structured description of the page. The AI fetches it with one tool call:

get_page("client-detail")
→ {
    structure: { /* what's on the page */ },
    intent:    "seed + learned rules"
  }

You don't write this description. You don't maintain it. You don't keep it in sync.

It's generated at render time by the components themselves. It can't drift from reality because it is reality.
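The mechanism can be sketched in a few lines. This is an assumption about the shape of the data, not Houston's internal types: each mounted component contributes a self-report, and the description `get_page` returns is just those reports bundled with the page's intent:

```typescript
// Sketch of the self-reporting idea, not Houston internals.
// The type names (SelfReport, PageDescription) are invented here.

type SelfReport = {
  type: string;                     // what the component is
  showing: Record<string, unknown>; // what data it displays right now
};

type PageDescription = {
  structure: SelfReport[];
  intent: string; // seed prompt + learned rules
};

// Bundle the reports collected at render time into one description.
function describePage(reports: SelfReport[], intent: string): PageDescription {
  return { structure: reports, intent };
}

// At render time, each component reports itself:
const reports: SelfReport[] = [
  { type: "Header", showing: { title: "Client" } },
  { type: "Card", showing: { source: "clients/acme.json" } },
];

const description = describePage(reports, "Shows one client in detail.");
// description.structure always reflects what actually rendered —
// there is no second copy of the UI to drift out of sync.
```

Because the reports are produced by the same render pass that put the components on screen, the description is a byproduct of rendering, not a document anyone maintains.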

Why this matters

In most AI apps, the LLM's knowledge of the UI is whatever the engineer stuffed into the system prompt. It goes stale fast. The UI changes, the prompt doesn't, and now the AI is talking about buttons that don't exist.

In Houston, the AI's knowledge of the UI is always the current UI. You never write it, you never update it, it's just there.

You write React components. Houston handles the rest.