Showing, not driving.

Houston gives the AI a way to point at things on screen. When it wants the user to look at something — a row in a table, a field in a form, a whole page — it can light it up with a soft pulse.

This is new. Traditional AI apps can't do it. Their AI sits behind an API, and the UI is on the other side of a wall. The AI can describe what it did; it can't actually point.

Why this is possible in Houston

In Houston, the AI and the UI operate on the same workspace. Pages are files. Elements have stable IDs derived from data.

When the AI says "the client I just updated," it's naming something the UI already knows how to render. The framework takes that reference and draws a ring around the real element on the user's screen.
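The mechanism above can be sketched in a few lines. This is a hypothetical illustration, not Houston's actual API: the names `elementId` and `resolveReference` are invented here to show how a stable, data-derived ID lets the AI and the UI name the same element independently.

```typescript
// Hypothetical sketch — names and shapes are assumptions, not Houston's API.

type Ref = { page: string; elementId?: string };

// A stable ID is a pure function of the underlying record,
// so the AI and the UI derive the same name without coordinating.
function elementId(kind: string, recordId: string): string {
  return `${kind}-${recordId}`;
}

// "The client I just updated" resolves to a reference the UI
// already knows how to render and locate on the page.
function resolveReference(kind: string, recordId: string, page: string): Ref {
  return { page, elementId: elementId(kind, recordId) };
}

const ref = resolveReference("client", "john-doe", "clients");
// The framework would then locate the element by this ID on screen
// (e.g. via document.getElementById) and draw the pulsing ring around it.
```

Because the ID is derived from the data rather than from layout, the reference stays valid even if the element moves or the page re-renders.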

This isn't a feature bolted onto a chat bubble. It's what becomes possible once the AI and the UI stop being separated by a wall.

How it works

The AI can attach action chips to its replies. A chip is a small button that targets either a page or a specific element on a page.

The chip is an offer. The AI never takes the user anywhere without a click.

I updated John Doe's email address to the new one.

[Show me the client]  [Open clients page]

If the AI wants to point at multiple things, it just emits multiple chips. The user clicks through them at their own pace. No tour mode, no forced sequence.
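A chip could be modeled as a small tagged value that targets either a page or an element, with the reply carrying any number of them. The shapes below (`Chip`, `Reply`, `onChipClick`) are assumptions for illustration, not Houston's real types:

```typescript
// Hypothetical sketch — a chip targets a page, or an element on a page.
type Chip =
  | { kind: "page"; label: string; page: string }
  | { kind: "element"; label: string; page: string; elementId: string };

// A reply carries its text plus zero or more chips.
// Each chip is an offer: nothing happens until the user clicks it.
interface Reply {
  text: string;
  chips: Chip[];
}

const reply: Reply = {
  text: "I updated John Doe's email address to the new one.",
  chips: [
    { kind: "element", label: "Show me the client", page: "clients", elementId: "client-john-doe" },
    { kind: "page", label: "Open clients page", page: "clients" },
  ],
};

// On click, navigate to the chip's page; for element chips,
// also scroll to and pulse the target element.
function onChipClick(chip: Chip): string {
  return chip.kind === "element"
    ? `navigate:${chip.page}#${chip.elementId}`
    : `navigate:${chip.page}`;
}
```

Modeling chips as plain data rather than callbacks is what makes "multiple chips, no forced sequence" natural: the reply just lists its offers, and the user triggers them in any order.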