AI is now every UI: generative user interfaces explained

AI is the new UI

Is AI really the new UI, or is that just another tech buzzphrase? Or … is AI actually EVERY user interface now?

In this episode of TechFirst, host John Koetsier sits down with Mark Vange, CEO & founder of Automate.ly and former CTO at Electronic Arts, to unpack what happens when interfaces stop being fixed and start being generated on the fly.

They explore:

  • Why generative AI makes it cheaper to create custom interfaces per user
  • How conversational, auditory, and adaptive experiences redefine “UI”
  • When consistency still matters (cars, safety systems, frontline work)
  • Why AI doesn’t replace workers — but radically reshapes workflows
  • Whether browsers should become AI-native or stay neutral canvases
  • The unresolved risks around AI agents, payments, and control

From hospitals using AI to speak Haitian Creole, to compliance forms that drop from hours to minutes, this conversation shows how every experience can become intelligent, contextual, and helpful.

Watch our conversation here:

Transcript: AI is now every UI

Note: this is a partially AI-generated transcript. It may not be 100% correct. Check the video for exact quotations.

John Koetsier

Is AI the new UI? And what does that even mean? Hello and welcome to TechFirst. My name is John Koetsier. We’ve heard “AI is the new UI” for a while now. It’s kind of a reference to the fact that LLMs and other AI engines have conversational interfaces. It’s not a traditional UI.

But as I wrote in Forbes a few weeks ago, the launch of Gemini 3 kind of reignited this conversation because Gemini 3 uses generative AI to invent interfaces on the fly inside its own wrapper for specific queries. So maybe AI is not just a new UI. Maybe it’s every new UI.

To unpack what this means and check if it’s even a thing, we have Mark Vange. He’s the CEO and founder of Automate.ly. I’m guessing that’s how you say it. He’s nodding, so I think that’s right. He’s a former CTO at Electronic Arts. He’s a game builder, an investor, a serial entrepreneur. He’s now hard at work destroying traditional user interfaces to let people engage via natural language intent and automation. Hello Mark, how are you doing?

Mark Vange

I’m doing great. Thanks for having me. How are you today?

John Koetsier

I am doing well. It’s my third interview of the day. I’m stacking and racking them because I’m going to be off for a little while. So I’m hoping that I’m high energy for you as well. Let’s dive in. What does it mean: AI as a new UI?

Mark Vange

You know, we’re used to having such a high upfront cost for creating UIs that it was impossible to customize UIs, right? So if you were Microsoft, you needed to create Word for a billion users. And so it’s not the Word that you need or the Word that I need, it’s the Word that all of us need.

And with that comes complexity, and with that come trade-offs, and with that come limitations that are born out of the visual language of the UI. In some cases, the audible language or other modes as well, but primarily visual.

And now that the cost of generating a new UI is very low in terms of effort or time, we suddenly have this crazy ability to generate the exactly right user interface. Maybe the term “user interface” isn’t even appropriate anymore, but the right experience for you right now in how you interact with this solution.

And I really want to stress that that goes way beyond just the visual — what we traditionally think of as the UI.

My favorite example that I did recently is: we were working with a hospital chain in South Florida, and 5% of their users are Haitian Creole speakers. And so we created a Haitian Creole speech synthesizer — a conversational agent, essentially — so that they can be greeted in their language when they contact their local hospital. And to me, that’s a UI, right? It’s no different than kind of the pixel UI that we’re thinking of from before.

But when we have the flexibility to create exactly the right user experience, we remove the distinction between the UI and the experience. It’s now the experience — and how we interact with the experience is part of it.

Traditionally, we’ve expected that from games. We’ve expected that from rides. We’ve expected that from sports. But not from our software, not from our business applications especially.

And now suddenly, because the time and kind of calendar cost is so low, we can literally create just the right experience with just the right buttons, with just the right feedback for any use and for any time in that use. They can mold — we can literally just change them on the fly, right? Adaptive UIs.

It really feels like a very exciting time because suddenly we have this much broader palette. We kind of had the hint of it in video games, but now that palette’s available to us in all of our technologies.

John Koetsier

You literally just said to me as we were prepping for this conversation that it’s quicker and easier for you to just invoke a new user interface for your dashboards than to have a standard interface for them. And that makes a ton of sense, because you might have different things you want at different times, and you’re focused on different KPIs or different pieces of information.

It’s funny — when I think of this, I have to think of Steve Jobs. He’s introducing the iPhone. He’s showing the old phones. He says, “You know, the problem isn’t in the top half. The problem is in the bottom half.” And that’s where all the interface was hardcoded in buttons, right? Plastic buttons.

And I guess I almost wonder if that’s how we’ll view traditional software someday. It’s hardcoded. It’s antique.

Mark Vange

It’s the difference between a mass-produced good and an artisan-crafted good, right? It’s that simple. And we understand that in other objects — the chair that was handmade by a craftsman versus the chair that was pumped out through a factory. Very functional, might even be more comfortable, but it’s a different overall experience, right?

John Koetsier

When is a consistent interface still a good thing? I think of driving a Tesla, for instance, and there’s not a lot of buttons for stuff. You can use voice for things. You can look at a screen while you’re driving, right?

You remember the old experience of getting in a car, and you knew how to punch the radio on and turn the volume up and other stuff like that. I’m wondering about the business equivalent of that or the daily life equivalent of that. If you’re a frontline worker and you’re entering data on a factory system, or taking orders or something like that, you might want a consistent user experience. Am I right?

Mark Vange

Absolutely. And I think it’s important to also keep in mind that there are safety concerns around this. We have conventions that are in our DNA now around red buttons meaning stop and danger, and green buttons meaning go. Those sorts of conventions are very useful and actually very important from a safety perspective.

You get into a rental car and you can’t find the start button. It’s frustrating. Yes, you’ll find it eventually. But also, when you’re skidding in an accident and need to shift into neutral, and every car has a different way to do it, that’s probably not the best convention, right?

So there are some limitations around this. There is kind of a common language of experience, particularly around exogenous events.

At the same time, we don’t need to have the hundred buttons to have the right five buttons now, even though we may always know where the hundred buttons are.

John Koetsier

Right. I always kind of judged people when they used traditional software — whether it was Word, or just a web browser, or Mac OS, or something — and they had all the chrome visible, right? All the chrome visible, and the working space was like 40% of the screen or something like that. But it was silent judgment. I was polite.

What changes about work when UIs become generative the way you’re talking about?

Mark Vange

Look, I think that comes back to what changes about work with the introduction of AI in general, right?

There’s this kind of opinion in some circles — the “AI is going to replace me” mindset — and I don’t think that’s really the right lens through which to view this.

I gave a speech to a bunch of students at ASU two weeks ago, and what I basically told them is: if you think that AI will replace you, it will, and if you think that AI will empower you, it will.

To me, the right interfaces today are ones where the AI is working with you on a common surface. And for me, this kind of helpful, cooperative AI is where a lot of the lift is going to happen.

That has to do both with the fact that it makes us more effective and with the fact that we can get to value much quicker, right? Trying to make an autonomous AI agent that is 100% right all the time is super hard, super expensive.

But we can get a lot of value out of something that’s helping us — that’s 75% of the work or 80% or 90% of the work — and leaves us to do the thing that really sets us apart, the thing that really differentiates us, right?

So for me, the right interfaces are the ones that support those kinds of workflows.

For example, we have solutions for compliance paperwork — people who maybe once in their life will fill out this application form. Some of the stuff they know: their name, their address, right? And then they come to: “Do you have a DBA?” And they’re like, “What the hell is a DBA?” Right?

So the fact that they can then just ask the page, “What is a DBA?” and the page can help them determine if this pertains to them or not — for me, that’s the right sort of interface.

And that ability doesn’t appear — or doesn’t need to be there — until you hit that, you know, until you stumble over that mushroom. And then the little gnome pops up and says, “Hey, I’m here. Let me help you with this.” Right?

And it becomes a much more effective and much more productive experience for you. That’s the kind of mindset that I always approach these AIs with right now: what does the user of this thing need today, right now, in performing this function?

John Koetsier

It strikes me as you were saying that — somebody filling out a form and it being helpful about filling out the form — that not only does AI become every UI in some sense, it makes everything an application.

You don’t have to have a dumb form anymore. You don’t have to have a dumb page anymore. You don’t have to have a dumb brand or “about me” anymore. It can be smart. It can be intelligent. And it can actually provide what somebody needs in the moment for them. Correct?

Mark Vange

Absolutely. So we have this thing that we do. There’s something called A2P 10DLC, which is a form that you fill out before you can send SMS messages by anything other than typing it on your phone. So it doesn’t matter if it’s 2FA or if it’s marketing — whatever.

And most people will only fill it out once in their life. And it’s full of technical language that pertains to SMS messaging. And so you put even well-educated people in front of this thing and they’re just like, eyes crossing, right?

What better use of AI? We’ve literally taken what was a four- to six-hour ordeal for people — for real — and turned it into a 10-minute process.

And we’ve taken what was an 8% to 20% first-time submission success rate to like a 95% first-time submission success rate, just by giving a little bit of help.

It’s you filling out your tax return versus you sitting down next to an accountant and filling out your tax return. You’re not hiring the accountant necessarily, but she can help you by just giving you a few hints here and there to kind of get through the bumps and bruises, and the “what the heck does this mean?” head-scratching moments.

That’s what we do for all sorts of processes.

John Koetsier

Love it. Everything becomes an application, and applications become extremely, extremely helpful.

I think about the browser, and that’s kind of a revolution that we’ve quietly gone through in, I don’t know — maybe you tell me when it was — but it seems like sometime in the past 15 years, the browser became the place where you spent 95% of your time when working, when accomplishing something on a traditional computer, right?

And in some way, therefore, the browser is almost like the uber interface — the wrapper, the chrome — around applications that are constantly changing, depending on what website, whether Google Docs or some online version of Photoshop or whatever, that you happen to be working on.

That’s interesting. I want you to comment on that. But also, in that scenario, does the browser need to become smart — sort of like OpenAI’s Atlas or Opera’s Neon — where the browser and the AI are one?

Mark Vange

Wow. Okay, so those are two pretty big questions, right?

I think the browser as kind of your smart window to the world has been very empowering because it creates sort of an even playing field, and it really eroded the power of the manufacturer, right?

What it has really accomplished is it’s taken the power out of Windows, it’s taken the power out of Apple, and it’s put the power more towards the creator of the application by democratizing that access.

I no longer need the permission of Microsoft to get access to your microphone or to your speaker or to your windows or whatever. And so it became much easier, much more economically advantageous for everybody to leverage that surface.

And now the business people in my company all use Chromebooks because they’re disposable. At the end of the day, if somebody drops their Chromebook in the lake — you know, not happy — but I haven’t dropped a MacBook Pro in the lake, right? So I’m happy.

For those kinds of functions, it’s almost no longer necessary to have anything but a browser.

Now, are there applications that haven’t made the jump across to that surface? Absolutely. But it is because they have so much technical debt that it’s hard for them to commit. It’s not because that surface couldn’t handle it, right? So that’s kind of the browser piece of it.

Does every browser need to be smart? I think the risk with that is that then you’re actually swinging the pendulum too far. And now instead of being beholden to Microsoft, you’re beholden to OpenAI, right? And now OpenAI is suddenly the barrier between you and the things that you want to accomplish.

So for me, I would rather maintain the separation between the browser and the tools that I elect to run in it.

But I’m in the business of creating tools. If I’m OpenAI, of course I want you to have a smart browser because as long as that smart browser asks me what the answer is, I’m in a good place, right?

I think from the perspective of all of us as consumers — leaving aside my perspective as a provider of something — it behooves all of us to empower neutral canvases as opposed to support opinionated canvases.

John Koetsier

Yeah. I think it’ll be incredibly interesting to see what Microsoft will continue to launch in terms of their onboard AI, and what Siri might eventually become if Apple ever gets their AI game on point.

One of the challenges is: whether it’s in the browser or in other apps, you’ve got sort of point solutions — something smart here, something smart there. And I kind of want something smart that knows all my stuff and can help with all my stuff. So it can see my calendar, which is going nuts here, and it can see my documents, and it can see all of it.

Of course, that is such a critical point of failure, right?

And that’s one of the challenges that OpenAI had with Atlas as well. They released it. And then they also released some e-commerce capability — like you could buy stuff from Walmart within OpenAI — and people found, shoot, this is not secure. My financial information was getting exposed.

So it’s a really challenging thing. And who controls that AI is also a big question.

There’s a lot to figure out here as we go forward in this revolution.

Mark Vange

Yeah, the AI payment space is a whole other conversation, right? So just put very simply — because I don’t know if this is our subject for discussion today. Maybe this is another call.

If my AI agent buys from your AI agent, which one of us is actually taking the responsibility for anything, right? I haven’t signed it. You haven’t authorized it. Where does that money actually go?

If I say, “Well, what the heck? I didn’t want that trip to Bora Bora,” you’re like, “Well, but your AI bought it.”

John Koetsier

It was a good deal and you needed a break, and your agent decided you needed a vacation.

Mark Vange

Yeah. Exactly, exactly right.

So I think that there’s going to be a lot of both law and case law around understanding the foundations of this.

What’s interesting is that this is the first thing in the AI era that has not been agreed to as an open standard.

If you think about everything else in AI, we’ve just been moving so fast. People just adopt whatever: MCP, sure. A2A, sure. This, sure. Responses API, sure. Just use it.

And then we hit payment, and suddenly we have Beta versus VHS, right? Now we have these two different camps of payments.

And this is the first thing that’s happened because it is, A, so valuable, but also, B, so unclear and dangerous in understanding where the controls are.

Understanding what actually aligns with the law even is not trivial and will take some time to sort out.

John Koetsier

Yeah. It is super interesting though, because like with our browsers, we’ve put our financial information in there, right? And then it can autofill largely — not 100%, but largely. And so I’m sure we’ll figure that out in some way, shape, or form.

And yeah, it’s a… totally going down a rabbit hole here, but one of my predictions in a previous call is that we’ll have headless e-commerce at some point — no interface, no UI, no UX — my agent talking to their agent, and just: hey, there’s a warehouse, there’s fulfillment, there’s logistics, and there you go, right?

So it’s a crazy world we’re moving into. It’s off-topic for the UI conversation that we…

Mark Vange

Well, if we circle back, it is and it isn’t. So for me, UI goes out beyond just the screen and the pixels on the screen, right?

For me, the conversational UI is also a UI. The auditory UI — when you’re backing up and your car starts beeping — that’s also UI, right?

UI writ large touches on all of these things because we have some expectations around how payments work and how authorization works and how money leaves our bank account. And those processes are also UI if you kind of squint a little and think in bigger terms.

So I think the challenge we’re facing is that our universe is definitely becoming more multimodal, right?

We’ve had 30, 40 years of: everything comes out of this screen in front of us. And now suddenly, I walk my dogs talking to ChatGPT because I’m working on architecture and I kind of ideate with it. That’s not a user interface that would have been practicable three years ago.

How is it not a user interface, right? But there’s really no pixels involved.

John Koetsier

True. Yeah, yeah, exactly. It’s an amazing frontier and universe that we’re going into.

And I want to thank you for taking some time to explore some of the implications of it, some of the things that we’re going to face, and some things we don’t even know we’re going to face down the road. I appreciate your time.

Mark Vange

It’s a pleasure.
