The agentic enterprise of tomorrow


What does the agentic enterprise of tomorrow look like? What happens when AI can build software in hours and agents can run entire business processes?

In this episode of TechFirst, John Koetsier sits down with UiPath CEO Daniel Dines and CMO Michael Atalla to unpack one of the biggest shifts in enterprise technology: the rise of the agentic enterprise.

This episode of TechFirst is sponsored by KindBody Fitness: AI-powered fitness for all the health and none of the gym bro nonsense. Check out KindBody Fitness today.


We explore whether software is becoming disposable, why AI agents are fundamentally different from traditional automation, and what really happens to jobs as companies adopt these systems. Along the way, we dig into process orchestration, trust, judgment, and why human “taste” may become more valuable—not less—in an AI-driven world.

This is a deep, practical look at how AI is reshaping work inside real companies as they become agentic enterprises. This isn’t hype; it’s what’s actually changing right now and what’s coming next.

Transcript: the agentic enterprise of tomorrow

Daniel Dines: We are running, right now, a very interesting thought exercise: start this company again with two people, one with a business-seller profile and one with a builder, technical type of profile.

When these two people hire the third person, what would be the role?

John Koetsier: Is AI really making software disposable, and is agentic AI making people disposable? Hello and welcome to TechFirst. My name is John Koetsier. It seems like AI can make anything now, and agentic systems can do insane amounts of work.

John Koetsier: As we evolve our AI and our agents, it feels like software is getting more disposable: build, use, dispose, build something new. What’s really changing as we bring AI and agents into the enterprise? Jack Dorsey’s Block says they need half the people because of AI.

Is that true? To chat, we have the co-founder and CEO of UiPath, Daniel Dines, and CMO Michael Atalla. Welcome, guys.

Michael Atalla: Thanks for having us, John.

John Koetsier: What does an agentic enterprise look like?

Michael Atalla: Long-running processes in companies are hard.

Every business has them. These are the hard things. They’ve been contending with them for years. We now sit in front of a technology that can actually put us in a position to effect change in those processes, make them more efficient, more reliable, and deliver better outcomes at lower costs. There are all sorts of benefits to that.

So that’s what the agentic enterprise taps into, in its entirety.

John Koetsier: In a former life, I built software for a company that had a manufacturing component, and we started doing something coming out of the Toyota Production System called lean manufacturing.

What sticks with me there is that you have no fricking idea how complex a process is until you decompose it, until you spend literally a day putting sticky notes up on the wall or using some digital system to understand what all happens there. We mask complexity because we have intelligent agents, humans, working at every stage and making decisions about what to do or what not to do. But if you want to actually reduce that to rules in robotic process automation, or hand it over to an electronic agent in terms of agentic AI, that’s a complex thing.

Daniel Dines: I totally agree. This is why, for us, it’s important that we make distinctions, even in terms of terminology. There is an AI agent. It can be a task agent. That is what a human does, like taking the temperature of a patient. This is one task. I can have an agent that takes the temperature. I don’t necessarily need to have a human do this. Then there are stage agents that understand the context of a much more dynamic set of tasks that have to be done in a certain stage.

There is an even higher agent, that process-level agent. We call it a case agent. It understands all the transitions and the entire context. Of course, you’ll have humans in the loop in all stages. You can have a human at each stage, and you’ll have someone overseeing the entire process. But this is an agentic process. It’s an AI agent at the process level. You have task-level agents. You have process-level agents.

That contrasts a bit with the way the industry is moving on agentic orchestration. In the industry, agentic orchestration is more toward, “I have a swarm of agents. They are being given a goal.”

John Koetsier: Sounds so organized. Swarm.

Daniel Dines: A self-organized swarm, yeah. I am giving them a goal, like, “Help the patient. Make sure the patient gets out of the process.”

Michael Atalla: Keep the patients alive.

Daniel Dines: Keep the patient alive. I think maybe in certain instances it might work. But first of all, I don’t think the technology is ready yet, and I don’t think enterprises are ready to adopt it yet.

I think for a long time, we and AI have to have a common medium to interact and to describe what we want and what processes we want. Think about software. AI might generate binary code directly, but no human can review binary code. This is why we prefer AI to generate a high-level language that people can still review and reason about.

I think it’s going to take time until we get really convinced to trust whatever outcome comes from AI.

John Koetsier: That actually suggests something pretty interesting, because we’re rushing into the agentic enterprise. A ton of people are promising the agentic enterprise and attempting to deliver it. Maybe Block is one of them. We’ll talk about that later. I don’t know.

But that suggests that some are going to rush into this and they’re going to fall on their face, and disaster is going to ensue as things fall apart. Let’s take a step back. I asked, what does an agentic enterprise look like?

Think ahead 18 months, 24 months, 36, maybe longer. I’m not sure when you think we’ll arrive at something that we can consider the agentic enterprise. What does that look like, big picture?

Daniel Dines: I think we’ve always said that, in the big picture of the agentic enterprise, people will interact with systems mostly via text and via vocal commands.

They will mostly be reviewing exceptions and monitoring how the system works. This is going to be, I think, the level. I’d rather speak at the level of what we specialize in, which is the level of big enterprise processes.

John Koetsier: Mm-hmm. Mm-hmm.

Daniel Dines: I think the goal is always to reduce human input into a process.

Even you, coming from the lean management side, it was always kind of the same goal: reduce errors. I think it stays the same. We just have different tools at our disposal.

John Koetsier: Isn’t that fascinating, though? Isn’t that fascinating? We have to have a conversation about this. We have to have a conversation about human potential and agent AI potential, because you’re right: most of business process tries to stop variability, variation.

Daniel Dines: Yes.

John Koetsier: There’s a way. There’s a path, and that’s how you go. That works for certain things where certain steps must be followed. It doesn’t work for other things that are more dynamic, like, “Build a marketing plan for this new potential product that we’ve never had before,” right?

AI can help on both sides of this, but it’s interesting because are we just trying to curtail what the human can do? Or can we use AI and agents to expand what the human can do, because a human does it best?

Michael Atalla: AI cannot convey trust. It cannot. We talked about it.

It doesn’t have taste. It doesn’t have taste. It doesn’t have judgment in the same way. So there are aspects of what makes a human human that actually, I think, get amplified in the context of AI and in the transformation that we—

Daniel Dines: I had a very long discussion with Gemini about taste and AI, because I’m trying to write a book using LLMs, and I think the style is very bland. It’s the “yes” style. They don’t have taste.

I think both Gemini and I agree that in order to have taste, you need to have a body. You need to grow through all of these experiences to develop a personality, because LLMs, in a way, are the average of everything.

John Koetsier: Mm-hmm.

Daniel Dines: They don’t have a personality.

John Koetsier: Yeah. Mm-hmm.

Daniel Dines: Emulating some other personality is not having your own style, so it’s impossible for them to create their own style and taste.

John Koetsier: My mom was big in fashion for her career, and the biggest insult she could give was, “All your taste is in your mouth.” I’m not sure what the AI equivalent is.

Michael Atalla: The thing is, if we extrapolate this idea about taste and judgment and the things that humans are uniquely capable of doing, and you start bridging that into the enterprise, you start thinking about humans who have to navigate political dynamics in a company when they’re thinking about a decision. You think about the tension between organizations that never really have had to interact in unstructured ways.

John Koetsier: Mm-hmm.

Michael Atalla: You think about the security dynamics surrounding a company’s risk profile, which change week to week or month to month and can’t be determined by doing even a really sophisticated web search that an agent could do.

There is a need for humanity to be involved in, if you want to oversimplify it, QA-ing AI.

John Koetsier: Mm-hmm.

Michael Atalla: That is not going to go away simply because AI gets better. In fact, to some degree, I think the security and governance requirements of an AI that is being given more and more autonomous control over the way the business runs become even more important.

Now you have to get to a point where you’re finding AI tooling that you can trust yourself as a human. Only a human is going to be able to decide whether that’s trustworthy AI, because only a human has the taste and judgment to actually evaluate it that way. AI will always tell you that it can be trusted.

John Koetsier: The kind of dream around the agentic enterprise that you see from some is that you have the one-person corporation, right?

Michael Atalla: The one—

Daniel Dines: It is very possible to have a one-person corporation.

John, actually, we are running right now a very interesting thought exercise, and this is a segue to your discussion about Block and the future of the enterprise. We are running the exercise of, let’s say we start this company again with two people: one with a business-seller profile and one with a builder, technical type of profile.

When these two people hire the third person, what would be the role? What’s the fourth person, and so on? I think it’s got to be a fascinating exercise to do in many groups, in many enterprises.

John Koetsier: Mm-hmm. Mm-hmm.

Michael Atalla: We can’t start over, but to run the thought exercise of what it would look like to start over and then look at your own organizational dynamics and think about what that means to the growth of your company and the future of your business, I think it’s fascinating.

We spent the last couple of days on it; it was a topic of conversation with Daniel and our leadership team. It’s a fascinating exercise because we are all contending with the reality that AI is going to touch just about everything in some way, shape, or form.

I’m a marketing guy. What’s my job description in five years? I have to think about—

John Koetsier: Do you have taste?

Michael Atalla: I get there. I think I have taste. Judgment, questionable; taste, I’ve got.

John Koetsier: Jack Dorsey’s answer to this question was to fire half the company. We can talk about how, realistically, he overhired in the pandemic. He built two organizations rather than one because he kind of merged a few things. He threw a $60 million party for the company’s anniversary just a few months ago, bringing 8,000 people into San Francisco, and had Jay-Z there.

Michael Atalla: A judgment problem there as well.

John Koetsier: Perhaps there are other issues rather than AI.

But the question remains. When you have AI and you have this orchestration that you’re talking about, AI handling some tasks and some processes, what does that do to the people?

Michael Atalla: We should acknowledge that every role will be redefined in some way based on AI, certainly in enterprise software technology.

But it doesn’t mean that half the company goes away overnight. I think that is an impractical way to approach anything.

John Koetsier: Mm-hmm.

Michael Atalla: But I do think we have to acknowledge that every role will be transformed in some way with AI. We have to rethink what the role of a human is in every function.

John Koetsier: We’ve been looking at this top-down, like the agentic enterprise, what happens and all that. What if we look at it bottom-up, at the individual person?

We see people using AI right now. There have been a number of studies on that. Some of the interesting things we see are that people become more multidimensional. Somebody is in product and they start building. They’re a product manager and they start delivering code and features and stuff like that. Others start to do marketing, because now they can with AI, whether it’s good or not, whether it has taste or not.

People do more things. We also saw one of the latest studies that came out just last week saying, “I’m tired all the time because I’m doing so much more. I’m producing a whack load, but I feel like I have to keep on this treadmill.”

It’s an interesting thing. As individuals, how are you dealing with that?

Michael Atalla: Personally, it was actually the way you went down that path: engineers and product managers delivering code. We were in conversation earlier with Ragu, a peer of mine who runs the engineering team.

We were talking about how many product designers are checking in code. They are, in fact, incredibly enthusiastic about their ability to actually check in code right now.

John Koetsier: I suspect the developers are slightly less enthusiastic.

Michael Atalla: But some developers are productive at a level they have never been able to be before.

I think there’s an interesting moment. Daniel, you and I both spend an enormous amount of our own time digging into the technology out there, evaluating the tools, and we have different goals in using tools. I’m looking at things like image generation, video creation, and campaign generation. What are the things that I can speed up, do more rapidly, and do at higher scale? I can do A/B testing more quickly because I don’t have the weeks-long timeframe to generate new campaign assets.

The woman who leads my campaigns team might be one of the most enthusiastic adopters of AI technology right now. She’s energized by it because of the possibility of moving faster and refining plans faster.

But I do think there’s—well, I’ve had a few moments with my wife, admittedly, where I’ve had to be like, “I think I have to stop talking to Claude now.”

The conversation that we were having the other day about the evolution of the builder, the idea that there may not just be engineers, there may not be product managers, there may not be designers, there may just be builders now—and it’s not a question of volume, it’s actually a question of role. I don’t know. You riffed on that for a bit.

Daniel Dines: Absolutely. I’m trying to think about what the roles in the company will be in the future. I think it’s clearly the sellers. That’s a clear role. The builder is becoming a role. I think the critic, in a sense—

John Koetsier: Maybe that’s marketing.

Michael Atalla: I get to be the critic.

Daniel Dines: It’s important to have a critic.

Michael Atalla: Yeah, I agree. There is a different role, though. There’s a different capacity that we will build in this moment, which I think is energizing to rethink. It doesn’t happen too often that you get to do this and go through this.

John Koetsier: It’s super energizing. It’s insane to be in product management right now, because you can execute your vision. You can actually execute your vision, and then you can look at it and ask, “Does that work? Does that fit?”

I do want to talk about how, as a company starts adopting AI and starts using it more and more, it changes your technology profile. It changes your cost profile as well. Computationally, automation is cheap. It’s super fast. It’s also deterministic. It’s always going to do the same thing, as long as you set up your rules pretty well, right?

AI is computationally expensive. It’s probabilistic. It may decide to do something a little different. We’ve seen examples where a chatbot has decided to give somebody a refund it was not authorized to give. How do you solve for that?

Daniel Dines: I don’t think there is a dilemma here. We don’t solve for anything. AI is an extremely helpful tool in the designing phase.

John Koetsier: Mm-hmm.

Daniel Dines: AI is not meant to be a transactional type of technology, but it can be very good for ad hoc, research-type tasks. If I want to multiply two big numbers, I’m not going to go to an LLM.

John Koetsier: Good call.

Daniel Dines: Nobody would expect that. Even if an LLM is capable of doing this, nobody will use the tokens to do this, and maybe one out of a million times they will get a completely wrong answer, which is very possible. You’ll just have a simpler function that will multiply two numbers.

I think it’s the same here. Yes, you’ll use AI coding agents to create a lot of artifacts that are required to run your business, but those artifacts will be deterministic. They will run deterministically. You can partially involve agentic AI for judgment calls, but you have to limit these to very specific tasks.

John Koetsier: So, guardrails.

Daniel Dines: Some guardrails, yes.

Michael Atalla: Think about where the places are where non-deterministic challenges occur that are high-cost for an organization. You’re talking about cost, but I’m talking about replacing high-cost elements of a process with a thing that can do that high-cost thing at scale, over and over again, 24/7, without ever getting bored of it.

John Koetsier: Mm-hmm.

Michael Atalla: The handoffs between the person sitting at the reception desk at the hospital, using Daniel’s analogy, and the nurse who is responsible for intake in the pre-surgical ward, to the doctor—those are expensive handoffs. They’re error-prone, and they actually are non-deterministic because you don’t know what the intake is going to look like until it happens.

John Koetsier: Somebody goes the wrong way sometimes.

Michael Atalla: Those are places where AI can do wonders for a business. It can lower costs. It can create higher-scale outcomes, more reliable outcomes.

We have to acknowledge that while one out of 10 million times wrong on a multiplication problem might be true for AI, I bet it’s one out of 100 million in just a few more months. The accuracy levels are getting incredible.

That’s where AI can be selectively applied to have a real impact on business and process.

John Koetsier: Michael, let me throw this one at you. I’ve been wondering for like a month why companies—maybe yours is one example—don’t have a website LLM that understands the company, its products, its clients, what it does, and all that stuff, and I come to their website and say, “Hey, this is me. This is what I do. Here’s what I have. What do you have for me? Why should I buy it?”

I haven’t seen that yet.

Daniel Dines: That’s kind—

John Koetsier: It’s pissing me off, because I want something easy.

Daniel Dines: When I hired Michael, I think we had our first discussion like six months ago. That was one of the first things that he told me: “I would replace the entire website with just a chatbot interface.”

John Koetsier: That’s a little extreme, and I’m sure he’s just making the point.

Michael Atalla: Yeah, but you’re giving an example of what I would call a complicated handoff.

John Koetsier: A salesperson, right? There needs to be a lot of knowledge there.

Michael Atalla: One salesperson is optimistic, actually. Someone visits our website—there are actually multiple handoffs across that. They’re navigating this website, filling out a form. There’s probably a BDR involved. There might be a salesperson eventually. If I can shorten that cycle and put them in front of a seller faster, that’s incredible.

That’s an incredibly complex handoff that agentic AI can actually make more efficient and more productive. That’s a great example.

John Koetsier: The super interesting thing is that I’ll be super honest about what I am, what I do, and what I need. If the LLM can be super honest about the capabilities and what it does, and “Hey, this is how it fits and here’s about how much it would cost,” it could really be interesting before you have a handoff to a human seller there as well.

It’s going to be a different world. I assume we’re going to see some websites like that in the future, and I guess yours is going to be one of them.

Michael Atalla: Sounds like I just got that assignment. Appreciate that, John. Thank you.

John Koetsier: Hey, you signed up for it.

Maybe let’s go here. You guys wanted to look a little bit into the future: what you’re developing, what you’re building, what your vision is, what this all looks like in the future. Tell me what the future looks like. Look into your crystal ball.

Daniel Dines: Let me give you more of a sense of the near future for us, and then we can go into the crystal ball for years from now.

Right now, I think coding agents are completely changing the landscape and the buy-versus-build equation for any company. If the implementation cost of software is trending toward zero, how does that affect many industries?

In our case, the number one priority for us right now is to enable our platform for the use of coding agents. I believe that all software platforms that will be built or improved will target coding agents rather than people as the ones building, designing, and writing code on those platforms.

Michael Atalla: Yeah.

Daniel Dines: What does it mean for us? It means that the time to value, from when a person makes a decision, “I want to automate this process,” to running it in production, is going to shrink to days instead of the months that we have today.

All the stages will be assisted and, hopefully, even autonomously performed by AI agents. In the case of an enterprise process, we’ll have an agent that can interview the subject matter experts of the process and can even take recordings of how they use the process. Then an agent will create a process description document that humans can review: this is the existing process. The agent will come with improvements, maybe for the entire process, and can even draw diagrams that help people reason about the process.

Once they agree, then a coding agent will take the document and create an entire solution to automate the process.

John Koetsier: Mm-hmm. Mm-hmm.

Daniel Dines: That solution can have applications for the user interface. It will have workflows that navigate from person to person and department to department. It will have RPA scripts. It will have API scripts. Creation is going to be automated. Testing is going to be fully, or kind of fully, automated. Continuous improvement will be automated. Deployment will be automated. Monitoring of the process will be automated, and even exception handling, because infrastructure changes and application changes happen without the knowledge of the process itself.

So the process will run, an exception will occur, and an agent will analyze the exception, propose a fix, and can deploy the fix.

Six months ago, this would have been crystal-ball future. Now it’s the near future.

Michael Atalla: Yeah.

Daniel Dines: We are delivering this in the next couple of months.

Michael Atalla: This is what I would—

John Koetsier: It just makes perfect sense.

Michael Atalla: It sounds like a crystal-ball story. As I’m listening to Daniel, I’m thinking in my head exactly what you just said. It’s actually insane that that’s now.

John Koetsier: It would sound insane a year ago. It would sound insane two years ago. Now it just sounds rational and normal, like why isn’t it there already?

Implementing enterprise software is a disaster: a dangerous, difficult, long, expensive process. Now you tell it what you do, it learns what you do, and you say, “Okay, I need this. I don’t need that.” It can adjust the software package.

You have a kind of interesting marriage between vibe coding and enterprise software, because vibe coding is wonderful, but let’s not pretend that it’s secure. Let’s not pretend that it’s optimized. Let’s not pretend that it does the right thing. Let’s not pretend that it doesn’t create all kinds of rabbit holes and dead ends in your code and all that stuff, which smart people can clean up.

But if they can use that base of functionality and then customize an exact solution to this particular enterprise so that they just see what they need to see and don’t see what they don’t need to see, and it already works for their processes, that’s huge.

I’m vibe coding apps right now, and I’m going to Apple’s App Store Connect and I’m like, why isn’t there a fricking LLM here to help me do this? Why do I have to go to various tabs and pages and screens and fill in bits of information and have to think? Tell me what you need. I’ll give you what you need. Walk me through it.

Every piece of software, website, or something like that will eventually have this kind of capability. You’re talking about delivering it fairly soon.

Michael Atalla: Yeah. There’s still, though, something important to note. There is an incredible amount of trust required to hand the keys to the kinds of process orchestration that Daniel is describing over to AI.

I do think companies are going to have to choose who to trust in this moment.

John Koetsier: Mm-hmm.

Michael Atalla: Who they can trust, who they already trust—companies like ours, to be very direct, companies that have spent a decade building a trusted relationship with businesses that have incredibly complex, long-running processes and highly regulated environments.

Let’s be clear: regulation is not going to transform overnight, no matter how good Anthropic’s or OpenAI’s or Google’s models get.

There is a need right now, more than ever, to have a trusted ecosystem of players who can do this in a responsible way. That’s one of the things that we’re proud to be part of, frankly.

John Koetsier: Cool. Very cool. You mentioned a crystal ball, Daniel. Did you have more in the tank? Crystal—

Michael Atalla: Ball guy? I’m not going to be the crystal-ball guy. You be.

Daniel Dines: To me, what I am jokingly telling my sales leaders is that, in an ideal future for me, people will not buy software. Agents are going to buy software. They will run very unbiased evaluations of different platforms for particular use, and they will make a very informed decision.

I think we can probably get there sooner rather than later. There’s a giant waste of money and time in today’s decision-making process.

John Koetsier: Yeah. It’s interesting, though, because our economy is built on inefficiencies, right? People have jobs because of inefficiencies.

Daniel Dines: I think that’s an amazing point, actually. I haven’t thought of this, but what happens in an economy where there are no inefficiencies? Maybe it becomes dry.

John Koetsier: I don’t know. Everything should be cheaper, yeah. But what does that look like? What does that feel like? It’s a wonderful, wacky, wild world.

Daniel Dines: Do you think there will be deflation? To me, in my mind, it can be a risk.

John Koetsier: A hundred percent.

You’ll have deflation because on the knowledge-work side, you’ve got AI getting better and better, agent systems getting better and better, and doing more and more work cheaper and cheaper. On the physical side, you’ve got robots, humanoid and others, that are driving the cost of labor down toward zero. Not zero, obviously, but toward zero.

That’s maybe the silver lining in the AI apocalypse paper that’s been going around from that research company that sent it out like two weeks ago or so. If we can make everything easier, simpler, and cheaper, yes, we kill a lot of jobs, but everything else gets cheaper. So you don’t need to work as much to earn a living. Hopefully, we can figure that out.

It takes some wisdom to do so. I’m not sure that we possess that wisdom in sufficient quantity, or that it’s spread sufficiently far across our societies and political ecosystems. But I pray and hope that it is.

Michael Atalla: Yeah, we talk about this a lot. One of the interesting things about the age of AI is that it brings a conversation about transformation—global transformation, societal transformation, economic transformation—into just about every business decision these days.

John Koetsier: Yeah.

Michael Atalla: It is a little bit different, but I think the world that we’re excited to have a seat at this table in—there are companies out there that will be along for this ride, and there are companies out there that are going to drive the transformation that’s about to happen. We certainly see ourselves as a company that’s going to drive this transformation.

We’re going to be driving it. I don’t know if I’ve said this to you directly yet, but I’ve talked about the AI train. I’ve said to a couple of people that Daniel is definitely at the front of the train. He might be driving the actual train.

John Koetsier: He’s the conductor.

Michael Atalla: Yeah. You might be the conductor of the train. We see ourselves as leading this transformation.

John Koetsier: It’s funny because trains have conductors, and so do orchestras. Orchestras have many different pieces that need to sing from the same song sheet.

I think this has been a super fascinating discussion. I think this is a great place to end. It is a wonderful, weird, wacky, scary, frightening world that we’re entering. But yeah, we’re all on the train. It’s not stopping.

Michael Atalla: That is fair. A hundred percent.

Daniel Dines: Yeah. I told my team, this uncertainty is the new normal. I don’t think we can promise anything to anyone.

We have to embrace uncertainty, and we need to be prepared to shift our strategy on a dime.

Michael Atalla: The solution for anxiety is action.

Daniel Dines: Yeah.

John Koetsier: Steve Jobs said the best way to predict the future is to invent it.

Michael Atalla: Yes.

John Koetsier: Drive the train.

Michael Atalla: Yeah, drive the train. Yeah, totally.

John Koetsier: Thank you so much for this time, guys.

Michael Atalla: It’s been fun to talk to you, John. Yeah.

Daniel Dines: Thanks, John.

Subscribe to my Substack