Is AI our replacement … or a bionic arm that makes us smarter, faster, better?


Will AI replace us, or make us better?

According to AI expert Slater Victoroff, AI is a bionic arm that’s going to make us better and faster, whether we’re a doctor, lawyer, data scientist, builder, carpenter, or care aide.

In this episode of TechFirst with John Koetsier, I chat with Victoroff, the CTO and co-founder of Indico, about what work will look like in the future, how much white collar and blue collar work will get taken over by AI and robotics, and how we’re using AI for work now … and will be in the future.


Support TechFirst: Become a $SMRT stakeholder, or check out $SMRT rewards


Keep scrolling for full video, a transcript, and to subscribe to TechFirst.

Or read my story on Forbes …

AI: a bionic arm to make us better, faster, smarter?

(Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.)

Will AI replace us? Or will it just make us better?

 

Full transcript: AI, jobs, the future of work … and the black boxes in human work

(This transcript has been lightly edited for length and clarity.)

John Koetsier: What will work look like in a decade? How much white collar work can AI do, and how will people engage and interact with AI? We’re chatting with Slater Victoroff, who’s the co-founder and CTO of Indico, which does enterprise AI. Welcome, Slater!

Slater Victoroff: Thank you so much for having me. 

John Koetsier: Hope I said your company name right. 

Slater Victoroff: You know, close enough … Indico, Indico … you know, potato/potahto. [laughter]

John Koetsier: Excellent. What do you spend your days doing? 

Slater Victoroff: So I’m the CTO and co-founder at Indico, and as you’ve said, we focus primarily on this enterprise AI space. Indico is the unstructured data company, so when we talk about what I’m really doing day-to-day, it’s thinking about document automation, right? Thinking about video use cases, text, images, stuff along those lines. There are a lot of more specific things further down in the weeds on the research side, but if we start talking about that I’ll be going on for hours.

John Koetsier: And we wouldn’t want that [laughter].

Let’s talk about AI assistance for our daily jobs. I mean, all of us, whether we realize it or not, we have AI assistance right now. I do a web search ’cause I’m doing research, and guess what? AI is involved. 

Slater Victoroff: Absolutely. 

John Koetsier: If I need to add 10 numbers and quickly pull up my phone to invoke Siri or Alexa or Google or whatever, I’m using some level of AI.

What else is happening today? 

Slater Victoroff: So, I think the first thing obviously that you’ve said is that there’s a huge proliferation of AI within working systems, right?

And I think that to a certain extent, there are even certain features that we’ve just started assuming are kind of usability features when we talk about basic recommendations, when we talk about voice notes, stuff along those lines, right? And they’re little intelligent kind of features, but the sum total of them actually builds up very very quickly.

Google is an obvious example of something that powers the backbone of productivity for a huge number of knowledge workers today, and they’re also arguably the most sophisticated AI engine out there. I think one of the things that’s very interesting, if we talk specifically about automation for a second, is that the interface of AI and work is huge.

So, let’s just talk about automation because that’s an important piece.

The way that I see it, there’s sort of two competing thought processes out there in terms of how is AI going to impact work. How do we work together and what does that interface look like?

And I think one mode is what I would call the Android mode of thinking, which is that an AI worker is going to come in and sit next to me and do some portion of the work of a human worker, or something along those lines, right? And I think that stands in pretty stark opposition to what I personally prefer, which is the bionic arm notion of AI. It’s the idea that this is fundamentally a tool like any other: you can lift 10 times as much, right, you can do your work 10 times better. But it’s still fundamentally you doing the work.

And again, I think that there’s … I think that folks in both camps have their own views. I think we’ve all got our own biases that we bring to the table, but personally, I’m very much in that bionic arm camp. You know, I’m a peanut-butter-and-jelly-better-together kind of person. 

John Koetsier: And actually that might be a great way to put it with the peanut butter and the jelly, because both might be true.

Slater Victoroff: You know, it’s totally, yeah, totally fair. Time will tell, right? 

John Koetsier: Absolutely. It’s interesting actually, I mean, you mentioned Google and their — I was going to say autocorrect — their autosuggest is getting scary good. 

Slater Victoroff: Oh, yeah.

John Koetsier: I mean, like sometimes I’m writing an article and I could just tab, tab, tab. 

Slater Victoroff: Oh, absolutely. Absolutely. And actually, I don’t know if I mentioned this, but actually my original co-founder at Indico, Alec Radford, he’s the lead author of GPT which is sort of one of the really key models behind that stuff, right? 

John Koetsier: GPT-3 is pretty impressive, almost scary good. 

Slater Victoroff: There was a really cool article that actually came out today from the Allen Institute: GPT-3 is so good that untrained human reviewers cannot tell the difference between GPT-3 text and human text better than random chance. It came out at exactly random chance. They have no idea.

John Koetsier: I can’t tell you how scary that is in an era of — in a post-fact era. 

Slater Victoroff: That is the thing, right? I think that that’s the … you know, on the one hand it’s really fun. I think there’s obviously a lot of really positive implications of these kinds of technologies, but I mean, that risk is there, right?

John Koetsier: Yeah. 

Slater Victoroff: It’s very much there, and it’s not something we’ve got a great answer to, in my estimation.

John Koetsier: Let’s look into the crystal ball a little bit and maybe look out five, 10 years. How do you see knowledge work happening with an AI — with AI assistance, let me put it that way — and that’s assistance with a ‘ce’ at the end, not a ‘ts.’ It could be both.

Slater Victoroff: Ah, sure, sure, sure. 

John Koetsier: How do you see it working?

Slater Victoroff: So, it’s a good question and I’ll first give my politician non-answer, which is that it’s going to look different in every industry.

John Koetsier: Yes. 

Slater Victoroff: But more than that, I think that the most important thing, especially for folks that aren’t necessarily directly in those white collar jobs today, is to understand how little of a typical knowledge worker’s job today is actual knowledge work. Right? And I think that that’s true in the vast majority of places.

It doesn’t matter if you are a lawyer, or an accountant, or a software engineer, or a data scientist, so much of what you’re doing is the paper process on either side. You know, there’s a lot of this stuff that is in some way, shape, or form automatable, right?

Obviously a lot of that in my space has to do with unstructured data — not all of it, but there’s a lot of opportunity to really, I think, free people up to do more of that knowledge work. I think that one of the other things that’s going to become a lot more common, and we’ve already started to see this, is expertise around programming AI. And I don’t mean that everyone has to be a researcher. I don’t mean that everyone has to be a software developer, but I’m talking Excel scripts level of coding. I’m talking big fuzzy animal pictures of AI …

John Koetsier: Low code, no code. 

Slater Victoroff: Yeah. I think having that level of understanding is going to stop being optional. I think it’s going to start being very, very common, and something we feel is actually really critical to doing our jobs, even if today it feels like the furthest thing in the world from AI.
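(An editorial aside: the “Excel scripts level of coding” Victoroff describes might be as simple as a few lines that pull totals out of messy invoice text. This is a hypothetical sketch of that kind of low-code document automation, not anything Indico-specific:)

```python
import re

# Hypothetical example: extract dollar totals from unstructured invoice
# text, the sort of "Excel scripts level" automation discussed above.
invoice_lines = [
    "Invoice #1042  Total: $1,250.00",
    "Shipping and handling included",
    "Invoice #1043  Total: $310.50",
]

total_pattern = re.compile(r"Total:\s*\$([\d,]+\.\d{2})")

totals = []
for line in invoice_lines:
    match = total_pattern.search(line)
    if match:
        # Strip the thousands separator before converting to a number
        totals.append(float(match.group(1).replace(",", "")))

print(totals)       # [1250.0, 310.5]
print(sum(totals))  # 1560.5
```

Trivial, but it turns an hour of copy-paste into a second of compute, which is the point.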

John Koetsier: It’s a really interesting thing to think about, and I’ve got to double check — I confess I haven’t, and maybe you know — it’d be very interesting to see if somebody’s done a hardcore study of what percentage of a knowledge worker’s day is actually value-add work, and what percentage is finding stuff, sending stuff, extracting stuff, transforming stuff, all those other things that you could typically be automating.

Slater Victoroff: They’ve done this in really localized, specific areas, right? It’s part of why I used the example of a data scientist: there are pretty good studies there, and something like 80% of a data scientist’s day is this automatable munging in some way, shape, or form. But I think the broader answer is that we don’t know. Right? Everyone agrees the number is very, very high, and everyone’s kind of exhausted by it.
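(For a sense of what that “automatable munging” looks like in practice, here’s a minimal, hypothetical sketch of the kind of cleanup that eats a data scientist’s day: normalizing inconsistent date formats and dropping duplicate records. The records and formats are made up for illustration:)

```python
from datetime import datetime

# Hypothetical raw records with inconsistent date formats and a duplicate,
# the routine munging that studies suggest dominates a data scientist's day.
raw = [
    {"id": 1, "signup": "2021-03-01"},
    {"id": 2, "signup": "03/02/2021"},
    {"id": 1, "signup": "2021-03-01"},  # duplicate record
]

def parse_date(value):
    # Try each known format until one parses successfully
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value}")

seen = set()
clean = []
for rec in raw:
    if rec["id"] in seen:
        continue  # skip duplicate ids
    seen.add(rec["id"])
    clean.append({"id": rec["id"], "signup": parse_date(rec["signup"])})

print(clean)  # two deduplicated records with normalized dates
```

None of this is knowledge work; all of it is automatable, which is exactly Victoroff’s point.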

But one of the other things that has become clear to me on the other side is that, you know, we talk about black boxes in AI, but we don’t talk as much about black boxes in human processes … which is actually what these things are, right? They’re highly non-transparent. Take a loan approval process: we all want to believe it’s some incredibly well-oiled machine, that it’s completely objective, right? And obviously nothing could be further from the truth.

And when you look inside these organizations, what’s happening in reality is that they see, okay, loan application packet 552 showed up on Bob’s desk. Bob says it’s good to go, right? And that’s all of the information that they’ve really got.

John Koetsier: Yeah. Yeah. I do look forward to the day when I have a personal AI assistant, but I don’t feel like that’s the way things are going.

The way things are going is Apple has an AI assistant. Amazon has one. Google has one. Samsung kind of has one. Various cloud tools and other tools that I might use have AI components within them, but I don’t have ‘Ask Jeeves.’ You know, I don’t have Jarvis.

Do you see Jarvis coming in the future? 

Slater Victoroff: No. Not at all. 

John Koetsier: That is sad news. [laughing]

Slater Victoroff: I disagree, because I think the reason why not is maybe different than what you’re expecting … it’s actually not a technical limitation.

I think the problem is that what people have internalized in a lot of cases as AI … it’s more magic than technology, right?

John Koetsier: Yes.

Slater Victoroff: ‘Cause when you look at Jarvis and you think about the things that are really exciting about what Jarvis does, it’s actually nothing to do with intelligence or reasoning or anything like that. It’s the fact that he can magically read Tony Stark’s mind, right? That he can do novel research and create novel materials at the drop of a hat, right?

And you know, I think that in a lot of our conceptions of AI, if you peel back the onion a couple of layers, you find those same sorts of magical assumptions.

And so I absolutely get the desire. I mean, who doesn’t want an omnipotent genie that can solve all knowledge problems, right?

John Koetsier: Doesn’t even need to be omnipotent, just somewhat powerful. 

Slater Victoroff: You know, I think that that slope is a lot slipperier than people recognize. So that’s my view. It’s not that I think that there’s a fundamental technical limitation. It’s that I think that — and it goes back to this bionic arm analogy — I think that really purpose-built tools that are very, very effective, I think that makes a lot of sense. I think we’re going to see a lot of those, even more so than we’ve seen them already. 

John Koetsier: So I understand that it’s easier to have verticalized AI. Do you think that’s the future … at least the near term?

Slater Victoroff: I’m not even saying verticalized, right, and I actually don’t believe in that generically. I’m making a point about it being purpose-driven. Because again, what I’ll always say is that we get the pitch from our customers all the time: ‘we want you to scrape the internet and do magic.’

And again, I think people have to understand that AI is fundamentally, you know, it is a mirror. We are training it to do a particular task, and then it’s learning to repeat that task again and again and again, very quickly. There’s no magic, right? There are no new, novel thought processes happening. That’s just not what AI is, right?

AI is, in my view, best conceptualized as a mirror. And you know, it’s not a perfect mirror; it’s a mirror that’s got a hundred different pockmarks on it.

So for whatever you show it, and you show it the world in terms of data, right, it’s going to take parts and it’s going to ignore bits and it’s going to amplify other bits and come up with these tweaks and whatnot. But it’s not a thinking thing, right? It is not a human. And again, that’s not a technical limitation, it’s just that that’s just never been what AI is. And in fact, I’d argue that it’s something that only exists as a sci-fi concept, right?

And if we draw it to its logical extension (this is kind of what I think every book Isaac Asimov ever wrote is about), the concept of AI is actually not well reified. There’s no way we could put it together such that it makes sense, or even leads to a consistent world, right?
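(An aside to make the “mirror” point concrete: even a toy model only reflects the examples it is shown. Here’s a hypothetical sketch, a one-nearest-neighbor “classifier” whose feature vectors and labels are invented for illustration; it can only repeat the labels in its training data, never reason its way to new ones:)

```python
# Toy 1-nearest-neighbor "sentiment" classifier: it can only mirror
# the labeled examples it was trained on -- no novel thought involved.
train = [
    ((1.0, 0.0), "positive"),  # made-up stand-in feature vectors
    ((0.0, 1.0), "negative"),
]

def predict(x):
    # Return the label of the closest training point (squared distance)
    def dist(example):
        return sum((a - b) ** 2 for a, b in zip(x, example[0]))
    return min(train, key=dist)[1]

print(predict((0.9, 0.1)))  # positive
print(predict((0.2, 0.8)))  # negative
```

Show it a pockmarked view of the world and it will faithfully reflect the pockmarks back, which is exactly the mirror Victoroff describes.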

John Koetsier: [laughing] Oh, this is very sad. I do hope for some level of general AI at some point …

Slater Victoroff: We do have general AI, right? But general AI is not a magic genie. That’s the point that I’m making. I mean, GPT-3: that is general AI. It’s completely not purpose-driven whatsoever, right? It doesn’t care whether this is a law task, or sentiment analysis, or marketing, or medicine. It can do it all, right? So we’ve kind of solved the general part, in the sense of generic AI. Again, the problem is that we have to be able to say what we want.

John Koetsier: Is that really general AI? ‘Cause it’s general AI in a certain field, right? I mean, I can’t ask GPT-3 to design a house for me, can I?

Slater Victoroff: I mean, you actually, you probably could in specific areas. Yeah. I mean …

John Koetsier: Wow!

Slater Victoroff: … certainly there’s plenty of techniques out there that you could use to help you generate houses, even older techniques than GPT-3.

John Koetsier: Wow. 

Slater Victoroff: And actually, I would say that’s a great example. If you decide you want an AI that will help you generate houses, yeah, that’s doable. That is a thing you can do, right? But it’s distinct from a magic genie that can do random-thing-I-think-of, right?

John Koetsier: Let’s talk about that a little bit then, because we do see that AI is invading pretty much every occupation and pretty much every area, right?

Today, what, 0.1% of people can afford to hire an architect and design a home that they would love to be in, but we’re starting to see tools come out right now where you can go through a bit of a process and something can be designed. You know, accountants: there’s no reason why we can’t have AI accountants, and there are some on the market right now, I believe. Lawyers: we’ve seen some efforts to make AI lawyers.

And nobody’s going to say that you want an AI lawyer representing you in an actual court case, but there are uses for them, correct? 

Slater Victoroff: Well, you know, and I think this is where that analogy comes back, because when we’re talking about an AI lawyer, it’s important that we’re not actually talking about an AI lawyer, right? 

John Koetsier: Yes.

Slater Victoroff: What we’re talking about is a series of very, very powerful tools that allow one lawyer to serve a hundred people in a really cursory way, as opposed to three people in a really detailed way, right?

And I mean, that is powerful. Like, don’t get me wrong. And I think that’s the thing that we often miss, right? I’m not saying, hey, oh, we don’t have an AI lawyer and that’s bad. I’m saying no, no, no … we don’t have an AI lawyer in the way that we’ve thought about it in the sci-fi world. What we have, I would argue, is actually something that’s much more powerful, right? 

John Koetsier: Yeah.

Slater Victoroff: Because you think, hey, you know, I’ve got some lawyer that sort of tries to replicate what a human lawyer does, but it does a crappy job. ‘Cause AI, you know, it doesn’t matter how good it gets … if the job is mimicking a human, it’s never going to be as good as a human, right?

And you know, whatever, call it 50% of the quality, which would even be a very, very good machine — it’s not that useful compared to something that makes your one human 100 times more effective, right? And so that’s why I think that access is just so much more powerful. When you look at the robo-advisors out there, when you look at AI-powered accountants, when you look at AI-powered lawyers, I think there is that path to just radically increase access to these services that are cost-prohibitive to folks today, right?

John Koetsier: Let’s talk about that bionic arm then. Where do you see that bionic arm being most effective in the next few years? 

Slater Victoroff: So I think the cost is still very, very high. Getting a successful implementation together, a really high-efficacy, industrial-grade bionic arm for a use case, takes real effort, right? And to the point I made earlier about people needing a basic understanding of AI: many organizations have embraced that, and you’ve definitely started to see it, but it’s still very, very early days.

And I think that’s going to remain true for at least the next five years, let’s call it, right?

The expense is going to be high, but the ROI is still very, very high. I think we’ve recently gotten to the point where people can have success, people are having success. They’re being very, very public about it, and suddenly people realize, okay, there actually is a pot of gold at the end of the rainbow.

And, you know, five years ago there probably wasn’t, or at least that was still an open question, right?

John Koetsier: Mm-hmm.

Slater Victoroff: But I still think you’re going to look at larger enterprises adopting this more aggressively. I think that there’s going to be a couple of smaller companies that are willing to say this is going to be a key competency of ours and kind of forward invest and they’re going to win out.

And I think there are going to be a lot of people that sort of wake up in five years and realize that they’ve missed the boat. That’s how I kind of see things shaking out. 

John Koetsier: Yep. Being who I am and what I do — I have my own small business, worked for myself for a decade — I’m partial to those forms of AI that anybody can use. That I can log into Canva and do something cool that I wouldn’t be able to do, or I can use Pixelmator Pro and boom, I can reconstruct an image in high quality that wasn’t there before, other stuff like that. I assume that’s growing significantly as well. 

Slater Victoroff: It’s growing. But I would say that if you look across AI, consumer-facing tools are probably, by significant margins, the slowest-growing and least impactful area today, right? When I look at that space, the problem, frankly, is that the technology is incredibly expensive to use and run, and the unit economics are really hard to work out.

I mean, again, you mentioned GPT-3, right? You’re talking millions of dollars in just compute power to train this thing a single time. And then, you know, it just means that fundamentally folks are going to be a little bit more reticent to take these really big consumer risks.

Again, Google and Facebook are very obvious exceptions to that, but there are going to be things breaking through. What I’m super excited about, and I think a lot of the examples you used were actually in this domain, because I think this is going to be the big exception, is the connection with creative software. I think Adobe is in an awesome spot to take advantage of this.

Other companies too. I mean, Canva I’m sure is certainly using this in pretty impactful ways. But you know, we did DCGAN back in the day, and the stuff with neural style transfer as well, I mean, it’s incredible, right? And I think it’s a perfect example of that sort of peanut butter and jelly. 100% computer-generated, like give me static, random noise or whatever — that’s not very interesting, right? But where these things have worked their way into artists’ toolkits to give them new ways of generating art, you just see some beautiful, beautiful stuff.

John Koetsier: Yeah, absolutely. Absolutely. I know a college professor who has a generative art design program and shares some of that on social from time to time. Very, very cool stuff. Let’s talk about jobs. That’s always the worry. There’s increased automation, we know that, physical automation in manufacturing which uses AI in a lot of cases, obviously, as well as increased automation in white collar or knowledge worker jobs.

Are we going to decrease the number of available jobs for humans? Are we going to increase the amount of work that we can do? 

Slater Victoroff: Yeah. And you know, again, I’m in the second camp here, right? I think the arc of history on this one is very, very clear: generally what you see with automation is an increase in the amount of work that you’re able to do. I think people also make an assumption here that’s actually turning out not to be the case, and you used manufacturing as an example.

Automating manufacturing is actually a lot harder than some of the more traditional software automation that we’re seeing, right? And so it’s actually turned out that a lot of the lowest-level jobs are some of the most difficult to go after.

It’s kind of raised this question of what is it, really, that’s special about human cognition. And actually, where you see a lot of success is in helping the higher end of the spectrum, you know, the lawyers and folks like that.

John Koetsier: Yeah, awesome. I think that’s great. I want to thank you for taking the time. 

Slater Victoroff: Thanks. I’m sorry about the sound and the video and whatnot, but hopefully it was still okay.

John Koetsier: I literally could not tell that there was anything else going on; the audio is perfectly fine, so don’t even worry about it. And I kind of find the image fading in and out a little bit artistic, [crosstalk] maybe it’s the revenge of AI.

Slater Victoroff: That’s degenerative art, right? [laughter]

John Koetsier: Excellent. Have a great day. 

Slater Victoroff: Yeah, thank you so much. Thanks for taking the time. 

John Koetsier: You bet.

Interested in AI and the future of work? So am I

Made it all the way down here? Who are you?!? 🙂

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe on your podcast platform of choice:

 


Want weekly updates? Of course you do …