Avi Bar-Zeev has been working on the metaverse for 30 years. He’s one of the key people behind Microsoft’s HoloLens, co-founded Keyhole, which became Google Earth, helped define Second Life’s technology, and has worked with Apple on, presumably, its rumored smart glasses.
Support TechFirst: Become a $SMRT stakeholder
In this episode of TechFirst with John Koetsier, I chat with Avi about the metaverse: what it is, what it isn’t, and how AR and VR connect (if they do).
We also talk about what AR should be, about how it could fail, and the very real dangers it could represent.
We discuss the one absolute necessity of the metaverse: interoperability. We ask whether a visual metaphor for the presentation of information is simply unnecessary most of the time. We also explore the future of life in an augmented-reality-rich environment, and the privacy implications of the metaverse.
Check out the Forbes story, or watch and read right here …
Subscribe to the TechFirst podcast
Full transcript: what is the metaverse, with Avi Bar-Zeev
(This transcript has been lightly edited for length and clarity.)
John Koetsier: What is the metaverse?
It’s a super hot concept right now. Investors are all over it. Mark Zuckerberg is fully bought in. Essentially, on one level, it’s a battle between tech giants to own the dominant computing paradigm of the future. And maybe for all the rest of us, it’s a battle about defining how we experience information, entertainment, socialization, work … and, in some versions of the metaverse, virtually all of life itself.
But what is it? And what will it be or what could it be? And, crucially, what is it not?
Today we’re chatting with Avi Bar-Zeev, who helped build HoloLens for Microsoft; worked on AR Cloud tech for Bing; also co-founded Keyhole, which became Google Earth — very cool tech; helped define Second Life’s technology; and may have — and we’ll ask, but maybe he can’t say anything — may have worked with Apple on its rumored smart glasses. He now runs RealityPrime, his consulting company.
Avi Bar-Zeev: Hi, thanks for having me.
John Koetsier: Hey, it is a real pleasure to have you here. Let’s start right in the middle of it. What is the metaverse?
Avi Bar-Zeev: Well, you know, it kind of depends on who you ask right now. There’s a whole bunch of definitions. So to be short and complete, let’s say there’s maybe three different definitions, right?
Metaverse as escape
There’s the original one, and every article always mentions Neal Stephenson’s Snow Crash. That one I would describe as a place to escape dystopian reality, to go become someone else, anonymous. In fact, using your real name would probably get you killed in real life, and I think it’s the same in Ready Player One as well.
John Koetsier: Hundred percent.
Avi Bar-Zeev: You really want to have it be a completely separate life from reality. And strangely, in both there was a little bit of an AR component, but nobody ever used it. I guess reality sucked so much that why even care about AR when you’re escaping to VR all the time?
So that’s definition one, that’s the sort of the original one.
Metaverse as a virtual universe
And then there’s a newer one that has come up, which is a sense of take all the information and all the virtual worlds out there and stitch them together into some virtual universe that has galaxies and planets — at least metaphorically, if not literally — and people are working on this and trying to solve the ability to move between worlds, like existing worlds, you know, like Roblox or Fortnite, or wherever else you want to go. So you want to be able to move your avatars and your loot between them.
But then for some reason, this particular definition also took on the idea that it overlays also on top of the real world, as if we’re going to want all these fantasies to spill out into reality, which is really the only part that I have an issue with, and maybe I’ll talk about that later, but there’s a lot of things that could go wrong with that AR part … and so I have some concerns.
Metaverse as the internet 2.0 … err … 3.0
And then there’s a whole other definition, which is the really broad one: it’s just the future internet. It’s like when ‘cyberspace’ was coined: it was a virtual world, but it really just became the internet, and most of it was just 2D. Most of it didn’t have people; it had pictures of people, but no people.
And maybe it’s more decentralized now. It’s more like what people are talking about with Web 3. Maybe it’s Web 3D, right, but it’s more along those lines. And I think what we still have to solve is how does it become private? How does it become a safe place, unlike the web today, where your data is scraped all the time and you’re monetized all the time. How does this version become something that’s private?
And I would argue that maybe we don’t even want to call that the ‘metaverse’ anymore because there’s so much baggage with the other ones, that maybe we want to come up with a new name for this, because it is new and it’s more like what evolves naturally anyway. And yeah, it’s been here for a while, but it keeps getting better. We keep solving problems — hopefully we don’t keep recreating the same problems — and you know, 10 years from now, it’s something that’ll just be, it’ll just exist around us.
We may not even have a word for it, just like we don’t always refer to the ‘web’ as a thing … it’s just what we do. You know, I’m going to go to a site and do something, I’m going to go talk to somebody. You don’t always talk about the technology that got you there, right?
John Koetsier: Absolutely. The fish doesn’t know it’s in water. We’re in an atmosphere and we don’t even think about it.
I want to understand which of those three definitions aligns closest with where you think things are going, or how you think things should go — which could be two different things — but maybe let’s go back to some of the things in your career.
You helped invent HoloLens. What did that teach you?
Learning about AR from HoloLens
Avi Bar-Zeev: A lot of things. I mean, I think, certainly I could go into how to get a big project started inside a giant corporation. That’s [laughing] … I could not do that myself. I have to give a lot of credit to a lot of other people who helped make that happen, and I helped.
But I think the thing I learned the most from it was some of the things not to do, honestly.
Like, originally we were thinking we would be draping all these virtual worlds on top of reality and re-skinning everything, and there’s some great experiences like that — like you could imagine a really cool music visualizer that makes your room come alive and everything moves and beats with the music. That might be great, but you don’t want that all the time.
Mostly you want AR to be fairly minimalistic. You want to introduce one or two new elements into the world, and hopefully these are things that enhance your experience. You know, augmented reality is really about augmenting people, augmenting your perception of the world. So if you give us too much stuff, we’re going to be confused and we’re going to be pulled out of reality.
What we really want is something that helps us deal with the reality; helps us find the book in the bookstore that we’re looking for; helps us cook; or helps us have conversations in a way that feels more like face-to-face, so you and I don’t have to talk over video but we can feel like we’re across the same table from each other. And then we can make true eye contact instead of here, if I look at my camera, I can make eye contact but if I look at your picture, doesn’t look like I’m making eye contact. We can fix that. So that’s where it becomes really important, I think.
And I’ll say, just to jump on that too: because I started this so long ago, 30 years ago, I learned early on the problems with VR, right? A lot of people get really excited, and then they do it for a while, and then they’re like, yeah, okay, there’s some problems there. I just had that experience earlier than most people, and I got to see what the limitations were and why we really don’t want to live full time, I think, in a virtual world. Maybe you only learn this when you start to have a family and kids and you start to value the things in real life around you, but I went through that transformation as well around the same time.
And I’m like, well, I don’t really want to escape from reality. I want to make reality better. I think there’s a lot of things we need to do to make the world better, and that’s where I want to put my focus.
The AR cloud future
John Koetsier: Let’s talk about AR, augmented reality/mixed reality, for a little bit. I actually got seed funding for a company probably about five years ago in the AR cloud space.
And I envisioned a future where everything would be plastered with augments and you would need filters. You would need to be able to accept some of those, the ones that you want. You would need to be able to reject a majority of them.
Some would need to be sort of part of a consensus reality — maybe street signs, speed limits, important civic, you know, there’s a sinkhole ahead, stop walking. And some would be others that you would subscribe to. And there might be — thinking of U.S. politics in the last four years or so — there might be a Republican filter and there might be a Democrat filter and whatever, right?
How do you see the future in augmented reality? Because it looks like the tech giants are racing for smart glasses. We are going to have something, probably initially connected to this [holding up a smartphone] in our pocket for compute and storage and connectivity and stuff like that … but eventually, probably entirely on its own, that is going to mediate what we see.
How do you look at that future? Do you shy away from the future? Do you see challenges/dangers there? Do you see great things in that future?
Dangers of an augmented future
Avi Bar-Zeev: I see great things, but I also see great dangers, and I think the key issue is who is building the future and how are they monetizing it, right?
So, I’m going to assume that your startup had really great ideas on how to monetize as well, that maybe, hopefully, didn’t rely on advertising, because I think that what we’ve seen is that advertising — ads themselves aren’t evil, right? They’re just giving you information, hopefully; maybe they’re a little shy on facts and high on emotion, but individually they’re not bad.
But in the aggregate, we have systems that are designed to extract our personal information in order to make the best bets on us. Advertising is like gambling, as far as the advertisers are concerned; they’re making bets on us. It’s a little bit like a virtual Vegas where we’re the chips and the advertisers are betting on us to do things. And that’s a very corrupting system overall; it forces the wrong outcomes for what we really want, socially, for the world we want to build.
So I really hope that whatever we do, we figure out monetization models that don’t rely on advertising.
The companies that I trust more are the ones that build products that we either buy or we don’t buy, based on the quality of the product, and not on some artificially low price that happens to be potentially subsidized by advertising.
In fact, it is one of the reasons that I went to Apple.
I can’t talk much about what I worked on, but I can say why I went there: it was right after the San Bernardino case, when Apple was just starting to stand up for privacy and saying, ‘No, we’re not going to compromise our operating system.’ I said, ‘That’s a company I could work for,’ because I think they get it. I think they get privacy, and, you know, that was very important to me because I want to help build this world.
The only way to really say what the future is going to be is to jump in and try to help build it. And so I have done my best to try and make sure that, yes, hopefully in the future we will have the right hardware, and hopefully we’ll have the right infrastructure and systems in place for the best outcomes to happen and not the worst.
And the worst could be pretty bad.
The worst could be basically virtual enslavement, where our devices are controlling what we see and implicitly controlling what we do, and we lose our freedom of thought. We lose our freedom to have our political views and whatever spectrums we want them to be. And so there’s a real danger there.
And this technology is so much more powerful than television or the internet has been; I’d say it’s another 10X more powerful. So the danger is also magnified when things go wrong.
John Koetsier: I love what you were saying there, it reminds me of the quote, I forget who said it, but “The best way to predict the future is to invent it.” I feel like that was … maybe Steve Jobs, I could be wrong on that one.
Avi Bar-Zeev: It might be Alan Kay actually …
John Koetsier: I think you’re right.
Avi Bar-Zeev: … if I remember right.
VR and AR and the future of the metaverse
John Koetsier: I think you’re right. Let’s turn to VR a little bit, because that is where people think more of the metaverse. And I know people try to connect AR and VR, and we’ll get to that topic as well — and you’ve got very definite ideas on when that should be attempted, whether that is even possible — but if we talk about VR for a bit, where do you sit on how you want a metaverse to develop?
We’ve had the web. The web was built intentionally decentralized. The web was built intentionally to reroute and to work even if big chunks fell off, even if big chunks were corrupted, held by others.
The reality is that the power principle, the Pareto principle, whatever you want to call it, has resulted in us using, what, a couple thousand websites maybe individually, probably 10 to 20 that we regularly use and regularly go to, and then hundreds that we touch occasionally here and there.
How do you see a metaverse evolving? And do you see the necessity to bridge worlds, bridge silos, bridge apps?
Avi Bar-Zeev: I do think that interoperability is the key idea. The internet exists on thousands, millions, maybe hundreds of millions of servers because we have some standards of interoperability that were established early on by people who were very forward-thinking about how we share bits of information. How do we share packets? How do we ask for things? And how do we put content out there?
And that was a necessary component to be able to build what we call the web. And I think we do need the same thing.
It’s not strictly required that every website or every 3D world interoperate with every other one, we don’t have to mash them up. But it is important that we can share some things between them, that there are ways of fetching content that are fairly standard.
How important is it that you can have an avatar that can go between worlds or that you can take your loot from one to another? That, I think, may be less important for — I think for people who collect NFTs and all that it’s probably super important.
But at the end of the day, if I have my +1 vorpal sword from Site A and I try to take it to Site B and it’s not a magic-based world, it doesn’t make any sense in that world. It’s just a piece of art at that point. And so the rules aren’t necessarily compatible, and then there’s no real incentive for the creators — the creative people that make these worlds — to share, because they want to make self-contained worlds in general.
There aren’t a lot of reasons why there’s even cross-platform play. Even though the players want it, and a company like Epic will try to help make it happen, there aren’t a lot of other companies who say that it’s actually worth it for them. They have other reasons for wanting people to come to their console and stay on their console and not buy the other console, right?
So it’s difficult to say that that will actually happen the way people are pushing it, that we’ll have this vast interoperability.
And I think the most difficult part, the part I have trouble with, is this: I like spatial metaphors. I’m a very visual person. But does it really make sense to stitch all the information of the world into some 3D space? You know, we had GeoCities early on with the web. It didn’t make a ton of sense to try to put a 2D spatial metaphor on the web, and making a 3D metaphor for the entire internet has the same problem: it’s way more than three-dimensional.
It’s very high-dimensional, and so links make sense. Searching makes sense. Using search engines makes sense to find things. That’s why we outgrew portals and all rely on search engines at this point, because there are just too many dimensions to be walking around. Where spatial metaphors do help is when you want to find things in your house, like having this idea of a mind palace where you’re able to put things and know where they are and find them again. That’s useful. We’ve evolved to do that, but that’s for a limited space. That’s our personal space.
So I’m imagining that could be 3D, but I don’t know that the entire universe needs to get stitched into one spatial metaphor. If someone wants to try it, more power to ’em. I just don’t think it’s required. I used to; that was my big mistake in 1992, thinking that was what the internet needed. And I thought that all the way up to maybe 1998, when it was like, okay, I guess the internet doesn’t really need that. It’s doing just fine without it. I was wrong.
And even Google Earth, which we started around 1999, wasn’t meant to be the one metaphor for everybody. It was meant more to be: okay, this is how we can stitch Earth’s content together. This is how we take stuff about the planet and build what we would call a trellis that you can hang your information on. Now that makes sense. If it’s geospatial information, it makes sense to connect it spatially, because it has spatial relationships.
But even then, there’s lots of semantic relationships that don’t make sense. Like the idea that there could be a million instances of a can of soda.
They’re all semantically linked to the company that made them, the concept of soda, and maybe some health sites about the good and bad of it, but the location of the can is one more piece of information. So you could stitch the location together spatially. The other information exists on another dimension. It’s not spatial, you know, it doesn’t matter where the headquarters of the company is that much for your conversation about whether you should drink soda, right? That’s only minimally relevant.
John Koetsier: [laughing] It reminds me of an article I just read the other day about trying to apply our old-school metaphors to modern computing: professors in universities are having real issues with students not being able to find files. Because professors have operated the way probably you, and maybe me as well, have: you know, I’ve got folders. I put other folders inside those. I put files inside those. I organize them, right? And they’ve got kids coming in who are like, ‘I just search for everything.’ [laughing] Right?
‘I don’t organize. I use search.’
Avi Bar-Zeev: Yeah.
Should we marry AR and VR?
John Koetsier: Let’s go to marrying AR and VR. You wrote on Medium recently, and you’ve got a strong point of view there — which I think makes a ton of sense actually — that it’s stupid. That’s an insane thing to try and do, to marry AR and VR.
There’s a sense in which I have thought that AR is painting some of the pixels of my visual field. VR is painting all of them, and it’s just a matter of progression. We could argue that. But your opinion is very, very different.
Avi Bar-Zeev: Yeah. And just to clarify though, I do think we will have devices that do both. The same device will do both AR and VR, so in that sense, this convergence and the marrying of the devices. And I also think there’s a spectrum that you could imagine going between AR and VR on those devices and landing somewhere in the middle.
I guess the way I’ve put it though, is … as soon as you’ve added enough stuff to reality that you’re taken out of your presence in reality — like right now we’re both present in reality, we’re paying attention to each other, but we’re also aware of our physical surroundings and if anything were to happen, we would deal with it — but to the extent that the virtual starts to take over … then we’re really in VR, right?
As soon as we lose a sense of where we are in the world, we’re really in VR. And that transition can happen very quickly. We can also go back and forth.
You know, I can go from focusing 100% on you — which is essentially VR right now because I’m in the world with you, it’s not my current location — or I can look around and say, ‘Okay, I’m actually still in my office, and I’m here.’ And I can ping pong back and forth really quickly.
My main point is that the types of experiences are very different. Where you’re in a VR experience, a lot of the point is to be immersed somewhere else. It may be somewhere realistic. It might be a, you know, Google Earth VR is great. I love that and love to go travel. And I wish the data were better down at street level that we could probably collect better, higher resolution data to walk down the streets too. But it’s still really cool to fly around.
Okay, so that’s really very realistic, but it’s still a virtual world, because we’re going somewhere that we’re not in reality. And maybe the reason you’re going to do that is because maybe we can’t travel. Maybe it’s due to the pandemic or cost or inaccessibility, but that’s a cool thing to do. And it can be a completely fictional world as well. And that’s also super fun.
But how much of our time are we really going to spend doing that?
Unless that’s our job, you know, even if I said we’re going to spend 10% of our time in VR, that might even be stretching it for most people, right? It might only be 1% of our time spent in VR, apart from maybe, say, meetings.
That might happen; we might have a virtual meeting in which we both feel like we’re in some neutral location. So that’s VR, and maybe that counts as more time; business meetings might count. So maybe some of us will spend up to a third of our day in those kinds of meetings, technically VR. But think of the proportion of VR as being like how much time you currently spend watching movies and playing games. That will translate in the future to approximately how much time you might spend in VR. And if you have kids, maybe that skews toward the stuff they do, not so much the stuff the adults might want to do themselves.
Whereas AR is going to be all the time. It’s going to be 18 hours a day of wearing some convenient AR device, whether it’s glasses or, maybe at some point in the future, contact lenses or some advanced technology we can’t even imagine that gets the information to us without having to wear stuff on our face. It’s not ideal; nobody really wants to wear glasses or contact lenses, I think. But it’s ubiquitous, right?
It’s our interface to future computers. It’s our interface to future internet. It’s our interface to IoT. It’s our way of connecting with people in holographic telepresence, as I like to call it, so that it feels like somebody we’re talking to is just right there with us whenever we want them, synchronously or asynchronously. I mean, that’s where we’re going to spend all of our time, I think.
And that’s where it needs to be subtle and comfortable and sensitive to all different kinds of people and cultures, and not prescriptive, written from the point of view of various white male authors in the nineties who had their particular views of things. I think we need to be a lot more open and inclusive about the design of it, because it will affect everybody.
John Koetsier: It’s a wonderful, amazing, crazy future that we’re hurtling towards, and we get some sense of what that might look like. Just in my own home, my wife and my kids might have their noise-canceling headphones on; you say something, nobody hears you, right? [laughing] [We’re in our] own worlds, even though there’s no VR.
And yet, as you said, with the devices we have today, are you going to be in them for a long time? 1%, 3%? We just aren’t.
I have an Oculus Quest 2. To be a hundred percent honest, I barely use it.
I just charged it yesterday. I want to use it today ’cause I want to use it. But it’s not something you can just pick up and, boom, you’re in, like a smartphone, right? You have to prepare. You might have to make a space. You have to put something on your head. You have to turn it on. You have to put some things on your hands. There are startup costs there. It’s a really, really challenging thing.
I think we’ll eventually have artificial eyeballs, right? [laughing] And we’ll just get stuff beamed right in. And the compute power for that is insane.
Because if we inhabit a virtual space ’cause we’re having a conversation, I need to see you in my context, maybe sitting in a chair, whatever; you need to see me in your context, and maybe there’s no chairs in your context even though I’m sitting, and so … there’s some real significant horsepower that’s going to be required and an AR Cloud that really understands how to do things, even to enable that level of VR, correct?
Avi Bar-Zeev: Mm-hmm. Absolutely I think, and I think that’s where — you know, I’m glad you said that, because I think that’s where a lot of people go wrong is that they think if they project the VR today, they kind of say, ‘Well, okay, we’re just gonna meet in a conference room.’ You know, boring. Like, that’s what we do now and we hate it. Why are we going to recreate that virtually and just meet in a gray conference room with whiteboards?
Can’t we do better once we virtualize the whole situation? Like seriously. I mean, I would pick the top of the Eiffel tower for a meeting. And I would be drawing in 3D if it were up to me, or the moon or Mars or wherever.
I mean, there’s better ways to imagine what we could be doing together. But I think you’re exactly right about this idea that most of the time it’s going to be that you pop into my space to ask me a question if we’re working together, and I pop into your space to ask you a question. Sometimes they’ll even be asynchronous like texting. You know, maybe you’re not even really there, you’re just, I’m just getting a note from you that I can respond to later so you don’t interrupt me in the middle of what I’m doing. And that level of politeness needs to come back into our interactions, right?
So, yes, I agree totally.
And I think that suggests more of what you were saying earlier about this idea that it’s a little bit more of a consensual illusion. It’s an illusion that is just enough, it’s just the necessary bits to make it work. It’s not oppressive. It’s not overwhelming. If we need to bring in that virtual chair so it makes sense that you’re in a seated posture but you’re not hovering, you know, like you had magical powers, then bring in the chair, right? Or figure out a way to map you onto the existing chair, but then we have to figure out how to bend your neck so that you’re looking at me when you mean to look at me, instead of looking in the wrong direction.
So it gets really complicated and I’ve spent a lot of time thinking about how to solve a lot of those little weird little problems like that.
But they are super important and it’s got to be casual. It’s got to be something that just, we can just pop in and out of, like texting is today. It’s very casual. You text somebody when you want to, there’s not pressure to respond right away.
That becomes, I think, something that we will lose initially in VR, because everything’s going to be about — you may send me a text and it’ll be, ‘Hey, join me in VR’ and then I’ll join you. And then why didn’t we just have the texting equivalent for this thing? Why don’t we figure out a way to do this asynchronously, right?
John Koetsier: I will have admin privileges to your reality space, so then I would just appear in your visual field and demand your attention [laughing] … no matter which way you turn your head.
Avi Bar-Zeev: Yes. You know, if you’re my boss, you might get away with that, but most people wouldn’t.
John Koetsier: No.
Protecting focus from the metaverse
Avi Bar-Zeev: So, we have to design those things thoughtfully so that, I think the most important thing is to help people actually focus on what they’re trying to do. If somebody is focusing on a document, let’s not interrupt them.
And this, I think, gets to my main critique of the way a lot of people are talking about the so-called metaverse, is they just picture these worlds around us and really the computer’s got to be smart about what do we want to do? Like, what’s our intent, what are we trying to do? What’s our personal context?
And that’s scary in some sense, because there’s certain companies I don’t want to give that information to. I’ve got Facebook blocked at the firewall level at my house, like nobody in my house runs it. So, you know, ’cause I know they’ll try to sneak in once in a while and check status of people. So I’m like, no — no Facebook.
So it’s got to be companies that we trust, and hopefully in the future there’s more than one that even talks about this stuff. I think companies will be competing on how private they can be, and on whether we can truly count on that. That information is so personal to us; it’s like health information, right? One of my other articles, which you may not have read, is all about eye-tracking, and the point is that this is health information: you can use eye-tracking data to determine things about someone’s health in certain dimensions. We don’t want that information going out to some cloud where it’s going to be stored forever and we don’t know what happened to it.
We want that, if it’s stored at all, to only remain on our personal devices, which can then make the decisions about what do we need to see in order to achieve our goals.
That’s the only way I can see it happening and not trusting some third party to make those decisions for us. I think it’s got to be so close to home that it is really just our device that knows this.
John Koetsier: Avi, let’s get a little personal as we draw to a close here. You’ve had a crazy amazing career. What’s been the line drawn through all of that? And talk about what’s driven you to continue to want to contribute in the area of metaverse and realities and augmented reality.
Technology to tell stories …
Avi Bar-Zeev: Partly what we said before with the quote about, you know, ‘If you’re going to build the future, you have to jump in and try to do your best.’ I think, I — certainly not perfect, I made mistakes myself — but I am trying to do my best to build the future that I want my kids to have. It’s taken a whole lifetime to get to this point.
The first true AR demo was in 1968, which was the year I was born. And so now, 53 years later, we almost have glasses that we could use, right?
It takes a while and you gotta be patient with these things, because we gotta get it right. And that’s what I feel like my job is, is to help get it right. But in terms of my motivation, well, the reason I got into this in the first place is I wanted to do fun stuff. I wanted to make movies and tell stories, and the tools were so bad that I’m like, okay, I gotta go start building the tools. And the hardware’s so bad, okay, I gotta go start building the hardware.
And hopefully at some point I get to actually go start telling stories again. When all this stuff gets good enough, I will finish off whatever time I have left just having fun and telling stories.
But for now, there’s a lot of important work to do, and it’s not done. I’m spending my days advising companies, and consulting, and talking to a lot of people about what the future should be, to try to share what I’ve learned so that it all hopefully turns out right. And hopefully offsetting the people who don’t think about it. There’s a lot of people who just rush in and say, ‘There’s money to be made, and so let’s just do it and see what happens.’ And I’m like, no let’s think about this…
John Koetsier: Move fast and break things [laughing].
Avi Bar-Zeev: Exactly, exactly.
John Koetsier: Well, thank you so much for this time. Do appreciate it.
Avi Bar-Zeev: Thanks for having me.
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.
Consider supporting TechFirst by becoming a $SMRT stakeholder, and subscribe on your podcast platform of choice: