What is the future of technology? Mobile is the main thing right now, but augmented/mixed/virtual reality via headsets and smart glasses is coming, and fast.
But what about moving the tech right onto our body … on our eyes … with a smart contact lens? What about having our augmented reality basically right in front of our faces … without any hardware hanging off our heads?
In this episode of TechFirst with John Koetsier we’re chatting with former Google and Apple exec, now SVP for Mojo Vision, Steve Sinclair, about smart contact lenses.
Scroll down for full video, audio, and a complete transcript. And, here’s the story on Forbes …
Subscribe to TechFirst: smart contact lens
Watch: Mojo Vision’s smart contact lens
(Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.)
Read: this smart contact lens packs 14,000 pixels per inch into a micro-LED display
(This transcript has been lightly edited for length and clarity.)
John Koetsier: What is the future of technology? Mobile is it right now, but augmented/mixed/virtual reality via headsets and smart glasses is coming fast.
But what about moving the tech right onto our body? Maybe even right onto our eyes with a smart contact lens? Today, we’re chatting with a former Google and Apple executive, now senior vice president for Mojo Vision, Steve Sinclair, about smart contact lenses. Welcome, Steve!
Steve Sinclair: Thanks for having me, John.
John Koetsier: Hey, super happy to have you. And I want to get into some of the big implications, I want to talk about some of the details as well, some of the products you’re working on right now and going to be delivering fairly soon — whatever that means — as well as some of the big picture of where things are going.
Maybe let’s start here. Most of the tech world is building what they think is the next thing, right, whether that’s smart glasses or some kind of visor or something like that, HoloLens or some rumored Apple device or Oculus.
You’re building what most people think is the next, next thing: smart contact lenses. Is that correct?
Steve Sinclair: Yeah, we’re absolutely building smart contact lenses, but we don’t think of it as the next, next thing. We think of it as the thing that can actually be right around the corner.
There’s a lot of talk about needing to do smart glasses first and then you go on to contact lenses, but the market for eyewear isn’t like that. We all have contact lenses and glasses today.
And so it just makes sense for a market that size, and for customers that want that same capability but be able to look like themselves, that we build smart contact lenses.
John Koetsier: Well, I love it. I’m wearing contact lenses right now, and I mean, I would love to have the text that I’m going to be speaking — in some future auditorium when we’re actually traveling around the world again — floating in front of my eyes, rather than looking down at my phone or something like that.
Steve Sinclair: That’s right.
John Koetsier: Let’s talk about the tech that you’re building right now. Where is it? What is it capable of?
Steve Sinclair: Yeah. So we’re still in R&D mode. So obviously nothing to announce as far as a product imminently on the market. As you can imagine, building something like this is difficult and has a lot of moving pieces to make it all work. Today we’re building prototypes of Mojo Lens.
Mojo Lens is a smart contact lens that includes a small micro-LED display, as well as a broadband radio and IMUs, so that we can use it for eye tracking to understand where your eyes are moving and how they’re moving, as well as an external-facing low-resolution image sensor so that we can use it for a number of applications like computer vision, for example.
And so all of those pieces come together into a solution that allows us to project content in front of you. So wherever you’re looking, you can see information if you want to see it.
John Koetsier: Amazing. I’m assuming there’s a battery on there too, or is it just, stick an electrode into your eyeball and get something from your bloodstream? [laughing]
Steve Sinclair: Definitely not that. Yeah, there’s a thin film solid state battery that’s built into it as well. So you charge it at night while it’s cleaning and then you put it on in the morning, and then it gives us enough power to operate throughout the day.
John Koetsier: Now one of the challenges obviously that we’ve had, let’s say with the first Oculus Quest and the second version that’s out there, is resolution, right? And that’s constantly a problem. How much power can you put into these small devices and what kind of resolution do you need to make something useful in a smart contact lens?
Steve Sinclair: Yeah. And that’s a really good fundamental question for how we’re able to do what we’re able to do, and I’m glad that you brought it up. One of the first things we had to solve was what type of display, and what level of resolution and capability, that display needed to have.
Really, there’s a lot of wasted power, a lot of wasted photons that happen when you build a big display and you put it away from the eye or over the eye, even in a pair of glasses that are relatively close to the eye. You’re lighting up pixels that your eyes aren’t looking at.
And so one of the fundamental breakthroughs that we had, and realizations, was: well, if I just make enough pixels to match the biological resolution of the human eye, I’m not wasting any power or computation driving those photons to your retina.
And so we’ve been working on a micro-LED display that has 14,000 pixels per inch, compared to your smartphone which may have around 500 pixels per inch. We pack that all into a display that’s less than half a millimeter in diameter.
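A quick back-of-envelope check on those numbers. The circular-display assumption is mine; Sinclair only says the display is less than half a millimeter in diameter:

```python
import math

# Back-of-envelope check on the display specs quoted above:
# 14,000 pixels per inch in a display under half a millimeter across.
MM_PER_INCH = 25.4

ppi = 14_000
px_per_mm = ppi / MM_PER_INCH            # ≈ 551 pixels per millimeter
diameter_mm = 0.5                        # "less than half a millimeter"

pixels_across = px_per_mm * diameter_mm  # ≈ 275 pixels edge to edge

# Approximate total pixel count, assuming a circular display (my assumption).
area_mm2 = math.pi * (diameter_mm / 2) ** 2
total_pixels = area_mm2 * px_per_mm ** 2  # ≈ 60,000 pixels

print(round(pixels_across), round(total_pixels))
```

Roughly 275 pixels across and on the order of 60,000 pixels total, packed into a dot smaller than this period.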
John Koetsier: Wow.
Steve Sinclair: So effectively, you know, we’re creating a set of images that can be used for text, they’re sharp enough for graphics, video, photos — anything that you would normally want to see.
And so we’ve embedded that in the center of the contact lens and it points directly to the fovea, so no matter where your eye is looking, the content can be in front of you and as high resolution as we need it to be.
John Koetsier: The other amazing thing that just comes to mind is that when you’re building something out here that you place on your face and wear, like a visor or something like that, you’ve got to think about field of view: how many pixels am I going to paint across somebody’s entire field of vision?
You don’t really have that exact problem, do you? You’re going to paint some pixels in an area that are going to float on reality. You’re not trying to recreate reality.
Steve Sinclair: Yeah. We’re not trying to recreate reality. And I think that while that approach delivers some amazing experiences — and we see that with some of the products that are available today and that are promised to come in the next few years — we’re really focused on utility.
And so having the right information at the right time is what’s critical for us. And you’re right, we don’t worry so much about field of view because we’re always over your retina, always over your fovea, and so it’s always sharp and it’s always there where you need it.
So we have, in some ways, an unlimited field of view, because if I dart my eyes to the left or to the right there’s content there when I look there.
John Koetsier: Yes.
Steve Sinclair: So it’s freeing from a user experience standpoint to not be worried about will I hit the edge of the screen? You can’t. If you try to look at the edge of the display, it moves and it’s always on the center.
John Koetsier: Talking about ‘hit the edge,’ I have hit the edge sometimes in Oculus Quest and hit my hand on the wall because the virtual barrier there didn’t quite make it.
But one market that you’re looking at first is people with vision impairments. Talk about what they need and what this type of solution can provide.
Steve Sinclair: Yeah. One of the competitive advantages we think that we have in the long term is that we’re building a medical device. It’s a contact lens and it’s classified as a medical device, and therefore it has, you know, inherent in its design, the ability to do things that are useful for people from a medical and health perspective.
And so one of the first applications, as you mentioned, is to help people that have low vision conditions, say glaucoma or macular degeneration or retinitis pigmentosa. These conditions can cause a sort of blurriness, or a contrast sensitivity issue, or tunnel vision, and all of them can be somewhat accommodated with AR overlays.
And so the idea is that we take augmented reality and we take that image sensor that’s built into the lens that’s pointing outward, and we look at the scene in front of someone who’s wearing the lens, and we add contrast enhancement in augmented reality that’s perfectly registered to the world. We add edges and lines to objects in front of you, whether it’s the edge of a curb, or a doorway, or a post, or a person, and we heighten the awareness of that object for the person who’s wearing it.
And so that can be done all on the lens. Nothing has to leave the lens to do it. We have figured out how to make that happen in real time, and so someone wearing that lens could turn that mode on and suddenly they can see things that otherwise would have been obscured to them.
And so it allows them to have more mobility, more independence, and to really hopefully change their lives.
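The edge enhancement Sinclair describes can be approximated with a classic Sobel filter. This is an illustrative sketch of the general technique, not Mojo’s actual on-lens pipeline:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Return a normalized edge-magnitude map for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):                      # 3x3 filter, written out plainly
        for dx in range(3):
            window = padded[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * window
            gy += ky[dy, dx] * window
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def enhance(img: np.ndarray, gain: float = 0.8) -> np.ndarray:
    """Overlay bright edge highlights (a curb, a doorway) on the scene."""
    edges = sobel_edges(img)
    return np.clip(img.astype(float) / 255.0 + gain * edges, 0.0, 1.0)
```

Feed it a grayscale frame from the outward-facing image sensor and the strong gradients, like the boundary of a curb, light up as high-contrast lines registered to the scene.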
John Koetsier: That’s amazing. That sounds almost on the edge of object recognition on a tiny computer that fits over your eye. And I’m not saying you’ve got full-on object recognition like it knows that, I don’t know, this is a cup [holding a cup] or something like that, but it’s clearly sensing edges and boundaries, and you’re probably doing that intelligently, so the things like a curb that could trip me are being highlighted, correct?
Steve Sinclair: That’s right. It could be used for that over time. That’s not something we want to jump into right away. We want to literally walk before we run with something like this.
And we wanted to keep it on the lens so that all of the content that that image sensor is seeing is basically used in real time and then thrown away. It’s not stored anywhere.
But the idea is to be able to allow someone to quickly recognize their situation and be able to react to it. If we’re able to do that and give people more independence, that’s going to be a big boon to those people that have those conditions.
John Koetsier: So you’re talking about people with impaired vision. I have impaired vision. I use contact lenses myself, or glasses if I choose to go there. It’s not very bad, but it is ‘impaired’ to some degree. Can the contact lenses that you’re building— can they be prescription?
Steve Sinclair: Absolutely, and it’s a key part of the value proposition of what we’re building: it is a normal contact lens.
So even if, you know, it’s not on and it’s not showing you information, it’s correcting your vision and making vision better and enhancing it just like you would expect it to. So you put it on in the morning like a normal contact lens, you wear it all day, it works to help you see and read and interact with people and interact with the world, and it gives you extra information that enhances the world with AR.
John Koetsier: Yeah. So, I’m wearing one of your lenses, I walk around, I go up to my friend, and I start having a conversation; he looks me in the eye ’cause it’s deep and serious and everything like that, and he says, ‘Are you a cyborg?’
What does it look like when he looks in my eye?
Steve Sinclair: So, that’s a good question, and one that we still have to work out exactly what we want it to look like.
We have the ability to obscure the electronics by creating an artificial iris, basically, to cover it up so that it looks like a normal eye. Think of like colored contact lenses that people wear today.
We can do that same sort of treatment on the contact lens to hide all the electronics.
But then we have some people that come to us and say, ‘Well, I wanna look like a cyborg. I want people to know that I’m wearing this thing. I want—’
John Koetsier: [laughing] I want my eyes to look red and glowing.
Steve Sinclair: Yeah. ‘I want it to be obvious that I’ve got this special tech on.’ And so, I don’t know, I think we may make it possible to offer both options for people that you could have something that matches your blue eyes or something that, you know, makes you look like how you want to look. So I think there’s a lot of opportunity to do some interesting things with it.
John Koetsier: Amazing. So you talk a lot about what you call ‘invisible tech.’ And tech is pretty visible these days, right? You’ve got a computer that you’re working on. You’ve got a smartphone you hold in your hand. You’ve got a smartwatch that is visible and there.
And we see a move towards more ambient technology. Your Amazon Echo, sure it’s around, but you address it with your voice and at some point it could live in the wall or something like that. What does invisible computing mean to you and how are you getting there?
Steve Sinclair: Yeah, so we coined the term ‘invisible computing’ a couple years ago as we were starting to peek out of stealth. And really, for us it means a category that we hope to build and to lead, which is all about making the technology disappear.
We want to be able to have the information we want when we want it, but at the same time stay focused in the real world. So when you and I are having a conversation, there isn’t a bunch of tech on my face getting in the way between me and you.
We’re trying to build a solution that doesn’t bombard you with information. We’re not trying to start with something that is going to constantly be on, constantly be bothering you, constantly be interrupting. We want it to be invisible, to disappear when you don’t need it, but be there in the moments that you do.
So if you’re an athlete who is trying to get the best performance you can get, and you need information while you’re working out, or training, or competing, we can show you that information in a way that a phone or a watch or a pair of glasses just can’t do. Or if you’re a service tech, or you have a job, say, in the ER, and you don’t want to be touching things or interacting with screens, because your hands are full with tools and you want to be interacting with patients or working on machinery, we give you that heads-up information without blocking your peripheral vision, without being hot, without being heavy.
It works under your other safety equipment. There’s just a lot of opportunities for this type of form factor to work when you need it and to disappear when you don’t.
John Koetsier: I love that because what we’re seeing from maybe Microsoft Mesh, or some of the things we’re hearing about what Apple’s rumored mixed reality device might look like, all these devices are head-mounted.
All these devices have a flat screen in front of your face — and by screen, I mean, actual plastic, something hard.
And while we see something of a future of telepresence and working together and other things like that, what we also see is: would we ever want our families, three or four of us, or even two of us at home, to be wearing those in our own universes? And this is a way to have computing and intelligence and context without separation.
Talk about the future of where you see this might go in terms of that deeply augmented future, even that telepresence future.
Steve Sinclair: Yeah. I saw the Microsoft Mesh videos and [I’m] certainly impressed with the vision and think that that’s a great way to show off where this could go. And we’re building our tech — to start off — to focus on the utility, so that here in the 2020s it’s something that you can use and find value in right away. But that doesn’t mean that we couldn’t eventually build solutions that are that immersive. So I do see the industry going that direction.
I also agree with you, what you’re saying is that when you’re with people, the last thing you want to do is block yourself off from them, and so you want to be able to interact. And so, you know, starting with a form factor that puts the people and the interaction aspect of it first is where we want to start. And we’ll build up to that immersive capability, rather than trying to jump straight to an immersive capability that, because of physics, forces you to build all this big stuff and then shrink it down.
We’re going to start small and build it up. And I think that’s a better approach in the long term.
John Koetsier: Yeah, maybe the look of 2020 and 2021 is this [tugging ears forward] from our masks, and the look of 2022, 2023 is massive necks from carrying all this head-mounted hardware. But I look forward past that as well.
Talk a little bit about when you will ship something. This has been four years, a four-year journey for you, but this is extremely cutting edge. This is high-level technology. Huge amount of miniaturization that you’re doing, huge amount of R&D that you’re doing.
You’ve been well-funded, but when is something going to hit the market? What’s your estimate right now?
Steve Sinclair: Yeah, we hope that we can bring something to market in just a few years. It’d be presumptuous of us to assume when the FDA is going to approve it; because we are a medical device, only the FDA can tell the world when something we’re building is safe and effective. We’ll give them all the data and all the information we can to prove our case, but they make that final call. And so it’s important that we don’t put out a specific date and usurp their authority on that.
That being said, we have been working on this a long time. The company is five years old, and the tech that we’ve been building and prototyping started out 10 years prior to that. One of our co-founders had come up with the idea for the world’s ultimate display, and when he decided to build it, he realized he had to put it into a contact lens.
And so we’ve been working out all the mechanisms and the physics and the science necessary to make that possible.
So we’re far along. I’ve worn the lens under the supervision of our optometrists. We’ve had a number of people in the company wear it. We’ve got another prototype coming down the pike here just this summer that we’ll be wearing, which has all the things I described earlier working.
And so, you know, progress is being made to do what we can do to make the first product work and to do the best we can to make it safe, and it’ll be the FDA that decides when that’s really ready to happen. But it’s not a 10-year thing, it’s much sooner than that, we believe.
John Koetsier: Yeah. Since the FDA is involved and it’s a medical device, what does that mean for customers? Do they need a prescription?
Steve Sinclair: Yeah, they do. And one of the things that we want people to do is we want them to take care of their eyes. And so part of our brand proposition is this idea that your eyesight is important and we want you to have healthy eyes.
And so going to the optometrist to get a prescription and to get your eyes checked on an annual basis is a good thing. We want you to be able to wear our lenses for a long time, which means we want you to take care of your eyes. And so partnering with optometrists, and making sure that we’re doing the right things by people’s health and how their eyes perform, not just in looking at AR content but in looking at the real world, is important too.
John Koetsier: Cool. One thing that struck me just as I’m listening to you here is what kind of operating system does this run and how does this communicate with other technology if it does?
Steve Sinclair: Yeah. So we’re building out our own OS based on some other components that are available out there. And I won’t get into those details, but we’re building a platform.
You can’t just take content off your smartphone or apps off of your watch and put them on your eyes. It’s a completely different user experience. And so you have to start with that as a baseline: there are going to be new ways to show people information.
I mean, we’re building a system that you control with your eyes. It’s not gesture-based. There could be some voice control, but it’s really, primarily you use your eyes to look at things and to interact with the information that’s in front of you. And so, that by itself means that we have to build out an application ecosystem that enables that kind of experience to happen, and also unlocks the creativity of all the developers that are out there.
So we’re building out that piece of it.
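One common way to build a gaze-only control like the one Sinclair describes is dwell selection: a target fires once you have looked at it steadily for long enough. The class below is a hypothetical sketch of that pattern; Mojo has not published its interaction model, and the names and threshold are my own:

```python
# Dwell-based "eye click": hold your gaze on a target to select it.
# Hypothetical illustration, not Mojo's actual interaction model.
class DwellSelector:
    def __init__(self, dwell_ms: float = 600.0):
        self.dwell_ms = dwell_ms   # how long gaze must rest to trigger
        self.target = None         # target currently under the gaze
        self.elapsed = 0.0         # milliseconds dwelled on that target

    def update(self, gaze_target, dt_ms: float):
        """Feed one gaze sample; returns the target once dwell completes."""
        if gaze_target != self.target:
            # Gaze moved to something new: restart the dwell timer.
            self.target, self.elapsed = gaze_target, 0.0
            return None
        self.elapsed += dt_ms
        if gaze_target is not None and self.elapsed >= self.dwell_ms:
            self.elapsed = 0.0     # re-arm after firing
            return gaze_target
        return None
```

Called once per eye-tracking sample (say every 8 ms at 120 Hz), it turns raw gaze data into discrete selections without any hands or voice.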
There is a compute device that we’re building. Internally, we call it a relay. It’s something you wear around the neck that communicates with the lens, and that relay accessory has an application processor and a GPU and batteries, of course, and storage. And so it runs applications there, streaming content to your eyes and pulling sensor data off your eyes to compute the next frame of information you need to see, in under, you know, just single-digit milliseconds, to make it all seem fresh and performant.
So we’re building out all of those capabilities. That accessory can talk to your smartphone or it can talk to the cloud if it needs to, to get information as well. But we try and do as much as we can on that accessory so that we don’t have to talk to anything else.
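Sinclair’s “single-digit milliseconds” hints at a motion-to-photon budget. Here is one way such a budget might break down; the refresh rate, stage names, and figures are illustrative assumptions of mine, not Mojo’s published numbers:

```python
# Hypothetical motion-to-photon budget for a gaze-driven display at 120 Hz.
# All stage names and figures are illustrative, not Mojo's specs.
refresh_hz = 120
frame_budget_ms = 1000 / refresh_hz        # ≈ 8.3 ms per frame

stages_ms = {
    "radio uplink (IMU + gaze samples)": 1.0,
    "gaze prediction + app logic":       2.0,
    "render next frame (relay GPU)":     3.0,
    "radio downlink (frame stream)":     1.5,
    "display scan-out on the lens":      0.8,
}
total_ms = sum(stages_ms.values())

print(f"{total_ms:.1f} ms used of {frame_budget_ms:.1f} ms budget")
```

The point of the exercise: every stage, including two radio hops, has to fit inside one refresh interval, which is why the relay does its own rendering rather than round-tripping through a phone or the cloud.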
John Koetsier: Interesting. Very interesting. So we have iOS and Android already. Maybe we’ll have eye-OS [pointing to eye] at some point in the future as well. Who knows? [laughter]
Steve Sinclair: Possible.
John Koetsier: I’m just trying to imagine the syntax of communication here. Is this a click, you know, is this a double click — there’s lots of things, or is it just the duration of focus? Lots of things to think about there. Very interesting world. If you project yourself out, let’s say a decade, what’s this look like? What’s the penetration?
Steve Sinclair: Well, as I said earlier, I think that this isn’t the next, next thing. There’s a world with glasses, there’s a world with contact lenses, and they’re both going to be smart. And so there isn’t an either/or type proposition here.
People are going to opt into what they would have normally opted into, as long as we can build an interesting system. So from that standpoint, we hope to grow the market for smart contact lenses along the same lines as, you know, the growth of the contact lens market. So that’s one piece of it.
The other piece of it is that what we’re building can go way beyond augmented reality.
It’s easy to focus on the heads-up display and AR content in front of you, but when you’re building a platform that’s got power and data capabilities on the eye, it opens up a whole world of medical technology and health tech that we can build into this, because we can look into the eye and see the blood vessels in the eye. We can measure eye tracking, which can tell you a lot about your condition: whether a migraine may be coming on, the progression of a particular disease based on how your eye is moving, or fatigue, for example.
So there’s just a lot of information that can be gathered and gleaned, literally, from being on a part of the body like the eye. And so what we see ourselves doing over time is taking our baseline medical device capabilities of helping people with low vision, and increasing that and broadening that out to a lot of other health and wellness capabilities that go way beyond the augmented reality world that we all want to see.
John Koetsier: Super, super interesting stuff. Steve, as you move out to beyond the closed beta, I volunteer as tribute. And I look forward to trying it out and testing it and seeing what it looks like, and shooting laser beams out my eyes — no, I’m just joking about that part.
Steve Sinclair: Not on my roadmap, but we’ll, I’ll write it down.
John Koetsier: [laughter] Excellent. Steve, thank you so much for this time.
Steve Sinclair: Thank you, John.
Long read? Just subscribe already …
Made it all the way down here? Who are you?!? 🙂
The TechFirst with John Koetsier podcast is about tech that is changing the world, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and many, many more.
Subscribe on your podcast platform of choice:
Want weekly updates? Of course you do …