Mojo Vision’s smart contact lens is basically feature complete

smart contact lenses

Mojo Vision has been working on smart contact lenses for years. The company recently announced its most advanced prototype yet, which SVP Steve Sinclair told me has “all the elements that we need … in a working system so that we can really push forward what we hope is the first product.” Which means, of course, I had to ask him when we would actually see a smart contact lens come to market.

As in, be available for purchase 🙂

Steve was a little hesitant to say, but he did talk about what the Mojo smart contact lens currently includes:

  • 14,000-pixel-per-inch MicroLED display
  • 5GHz ultra-low-latency radio to stream AR content
  • continuous eye tracking via custom-configured accelerometers, gyroscopes, and magnetometers
  • medical-grade in-lens batteries
  • eye-controlled user interface
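
For reference, here’s that spec sheet expressed as a small data structure: a minimal sketch where the class and field names are mine, not Mojo’s.

```python
# Mojo's published spec sheet as structured data. Values come from the
# list above; the class and field names are illustrative, not Mojo's.
from dataclasses import dataclass

@dataclass(frozen=True)
class MojoLensSpecs:
    display_ppi: int             # MicroLED pixel density, pixels per inch
    radio_band_ghz: float        # ultra-low-latency radio band
    eye_tracking_sensors: tuple  # motion sensors used for gaze tracking
    battery: str                 # power source class
    ui: str                      # primary interaction model

PROTOTYPE = MojoLensSpecs(
    display_ppi=14_000,
    radio_band_ghz=5.0,
    eye_tracking_sensors=("accelerometer", "gyroscope", "magnetometer"),
    battery="medical-grade in-lens batteries",
    ui="eye-controlled",
)
```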

Support TechFirst: Become a $SMRT stakeholder


In this episode of TechFirst with John Koetsier, I chat with Sinclair about the technology, uses, augmented reality, timeline to purchasable product, and the fundamental breakthroughs Mojo had to make before achieving this latest prototype. Check out the story at Forbes, or keep scrolling for full video, audio, and a transcript.

(Subscribe to my YouTube channel)

Smart contact lenses: subscribe to the TechFirst podcast


Mojo’s newest smart contact lens prototype is basically feature-complete

(This transcript has been lightly edited for length and clarity.)

John Koetsier: Do you want bionic eyeballs? Well, so do I. I’d love to hook into the internet, get data in my visual field, overlay insight in the ultimate version of augmented reality, remind me who that friend’s significant other is…again. Well, good luck. It doesn’t really exist and might not for decades or centuries. But there is something very cool that approximates bionic eyeballs and that is smart contact lenses, where there’s pretty much one company to talk to, and that’s Mojo Vision.

So today, we’re chatting with Steve Sinclair, a Mojo SVP. Welcome back to TechFirst, Steve. 

Steve Sinclair: Oh, hey, thanks for having us back. 

John Koetsier: [Laughing] You must have done well the first time. I don’t know, but… For anybody who wasn’t there, didn’t listen, give us the 30 seconds. What is Mojo Vision? 

Steve Sinclair: Well, I mean, you started to say it’s way out in the future, and we think it’s a lot sooner than that, we hope. But basically, it’s all about giving you superpowers. We’re building the world’s first true augmented reality smart contact lens. So, something that you could put on your eye, see content when you want to see it, have it disappear when you’re not using it, so you look like yourself and you’re engaged in the real world. So, we’re taking all the elements we need from a technical perspective and pulling them together inside a contact lens.

John Koetsier: Can’t wait, can’t wait. And can’t wait until you’re not wearing those glasses either. I’ve got contact lenses in right now, and maybe I’ll have smart ones soon. 

Steve Sinclair: Well, I tell people this is my Clark Kent look.

Mojo Vision’s smart contact lens

John Koetsier: Ahhh, you’re Superman with the mojo [laughing]. I get it.

Steve Sinclair: So, at some point, I’m gonna take them off. So, we’re working towards that. 

John Koetsier: Love it. Okay. Now you just released some new news. What’s new?

Steve Sinclair: Yeah. So, we’ve been working on this for a number of years now. And when we talked to you last, we had built a prototype that some of us had worn internally, but it didn’t have all the elements that we needed in what we consider to be a candidate for first product.

And so we’re finally there with something we call a “feature complete” lens. That means, basically, that all of the technical elements are pulled together into a single system that we can wear, and try, and test with. 

And so that prototype basically has our high-speed, high-resolution display in it. It has a high-speed radio that allows us to communicate and stream content on the lens. It has eye-tracking motion sensors built into the lens as well, so we can tell where your eye is looking and place content in the right places and hold it stable in the world around you. It has a power system so that we can power everything to make it all work.

So, all the elements that we need are there together in a working system so that we can really push forward what we hope is the first product. 

John Koetsier: So, geek out for us a little bit… high-speed, high-res, power system. Give us some of the specs, if you’re releasing some of that.

Steve Sinclair: Yeah, well, so it all starts with the heart of the system, which is the display. And we probably talked a little bit about this last time, but when we started to work on this idea of a smart contact lens, the first thing we knew we needed to do was find a display, something that we could embed in the contact lens that could give you information that you needed to see… and it didn’t exist.

So, we built a team of microLED experts who, basically from the ground up, invented a high-density display for dynamic data that can project information onto your eye. And it’s not just the display; it’s the control system that drives the individual pixels. It’s the optic that sits on top of the display in the lens and, at that close distance to your eye, focuses the light onto the back of your retina.

All of those elements of the system had to be invented to make this work, so that you could actually see content.

John Koetsier: So, what can you do with the current version? 

Steve Sinclair: Yeah. So, right now, you can put it up to your eye and you can see content floating in space in front of you. So we’re able to use it to display any kind of text or graphics that you want to see.

We show off demos right now and you’ve probably seen some of these — maybe you could show it while I’m talking, potentially — which is being able to see your sports biometrics as you’re riding or cycling or running. It can show you content if you’re a traveler and you’re rushing through the airport and you need to see your gate, or you need to see when your Uber driver is arriving.

All of that is kind of in-the-moment data that we can project onto your eye while your hands are busy, while your eyes are up, while you’re trying to interact and engage in the world. We can bring you that information in the moment.

And so, it literally is content that’s floating in space around you. We can lock it. So, it could be locked in the world, so you can see some content to the right, to the left, to the bottom, up and down, wherever you want to look. And it’s up to you to decide when you bring that content up, and when you don’t want to see it anymore, it all just disappears and you’re just looking at the world. 

John Koetsier: It’s pretty interesting, right? Because, of course, if you have a smartwatch, you can have different watch faces and you can have the information that you want on your watch face.

Now, you’ve got different world faces with the information …

Steve Sinclair: That’s right.

John Koetsier: … you want on the world. Now this is driven… we talked about this last time, I’m sure, but this is driven by an app on a phone, correct?

Steve Sinclair: Well, it’s driven by data that could be on your phone. It could be in the cloud.

Most of the applications that we’re using run on an accessory that you’re also wearing somewhere on your body near the head. We call it a relay accessory. It could be built into a hat or a helmet, it could be built into a pair of safety goggles, it could be built into a neck band. And so, that’s probably the form factor we’re going to start with. It has to be relatively close to the eyes because the transmit power of the lenses is not particularly high.

And so that relay essentially has compute: it has an application processor in it, it has memory in it, and a GPU so that it can stream the content to your eye. At the same time, it’s pulling eye-tracking data off of your eye, simultaneously, to figure out where you’re looking and where content should be placed in the world. And that’s one of the other big innovations we’ve made with this product: we built in eye tracking so we know exactly how to place content in space around you.
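
To make that relay architecture concrete, here’s a minimal sketch of the loop Sinclair describes: pull gaze data off the lens, work out where world-locked content belongs, render on the relay’s GPU, and stream the frame back. Mojo hasn’t published this system, so every object, method, and the refresh cadence below are hypothetical.

```python
# Hypothetical relay loop: gaze data in, rendered frames out.
# All objects and methods here are illustrative, not Mojo's API.
import time

def relay_loop(lens, renderer, scene, fps=60):
    while True:
        gaze = lens.read_gaze()          # orientation from the in-lens motion sensors
        # Keep world-locked items stable: re-project each one into
        # display coordinates given where the eye is pointing right now.
        visible = [item.project(gaze)
                   for item in scene.items
                   if item.in_field_of_view(gaze)]
        frame = renderer.draw(visible)   # rendered on the relay's GPU
        lens.stream_frame(frame)         # sent over the low-latency radio
        time.sleep(1 / fps)              # illustrative refresh cadence
```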

John Koetsier: Fascinating. You talked about building in a power supply. Is that a tiny battery in the contact lens, or are you harvesting ambient energy from radio waves? Are you transmitting it wirelessly from some jewelry that I’m wearing? How’s it getting there? 

Steve Sinclair: Yeah. So, we looked at so many different ways to power the lens and all of those different options that you mentioned were all, I think, on the list at one point or another. In fact, we built a prototype at one point that used magnetic, inductive coupling to power the lens wirelessly. And that didn’t work really great, so we kind of left that to the side.

Where we ended up is more conventional, insofar as we have small medical-grade batteries built into the lens that power the lens itself.

And so you charge it at night while you’re cleaning the lens, like you need to do for a contact lens. And when you take it out of its cleaning/charging case in the morning, you pop the lenses in and the battery is there ready to do what it needs to do. 

But we had to design our system to be very, very efficient. We don’t want to need a lot of power, and so every component that’s built into the system has been optimized in one way or another for the fact that it’s going into such a small form factor. And those batteries, like I said, are medical-grade batteries that have been on the market in one form or another for years in other implantable devices like pacemakers and such. So they’ve gone through a level of rigorous testing. We have to do more of that because it’s going on your eye, obviously, but this type of battery in a medical application like ours is not something brand new to the world.

John Koetsier: I feel like I should have asked you this last time — and maybe I did, but I forget — there are hard contact lenses and there are soft contact lenses, which is this?

Steve Sinclair: So, you’re right. Most people are familiar with soft hydrogel contact lenses like the daily disposables, or weekly or monthly disposables. When they’re done at the end of the day, they throw the lens away, they don’t have to worry about it, they pull another one out. That’s one type of lens. There are lots of other types of lenses in the world. There are hard lenses, as people call them, which are generally corneal lenses. They sit and rest on the cornea of your eye, where you have lots of nerve endings, and they can be quite uncomfortable for people. And so that exists in the world.

What we’ve done is we’ve taken a platform called a scleral lens. And a scleral lens platform is different from both of those. It is rigid, some might call it hard, but it’s basically a piece of plastic that vaults over the cornea.

So if you imagine this is your eyeball, it vaults over it. It doesn’t actually touch the cornea. It rests on the white of your eye, the sclera.

And the white of your eye doesn’t have a lot of nerve endings. You can actually touch the white of your eye — you can go ahead and do that, or your listeners can go ahead and touch the white of their eye [John laughs] — you can do that. 

And so, we measure your eye when you come in to your optometrist and we get the shape of that eyeball. It’s not perfectly round. It’s got lots of ridges and bumps and such, and we cut the inside of the lens to match the shape of your eye so it rests very comfortably. So, it is a little bit bigger than what people think of when they think of either corneal hard lenses or daily disposable soft lenses.

But it’s quite comfortable on the eye because it’s been cut like a puzzle piece to match your eye.

And that’s important for a couple reasons. One is, we don’t want it rotating when you’re wearing it, because we want the display pointed exactly where it needs to be pointed, which is the sharpest spot at the center of your retina, called the fovea. And we also don’t want it to sag or to move around.

We want it to not touch the cornea, which is the sensitive part.

So, this type of lens has been around for decades. It’s been used for people with irregularly shaped corneas, a medical condition that requires them to basically have an artificial cornea, which is what a scleral lens provides. Or they have really severe dry eye: as people get up in age, their eyes dry out a little, and if they want to continue to wear contact lenses, they’d get prescribed this type of lens. So it’s been around for a while and it’s a great platform for us to build this type of solution.

John Koetsier: I’m torn between saying, “Don’t try this at home,” in terms of touching your eye, or just doing a little quick demo. I wear contact lenses, so I’m familiar with touching my eye. It’s no big deal whatsoever. It’s not hard at all.

Steve Sinclair: That’s right.

John Koetsier: It doesn’t hurt [touches eye with fingertip], no issue whatsoever. But for people who don’t have contact lenses, some of them don’t do that.

I want to talk about the fundamental advances that you had to make just in this last generation here. Maybe even before we get there, talk about the uses that you’re seeing right now with this generation. We talked a little bit about seeing things floating in your field of vision, finding your gate, other things like that, augmenting the world. You’ve also kind of hinted at athletes, maybe your lap time or maybe your heart rate or whatever other data that you might need to tailor your performance to your capabilities and come out with the best possible end result.

You’ve also explored sort of therapeutic uses, right? For people with low vision, highlighting features in the world: there’s a street here, there’s a drop-off here, there’s a sidewalk here, those sorts of things. Is that still on the menu?

Steve Sinclair: Yeah. A lot of that is still on the menu, and it’s both fun and frustrating when you’re working on a product like this… you could do almost everything. It’s got so many different possibilities, and how do you narrow it down? And so part of the job that my team and I, as product managers, are responsible for is making those trade-offs and deciding how we get down to something more focused so we can succeed. Ultimately, to build a platform like we’re trying to build, we have to have our sights set on all consumers.

Anyone who wants to wear a contact lens and wants it to be smart, you know, should be part of our ultimate target market for what we’re doing here.

But you also know that, as a small startup, you don’t jump straight to trying to sell to all consumers. That’s just not… we’re not Apple. We’re not, you know, Meta yet. So, we’re not going to be doing that in the short term, and we had to take very pragmatic steps to get there.

So one of those you mentioned is helping people with low vision and vision impairments see and navigate the world better. And that’s an important element for us for a couple of reasons. One is, augmented reality is a great tool to help people with low vision. So, we’re excited about how we can use edge detection, and zooming in and out of pictures and text that we can bring up to people’s eyes, so that they can quickly assess where they’re looking and where they’re going, avoid obstacles, and navigate faster.
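
As a rough illustration of the edge-detection idea Sinclair mentions, here’s a minimal sketch using off-the-shelf OpenCV. Mojo’s actual low-vision pipeline is unpublished; this just shows the core technique of amplifying scene contours so obstacles and drop-offs stand out.

```python
# Minimal edge-enhancement sketch for a low-vision overlay,
# using standard OpenCV; not Mojo's actual pipeline.
import cv2

def highlight_edges(frame_bgr, low=50, high=150):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
    edges = cv2.Canny(gray, low, high)         # keep strong contours only
    overlay = frame_bgr.copy()
    overlay[edges > 0] = (0, 255, 0)           # paint contours bright green
    return overlay
```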

And there’s a regulatory aspect of that. There’s a clear medical benefit, as far as the FDA is concerned, if we have a product that does things like that. And so that can help us through their Breakthrough Devices Program to get approval for this, for people with low vision conditions. So, that’s great and we’re super excited about that mission and continuing to build off of that mission. But at the same time, we want to take what we’re doing and make it apply to more people. 

So, we’ve specifically picked the sports and performance and athletic active users as a first early target market for us, for everyone else who has quote/unquote “normally sighted vision,” because we can give you eyes-up information while you’re working out, while you’re competing, while you’re training.

And those are people who are already choosing contact lenses because they’re doing something active, and so they want something that’s not slipping around on their face, getting sweaty, and potentially getting in the way. And they also don’t want to look down at a wristwatch or a smartwatch or, you know, at a smartphone, and lose their focus, lose track of where they’re going, and potentially create a safety issue.

So there’s a lot of good reasons why that particular area of athletics is important to us. 

We don’t see ourselves as a sports or athletics company necessarily, we have that broader longer-term view of what we’re building as our ultimate goal. But in the short term, those two markets for us really will give us the proof points we need that we can build a smart contact lens that’s useful to somebody and that they’re motivated to want to use, because they want to have some invisible edge in what they’re doing, and then it’ll grow. If you put it on in the morning and you’re using it for your workout later that day, you might use it again at work, might use it throughout the day in a lot of different activities. And so we’re going to open it up to a lot of different types of applications so you get useful information throughout the day. 

John Koetsier: There’s just so many applications, it’s really unbelievable. I’m big into hiking, and you do not want to look down at your wrist or at a phone when you’re on a mountain trail, you kind of want to know where your foot needs to go, right? And so being able to see that is amazing.

We haven’t even talked about the augmented reality world that Google Glass sort of pivoted to, you know: machinists, technologists who are working on perhaps airplane engines or complex engineering builds or something like that, having the schematics overlaid. There are so many different options in those sorts of things.

Now, you talked about fundamental breakthroughs.

Clearly there’s some in terms of battery. Clearly there’s some in terms of building… I’m assuming it’s Bluetooth, a radio that’s communicating to and from the jewelry to your eye, basically. I’m assuming there’s some in visual technology, whatever you’re using for a screen, right? Talk about the fundamental breakthroughs that you had to make to get this unit out. 

Steve Sinclair: Yeah.

So, the display I already mentioned, so I won’t cover that again, but that’s a huge core element of what we’re doing, as is the optic that we had to build, which is basically a teeny, tiny, reverse Cassegrain telescope, like the Hubble, built into a lens.

Like those are some interesting things that we had to do just from a display and projector perspective. 

The radio itself, we built that from the ground up. It’s actually our own proprietary low-latency protocol. It’s not Bluetooth, because Bluetooth was too slow and too high-latency for us… too power-hungry in some respects. And so we had to build that from scratch as well to make this work.
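
A quick back-of-the-envelope shows why latency is such a hard constraint for an eye-worn display. A saccade can rotate the eye at roughly 700 degrees per second, so every millisecond of radio latency translates into potential content-placement error. The latency figures below are illustrative, not Mojo’s measurements (7.5 ms is Bluetooth LE’s minimum connection interval).

```python
# Why radio latency matters for world-locked content: placement error
# grows linearly with latency during fast eye movement. The saccade
# speed is a standard physiology figure; latencies are illustrative.
SACCADE_DEG_PER_S = 700  # rough peak rotation speed of a human saccade

for latency_ms in (1, 7.5, 30):  # 7.5 ms = Bluetooth LE's minimum connection interval
    drift_deg = SACCADE_DEG_PER_S * latency_ms / 1000
    print(f"{latency_ms:>5} ms of latency -> up to {drift_deg:.1f} deg of placement error")
```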

The eye tracking is, you know, a lot of effort’s going into making that happen. Because for augmented reality to really be useful, you want to place things around you in the world, which means you have to understand where people are looking and how their eyes are moving, so that you can contextually place things in places that make sense. 

So, working with some partners, we built the right sensors into the lens and then built the algorithms on top of that to really track how your eye’s moving. It’s very similar to what’s in your smartphone today, as far as the elements — the accelerometer, the gyroscope, and the magnetometer — but it’s on your eye. And so we had to do some extra things to make that possible. But it’s far more precise and accurate than, say, the camera-based observational eye tracking that you see in other headsets today.

And so we think we have about a 10X improvement in eye tracking built into Mojo Lens. 
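
Mojo hasn’t published its eye-tracking algorithms, but the textbook starting point for fusing the sensors Sinclair names is a complementary filter: integrate the gyroscope for fast, low-noise rotation, then bleed in a drift-free reference from the accelerometer or magnetometer to correct slow bias. A generic single-axis sketch:

```python
# Generic complementary filter for IMU orientation, one axis.
# Not Mojo's algorithm; just the standard sensor-fusion baseline.
def fuse(angle_deg, gyro_dps, reference_deg, dt_s, alpha=0.98):
    """One filter step.

    angle_deg     -- current orientation estimate (degrees)
    gyro_dps      -- angular rate from the gyroscope (degrees/second)
    reference_deg -- drift-free angle from accelerometer/magnetometer
    dt_s          -- time since the last sample (seconds)
    alpha         -- how much to trust the gyro vs. the reference
    """
    integrated = angle_deg + gyro_dps * dt_s  # fast but drifts over time
    return alpha * integrated + (1 - alpha) * reference_deg
```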

And why that’s important is that it unlocks a new user interface for us. So here’s an area where we think we’ve really got something interesting, which is using just your eyes to manipulate, control, and access the information. So, not using your voice, not using gestures, not tapping or swiping on something… nobody wants to tap their eye to do something like this either. And nobody wants to do blinks and weird gyrations with their eyes; they just want to look.

The eye is one of the world’s best pointing devices.

So we’ve instrumented the eye with our eye-tracking hardware and software to know exactly where you’re precisely looking, so that the system can pick up on those cues and you can interact with the content. So you can scan across information, you can automatically scroll blocks of text just by reading. It makes it very easy to pick and choose and do things with just your eyes. And so, we think that paradigm is just a huge step forward in user interface.
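
One common way an eyes-only interface can “click” without blinks or gestures is dwell selection: an item activates once your gaze has rested on it long enough. Mojo’s actual interaction model isn’t public, so the names and the 400 ms threshold below are assumptions.

```python
# Illustrative dwell-based gaze selection; the threshold and names are
# assumptions, not Mojo's published interaction model.
DWELL_S = 0.4  # how long a gaze must rest on an item to activate it

class GazeButton:
    def __init__(self, label):
        self.label = label
        self.gazed_since = None          # when the current dwell started

    def update(self, is_gazed_at, now_s):
        """Call every frame; returns True once per completed dwell."""
        if not is_gazed_at:
            self.gazed_since = None      # gaze left the item: reset
            return False
        if self.gazed_since is None:
            self.gazed_since = now_s     # dwell begins
        if now_s - self.gazed_since >= DWELL_S:
            self.gazed_since = None      # re-arm so it fires once
            return True
        return False
```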

John Koetsier: Very cool, an order-of-magnitude improvement there. Absolutely necessary as well, because I don’t want to be staring fixedly in a certain direction for a long period of time. I want it to pick up pretty quickly what I want and go for it.

Let’s talk about timeline. What’s left to do before this becomes available? And I’m assuming it’s going to be under some controlled circumstances initially. There’s probably some significant real-world test phases, all that stuff, but what kind of timeline are we looking at? 

Steve Sinclair: Yeah. You know, I’m not gonna break any news here with a date or actual schedule… 

John Koetsier: October 13! [Laughter]

Steve Sinclair: No, I’m not going to give you that. And my stock answer to that question is, well, the FDA is the one who determines whether we’re safe and effective, and so it’s really their call, at the end of the day, when we can prove to them that it’s safe for someone to wear and it does what we say it’s going to do. And that’s critically important to us to do that the right way and to work with them on making that happen. That being said, you know, we’ve been at this for six years. We feel like there are fewer years ahead of us than there are behind us, so we think we’re… we see a time soon where we can have something out in the market if we do all the right things. 

What we’re doing in this phase right now — now that we have this prototype — is we’re going to start wearing it. We’re going to start testing it internally. We’re going to start really understanding the edges of performance and making some of the trade-offs we need to make as product people to build a good experience for this product. So, not everything that we’ve got in there may make it to the first product, and for things we hadn’t thought of, we may have to make changes and modifications.

So, there’ll be a period over the next year or so where we are really refining and learning from what we’ve built, not just from simulations, but actually from wearing it. And that’s done under clinical observation by the optometry staff we have here, making sure that everyone’s doing things safely and we’re learning about comfort and fit. We have to make sure it’s a good contact lens. It can’t just be a smart contact lens; it has to fit well and actually be good at correcting your normal vision. So we have to work on those things. We’ll also be working with people with low-vision conditions to make sure that we’re building the right things for them.

And then we have to lock in the product. And once you lock in the product, then you go into clinical trials. And that’s where you start recruiting people for the real deal. And they’re wearing the lens, they’re taking the lens home, they’re reporting back on what their experiences are. And then you take all of that data, you have to collect it and put it into a package that you submit to the FDA. And then you wait, and you hope that it comes out the other side as an approval. Now, while you’re waiting, you’re not just sitting on your hands, you’re working on new software, you’re working on experiences, you’re working on the next iteration of the product. So, there’s a lot of things that we can do between now and then, but it’s exciting because we’re at that point where we think we really can, you know, no pun intended, but we can see a point where we have a product that we can ship. 

John Koetsier: Well, that’s exciting. Let’s turn to the future then, and not so much your products specifically, but what it unlocks. As I mentioned to you before we started recording, I’m focusing TechFirst and my Forbes column on mega trends in tech: virtuality, XR, MR, AR, smart matter, biotech, a number of them. Where does this fit into the future? What’s it unlocking and what’s it a step towards?

Steve Sinclair: Yeah. So, we’re cutting across a number of those categories as you mentioned there, and it’s exciting that we’re starting to see some of that convergence happening across multiple areas, whether it’s AI and machine learning with AR and VR, with cryptocurrency and NFTs for what people are calling the metaverse.

But, for us, it may look like, well, you’re building a contact lens that does augmented reality. But longer term, our goal and our broader vision is much bigger than that. We do see ourselves as a health tech company.

And I think that convergence between traditional consumer tech and health tech is going to become a mega trend, if it isn’t already. You’ve started to see it with the Apple Watch and some of the capabilities being built into the watch today. We’ve built a company from the ground up that assumes there is both a medical aspect and a consumer technology aspect to what we’re building, and we’re trying to pull those together. And I think we’re at the forefront of that trend.

We’re going to see a lot more companies, I think, that build that into their DNA early on, trying to merge those two things together. Because once you’ve built a platform that you can put on your eye, sure, we can show you augmented reality, but what if we could look inward? What if we could look inside the eye and detect or diagnose certain diseases that are oncoming? What if we could help you understand the onset of certain conditions, say, like a migraine?

What if we could measure the tear fluid on your eye to check other information, or understand the intraocular pressure of your eye and detect the onset of glaucoma?

These are all possibilities. The eye-tracking data alone could allow researchers to understand a lot of things about disease that they otherwise wouldn’t be able to track continuously today. So we really think it’s a great platform for that sort of thing. And it’s not just a platform that we can take advantage of, but that we can partner with other medically focused companies to become more of a research platform as well.

So we can build so many different things that eventually go beyond just a contact lens; it could become something that goes inside the eye. There are all sorts of ways we could take this. But it’s great, because it’s on the eye. That’s an advantage we think we have over smart glasses, because we’re as close as you can get without going inside, and there are just a lot of things that could be measured and potentially learned and addressed.

John Koetsier: Well, it’s super interesting. I’m fascinated by the possibilities there. I once again volunteer as tribute when you’re sending out tester units. I look forward to a time when, yes, I can see the information that I want in my visual field. I can know where I need to go. I can put up my notes as I’m talking to somebody like you, and I don’t have to look down or something like that… that’s the question I wanted to ask, that sort of thing.

Steve Sinclair: Yeah.

John Koetsier: And also, you know, “Hey. Whoa, that’s amazing!” as I’m on some hike in the Grand Canyon. “Let’s zoom in,” right? [laughing] You know, and all kinds of other capabilities that I’m sure will come in the years and decades to come. Thank you again for your time, Steve. 

Steve Sinclair: Yeah. Well, thank you. I really enjoyed talking to you.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Consider supporting TechFirst by becoming a $SMRT stakeholder, connecting to my YouTube channel, and subscribing on your podcast platform of choice: