If you have an iPhone, you’ve got a notch. Now there’s tech that can get rid of that notch … and the same tech can bring secure Face ID to Android: at a fraction of the cost. Ultimately, it can bring biometrics everywhere, and make sophisticated sensing technology incredibly cheap.
In this TechFirst, I chat with Metalenz CEO Rob Devlin about his metasurfaces product. (Remember them from 2021?)
Not only can Metalenz produce about 10,000 lenses on a single 30-centimeter wafer, just like computer chips, it can now decode polarization information from the light reflecting off surfaces.
Check out my story at Forbes, or keep watching/reading/listening below …
(consider subscribing on YouTube?)
This gives them data on what that surface is made from, and that is a huge advancement for biometrics, phones, medical devices, and robots.
The technology, which can capture and process specific wavelengths and polarization information, enables smaller, cheaper, and more efficient optical systems. Metalenz’s partnership with STMicroelectronics has put metasurface optics into a 3D sensing module whose earlier generations have shipped in over 150 different smartphone models.
Biometrics everywhere: the podcast version
Metalenz and metasurfaces: full transcript
Rob Devlin: On this single 300 millimeter wafer, we have about 10,000 lenses in this case. A single wafer, you print 10,000 lenses, and it really has allowed us to take something that was, let’s say, an academic research topic and scale it into smartphone volumes very quickly.
John Koetsier: Are we about to have biometric sensor tech in just about everything? Hello and welcome to TechFirst. My name is John Koetsier.
Three years ago I chatted with a young Harvard lab startup with a crazy idea. They wanted to print camera lenses, 5,000 at a time, on wafers, just like computer chips. It was called Metalenz, and the company’s product is in the market right now and could save the smartphone ecosystem billions of dollars every year.
They’re in some shipping products and probably in more soon. You can also add vision and biometric capabilities to just about anything for maybe four or five dollars of hardware cost. Metalenz is now releasing a new product, in fact, and CEO Rob Devlin joins us to talk about it.
Rob Devlin: Thanks for having me, John. Definitely looking forward to catching up. It’s been a while, and yeah, a lot has happened here at Metalenz since we talked last.
John Koetsier: I had no idea how long ago it was.
I searched on my website where all the episodes are, and it was literally almost exactly three years ago. It was February of ’21. Yeah. And yeah, it’s been a while. Bit of a ride over the last few years.
Rob Devlin: Oh yeah, there’s definitely been quite a lot going on at Metalenz. As you mentioned, we now have a first generation of this metasurface technology out there in the market, and we announced a partnership with STMicroelectronics to launch the first metasurfaces ever, anywhere.
And so this is actually in a 3D sensing device. The module that ST sells, which is now integrating our metasurface optics, has been in something like 150 different smartphone models. Historically, they’ve sold about 2 billion units of this module, and in 2022 they announced that they’re moving to Metalenz technology going forward for all of their modules.
So definitely quite a lot has happened, and quite a lot is now out there in the market because of what we’ve been doing here at Metalenz with these metasurfaces.
John Koetsier: Very cool, repeat customers. That’s not a bad thing to have. A validation. That’s great. Now, in our first episode, a little over three years ago, we went in depth on metasurfaces and the technology, ’cause it was so brand new at the time. People can go back to that episode if they wanna see everything about it.
But give us a quick refresher for those who aren’t remembering, or maybe are brand new to the show. What is a metasurface? What is a metalens?
Rob Devlin: Yeah. Really, if you look at it from what we’re doing here at Metalenz: optics and cameras are pretty much everywhere now, right? We’re using one here.
Everyone walks around with one in their pocket from even the highest end cell phones down to the very lowest end cell phones. There’s a camera in everyone’s pocket, basically. And if you look at the lenses in those cameras, they really haven’t changed much, frankly, going all the way back to Galileo’s telescope.
Those are still the same general lenses. You have these shaped and molded refractive lenses in cameras, and often in order to produce a good image, you might need something like five or six of them stacked up on top of each other.
And what you can do with a metasurface, this technology we have, is replace five or six of the existing lenses in a camera with a single, completely flat surface that we make. You don’t need any of the curvature that you need in a traditional refractive lens.
So what this allows you to do is really take these complicated camera systems and reduce their size, reduce their form factor but also even do things that traditional lenses can’t. And so ultimately it lets you make a cheaper, better camera in many cases.
And because we’re making these completely flat, completely planar, it allows you to actually take those optics that have traditionally been molded and shaped and start making them, as you mentioned, in the same labs and in the same factories that are making the computer chips.
So it lets you print them just like chips.
And I think actually the last time I was on, I held up the wafer here, and I have our wafer with our metasurfaces on it again. So this is a 300 millimeter wafer from a pure-play foundry partner that we have partnered with, UMC.
And if you can see some of the individual chips here: on this single 300 millimeter wafer, we have about 10,000 lenses in this case.
A single wafer, and you print 10,000 lenses. It really has allowed us to take something that was, let’s say, an academic research topic and scale it into smartphone volumes very quickly, because the beauty of this technology is not only what it can do for shrinking systems to a much smaller form factor and reducing the complexity, but that it actually allows you to start making optics in the foundry.
So we are able to leverage the infrastructure of the semiconductor fabs, which has been honed and practiced for 50 plus years now.
John Koetsier: And it’s super high volume and 5,000, geez, 10,000 on that single wafer.
It’s impressive. Obviously cost can come down. And if I’m not mistaken, you can also capture different wavelengths of light intentionally. Is that correct?
Rob Devlin: Yeah. Yeah. And that was one of the things where you know, for the technology as it exists today, it isn’t at the point where it can capture the full visible spectrum.
As we’re going through it right now, we’re not working on visible cameras, but you can work at specific wavelengths that the human eye can’t see. You can also separate out individual wavelengths within the spectrum. So you can essentially, with a metasurface, extract more information from the light coming through the lens than you would with a traditional refractive lens.
Traditional refractive lenses will throw a lot of that information away and it just essentially gets averaged out once it hits your image sensor. But with the metasurface, you can specifically design it so that it parses this information so that it keeps this information. And this is really the core of where we have focused our efforts.
It’s not necessarily just replacing the existing lenses for visible cameras. It’s actually saying, okay, what is something unique and new that we can do with this technology that you cannot do in any other way or certainly not in a way that is going to be compatible with the cost and the volume of something like a mobile device.
And that’s where we focus for the first generation, where these 3D sensing systems that tend to be some of the most complicated and expensive systems that are still being put into phones, it allowed us to simplify those.
And now with our new product, Polar ID, we’re taking something that is really an optical sensing system that has been trapped in medical labs and industrial facilities because of how big and expensive these things are, and we’re making that accessible to millions, and hopefully over time billions, of devices in mobile for the first time, really.
John Koetsier: So I wanna get into that in just a moment. I’m just thinking right now, as you develop this technology, as you go forward with it and capture more and more wavelengths, maybe the full visible spectrum in a few years, I’m really interested to see what this technology could do for astronomy.
I’m a bit of an astronomy geek, right? And so you’ve definitely got, obviously, lots of different telescope types and lots of innovation there as well right now, in terms of telescopes that you don’t even look through. They’re sensor-based and you see the result, the output, on an iPad or something like that. Amazing.
What astronomers do right now is they’ll often use filters or something like that, so they’ll only capture certain wavelengths to see different things. But obviously there’s so much more that you can do if you capture it all at once. I won’t ask you to comment on products that might come out in five years … but the potential is amazing.
Rob Devlin: I think one of the interesting things there is, a lot of the time when you’re doing astronomy, again, you’re looking at these different wavelengths, and the different wavelengths will tell you something about what’s going on with the object you’re looking at.
It’ll tell you about the spectrum; it can tell you about what chemicals and what molecules are in that particular object that you’re looking at. And you do often need to take a series of images. The other thing that is really interesting, and this comes back to the product that we’re making now, is looking at the polarization information coming in from the light.
Polarization, again, is something that traditional optics will throw away or won’t be able to use. It has a whole host of information about what’s going on: what is the material I’m looking at made up of, all of these different things.
So with a metasurface, being able to capture this additional information … if you come back to your example of astronomy or the one we’ll get into for our product, it’s really about being able now for the first time to parse this information, retain the information, and learn much more than just what does this thing look like?
It’s, what is it made up of? It is making these really complicated devices accessible to essentially everyone.
John Koetsier: Let’s talk about Polar ID then. It’s your new product. You are partnering with Samsung to make it. What is it? What’s it for? What does it do? What makes it unique or different?
Rob Devlin: Yeah, absolutely. And so Polar ID, just as a quick general lead-in here, and then I’ll show a video real quick that’ll talk both about the concept of polarization, ’cause I think this is something which people are probably generally not familiar with …
… but also it’ll talk about the product specifically, and the beachhead we see in terms of using polarization as a way to enable secure face unlock for all phones, and the Android community especially, and how we’re enabling that with polarization.
And then also some of the other things we can do with polarization. But generally with Polar ID, what we’re doing is we’re making it so you can have a much cheaper, simpler biometric solution that still maintains security and convenience, just like you have in the existing solution in the iPhone.
So, iPhone has this face unlock feature, but it really hasn’t propagated outside of the iPhone because that module is very expensive, very complicated, takes up a lot of space.
So the Android phones have wanted biometric authentication, but haven’t been able to essentially foot the bill because it’s too expensive. And in many cases now, they’ve wanted it so badly that they’re starting to implement solutions that are not necessarily secure, and the end user may not even realize they’re using these not-so-secure solutions.
John Koetsier: Just to pause there for a second, that is interesting, right? Because you see a lot of fingerprint unlock, which I think is an order of magnitude less secure than Face ID on an iPhone. And I think what you’re referring to, probably, without getting into too many specifics, is just using visible light.
So you take a picture, and then face unlock, and there I go. And that’s been, I dunno about easy, but that’s been defeatable with just a photo.
Rob Devlin: You can defeat it with a picture. Some of the more sophisticated ones, where they started applying more AI and some machine learning, you can’t trick with a picture, but you can still trick with a mask. And often, again, the end user wants this because not only is it more secure than fingerprint when you start moving to facial recognition …
It’s also just more convenient: simply swipe up and your phone unlocks. Especially, we’re here in Boston, if you’re wearing gloves in the winter, it’s a pain to be using your fingerprint sensor, right? So it is a better way, and users have wanted this, but often they don’t know that it’s less secure if you’re just using the selfie camera, because it can be tricked.
There’s fine print in all of the solutions that people have put out there, but we often don’t read the fine print. And it’s also then less convenient, because if you are just using visible light, you need a source of light in order for it to work.
So it doesn’t work in the dark.
Often the range is limited. It won’t work if it’s sitting on the dash of your car and you just wanna swipe up; then you have to go back to doing something else.
So, what we’re able to do with Polar ID is really bring down the cost while maintaining the security. So we don’t sacrifice security, we don’t sacrifice convenience, and essentially we’re able to make something that could be a third of the cost of the existing solutions out there.
So again, with Polar ID, we’re enabling face unlock secure biometrics. And we do this with polarization information that lies at the heart of this. And with polarization, we are able uniquely with a metasurface to capture and retain that information. Again, traditional optics throw it out.
So whenever light hits anyone’s face, when it bounces off the shape of your face, the material that your face is made up of versus a perfect 3D mask of you will polarize the light differently.
And so if you pass this through traditional imaging systems, it throws away that information. But when we put a metasurface in there, we can actually parse, sort, and retain that polarization information.
So it allows us to make a shape map of your face, but then it also allows us to say, this is human skin and not silicone.
So, interestingly, from that underlying polarization signature, you can then say: even if it’s a perfect 3D replica of John that is trying to unlock your phone, the material will be different, and therefore it will polarize the light differently. And so we can say, sorry, that may look exactly like John … the 2D image I take may look like John, and the 3D shape may even look like John, but the material’s different.
Sorry. Stop. Don’t let that person in.
John Koetsier: Talk about what polarization is. What is that process? What does that mean?
Rob Devlin: Right. So again, with light, it’s not just wavelength that carries information.
There’s also polarization information, and it really tells you about, essentially, the direction the electric field is oscillating. But more than that, we experience this even in everyday life. You may have polarized sunglasses, right? So light comes in, it’s unpolarized, it hits the car that you’re driving next to, and when it reflects off of the car, it takes on a certain polarization based off of the shape and the composition of that car. And then with your polarized sunglasses, you are able to filter out that polarization. So really, what polarization is giving you is information about the shape of the object you’re looking at and the material it’s made of.
If you can retain that, you know much more about what it is you’re looking at: you know what it’s made up of, you know how it’s shaped.
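(A quick aside for the technically curious: polarization imaging is usually boiled down to Stokes parameters. Below is a minimal, hypothetical NumPy sketch of that standard math, not Metalenz’s actual pipeline, showing how four intensity images filtered at 0, 45, 90, and 135 degrees can be reduced to a per-pixel degree and angle of linear polarization, the kind of “signature” Rob is describing.)

```python
# Hypothetical illustration only: standard Stokes-parameter math for four
# linear-polarization channels. This is NOT Metalenz's actual pipeline.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """S0, S1, S2 from intensity images filtered at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical component
    s2 = i45 - i135                     # diagonal components
    return s0, s1, s2

def dolp_aolp(i0, i45, i90, i135, eps=1e-9):
    """Per-pixel degree (0..1) and angle (radians) of linear polarization."""
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

# Toy usage with random frames; real inputs would come from a polarization sensor.
rng = np.random.default_rng(0)
i0, i45, i90, i135 = (rng.random((480, 640)) for _ in range(4))
dolp, aolp = dolp_aolp(i0, i45, i90, i135)
print(dolp.shape, round(float(dolp.mean()), 3))
```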
John Koetsier: Very cool. So what are we looking at right now?
Rob Devlin: So then, if you take it from a product level, how are we using that polarization information? It sounds very esoteric, right? It’s how the electric field is oscillating …
But what we found is, with polarization, we are then able to do this secure biometric authentication. And so what you’re looking at here is on the top, this is the structured light module that is used in the iPhone. You have this very big, complicated and expensive module, and this is primarily what has kept this from propagating into all of the Android phones.
It’s the cost, it’s the size. This is also why, when you look at an iPhone’s display, there is a notch. But what we’re able to do with Polar ID and this polarization information is actually give you everything you need to both recognize the individual.
So making sure that their 2D features match up with what that individual looks like, but also to authenticate.
And it’s that authentication piece, which is making sure it’s not someone just holding up a photo of you or a mask of you, and we’re able to do that with a metasurface. It gets rid of one of the most expensive modules, this structured light illuminator.
But then we also don’t need this additional space between the thing that is sending out some laser light and the camera that is collecting it. So in the end, we can have a much smaller, simpler, cheaper solution that still has all of the security.
John Koetsier: No notch.
Rob Devlin: No notch. Exactly. And even over time, this can go under the display of the phone because now it’s small enough, the z height is small enough for this to even go under the display of the phone.
And so this will just show now how the authentication piece works, and compare that with what we’re doing with Polar ID. So again, when you unlock your phone, your iPhone for example, it takes a 2D image with one of the modules that is there, and then it actually shines out about 30,000 dots onto your face in order to build up a 3D map of your face, making sure it’s not a photo.
With Polar ID in a single shot, we get all of the information we need to recognize and authenticate. So that’s why we can make this smaller, simpler, cheaper. We don’t need two separate images, and we can do this all with just one single camera module.
John Koetsier: And is that more secure as well than what the iPhone uses?
Because if you’re polarizing and you can understand what the material is that the camera is looking at, that seems more secure than something that is just mapping a shape.
Rob Devlin: Yeah, there is some underlying mechanism that the iPhone is also using in order to do a little bit of material-composition analysis, based off of how these dots scatter the light back.
So they do still have a very high level of security, and they can reject 3D masks based off of their machine learning algorithms, but ours will be at least as secure as that. And as we build up more data over time, we may understand that it could even be more secure.
But as a starting point on the security level, being able to maintain that same security level, it puts it far and away above fingerprint, as you pointed out and certainly far and away above any other face based biometric solution that the Android community may be trying to implement today.
And I was just gonna say, perhaps one of the interesting things here, coming to that point: now that you have this polarization information, it may not necessarily be better in security, but it’ll at least be the same. One of the most interesting things for us, though, is that polarization is actually well known in medical labs and industrial facilities.
You have these really big cameras that are about yea big and maybe cost about a thousand dollars in order to collect the polarization information. What we enable, for the first time, is shrinking this to a form factor and a price point that is compatible with mobile. So although face unlock in phones today really does one thing, unlock your phone, with polarization it’s a whole new information set that hasn’t been there for users.
And there are a known set of applications. You can look at, say, a growth on skin, and you can tell whether it’s cancerous based off of the polarization information. You can do things like air quality monitoring; so you can actually, with polarization, tell what the local air quality is.
So over time we really see this as enabling a whole new set of applications.
John Koetsier: That’s pretty amazing actually, and I want to get into that. Other uses for this, uses that we want on our phones but other uses for it as well. Maybe just before we jump there, you are working with Samsung for a piece of this.
What do you need from them? Why are you working with them?
Rob Devlin: The key piece of what we’re doing in making Polar ID is we’re essentially sending the light to the image sensor with our metasurface, and we’re able to build up the polarization map based off of where we know we’ve sent each one of the individual polarizations in the light that’s coming in.
So a critical piece of that is the image sensor. We need a very high-performance image sensor, and we need an image sensor that we can integrate our metasurface with. And so we’ve partnered with Samsung on that image sensor, because they have a very high-performance image sensor that not only gives us the performance we need; the partnership with Samsung also gives all of the cell phone OEMs the confidence that there is a large partner, one that has already delivered image sensors in the hundreds of millions, now working with us. So it really has opened the door, and it puts us in a position where, again, we can very quickly scale and support the largest volumes.
Ultimately, there are about a billion plus Android phones out there that would like to have some form of secure facial recognition, but don’t, so we’re hoping over time that we’re gonna be really supplying to all of those. And Samsung is a partner that has the scale already because they’re already supplying to so many of those phones.
John Koetsier: Very cool. So is that the visible light spectrum that you said that Metalenz doesn’t capture yet, that you need from them?
Rob Devlin: No. So actually we’re still working in the near infrared for Polar ID. The near infrared is light that is just outside the visible spectrum, toward the longer end. And there are actually a couple of reasons why, for biometric authentication, you wanna work there.
Again, what you want to be able to do with face unlock in your phone is have it work both in very bright sunlight when you’re outside, but also when you’re in the dark, in your room, or if you’re, again, driving at night. So you want it to be able to work independent of lighting conditions. So if you work in the near infrared and you have an illumination source that is in the near infrared, it allows you to work in all of those conditions.
You don’t have to rely on it being a bright, sunny day for your phone to unlock. And then the other reason you work in the near infrared instead of the visible is that you don’t want something visibly shining on your face every time you unlock your phone. If you take the iPhone again as an example, it shines 30,000 laser dots on your face every time it unlocks.
If you were doing that and you had a bunch of red dots or green dots on your face every time, I think people would be much less likely to use this. Or, I don’t know, maybe that’s something that would be a draw for some people. But I think for the bulk of people out there, you want it to be invisible to the human eye.
So it’s all in the near infrared. And that’s actually why partnering with Samsung again has been important, because you then need an image sensor that has been very much optimized for wavelengths outside of the wavelengths that image sensors are normally working at.
John Koetsier: I guess the reason I asked the question is because I was under the impression that the metalens itself captures lots of this information in various wavelengths, near infrared, that sort of thing, and I was wondering why you needed the addition of another sensor from Samsung.
Rob Devlin: Yeah, the metasurface is able to parse through the light coming in. Then, in order to turn it into an image that you can work on, you still need the sensor at the back end. You still need the electronics. So what the metasurface does is send the light to specific spots on the image sensor, but then the image sensor still needs to be read out and turned into an electrical signal that our machine learning algorithms can work on and process, and so forth.
John Koetsier: I get it, okay. It’s like the CCD that you might have in a telescope or something like that.
Rob Devlin: Exactly. And it’s similar to again the visible cameras that you have, they all have a sensor at the back of the camera.
John Koetsier: Very cool. Okay. Awesome. Let’s talk about other uses. You’ve hinted that already.
Skin health, hey, that’s interesting to me. I’ve had skin cancer like four times. I’ve had surgery twice, and other things like that. So that’s super interesting, having that in everybody’s hand. Wow, that could be cool. And there’s gonna be all sorts of FDA and other regulations around that.
But if you can just take a picture, send it to a doctor or an analyst, and they can say, hey, come in, there’s something to be worried about, that’s already a big help. So that’s interesting. Having this sort of technology in other places could be interesting too. Maybe in a car, I don’t know, maybe in factory settings, where you want to determine: what is this material? Is it the right material? Is it the right thing? There seem to be thousands of potential uses for this kind of polarization technology that’s cheap and ubiquitous.
Rob Devlin: Yeah, absolutely. And just generally, for robotics, for AI, for machine vision, this is a whole new wealth of information being fed into those systems for the first time.
Right now, most of the vision systems that are out there for, say, autonomous driving or robotics, or any kind of IoT device, are still very much just relying on the intensity image, the images that we’re looking at now, and then they have to work really hard in order to understand what it is they’re looking at.
And so just from a general perspective, it will make machine vision systems and robotics and autonomous systems able to make more efficient, better, and more likely correct decisions. A really good example, whether it’s autonomous driving or factory settings, is that machine vision systems are really bad when there’s a transparent object in the way; they can crash into it.
Or, let’s say, a robotic arm trying to pick and place things in a factory setting won’t actually be able to see transparent objects. And so you still have to have human interaction in that case. But with polarization, it sticks out like a sore thumb.
The polarization signature is really clear from something like that. So I think, just as a general input that hasn’t been there for these machine vision systems, making this really cheap and ubiquitous ultimately lets these algorithms make better, more efficient decisions and keeps things from crashing into windows or something else like that.
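(One more aside to make the transparent-object point concrete: once you have a per-pixel degree-of-linear-polarization map like the one sketched earlier, flagging glass-like surfaces can be as crude as a threshold, since specular reflections off glass tend to be strongly polarized. This is a toy illustration under that assumption, not how any shipping machine-vision stack, Metalenz’s included, actually does it.)

```python
# Toy illustration: flag strongly polarized (glass-like, specular) regions in a
# degree-of-linear-polarization (DoLP) map. The 0.4 threshold is an arbitrary
# assumption for demonstration, not a calibrated or product value.
import numpy as np

def flag_glass_like(dolp: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Boolean mask of pixels whose DoLP exceeds the threshold."""
    return dolp > threshold

# Synthetic DoLP map with one strongly polarized patch standing in for a window.
dolp = np.full((120, 160), 0.05)
dolp[40:80, 60:100] = 0.7
mask = flag_glass_like(dolp)
print("flagged pixels:", int(mask.sum()))
```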
John Koetsier: Generally a bad idea! I can also see as we add more and more robots to manufacturing, to warehousing, logistics type of solutions, as well as in our homes, having an increased ability to know what is that thing that I’m going to be picking up.
What might it be made of? How strong might it be? We get all kinds of clues from that by just seeing things. If we see something that looks like it’s cardboard, we know, hey, it’s probably not that strong, but it can be fairly strong. If we see something that looks like a metal or a solid, or something that looks like foam, well, knowing what something is, is a really important part of knowing how to try to lift it, move it, use it, work with it, all that stuff.
The more information we can give to robots about that, the better.
Rob Devlin: Yeah, absolutely. I think that’s what it comes down to: it allows both people and ultimately robots and machines to understand the world around them in a way that really hasn’t been so accessible before.
And I think the other thing, just to put it in another perspective when it comes to putting this on mobile devices: there is a known set of applications for polarization, but now you’re taking a new information set and making it accessible to millions and billions of devices and people for the first time.
The set of applications we haven’t thought of yet, or that haven’t been thought of yet because this has been locked away in an esoteric setting, is another really exciting aspect of this. It brings that information set out there, and you can think about developers starting to work with it for the first time.
Millions and billions of people being able to … essentially, you’re crowdsourcing new solutions with this new information, where you haven’t been able to do that before because it’s just been too expensive, too big, and locked away.
John Koetsier: I think that’s super exciting. Just one of those applications that comes to mind is calorie counting applications.
They’re really bad at taking a picture of something and estimating: what is that? How many calories might that have? And so then you have all this “I have this many grams or ounces of something,” and you have to input it manually. The easier you can make that process, the better.
Rob Devlin: Absolutely. And it sounds to me like what we need to do is get this on the phone really quickly and get you a developer kit, John.
John Koetsier: Sure, yeah. I’ll poke at that with my hammer. How’s that? That sounds great. I have done some coding, but yeah, not recently. Rob, this has been a real pleasure.
It’s great to see what the product is turning into, and I look forward to seeing more interesting things in the future as well.
Rob Devlin: Absolutely. Thanks for having me again. And hopefully we talk in less than three years this time. And maybe it’s when Polar ID is out there and you’ve come up with your first app.
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.
Subscribe to my YouTube channel, and connect on your podcast platform of choice: