This startup prints smartphone lenses like computer chips, up to 10,000 at a time, and they capture the full EM spectrum

Metalenz CEO Rob Devlin holding up a silicon wafer with camera lenses

A new startup out of Harvard Labs has invented a way to print camera lenses 5,000 at a time, just like computer chips, in the same semiconductor foundries that make our computers' CPUs.

They're 100X thinner than standard smartphone camera lenses, simpler and cheaper to make, and they sense the full electromagnetic spectrum, not just visible light. They also have excellent 3D-sensing capabilities that could bring the Lidar-based depth sensing currently limited to high-end phones like the iPhone 12 to smartphones across the price spectrum.

In this episode of TechFirst with John Koetsier, I interview Metalenz co-founder Rob Devlin.

Scroll down to subscribe to the podcast, watch the video, and get the full transcript, and here’s the Forbes story for this TechFirst episode.

Subscribe: printing smartphone camera lenses

 

Watch: Metalenz’ new technology for 3D sensing, full EM spectrum cameras

(Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.)

Read: this Harvard Labs startup just raised $10M from Intel, 3M to reinvent smartphone cameras

TF160: Metalenz

(This transcript has been lightly edited for length and clarity.)

John Koetsier: Could a Harvard Labs startup completely reinvent how cameras work? Welcome to TechFirst with John Koetsier. We have literally billions of cameras today. Most of them have multiple stacked lenses, and they take up a decent amount of space on our devices. A Harvard Labs startup has found a way to shrink them 100X and capture the full EM (electromagnetic) spectrum, and that promises to unlock enormous potential.

To get the scoop, we’re chatting with Metalenz co-founder Rob Devlin. Welcome, Rob!

Rob Devlin: Thanks, John, it’s great to be here. 

John Koetsier: Hey, it is great to have you. Tell me, what have you built? Why is it a game changer? 

Rob Devlin: Yeah. So we spun out of Harvard University. For about the past ten years, the group where I did my PhD has been developing something called metasurfaces, or meta-optics.

And really the concept here is to boil optics down into one single plane or layer, a 2D form factor with nanoscale dimensions (roughly a thousand times thinner than a human hair), a single surface where you can completely control light. There's a whole bunch of information in light beyond just what we capture with our cameras, and so the idea was: can we completely control light with just this one single layer?

So out of the Harvard group, they came up with a way that just with one single step you can actually make a device that now gives you this complete control over the electromagnetic wave and all of the information, that rich information that’s enclosed in that wave.

John Koetsier: That sounds very cool. I mean, obviously there’s so much more you can do when you’re capturing more than visible light, and we’ll get into what that all means and what you can learn, what information you can get, and what kind of new products that might enable.

But it’s also way smaller than current cameras in our smartphones, right, as I showed earlier. You have a wafer there with — you said, what was it — 3,000 different lenses on it? Can you show that? 

Rob Devlin: Sure.

John Koetsier: And I’m going to make you large right now. So what are we looking at right here? 

Robert Devlin,
Co-founder and CEO, Metalenz

Rob Devlin: Right, so another thing that we’re doing with these meta-optics, the way that we actually make them and produce them is using standard semiconductor foundries. So if you look inside of a camera inside your cell phone, you have electronics, your transistors, you have sensors like the image sensor that’s actually going to record the image.

These are all made in semiconductor foundries. You know, if you look at Silicon Valley, right? Silicon Valley is named after the fact that we started making electronics on these wafers and could really scale.

What we’re able to do with these meta-optics is actually now move the production of the lenses into the same semiconductor foundries that are making the electronics and the sensors. This is really the first time that you’re able to do this, where you’re now making the lenses using these same processes as the electronics. And what this allows us to do is in a single shot, essentially, produce thousands of lenses simultaneously. 

John Koetsier: Wow.

Rob Devlin: Today, if you look at the way they make lenses, they're injection molded, or ground and polished, and they're pretty much assembled in a pick-and-place, one-by-one manner. Even though we're making billions of cameras for smartphones, it's still a relatively old and slow process.

John Koetsier: Interesting! So you’re making these the same way that you’d make a chip or computer chips in a fab. 

Rob Devlin: Yep.

John Koetsier: What about the quality? I mean, one of the first and most obvious use cases, of course, is in a smartphone, in small devices like that. Are we talking equivalent quality? Are we talking better quality? And is it at the same price level or a lower price level?

Rob Devlin: Right. So there's a whole bunch of applications that you could use these metasurfaces for.

It’s really a platform technology in some sense, but we’re focusing in on one market to begin with, and that’s in the area of 3D sensing or 3D imaging for mobile. And so we’ve done some examinations and comparisons. And what you’ll see is, even though in the conventional case they’re using four different lenses, we’re actually able to have as good or sharper images — so, higher quality than these cameras that they’re using today.

Or, we’re also able to improve key metrics from the camera as well, so you can actually collect more light. So you’ll get a brighter image. 

John Koetsier: Wow.

Rob Devlin: You’ll get a higher performing image. And again, this is all going from the complex, you know, four elements down to just one single semiconductor layer. 

John Koetsier: Interesting. And let’s dig into the 3D aspect, because that’s really, really big these days, right? I mean, we just got consumer level Lidar, right—

Rob Devlin: Yep. 

John Koetsier: In the newest iPhone, the iPad that came out a while ago as well. And that’s really unlocking a massive growth in scanning of 3D objects, 3D spaces, and enabling them to be used in games, in learning datasets, and other things like that.

Will this be another quantum leap in our ability to capture 3D imagery?

Rob Devlin: Yeah. So there’s a couple of things that we’re enabling in the area of 3D. The first thing is that if you look at these modules that have been released, they’re still — they’re even more complex than the standard visible imaging modules, right?

So there’s actually multiple lens elements now that come from completely different suppliers. So you might have four lenses, and they’re doing very different things than the normal visible lenses. And so, where they’re really deployed right now is only on the top tier, the top end phones, because these modules are complex, they’re expensive, and they’re still a drain on the battery life as well. So, you know, when you start looking at doing things in 3D, you actually have to illuminate the scene with a laser. So now you’re putting active power into the scene.

In some cases, you can actually feel the phone heat up as you're doing this sort of 3D mapping.

John Koetsier: [Laughing] Really?

Rob Devlin: So, what we're able to do in the area of 3D sensing is, again, simplify this module so that ultimately the overall module cost can go down and you can deploy more broadly, beyond these really expensive modules that almost always make it only into the top tier of phones. And then there's this idea of getting more light.

So, metasurfaces allow you to get more light back to the image sensor. And what this means is you can do things like turn the power down of the laser illuminating the scene, or get even higher quality 3D maps than you would get from the current technology. So it really improves the longevity and increases the use.
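A rough back-of-the-envelope sketch of that trade-off (my own illustration with invented numbers, not Metalenz's): in a shot-noise-limited depth camera, the photon count at the sensor scales with laser power times the fraction of returned light the optics collect, so optics that collect more light let you dial the laser down proportionally while holding depth-map quality constant.

```python
# Back-of-the-envelope sketch (invented numbers, not Metalenz data): in a
# shot-noise-limited depth camera, photons at the sensor scale with laser
# power times the optics' collection efficiency, so optics that collect
# k times more light let you cut laser power by roughly 1/k at equal quality.
def required_laser_power_mw(baseline_power_mw: float, collection_gain: float) -> float:
    """Laser power needed to keep the same photon count (and depth-map SNR)."""
    return baseline_power_mw / collection_gain

# Hypothetical example: a 100 mW illuminator paired with optics that collect 1.5x more light.
print(required_laser_power_mw(100.0, 1.5))  # ~66.7 mW for the same depth-map quality
```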

John Koetsier: You have some slides to show us as well, so I'll bring those in in a moment, and that'll give us a better visual sense of what you're talking about. But you talked about price and you said it'd be way lower. Can you give us a factor? Can you give us a sense: is it 10% of the cost? Is it 90% of the cost? What is it?

Rob Devlin: The way that we're looking at it is, when you're doing a one-lens-to-one-lens comparison, you know, the cost won't be much different. But if you look at the fact that we're going from four different lenses down to one lens, then you're driving a significant cost decrease there. So, you know, on the order of 20% or more in terms of the cost. And then the second thing is that—

John Koetsier: 20% of the cost or 20% reduction in cost?

Rob Devlin: 20% reduction. 

John Koetsier: Okay. 

Rob Devlin: But if you also look at the fact that when you assemble these, a lot of the cost is actually caught up in the assembly of these modules, because they're so complex. So up and down the supply chain, we have ways to reduce cost.

John Koetsier: Interesting, interesting. So if you add that in, are you looking at like, perhaps about a 50% overall savings? 

Rob Devlin: In certain cases it could be pushing in that direction, but yeah, it can be a significant reduction in the overall cost.
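To make that arithmetic concrete, here is a purely hypothetical cost split; the dollar figures are invented for illustration, and only the four-lenses-to-one consolidation and the rough 20% and 50% figures come from the interview.

```python
# Purely hypothetical cost split (dollar figures invented for illustration,
# not Metalenz's): shows how a roughly 20% saving from consolidating four
# lenses into one can grow toward 50% once simpler assembly is counted.
per_lens_element = 0.30   # assumed cost of one lens element (conventional or meta)
other_components = 1.40   # assumed emitter/sensor/driver cost, the same in both builds

conventional = 4 * per_lens_element + 1.40 + other_components  # 4 lenses + complex assembly
metalens     = 1 * per_lens_element + 0.60 + other_components  # 1 meta-optic + simpler assembly

lens_only_saving = (3 * per_lens_element) / conventional
total_saving     = (conventional - metalens) / conventional
print(f"lens consolidation alone: {lens_only_saving:.1%}")  # 22.5% of module cost
print(f"with simpler assembly:    {total_saving:.1%}")      # 42.5%, pushing toward half
```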

John Koetsier: Okay. Okay, interesting. Well, let’s talk about what kind of products this unlocks. You mentioned 3D sensing and it’s only in the top high-end phones right now. You had mentioned it can go into all the other phones. But if you’re sensing the whole electromagnetic spectrum, what kind of product functionality does that enable?

Rob Devlin: That’s a great question. You know, if you look at the way we see metasurfaces, meta-optics, and Metalenz the company really going forward and what it’s really unlocking, it’s this ability to bring all new forms of sensing to mobile form factor, mobile price points.

So, optical sensing is a huge, expansive field, with everything from medical applications to chemical sensing and industrial uses, and so on. And a spectrometer is a good example of one of these optical modules.

So with a spectrometer, what you can do is take the light and parse it into very, very fine wavelength ranges. Ultimately you can tell things about the chemical makeup of an object you're looking at. You can tell what molecules are in blood, for example. But if you look at these modules, you know, they sit in medical labs, often in scientific labs, because they're quite large. They sit on a tabletop and they have very, very expensive optics in them. And again, there are typically four different optics in there that come from very different suppliers and have to be customized, and so they're large and they're expensive.

So—

John Koetsier: And there are consumer applications as well, right? I mean, I remember speaking a few months ago to somebody who was trying to put a spectrometer into phones, and it unlocked applications like: what's in my food? If I just take a picture of the food, the app can know what my food is and I can record my diet or whatever, or other things like that, right?

Rob Devlin: That’s right. So there’s things like, one of the applications they talked about is fruit sensing. So you can potentially go into the grocery store and pick up your avocado and tell if it’s the perfect ripeness based on the chemical signatures coming off of it. And you can use a spectrometer to do something like that.

John Koetsier: Wow.

Rob Devlin: So again, one of the things a metasurface lets you do is leverage the scale that's already there in the semiconductor foundries. So you can make these lenses, these complicated optical systems, in a cost-effective way. You can make them at scale, again because the semiconductor foundries are already delivering the electronics to the cell phone manufacturers. So the supply chain has the scale to meet the demand. And then what the metasurface enables is the compression of these optics down into one single layer, so you can shrink that form factor.

John Koetsier: Wow,

Rob Devlin: So it really gives you that combination of bringing it to a price point and a form factor that’s compatible with mobile.

John Koetsier: Now you had a bunch of slides as well that you wanted to show and that show what the lens on a chip looks like … did you want to walk through those real quick? 

Rob Devlin: Sure, I’d love to. So I mentioned this a couple times now: we’re making these in the semiconductor foundry, and so we actually end up referring to our lenses with a lot of the same terminology that you might hear from the electronics, because we’re making them now in the same way that you make the electronics. So these are just some images of a zoomed view of the wafer that I actually showed earlier. So this big 12-inch wafer. This is looking a little bit closer at what we actually have on that wafer. So each one of these individual squares that you’re seeing is a particular lens that we’ve designed and manufactured. So these different colors you have here—

John Koetsier: Wow. Some large, some small.

Rob Devlin: That’s right. So, if you look at the way that we’re manufacturing this, it also allows a high level of customization.

So on that single wafer we can produce 10, 20 different types of lenses. And then in terms of scaling across that wafer, you can have anywhere from, say, 1,000 to 10,000 lenses produced on that single wafer. 

John Koetsier: Wow. That’s impressive.

Rob Devlin: So just— [crosstalk]

John Koetsier: 12-inch wafer … 5,000 lenses right there, and each lens is about a millimeter? Wow. 

Rob Devlin: That’s right. So our typical lens is about a millimeter in diameter. We can have about 5,000 on one of these 12-inch wafers.

And then if you look at it from a scaling perspective, in a single day we can deliver anywhere from, say, 1 million to 5 million lenses from one of our manufacturing partners.

John Koetsier: That’s got to be a game changer. 

Rob Devlin: Yeah. I really think the scale we're leveraging here is going to have a huge impact, especially as cameras and optics are proliferating into all sorts of different applications, right? The number of cameras in a smartphone, the number of cameras in a car, in all of these different areas: there are more and more cameras coming, and you need the scale to actually meet that demand. And so this really allows us to meet that.
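For the curious, the throughput figures quoted above hang together on a napkin: a roughly 1 mm lens with a few millimeters of die pitch gives you on the order of 5,000 lenses per 300 mm (12-inch) wafer, and 1 to 5 million lenses a day works out to a few hundred to a thousand wafers a day. A quick sketch, where the die pitch is my assumption, chosen to roughly reproduce the quoted 5,000:

```python
import math

# Sanity check of the figures quoted above (the die pitch is my assumption,
# chosen to roughly reproduce the ~5,000-lenses-per-wafer figure).
wafer_diameter_mm = 300    # a 12-inch wafer
die_pitch_mm = 3.75        # assumed pitch per ~1 mm lens, including dicing/handling margin

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
lenses_per_wafer = int(wafer_area_mm2 / die_pitch_mm ** 2)   # ignores edge exclusion
print(lenses_per_wafer)                                      # ~5,000, matching the interview

for daily_lenses in (1_000_000, 5_000_000):
    print(daily_lenses // 5_000, "wafers per day")           # 200 to 1,000 wafers per day
```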

John Koetsier: Rob, talk about this in terms of a smartphone camera if you will, for a moment. I know you have multiple other use cases as well, but for smartphone cameras we’re used to talking about them in terms of megapixel range and everything like that. And if you’re a little more advanced, then you think not just about the megapixel number but also how big the individual light sensors are, so are they capturing good quality light? What are the characteristics of these lenses that we’re looking at right now in those terms? 

Rob Devlin: Sure. So this is our first focus, and these lenses you're looking at on this wafer here are for some of our 3D-sensing cameras. Typically in the 3D-sensing world, you're looking at a lower resolution.

So you're looking at something like 1.5 megapixels at most in some automotive use cases, and maybe down to a VGA, 640x480 type of resolution for some of the early smartphone applications. The way we look at it is in terms of what we're able to do with the metasurface here. For the 3D-sensing use case, we're focusing more on key metrics like light collection, or how bright your image is going to be, rather than scaling up to the really, really large, you know, 24-megapixel sensors that you might have for the visible camera. So we're really focused on those metrics for 3D sensing, because that's the area where 3D sensing today is struggling the most in terms of performance.

John Koetsier: Okay.

Rob Devlin: It’s getting more light back in order to get either a higher quality depth map, or to be able to reduce the overall battery consumption. 

John Koetsier: And would you use multiple of these for a single use case on a single device, for instance? Would you use 100 of them? 10 of them? One, two? 

Rob Devlin: So, this is again where we're really able to do something with a metasurface that you can't do with conventional optics: we're able to boil everything down into just this one single layer.

So if you look at a 3D-sensing camera, there’s about four different lenses. We boil that down to just one single lens. So one of these lenses would be to collect your entire image. So we only need one to get the entire image that you need. 

John Koetsier: Okay. Okay. Wow. Interesting. Was there something else that you wanted to share on a further slide? 

Rob Devlin: Yeah. So, just continuing on here for one second, so we can take an even closer look at what we're doing. If you look at the image all the way over to the right, this is a 20,000-times magnification of one of our lenses. And this is what really makes up our lens itself.

So you see these tiny little dots here. Each one of those dots in this scanning electron microscope image on the right … that is about a thousandth of the size of a human hair. So it’s really with this fine control that we have over the shape and size of these dots, using the semiconductor processes, that we’re able to get this complete control over light and really extract more information from the light we have coming in as well. 

John Koetsier: Cool. 

Rob Devlin: So, this is just a really brief demonstration to see what's going on here. I mentioned earlier on that our lens has a planar form factor. You know, when we think of refractive lenses, they're normally curved and shaped.

What we have here is something that’s completely flat. So this is looking at a full simulation that we do of all the physics that’s underlying our metasurface lenses here at Metalenz. And so you’ll see light coming in from the bottom, and even though you have this planar structure here, you’re still able to focus light to a point like you would with a curved lens, just by controlling the shape of these tiny nanostructures that are actually making up our lens. 

John Koetsier: Interesting. So you have a flat surface, and just by controlling it at a nano level you're funneling light to where you want it to go?

Rob Devlin: That’s exactly it.

In a conventional lens, it's the curvature that actually focuses the light. So when people think of shaping or bending light with a lens, it's actually this physical curvature doing the work. But by controlling these nanostructures, we're able to shape and bend light essentially however we'd like, in a relatively arbitrary manner, which unlocks a whole bunch of applications. And we're doing it with just this flat surface.
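For readers who want the optics behind bending light with a flat surface: a standard metalens imposes a hyperbolic phase profile, phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2)), so that light from every radius r on the flat surface arrives at the focal point in step, and each nanopillar is sized to contribute the required local phase (modulo 2*pi). Here is a minimal sketch of that textbook design rule, not Metalenz's actual design, assuming a 940 nm near-infrared wavelength (typical for 3D sensing) and a hypothetical 2 mm focal length:

```python
import numpy as np

# Textbook flat-lens (metalens) design rule, not Metalenz's proprietary design:
# a nanostructure at radius r from the lens center must impart the phase
#   phi(r) = (2 * pi / wavelength) * (f - sqrt(r**2 + f**2))
# so that light from every point on the flat surface reaches the focus in step.
wavelength_um = 0.94    # assumed near-infrared wavelength, typical for 3D sensing
focal_len_um  = 2000.0  # hypothetical 2 mm focal length
aperture_um   = 1000.0  # ~1 mm lens diameter, as quoted in the interview

r = np.linspace(0.0, aperture_um / 2, 6)   # a few sample radii across the half-aperture
phi = (2 * np.pi / wavelength_um) * (focal_len_um - np.sqrt(r**2 + focal_len_um**2))
phi = np.mod(phi, 2 * np.pi)               # each nanopillar only needs phase modulo 2*pi

for radius, phase in zip(r, phi):
    print(f"r = {radius:6.1f} um -> required phase = {phase:.2f} rad")
```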

John Koetsier: I can't wait to see some applications for space telescopes, like the Hubble or something like that. I mean, some of those use many, many honeycomb mirror segments to form one giant mirror, and those individual pieces are controllable, but the level of control you'd have now would be incredible.

Rob Devlin: Yeah, that’s absolutely right.

And that's actually a good point, in terms of another application. Like I mentioned before, this is a platform technology. It's anywhere that you need high customization, anywhere you want a small form factor or to simplify things. And certainly in space, whether it's on telescopes or even in cube satellites where you're trying to put really small, compact, lightweight optics, there's a whole host of applications that metasurfaces can actually provide advantages for.

John Koetsier: Especially when you’re capturing the full EM spectrum. I mean, because you might — on an older satellite for sure, and I think even most satellites today, you’d have multiple instruments to capture visible light versus infrared and other things like that. 

Rob Devlin: That's right. Today it really does take a whole set of either different and distinct instruments, or different and distinct optics, to extract the information that you want.

John Koetsier: Excellent. Cool. 

Rob Devlin: I think there's just one more here to show, which is a demonstration of some of the properties I was talking about before, and gives you a direct side-by-side view with the conventional camera. So this would be a camera like the ones used in one of these 3D-sensing applications. These are working in the near-infrared, not in the visible spectrum, so they're typically monochrome images. What you'll see on the left is the conventional camera, a mockup of what that actual camera looks like. And on the right is what you get from the Metalenz camera, where we're doing this with this one single layer.

John Koetsier: Cool.

Rob Devlin: And these images you see at the bottom, this is highlighting some of the benefits where you end up getting a much brighter image under the same conditions, just driven by our optics. And then you also get a more uniform image as well, in terms of how that brightness is distributed. So—

John Koetsier: And that’s in spite of the fact that the Metalenz is a fraction of the size of the conventional camera lens, correct? 

Rob Devlin: That's right. So we're at one single element here. We're able to make this smaller in many cases, and this then also gives you these performance benefits. So, often when you think about reducing complexity of systems, or you think about making something smaller and less expensive, you're normally thinking about trading off against performance, right? You're willing to give up some performance to do that. But what metasurfaces allow you to do is actually increase performance as you reduce complexity.

John Koetsier: Interesting. So let’s talk about market entry. Where is this going to hit the market first? And what kind of timeframe are we looking at? 

Rob Devlin: Right. So we’re really focused on 3D sensing as our first market, and that’s where we’ve gotten the most traction so far with some of the big OEMs, as well as other areas throughout the supply chain.

So our target right now, with one of our first products, is to be in market towards the end of this year; we'll be launching in mass production towards the end of this year. And that will be in this 3D-sensing area; that's really the first target that we have.

John Koetsier: Okay. And who’s backing you? 

Rob Devlin: So we have investment from a number of corporate venture capitalists and strategic partners, like Intel Capital, 3M Ventures, TDK Ventures, Applied Ventures, and M Ventures. And when you look at the names here: for us as a semiconductor company that's putting optics in the semiconductor foundry, having the backing of semiconductor giants like that really gives us early validation as a company.

John Koetsier: Excellent. So, very cool stuff. Very interesting. You’ve talked a little bit about pricing as well, equivalent or less, 20%, maybe even 50% in some [cases]. I know you’re stretching yourself out to the future here. At what point do you think something like this might appear in something I hold in my hand — a smartphone camera that is shipping?

Rob Devlin: Right. So if we look at the first applications that are in 3D sensing, we’re working with mobile partners for those applications. So our first target really is the cell phone market. If we look at the broader set of cameras and cell phone market, so, now not just talking about the 3D-sensing cameras that are in there, but really going to the full visible and going after all of these cameras that are in cell phones … that’s more likely to be two or three years out in terms of going after that whole camera market and all of the visible applications that are out there as well.

John Koetsier: Okay. Okay. Very interesting. So, Rob, as you know, TechFirst is about tech that’s changing the world, innovators who are shaping the future. A bit of a personal question: why did you pick this area of innovation? What made you passionate about this? 

Rob Devlin: That’s a great question. So, I’ve had a passion for material science and nanotechnology and electrical engineering. Those were sort of the three areas that I really studied in college, and I was actually driven to study those.

Growing up, I worked with my grandfather, a World War II radio operator, putting radios together in his basement. And it was always amazing to me: you take this set of disparate components, you slap them together, and you're talking to someone thousands of miles away. So that always kind of stuck with me, this idea that you can engineer something to connect with someone.

And what metasurfaces provided was the combination of materials, nanotechnology, and electrical engineering. So it really drove me to study that. But I think really what it gives you, in unlocking all of these new forms of sensing and making them potentially proliferate by bringing them to a form factor and a price point that's compatible with mobile … I think it gives people a new way to connect and interact with their world, in that same way that you get new information just by picking up a device that has been properly engineered. It's a whole new way to see the world, essentially.

John Koetsier: Well, it is a whole new way to see the world. I mean, if you have cheap, portable spectrometers in everybody's hands, the safety implications alone are incredible.

You know, what’s in my environment, what’s in my air, what … is this paint safe? You know, other things like that, you can identify so many different things and with the right apps, and providing the right data to the right apps — which will come, obviously — there’s so much more knowledge that we can have on an ambient level as well as an active level about the materials around us. Very interesting. 

Rob Devlin: Absolutely. I think that's the most exciting part: when you take these and get them into every person's hands around the world, there's also a whole new set of applications that we're not even thinking of today that will sort of emerge.

John Koetsier: Let's make you think about that a little bit. If you project yourself out, say, 10 years into the future and you're doing what you're doing … what's the impact? Where do you think you are in 10 years, in terms of the company and this technology?

Rob Devlin: I certainly think about what we're doing in 10 years. You know, we're starting by focusing on the optics themselves today. But because these optics are so unique and customizable, we really see the company starting to build the systems around the unique properties of the optics. A lot of the work in sensing in the cell phone market has actually been pushed into the algorithms and the end electronics. The optics haven't been—

John Koetsier: Computational photography. Yes.

Rob Devlin: Exactly. We actually see now being able to co-design the system with the optics themselves, because of how much new information and design freedom you get from the metasurface.

So as a company, we see really engineering the entire system around the optic, rather than the optic being something that’s going to meet the system specifications. So, it’s that unlocking of these new forms of sensing, and co-design of the actual optics with the algorithms. 

John Koetsier: Very, very interesting. Rob, thank you for your time. 

Rob Devlin: Yeah, John, this was a lot of fun and I really appreciated all of the questions. 

John Koetsier: Awesome. For everybody else, thank you for joining us on TechFirst. My name is John Koetsier. I appreciate you being along for the show. You’ll be able to get a full transcript of this podcast in about a week at JohnKoetsier.com, and the story at Forbes will be out probably before that in this case actually. Thanks for joining. Until next time … this is John Koetsier with TechFirst.

Well, now you have to subscribe. It’s mandatory

Made it all the way down here? Clearly you are some kind of psycho 🙂

The TechFirst with John Koetsier podcast is about tech that is changing the world, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon's head of robotics. GitHub's CTO. Twitter's chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe on your podcast platform of choice: