Moore’s Law is dead, right? Not if we can get working photonic computers.
Lightmatter is building a photonic computer for the biggest growth area in computing right now, and according to CEO Nick Harris, it can be ordered now and will ship at the end of this year. It’s already much faster than traditional electronic computers for neural nets, machine learning for language processing, and AI for self-driving cars. And by using up to 64 different colors of light simultaneously, there’s a viable path, Harris says, to massively increasing its speed and power … up to 100X.
It’s the world’s first general purpose photonic AI accelerator.
Listen, watch, and read a transcript of our conversation below …
And check out the story on Forbes here …
Subscribe to TechFirst
Watch: this Lightmatter photonic computer is 10X faster than an NVIDIA GPU
(Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.)
Read: the world’s first general purpose photonic AI accelerator
(This transcript has been lightly edited for length and clarity.)
John Koetsier: Apparently the future of AI is bright. Literally.
Photonic computers have been a dream for decades. Now there’s actually one that you can order. And according to Lightmatter CEO Nick Harris, it has eight times the power of a top-notch NVIDIA GPU while using just a fifth of the power.
[Note: Nick later told me that the numbers are actually better. See below.]
John Koetsier: Nick, what is a photonic computer?
Nicholas Harris: Yeah, a photonic computer — at least this kind of photonic computer that we’re working on at Lightmatter — is a device that does calculations using light.
John Koetsier: Why do we need that? What’s wrong with the computers that we have right now that use electrons?
Nicholas Harris: Yeah. So, you heard a lot about Moore’s Law over the years, but I’m here to tell you that Moore’s Law is not over.
The problem is that every time we shrink transistors they’re supposed to decrease how much energy they use, and that hasn’t been the case for the past 15 years. And it’s turned into a really big energy problem and a challenge in cooling computer chips.
And so optical computing is here to try to address that challenge, reduce the amount of energy that computers are using, and speed them up too.
John Koetsier: So one of the other challenges, as well, with a traditional chip — a transistor-based chip — is that as you make those logic gates smaller and smaller and cram them closer and closer together, sometimes you get leakage, right? Electrons jump over and then you’ve got problems?

Nicholas Harris, CEO, Lightmatter
Nicholas Harris: Yeah, that’s right. So, quantum tunneling is that jumping that you’re talking about, and that’s one of the main origins of the scaling challenge.
So as we’ve shrunk transistors, they’ve gotten on the scale of the electron, and so now they’re leaky and they kind of leak all the time. It’s something that’s not going to go away, it’s quite fundamental.
John Koetsier: We have a bit of a mental picture — it’s totally wrong, I’m sure — but we have a bit of a mental picture of a traditional computer and the chip and what it looks like. And we think of millions of wires — which aren’t there, obviously — but we have that mental picture of traditional computing. Give us a mental picture of what a photonic computing chip looks like.
Nicholas Harris: Yeah, unfortunately it’s going to be quite boring. So there are tons of wires, millions and millions of wires. And the other thing that there is, that’s maybe unique, is that we have these little channels — like 200 nanometer wide, 100 nanometer tall channels — that act as an optical wire and send light signals around the chip. And they sit right next to their good old friend the transistor.

The heart of Lightmatter’s photonic computer
John Koetsier: Okay.
Nicholas Harris: So that’s the picture.
John Koetsier: Interesting! So these millions of wires that you’re talking about — how are they carrying light?
Nicholas Harris: Yeah, so we’ve got the electrical wires, and those are for interfacing between the electronics and the photonics — the components that are doing the computation for us. And then there are the wires that are optical, and those are carrying information and being used to, in our case, do a matrix-vector multiply, which is one of the core operations that happens in deep learning. So you have these massive arrays of waveguides.
We have 65,536 of them carrying light signals and doing processing in the optical domain.
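[Note: for readers who want to see the math, that core operation is just a matrix-vector multiply, the workhorse of every neural-network layer. Here’s a minimal NumPy sketch; the dimensions are illustrative, not Envise’s actual array size.]

```python
import numpy as np

# A neural-network layer is, at its core, a matrix-vector multiply:
# a weight matrix W transforms an input activation vector x.
W = np.random.randn(256, 256)   # layer weights (illustrative size)
x = np.random.randn(256)        # input activations

y = W @ x                       # the core linear-algebra operation
# The O(n^2) multiply-accumulate work above is what gets mapped onto
# the photonic array; bias adds and nonlinearities are cheap next to it.
print(y.shape)                  # (256,)
```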
John Koetsier: Very, very interesting. So what is this type of computer good at, compared to a traditional chip?
Nicholas Harris: Yeah, so the kind of computer that Lightmatter is building is good at doing linear algebra, and that’s the mathematics that underpins a lot of scientific computing, like graphics processing and machine learning. So the technology that we’re building has a lot of overlap with what GPUs can do.
John Koetsier: Interesting. And we know that a lot of super computers these days are built with millions or hundreds of thousands of GPUs. Is that the vision here as well?
Nicholas Harris: Yeah, definitely. So we’ll talk about the product later, but scale-out’s really important. And building supercomputers out of these chips is a really important property for addressing the big problems like running neural networks, like GPT-3. And some of the most amazing leading research in machine learning just requires scale-out and networking lots of these chips together.
John Koetsier: Interesting. You mentioned product. You released — I think it’s called Envise — the world’s first general-purpose photonic AI accelerator is what you call it. What’s that mean?
Nicholas Harris: Yeah. So, basically we set out on the challenge to be able to address all of the machine learning workloads that people are running today, but using a new technology platform: photonics.
And Envise is really the first photonic computer, period, that you can buy, and it addresses any kind of neural net. So if you want to run algorithms behind Alexa or Siri or any of the voice assistants, Envise can run those. If you want to do translation, Envise can run that. If you want to identify things in images for your self-driving car, Envise can do that too. So it’s really quite a general purpose piece of hardware that addresses all of the machine learning workloads that are deployed today.
John Koetsier: Wow. So you piqued my interest there when you said it’s the first photonic computer that you can buy … so obviously I want to buy one. How much will it set me back?
Nicholas Harris: I can’t say yet, but we’re pretty competitive with standard silicon. Although I think extreme performance might command some premium.
John Koetsier: Talk about that extreme performance. What kind of performance boost am I getting?
Nicholas Harris: Yeah.
So, you know, on typical workloads we’re up to 10 times faster than existing technologies like NVIDIA’s A100 chip.
So if you look at ResNet-50, which is a neural network that a lot of people run; or BERT, which is a natural language processing neural network; or DLRM, which is a network that people use to recommend products to you — like Amazon and Facebook recommending things to you — on these networks we’re typically more than 10 times faster.
And we save a significant amount of energy in doing these calculations. So just as an example, we get something like eight times the throughput of a server blade from NVIDIA, using a fifth of the power.
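[Note: taken together, those two figures imply roughly a 40X gain in performance per watt. A quick sanity check on the arithmetic:]

```python
# Implied efficiency from the quoted figures: 8X the throughput
# of an NVIDIA server blade at one fifth of the power.
throughput_ratio = 8.0       # Envise vs. the blade, as quoted
power_ratio = 1.0 / 5.0      # Envise's relative power draw

print(throughput_ratio / power_ratio)  # 40.0 -> ~40X the work per joule
```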
John Koetsier: Wow!
Nicholas Harris: It’s pretty big.
John Koetsier: That’s a big deal. I mean, because obviously not only do you have to pump in the power to your traditional chips to run them, you’ve got to cool them afterwards. You’ve got to add more power to cool what you just heated up.
Nicholas Harris: Yes, exactly. And you have to cool it afterwards and during.
John Koetsier: Yes. Yes. Or you run the risk of it shutting down, essentially, overheating. So those are big numbers. What kind of scale can you provide? You’re very new. You said it’s the first photonic computer that you can buy, although you won’t tell me the price, but what kind of scale can you deliver? Let’s say somebody comes to you — Google comes to you tomorrow and says, ‘Hey we want to run “Hey Google” on this infrastructure. We want 500,000 of them.’ When can you deliver?
Nicholas Harris: Yeah. So we’re targeting the end of the year for delivering hardware. So—
John Koetsier: Okay.
Nicholas Harris: We’ll be getting our blades out there to customers at the end of the year, and we’re really excited to get to that step. I can tell you that building this new kind of technology wasn’t easy.
John Koetsier: You think? [laughter] That is not a shock. That is not a surprise, because we’ve been talking about photonic computers for quite some time — decades, in fact — and there have been sort of demo products here and there, and little pieces of products, and things that you might call a kind of computational engine, not so much a general purpose computer.
But you’re saying this is the first general purpose photonic computer. Very interesting. I’m assuming that with a brand new hardware platform you probably need some new kinds of software as well.
Nicholas Harris: Yeah, that’s right. And just one point to clarify: so we’re not a general purpose computer, we’re a general purpose AI accelerator.
John Koetsier: Okay.
Nicholas Harris: So it’s a bit more narrow. I’m not gonna run Windows for you [laughter] … yeah, so with a new computer comes requirements for new software. But what we’re doing with our software stack, which is called Idiom, is we’re allowing people who deploy machine learning models to use the software stacks they’re already familiar with.
So people out there are using PyTorch and TensorFlow and ONNX, and those are the frameworks and file formats that people build neural networks in and store them with. So we can interface with those directly, run those straight through our software stack, and get it onto hardware. And we offer tools through Idiom that allow you to look at how much time Envise is spending on different operations that are happening in the neural networks — this is called profiling — as well as debugging tools and modeling tools.
So you can make trade-offs between speed and accuracy and other parameters like this.
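[Note: Lightmatter hasn’t published Idiom’s API in this conversation, but the front half of that flow is standard PyTorch. A hedged sketch: the ONNX export below is the real, documented API, while the commented-out `idiom` calls are hypothetical placeholders for what a compile-and-profile step might look like.]

```python
import torch
import torchvision.models as models

# Export a stock PyTorch model to ONNX -- this part is the standard,
# real API that frameworks and accelerator toolchains interoperate on.
model = models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx")

# Everything below is a hypothetical illustration, not Idiom's real
# interface -- the module and function names are invented for this sketch.
# import idiom
# binary = idiom.compile("resnet50.onnx")      # compile to an Envise binary
# report = idiom.profile(binary, dummy_input)  # per-operation time breakdown
# print(report.top_ops())                      # see where the cycles go
```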
John Koetsier: Is that a translation layer between types of software? Or is it just a methodology to run the software that you’re building already in those familiar tools?
Nicholas Harris: Yeah. So what Idiom does is it builds up an interface between our hardware, which ultimately runs binaries, and those existing software stacks. So in order to do that, you need a compiler — which we’ve built, the Idiom compiler — and then there are some layers above that interface with those software stacks.
John Koetsier: Oh okay, so it’s a compiler. So it compiles native code for the hardware platform and runs at full speed. It’s not some sort of translation layer that saps performance by 30% or something.
Nicholas Harris: I sure hope not. [laughter]
John Koetsier: Excellent. Let’s talk a little bit about the future. You mentioned that you’ll be shipping by December or something like that — maybe not half a million units, but you’ll be shipping by December.
Where is this going over the course of another 5 years, maybe even 10 years or so? Will I ever have a photonic computer on my desk or will it stay in the cloud?
Nicholas Harris: Yeah. So I think that right now we’re laser-focused — pardon the pun — on AI inference. Now, if there is a use case that comes into the home that would require extreme speed … then yeah, you could imagine using that.
I think the way that you’re most likely to see our hardware over the next five years, personally, is through cloud access and services that you’ll run. You know, it’d be a dream of mine to eventually power a Google search. A lot of that is run on neural networks. So that’s the way that I think you’ll interact with it.
And where are we going over the next five years? Light’s pretty weird; it has some special properties, and right now we process using one color of light. But we can actually take our processor, use a single core, and act on multiple datasets at the same time, each dataset in a different color of light.

So this is a really special property that allows us to go much faster and amortize the energy cost and area of a single core across multiple simultaneous datasets. So I think the tech has an amazing roadmap on speed and efficiency.
John Koetsier: Wow! That’s almost another dimension there. I mean, you’re taking the same computer, the same core, and you’re running another set of instructions on it just by changing from red light to blue light or something like that. Is that correct?
Nicholas Harris: Yeah, that’s right. And it’s not just changing colors; they run at the same time.
So you can kind of think of a prism, like the Pink Floyd album cover: white light comes in and it spreads out into red, green, blue, and so on. In optical terms, that’s a multiplexer or a demultiplexer. So we pack in all the different colors of light, push them through the processor, and then unpack them on the other side. In some sense, it’s literally like prisms on the input and output of the processor.
John Koetsier: And what’s the multiple there for the same hardware by using multiple colors of light?
Nicholas Harris: Yeah. So it’s pretty big.
So for every color we add, we increase the throughput by that number. So two colors is twice as fast. Three colors is three times as fast, and the efficiency scales about the same way. So we think you can probably do 64 colors in the future. We’re not there yet, but we think that’s possible. Imagine having 64 virtual processors on a chip, and it’s just the area of one.
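[Note: the scaling Harris describes is linear in the number of wavelengths. A back-of-envelope sketch:]

```python
# Wavelength multiplexing: each added color carries an independent
# dataset through the same physical core, so throughput scales linearly.
base_throughput = 1.0  # one color, normalized

for colors in (1, 2, 3, 8, 64):
    print(f"{colors:>2} colors -> {colors * base_throughput:>4.0f}x throughput")
# 64 colors would behave like 64 virtual processors in the area of one core.
```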
John Koetsier: That’s amazing. And that’s probably a software unlock, I’m guessing, in the future. Is that correct?
Nicholas Harris: Yeah, I think that there are some really interesting implications in terms of neural networks, specifically on like latency. So imagine if you have a bunch of work that you need to get done, on a normal processor you’re going to have to do each job one at a time until it’s done.
With an optical processor, you have each job in a different color, runs through the core, you get all the work done faster.
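[Note: a toy model of that latency win. N equal-length jobs that would queue up on a single conventional core instead run concurrently, one per color; the numbers here are illustrative.]

```python
n_jobs, job_time = 8, 1.0   # illustrative: 8 equal jobs, 1 time unit each

# Conventional core: jobs run one after another.
sequential_finish = n_jobs * job_time          # 8.0 time units

# Optical core with one color per job: all jobs run at once.
multiplexed_finish = job_time                  # 1.0 time unit

print(sequential_finish / multiplexed_finish)  # 8.0 -> 8x lower latency
```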
John Koetsier: Wow. You’re color-coding your jobs now. This is really interesting! It’s quite amazing. Now, we have a known model for growth and progression in traditional computing, which has pretty much broken down. You mentioned Moore’s Law already, right, where we were doubling performance every 18 months or something like that.
Have you learned enough about what you’re able to do with photonic computing — with this multiplexing of color, as well as all the other hardware stuff that you’re putting in there — to know what kind of improvement factor we’d be looking at in photonic computing?
Nicholas Harris: Yeah. So Moore’s Law is really interesting, and I’ve thought a lot about the question that you’re asking here. It’s really about roadmap design. You’ve got an engineering team that’s building a product. You don’t want to ask them to build the fastest thing first, because maybe it’ll take a long time to get done. And also, there’s no reason to beat everybody by that much.
So I think that we have a roadmap that extends beyond 100X the current speed of accelerators.
So there’s a long way to go, and for us, it’s really about pacing ourselves and going one product after another along that roadmap.
I have no idea what the law will be called, but I can tell you what it involves: it involves multiple colors. It involves clock frequency, so optical processors can actually run at very high frequencies, up to 20 gigahertz and beyond. Imagine if you had a CPU at that kind of clock frequency.
So those are like the two big parameters that we can play with: colors, frequency. And that takes us a long way.
John Koetsier: Wow! Interesting. I’ll throw my hat in the ring for the naming there, it’s the Harris Rainbow Effect [laughter]. So, what is — and you kind of answered this a little bit already — what is photonic computing not good at?
You mentioned that it’s purpose-built in the incarnation that you’ve created for AI and machine learning. What’s it not good at?
Nicholas Harris: Yeah, so it’s not good at doing logic operations. So if you write a general C program that has control flow, if/then statements, these sorts of things … optics are not good at doing that.
In the 1980s, Bell Labs was working on trying to build an optical computer with a device that was an analog of a transistor. It turns out that when you take two light beams, you can’t like gate one light beam on another light beam very easily. This is like a non-linear operation in the physics sense. So that’s really hard to do. We don’t do that kind of thing.
We do linear algebra. We do the mathematics that underpins, you know, electromagnetics natively. And that applies to machine learning. It applies to building rockets and looking at the vibrational modes of rockets. It applies to chemistry. It applies to quantum computing, ray tracing, all sorts of things like this … but it’s not control-flow type stuff.
It’s not going to run Windows. It’s not going to do if/then statements.
John Koetsier: Interesting. I’ve talked to D-Wave about quantum computing. And of course, totally different scenario here, but in a sense, a revolution in computing methodology and modality and thinking, which has some analogies to photonic computing.
And we wondered — of course, that’s available primarily in the cloud right now as well, unless you want to buy a $5 million, $10 million, $15 million machine — but we wondered about the future, the far future, of having a module in a modern equivalent of a laptop, whatever that looks like, which would handle some of those tasks. We already know there’s Edge AI, for instance, and stuff like that.
Do you ever see a future like that? You look out 10 years and I might add that kind of chip that does my AI on my local computing device?
Nicholas Harris: Yeah, I think that’s definitely possible. We use some advanced packaging technologies that are becoming more and more mainstream, namely like 3D packaging. As that becomes cheaper and more commoditized, I think it’s definitely possible. The other thing with optical computers is you have a laser, and so that’s going to have to get into that laptop as well. But I don’t see any reason why you couldn’t do it.
John Koetsier: Interesting. Nicholas, super interesting stuff that you’ve been talking about and working on. Thank you for taking this time.
Nicholas Harris: Yeah. Happy to be here. Thanks, John.
Big reader, huh? Subscribe to the podcast already
Made it all the way down here? Who are you?!? 🙂
The TechFirst with John Koetsier podcast is about tech that is changing the world, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.
Subscribe on your podcast platform of choice: