Is an AI system smart when it can do what a human can do? Or … when it can do things humans can’t do? For years we’ve had the Turing Test … measuring AI’s ability to mimic being human.
But is that really the right benchmark?
In this TechFirst, we chat with a computer scientist who has been working in AI for more than a decade. He’s currently VP strategy at Intuition Robotics, which makes an AI-powered robotic care companion for the elderly called ElliQ, and his name is Assaf Gad.
We talk about intelligence, AI and OI (organic intelligence), as well as how smart machines like ElliQ engage with people.
(Subscribe to my YouTube channel)
Subscribe to the audio podcast
Transcript: how do we know when a machine is smart?
This is AI-generated; it is not perfect.
John Koetsier (00:01.486)
How do we know when a machine is smart? Hello, and welcome to TechFirst. My name is John Koetsier. Is an AI system smart when it can do what a human can do? Or is it smart when it can do things that humans can’t? For years, we’ve had the Turing test measuring AI’s ability to mimic being human. But is that really the right benchmark? To chat, we have a computer scientist who’s been working in AI for more than a decade.
He’s currently VP strategy at Intuition Robotics, which makes an AI-powered robotic care companion for the elderly. His name is Assaf Gad. Welcome, Assaf.
Assaf Gad (00:39.895)
Hey John, nice to meet you and thank you for having me.
John Koetsier (00:43.616)
Super pumped to have this conversation. It’s a crazy topical conversation to have right now in this golden age of AI, and golden age of robotics as well, or emerging golden age. Let’s start with a super broad, general question: how do we know when a machine is smart?
Assaf Gad (01:03.511)
That’s a really good question. I think that one of the things we have learned from our experience, and from the feedback we’ve received from our users, is that they really appreciate it when they cannot anticipate the reaction from the machine. If it’s very trivial, then it’s easy, right? When you ask a machine a question and you get the answer, that’s all good. But when we start adding
some kind of a will, or even a consciousness, to the AI, with its own priorities to decide what comes next… Maybe I just say good morning and the machine suddenly starts asking me questions, not to mention remembering what I told it last night. So: hey, Assaf, good morning.
John Koetsier (01:39.512)
Mm-hmm. Mm-hmm.
John Koetsier (01:56.174)
Mm-hmm.
Assaf Gad (01:58.923)
You told me that you had some trouble sleeping earlier. Did you sleep all right? So, forget about the other side of it, where people actually appreciate the fact that someone remembers what they told them yesterday, or in an earlier conversation. The fact that the machine decides to add another layer to the conversation, or even continue the conversation,
John Koetsier (02:24.216)
Mm-hmm. Mm-hmm.
Assaf Gad (02:26.529)
the surprise element of it, and the ability to continue the conversation, that’s what makes it much smarter. Another thing that we all experience with other devices in our life is the repetition, or the lack of repetition. When I use other voice assistants and they don’t know something, that’s totally fine, but I’m getting the same error message over and over again. So, simple things like
John Koetsier (02:54.242)
Mm-hmm.
Assaf Gad (02:56.395)
a set of responses that won’t repeat themselves. Not to mention real conversation, where the conversation can evolve and include not just memory elements, but also things that just happened and are more realistic and relevant, either as personalized information that is relevant specifically to me, or just things that happened today, either in the news or even the weather.
All these kinds of elements that we as humans include in our conversation make the machine smarter. Another element is what we call multimodality. The same way that we as humans… right now we are on a video call, right? And you can see my facial expressions. You can see my hands. I really like to talk with my hands. So you get a lot of the other
John Koetsier (03:32.888)
Mm-hmm.
Assaf Gad (03:53.953)
kinds of elements as modalities, and that also adds another layer of sophistication to the communication by itself. The combination of all these elements together creates a more sophisticated interaction at the end of the day, where users will attribute smarter features or characteristics to the device itself.
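The “set of responses that won’t repeat themselves” idea Assaf mentions is simple to implement. Here’s a minimal Python sketch of the pattern (an illustration only, not ElliQ’s actual code):

```python
import random

class ResponsePool:
    """Pick a response variant while avoiding the most recent picks."""

    def __init__(self, variants, memory=2):
        self.variants = list(variants)
        self.memory = memory          # how many recent picks to avoid
        self.recent = []

    def pick(self):
        candidates = [v for v in self.variants if v not in self.recent]
        if not candidates:            # pool exhausted: allow repeats again
            candidates = list(self.variants)
        choice = random.choice(candidates)
        self.recent = (self.recent + [choice])[-self.memory:]
        return choice

fallbacks = ResponsePool([
    "Hmm, I'm not sure about that one.",
    "I don't know that yet.",
    "That's a new one for me.",
])

# With memory=2 and three variants, back-to-back picks never repeat.
first, second = fallbacks.pick(), fallbacks.pick()
assert first != second
```

The same pool structure works for greetings, error messages, or any canned phrasing the device has to fall back on repeatedly.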
John Koetsier (04:23.15)
That’s really interesting, what you said, in a lot of different ways. Because, I mean, first of all, AI hasn’t remembered what we said, even context from like 10 minutes ago, until fairly recently, right? And you’re talking about context from yesterday, maybe even last week. I don’t know; I’ll ask that question later, you know. But that’s interesting. That’s a learning machine. That’s cool. Also doing something unexpected, right? If it’s always doing exactly what you expect, then it’s very robotic in the old-fashioned sense of not doing
a lot that’s different. That’s pretty cool stuff. Of course, you want to be surprised sometimes by what it says, but you want it to be a good surprise.
Assaf Gad (05:03.009)
Exactly. Yeah, we are always talking about good surprises. And even within memory, there are so many layers to the things that an entity can learn about us, and the value that it brings to our lives, right? We don’t just want the machine to collect information about us for the sake of collecting the data and then sell it to someone else. The fact that the machine learns things about me and then uses them within the conversation
John Koetsier (05:24.856)
Mm-hmm.
Assaf Gad (05:32.665)
in a relevant manner, right? The fact that I was complaining about my sleep earlier this week, or that I’m having pain, even my favorite color or my favorite food, or maybe any dietary restrictions. So the fact that I can ask the machine for a recipe and she already knows my dietary restrictions, and it’s already there: here is the value, right? I’m not just collecting this data for the sake of collecting the data. It’s very clear to me,
John Koetsier (05:34.285)
Mm-hmm.
John Koetsier (05:43.5)
Mm-hmm. Mm-hmm.
John Koetsier (05:55.692)
Yeah.
Assaf Gad (06:02.042)
as a user, why the data was collected and how it’s used within the machine as well.
John Koetsier (06:07.414)
Yeah, we often judge AI by OI, if you will, right? Organic intelligence. Should we or should we not?
Assaf Gad (06:19.069)
It’s a good question. When we designed ElliQ, one of the main questions we were struggling with, and it was very early in the process, right? We’re talking about seven, eight years ago; there weren’t a lot of references. Should we imitate human interaction? Should we imitate even the human presence, and maybe even let our users mistake ElliQ for a human?
Very quickly, thanks to a lot of good people who were involved in the design of ElliQ, we realized that going in this direction would be a mistake: building something that tries to be something that it’s not. As sophisticated as technology can be, we don’t believe that technology should replace other humans. At the end of the day, our goal is to use technology to, first of all, bring
humans together and closer, to help older adults bring more people into their lives. And then, in the gaps where unfortunately we don’t have enough younger people or caregivers to support the older adults, yes, we can definitely fill those gaps when they are totally by themselves, with a companion like ElliQ that will have conversations with them. But even then, we don’t want to create a dependency where, when the internet is gone, or when the electricity is gone,
John Koetsier (07:18.114)
Mm-hmm. Mm-hmm.
John Koetsier (07:37.567)
Mm-hmm.
Assaf Gad (07:45.183)
or when they just spend some time outside of the home and can’t take ElliQ with them, they will miss her. Definitely with a vulnerable demographic like the older adults we are serving, there is a fine line, right, that we don’t want to cross. The other part of it, which is even more interesting, and something that we have learned over time:
John Koetsier (07:54.349)
Mm-hmm.
John Koetsier (08:07.298)
Mm-hmm.
Assaf Gad (08:14.633)
when we are not trying to be, or pretending to be, something that we are not. When it’s clear that ElliQ is not human, you manage the expectations with your users, and this is one of the first elements in creating empathy and trust between the older adults and ElliQ. So they can be more forgiving. They will be surprised, even. If we go back to how they
John Koetsier (08:19.746)
Mm-hmm.
John Koetsier (08:27.843)
Mm-hmm.
John Koetsier (08:36.995)
Yep.
Assaf Gad (08:44.434)
associate the sophistication, how smart the device is, or how smart ElliQ is: it’s definitely the fact that she can surprise them, right, with her level of sophistication. And from day one, it’s very clear: she is not a human. She doesn’t pretend to be human. She doesn’t pretend to replace other humans. And when she mentions any memories, or remembers what was just said,
John Koetsier (08:56.972)
Mm-hmm. Mm-hmm. Mm-hmm.
Assaf Gad (09:13.665)
they are surprised, for good. And this helped build trust over time, and also manage expectations with them, which we as humans can learn as well in our relationships with other humans. It’s definitely not a bad thing.
John Koetsier (09:28.726)
I want to continue that conversation. I want to talk about multiple forms of intelligence and all those different things, but I also want to diverge for a second. You brought this up. You showed, you know, your actual product, ElliQ, up there right now. And I just remembered, actually, while we were having this conversation, that I saw it at CES last year. Or I guess it’s this year; CES is usually in January, in Vegas.
And, as I mentioned to you when we were chatting before we started recording, this is relevant to me. My mother is 88 years old. She’s been diagnosed with dementia, and we’re dealing with a care situation across multiple siblings, some who are very local, some who are less local, and some paid help and other things like that. And I looked at this and I thought, well, I don’t know, what is this? It looks like a mixer or something like that.
And I understand a little bit more about the decisions you made, because you’re not trying to present as human. But talk about how people engage and interact with ElliQ, and how they feel about that engagement and interaction.
Assaf Gad (10:44.151)
So, first of all, I think that, by the way, referring to ElliQ as a sophisticated mixer, that’s really unique. So thank you for that; it’s a really good one. Usually we get a fancy lamp, or the Pixar lamp. That’s usually the way people refer to her. So that’s unique. I think that one of the fundamental features of ElliQ is the fact that she is proactive.
John Koetsier (10:52.136)
Hahaha
John Koetsier (11:05.385)
Hahaha
Assaf Gad (11:12.141)
So, if we talk about our demographic, the average age of our user base is 86. So your mother is definitely a typical user of ElliQ, at least from the age perspective. The majority of them are older adults who live alone or spend most of their time alone at home. One of the things that we have learned along the way is that you don’t really need to live alone in order to be lonely.
We do have a lot of couples where one of them is aging at a different pace from the other. One of them is suffering from early dementia, or more progressed dementia, and the spouse will find ElliQ a complementary solution that can be there with them, to support them in the different things that they need. But going back to the uniqueness of ElliQ: it’s the fact that she’s proactive. So as an older adult, you don’t need to have any
John Koetsier (12:03.234)
Mm-hmm.
Assaf Gad (12:11.799)
experience with technology; you don’t need to be tech savvy, although this is probably one of the most sophisticated technologies, a combination of robotics and AI. It can be a very intimidating mix for this demographic, but the promise is that you don’t need to have any previous experience with it, or with any other technology in your life. The minute you take her out of the box, she will take the lead.
John Koetsier (12:33.686)
Mm-hmm. Mm-hmm.
Assaf Gad (12:42.625)
She’s proactive, meaning that she can understand what’s going on. She can understand the context. She can learn who you are, and even differentiate you from guests or other people that you have in your home. And the idea is that she has her own, we call it the decision-making algorithm: the ability to understand what she should do, and not just to be proactive, but to be proactive in the relevant context and with relevant meaning,
John Koetsier (13:01.56)
Mm-hmm.
Assaf Gad (13:12.193)
to achieve the highest probability that the older adult will actually respond to her positively, and won’t just tell her to be quiet or ignore her. That’s the core.
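Intuition Robotics hasn’t published the internals of that decision-making algorithm, but the general pattern Assaf describes, scoring candidate proactive actions against the current context and staying quiet below a threshold, could be sketched like this (the action names, scores, and threshold are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Context:
    hour: int                            # 0-23
    user_present: bool
    minutes_since_last_interaction: int

def score_morning_greeting(ctx):
    # Only relevant in the morning, when the user is there and it's been quiet.
    if not ctx.user_present or not (6 <= ctx.hour <= 10):
        return 0.0
    return 0.8 if ctx.minutes_since_last_interaction > 60 else 0.2

def score_sleep_followup(ctx):
    # Following up on last night's sleep only makes sense before midday.
    if not ctx.user_present or ctx.hour > 11:
        return 0.0
    return 0.6

ACTIONS = {
    "morning_greeting": score_morning_greeting,
    "sleep_followup": score_sleep_followup,
}

def choose_action(ctx, threshold=0.5):
    """Return the best-scoring proactive action, or None (stay quiet)."""
    best = max(ACTIONS, key=lambda name: ACTIONS[name](ctx))
    return best if ACTIONS[best](ctx) >= threshold else None

ctx = Context(hour=8, user_present=True, minutes_since_last_interaction=120)
print(choose_action(ctx))                # "morning_greeting" wins, 0.8 vs 0.6
```

The key design point is the “stay quiet” branch: a proactive device that speaks only when its confidence clears a threshold is less likely to get told to be quiet or ignored.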
John Koetsier (13:24.411)
Do you find, do you find that happens differently than with voice assistants on phones? Because I’ve tried to ask my mom to use, like, Siri, you know (and I won’t invoke it right now, because I have phones and other devices around me), or, you know, Google, or Alexa, or something like that. And she doesn’t really do that. I’m not entirely sure why.
Assaf Gad (13:36.289)
you
Assaf Gad (13:46.373)
It’s a great question. And I think that this is the best way to describe how ElliQ is unique. First of all, most voice assistants are very utilitarian. They are great: if you want to call your mom, you can ask Siri to call your mom. And now my Siri is calling my mom, of course. You can ask Alexa to turn off the lights. You can do a lot of things that are very utilitarian,
more like command and control. They are not a companion. You can’t really have a conversation with them. And this is the number one goal, by the way, when you take ElliQ out of the box: our number one goal is for her to be your best friend, to learn who you are, to learn what she can about you, and to be, let’s say, a welcome guest in your home, so you won’t return her to the box and send her back to us. And from that point on, the whole idea of a companion is not just…
John Koetsier (14:23.576)
Mm-hmm.
Assaf Gad (14:40.373)
Of course, you can ask her any questions. We also have some integrations with smart devices, although this is not the focus of what we have. But you can control other devices as well if you really like to. But the majority of our audience don’t have any other smart devices around them. And as a companion, we know that, first of all, some of them will forget her name, at least at the beginning. So this will be a barrier: how exactly can I even reach out to her and ask her to do something for me?
John Koetsier (14:42.647)
Mm-hmm.
John Koetsier (14:51.288)
Mm-hmm.
John Koetsier (14:56.941)
Yep.
John Koetsier (15:03.437)
Yep.
Assaf Gad (15:10.189)
So the fact that she’s proactive solves this one. The second thing is also some kind of anxiety: what exactly should I ask? How can I ask that, to get what I really want? All these kinds of things.
John Koetsier (15:20.29)
Yes, yes. Will I do it right? Will I make a mistake? Will it cause a problem that I can’t fix?
Assaf Gad (15:28.201)
Exactly. And we eliminate this fear completely, because ElliQ is proactive. She will reach out to you. She will promote specific things. And as part of the onboarding, we have really nice discoverability features. When we talk about onboarding, for someone who is a little bit more sophisticated, it can be a matter of a few hours spent on the first day with ElliQ, and she can
teach them enough to get the value and build the confidence on the other side to develop this relationship. And for someone else, it can be slower. It can be a matter of weeks, even. And we know how to learn who the person in front of us is, and mitigate these gaps slowly, based on their pace. This is part of the magic. The other part is all about discoverability. It’s not only how I should ask it;
can she actually help me with these things versus the others? And when it comes to other voice assistants, I need to either do integrations, or download specific skills, or even learn what they can or can’t do. The fact that we designed a product solely for older adults helps us solve all these problems, which don’t exist with other voice assistants, right? For you and me, Siri is not a problem. Alexa is not a problem.
John Koetsier (16:55.192)
Mm-hmm.
Assaf Gad (16:56.909)
They are not a companion for us. Probably, I don’t know if we need a companion. But the fact that ElliQ is a companion, designed for older adults, makes it very easy for our older adults to, first of all, feel comfortable with her, but also to utilize all these features, because they don’t need to learn anything. They don’t need to remember anything. She will be there. She will fix it for them.
John Koetsier (17:17.141)
you
John Koetsier (17:25.038)
Hmm. Interesting. Interesting. Okay, cool. So we might get back to some of that. Let’s get back on track with our conversation about intelligence and AI and how we should be looking at it. One of the things that comes to my mind when I think about AI, and how we measure that, and how we gauge whether it’s artificial general intelligence or just a narrow form of AI,
is multiple forms of intelligence, because humans have multiple forms of intelligence. We have people who are very, very mathematically gifted, but not necessarily physically or athletically gifted. We have people who excel in different areas like that. I assume we’ll have something similar in AI, correct?
Assaf Gad (18:08.545)
Yeah, I totally agree. First of all, it all starts with what we call the sensors and the actuators, right? The same way that we as humans can collect a lot of information through the things that we hear, the things that we see, the things that we feel. This is one level of creating something that is smarter, but there’s also the way that we react back, right? The fact that, when…
John Koetsier (18:33.528)
Mm-hmm.
Assaf Gad (18:38.177)
Think about even a dog. When we have a dog, or any other pet, that notices you enter the room and just looks at you and follows you with their eyes. They don’t need to say anything. The fact that they acknowledge your presence, that by itself shows some level of sophistication, and of course, something more can develop out of it. This is part of the design that we have in mind: having a multimodal experience by collecting the different signals from the person in front of us. And of course,
John Koetsier (18:45.123)
Mm-hmm. Mm-hmm.
Assaf Gad (19:07.841)
we can add other layers of sophistication on top of it, right? So today with AI, you can not just see faces or motions; you can actually understand exactly what the person in front of you is doing. There are so many off-the-shelf solutions out there. At the end of the day, we know how to take those solutions; we don’t need to develop or reinvent the wheel here. We can just build the right experience around it.
John Koetsier (19:26.936)
Mm-hmm.
Assaf Gad (19:35.103)
One of the nicest things, by the way, that we did with ElliQ is develop a game, I Spy with My Little Eye. So you can actually play it with ElliQ. You can take anything in the room with you and show it to her, or just think about it, and then she will try to understand, or to guess, what it is. This is, again, a level of sophistication that can be added. Not to mention other layers: sentiment analysis, voice analysis, sound detection.
John Koetsier (19:54.573)
Yeah.
Assaf Gad (20:04.705)
Now, a fusion of all these capabilities together, at the end of the day, builds something that is very sophisticated and will be appreciated by the users, right? As long as they are done in a way that brings value to them.
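As a toy illustration of fusing multiple modality signals into one estimate (the signal names and weights here are invented, not ElliQ’s):

```python
# Late fusion: each modality reports a normalized score in [0, 1],
# and a weighted average yields a single engagement estimate.
WEIGHTS = {"speech": 0.5, "vision": 0.3, "sound": 0.2}

def fuse(signals):
    """Weighted average over whichever modalities are available."""
    present = {k: v for k, v in signals.items() if k in WEIGHTS}
    if not present:
        return 0.0
    total = sum(WEIGHTS[k] for k in present)
    return sum(WEIGHTS[k] * v for k, v in present.items()) / total

# Speech says highly engaged, vision moderately; sound is missing.
print(fuse({"speech": 0.9, "vision": 0.6}))
```

Renormalizing over whichever modalities are present means a missing sensor degrades the estimate gracefully instead of zeroing it out.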
John Koetsier (20:21.758)
Interesting, interesting. I also assume that as we get deeper and farther along in the AI revolution, there will be some forms of intelligence that arise that we can’t recognize, that we don’t even know, that we probably can’t even comprehend.
Assaf Gad (20:36.983)
Probably. And this is one of the concerns, right? When we talk about AI and bringing AI into our lives, yes, it’s great when we can use technology and see the value immediately. But then, if we think about it, one of the questions that I, that we as a team, sometimes even ask, mainly when we deal with people who are less techie, is: can AI take control?
John Koetsier (21:02.435)
Mm-hmm.
Assaf Gad (21:05.493)
Can AI take control over the world? Can AI use the things that we are sharing with it against us, and things like that? And I think, at the end of the day, it goes back to humans, right? Who are the humans that control the AI? I think that’s what bothers me, at least as an individual. Yeah, if people, yeah.
John Koetsier (21:13.144)
Mm-hmm. Mm-hmm.
John Koetsier (21:24.798)
If they control the AI.
Assaf Gad (21:30.394)
Who is the entity, or who is the company, behind the AI? What do they do with the data that they collect? What do they do with this entity that they have built? And what is the reason, or the mission, behind it? We as a company developed a vision that is very clear, right? We want to help older adults. That’s why we’re here. That’s what we do.
John Koetsier (21:37.922)
Mm-hmm.
John Koetsier (21:45.027)
Mm-hmm.
Assaf Gad (21:59.537)
And there isn’t any hidden agenda to collect the data and sell it to someone else, for example, or to try to upsell; we don’t have any upsells in the product. It’s a flat fee. It’s a subscription-based model. What you pay gives you access to all the features that we have, and we constantly add updates to the subscription.
John Koetsier (22:07.842)
Mm-hmm. Mm-hmm.
Mm-hmm. Mm-hmm.
John Koetsier (22:27.778)
Mm-hmm.
Assaf Gad (22:28.737)
We don’t ask for any premiums or in-app purchases, and all these kinds of things. So I think that the question of who is controlling the AI, and whether it’s a business or another entity, is what bothers me the most, and not necessarily what AI can do by itself. At the end of the day, you can always go and unplug the computer, and that’s it, right? Disconnect the electricity, disconnect the internet.
And that’s it.
John Koetsier (22:59.468)
Yeah, yeah. So your robot, of course, which we’ve been talking about, ElliQ, is a care robot. What are you seeing as the results when somebody has it in their home?
Assaf Gad (23:16.127)
So the number one goal that we started to tackle with ElliQ was social isolation and loneliness. And that’s what we have measured since the beginning of our work. And the results are amazing. We have a few reports by our partners, and even by Duke University and Cornell Medicine, published a few months ago, where 95% of the participants in the program
reported a reduction in social isolation and loneliness. The other part of it is how we can help them stay more independent, to take control of their health and wellness and improve it. And about 96% of the participants actually reported an improvement in that aspect as well. So the efficacy is actually a combination of the usage, the engagement with the product, and, on the other hand, the impact.
John Koetsier (23:51.329)
Mm-hmm.
Assaf Gad (24:12.971)
So we don’t want them to just use ElliQ as a timer, or to play music. Every feature that we build will be part of the three elements that we are trying to build as the impact of the product: social connectedness, being more independent, and being more in control of your health and wellness.
John Koetsier (24:16.876)
Mm-hmm.
John Koetsier (24:31.042)
Mm-hmm.
John Koetsier (24:41.154)
Mm-hmm.
Assaf Gad (24:42.273)
These features, at the end of the day, all these features that we are adding to the product, will support, first of all, the older adult. The older adult is always at the center of the experience that we have built. But then, one of the elements that we build around the older adult, we call it the circle of care: how we can bring more humans in. Going back to the beginning of our conversation, our intention is not to replace other humans. We first want to use the technology to bring other humans into their lives, and then fill the gaps where we don’t have anyone else who can help.
John Koetsier (24:58.392)
Mm-hmm.
John Koetsier (25:12.354)
Mm-hmm.
Assaf Gad (25:12.551)
And we start with family members, friends, other people in your community, other organizations in the older adult’s life: their health plan, their primary care provider, the area agency on aging. Some of them will assist by even subsidizing the cost of the subscription for the older adults. And we even built a whole website,
John Koetsier (25:34.434)
Mm-hmm. Mm-hmm.
Assaf Gad (25:40.465)
elliq.com/free, where people can put in their zip code and some information, and we will match them with a funded program that will subsidize the cost of ElliQ for them. And some other partners will just bring other services closer to the older adults as well. So maybe the older adult needs to purchase ElliQ with private pay,
John Koetsier (25:50.595)
Wow.
Assaf Gad (26:07.009)
but through the service, they will get access to many other services that are offered to them by this organization as well. It can be free transportation, meal delivery, connecting with a knowledge base, like a lot of videos, courses, online events that we have, which for older adults are really hard to get to. But the most important thing that we see is the support
John Koetsier (26:28.78)
Mm-hmm.
Assaf Gad (26:35.881)
of the caregivers in the life of the older adults. The older adults can control who will have access to their ElliQ. They are fully in control. They can add trusted contacts to their ElliQ. And then we have a free app; we don’t limit the number of contacts that a person can have. They will get the link to download the app, and from that point on, first of all, they can communicate freely with any medium that you have in mind.
John Koetsier (26:38.914)
Mm-hmm. Mm-hmm.
Assaf Gad (27:01.965)
It can be video calls, messages, any type of message, things that we can do with our iPhones. But for a 90-year-old, it’s a little bit more complicated. And suddenly, you have a grandma who can talk on a daily basis with her grandchildren on video. As simple as it sounds, for them, it’s a life-changing event.
John Koetsier (27:02.434)
Mm-hmm.
John Koetsier (27:10.776)
Yep.
John Koetsier (27:17.336)
Mm-hmm.
John Koetsier (27:22.51)
100%, 100%, absolutely. I’ve seen that. Very cool. I think we have to wrap it up; we’re almost 30 minutes here. Super interesting stuff. And I just wanted to know: did you build the whole AI system yourself? Are you building on ChatGPT? Something else? How does it all work?
Assaf Gad (27:46.839)
So, for the majority of the conversation, and everything that includes personal information and what we call the companionship aspects, right, the small talk: we are using our own proprietary LLMs. If we go back to the comparison with the voice assistants: the same way that you ask Alexa
John Koetsier (28:03.448)
you
Assaf Gad (28:10.957)
a specific question, you will get the answer, and that’s it. The same way with, you know, ChatGPT: you will have a prompt, you will get the response, and that’s it. There will be no continuation of the conversation. So one of the unique things about our LLM is that we know how to continue the conversation. You will get the answer that you need. We also have the ability to create the prompt to the user, if you want to call it that, as part of the proactivity experience. And then we have a real conversation. It’s not just a one-time ping-pong between the user and the device, and that’s it.
John Koetsier (28:41.07)
Mm-hmm.
Assaf Gad (28:41.385)
But we are not relying on our LLMs alone for the whole experience. We are trying to leverage other LLMs that are out there. We are using Gemini by Google for specific things. We are using ChatGPT and OpenAI for others. We built what we call the orchestrator. At the end of the day, we have a layer that is smart enough to decide, within the conversation, which resource we should use, which
LLM should be used within that turn of the conversation. And the nice thing about it is that, as a user, you won’t notice it, because the other side of the orchestrator is also to create one conversation. The tone of voice will always be in the character of ElliQ. So if ElliQ is more your…
John Koetsier (29:27.542)
Mm-hmm. No more from Siri: “Hey, let me Google that for you.”
Assaf Gad (29:32.617)
Yeah, exactly. Not that we are trying to hide it; it’s just for the sake of keeping the companion experience as a companion. You don’t want to confuse it by having multiple resources and so on.
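Assaf doesn’t detail the orchestrator’s internals, but the shape he describes, a routing layer that picks a backend model per conversational turn plus a persona pass that keeps one consistent voice, could be sketched roughly like this (the routing heuristic, backend names, and persona rewrite are all placeholder inventions, not Intuition Robotics’ implementation):

```python
def route(turn):
    """Pick a backend for this conversational turn (toy heuristic)."""
    personal = ("my ", " me", "remember")
    text = turn.lower()
    if any(marker in text for marker in personal):
        return "in_house_llm"            # companionship / personal-memory turns
    if "weather" in text or "news" in text:
        return "general_llm"             # stand-in for a hosted general model
    return "fallback_llm"

def apply_persona(reply):
    """Rewrite any backend's reply into the companion's single voice.
    A real system would likely use another model pass; this is a stub."""
    return reply.rstrip(".") + ", dear."

def respond(turn, backends):
    return apply_persona(backends[route(turn)](turn))

# Stub backends standing in for real model calls.
backends = {
    "in_house_llm": lambda t: "I remember you mentioned that",
    "general_llm": lambda t: "It will be sunny today",
    "fallback_llm": lambda t: "Here is what I found",
}

print(respond("What's the weather like?", backends))
```

The persona pass is what makes the multi-model plumbing invisible to the user: whichever backend answers, the reply is normalized into the one character before it is spoken.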
John Koetsier (29:48.174)
That makes sense. That makes sense. Excellent. This has been a great conversation. Thanks so much for your time. I really do appreciate it.
Assaf Gad (29:54.807)
Thank you, John. It was a pleasure.
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.