Autonomous robots & drones: working without GPS, radios, or humans

We talk a lot about self-driving cars. But what about autonomous robots, doing work that isn’t safe for people? We’re talking environments like mines a mile deep … nuclear reactors … remote locations.

In this episode of TechFirst with John Koetsier, we’re chatting with Nader Elm, CEO of Exyn Technologies. Exyn is building robots that have to think for themselves and communicate with each other where they don’t have GPS or radio communication.

Get the full audio, video, and transcript of our conversation below …

Subscribe to TechFirst: autonomous robots and drones


Watch: Autonomous drones working where humans can’t, with Nader Elm

Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.

Read: Self-flying drones and autonomous robots with Exyn Technologies

(This transcript has been lightly edited for clarity).

John Koetsier: Do you think there are jobs maybe no human should do? Places no human should go? Welcome to TechFirst with John Koetsier.

We talk a lot about self-driving cars. What about autonomous robots, doing work that maybe isn’t safe for people to do? We’re talking environments like in a mine a mile deep, maybe nuclear reactors, remote locations. Places you might not have GPS or even radio-based remote control so the machines have to operate themselves and maybe communicate with each other.

To learn more, we’re chatting with Nader Elm. He’s the CEO of Exyn Technologies. Nader, welcome! 

Nader Elm: Good afternoon to you. I want to thank you so much for having us. 

John Koetsier: Hey, it’s a real pleasure to chat with you. Let’s start with the news. You just signed a deal for autonomous robots in mining. Can you tell us a little bit more about that? 

Nader Elm, CEO of Exyn Technologies

Nader Elm: Yes. We’ve had many deals with mining customers around the world, but the most significant is a partnership agreement we just signed with Sandvik, based in Finland, where we are going to be collaborating on exactly what you mentioned: taking our robots underground instead of people, removing them from a dangerous environment.

So what you’re seeing here on the video is exactly what it’s all about.

It’s having robots do some of the work that folks are doing underground right now, which puts them potentially in risky situations. Not only that, but also now we can actually provide a lot of insight into what’s happening in mining operations generally, kind of building maps and giving them really high-fidelity real-time data on what the geometry of the environment looks like. 

John Koetsier: Excellent. So, obviously they’re going down deep into a mine. How’s that work? How’s it being controlled? How do you task it with what you want it to do? All those sorts of things. 

Nader Elm: Yeah, sure. So the core of what we’ve built and where we’ve really kind of started is building software autonomy for robots, and in this case, an aerial robot. And generally, you know, in order to be fully autonomous you’ve got to have all the intelligence onboard the vehicle itself.

Now one of the unfortunate things with the word autonomy is that it doesn’t really have that much definition outside of maybe the driverless car space. So, as a result, everyone claims to have it.

So what do we mean by that? Essentially there are four tethers that we’ve identified that robots typically have, and you’ve mentioned some of them.

  1. So number one, infrastructure. So, think of GPS, so beacons in the environment, markers in the environment, anything that helps a robot figure out where it is. So that’s one tether.
  2. The second tether is communications. So one easy cheat is to basically have big honking computers in the back end, where you capture data at the front, push it to the back end for number crunching, and then push the results to the front. But what happens if you lose communications in the middle of a mission? So that’s the second tether.
  3. The third tether is a tether to any kind of prior information, so typically that’s a map. In most cases, actually our missions are to go into unknown environments and build the map and come back with that.
  4. And finally, it’s a tether to an expert. So, you know, someone who knows how to fly drones or work with robots. So those are the four tethers that we’ve completely severed.

And we’ve kind of said, look, the robot’s got to have all the intelligence on board and the operator has got to be an expert in something else, not in how to operate this thing, so that we can enable them and their work more effectively.

So a mining situation is a perfect example. We send it into unknown environments and … here’s an example. So this is a surveyor; their job is really to survey the environment and build these kinds of maps, which is what you’re seeing over here. And the more complete and detailed the map, the more effective their operations and their planning can be.

These are often very dangerous environments, these are environments where they’ve typically just drilled, packed explosives, and set off an explosion. So it’s inherently dangerous. And that’s why we send in these robots to figure out what the geometry of that space looks like.

And, the way we’ve done that is really just by putting a lot of intelligence, a lot of good science on board for it to perceive the environment, figure out where it needs to go, and execute the mission in terms of mapping. 

John Koetsier: So one of the things that you talk about is having artificial intelligence on the robot. What are you talking about there? What kinds of AI are you dealing with? 

Nader Elm: So, yeah, it’s basically the intelligence that you and I have and use without really thinking about it, turned into code and put on board the robot. So, in this particular case, when we talk about autonomy, you’re typically trying to answer three questions in terms of autonomous navigation and guidance.

The very first question, which is actually one of the hardest to answer, is: Where am I? When you’ve got GPS, you ping the satellites and you get a very accurate position on where you are, but you don’t have any awareness of what’s around you. So that’s one of the deficiencies of GPS.

So what we do to answer that question is effectively mimic what you and I do: we put sensors on board which perceive the environment. That could be cameras, lidar, sonar, radar, a variety of different systems that we put on board the vehicle so it senses the environment, builds a map, and then figures out exactly where it is in that map as well.

So you answer that question first. Then you ask the question: Okay, where am I trying to get to?

And then the third question is: Okay, so what’s the best way for me to get from where I am to where I need to be? And this is all the intelligence and again, you and I, we take our journeys, whether we’re walking across a room or whatever, it’s the same thing.

And it’s iterative, so as soon as you take a step, you have to answer the question again, which is: Where am I now? Am I where I was supposed to be? Where am I trying to get to? And what do I have to do to get there safely? While avoiding objects in the space that might be moving, by the way, as well. So …
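The iterative loop Nader describes, sense where you are, pick where you need to be, take one safe step, then ask all three questions again, can be sketched in a few lines of Python. This is a toy illustration, not Exyn's actual navigation stack: the grid world, the BFS planner, and the one-step replanning are all invented for the example.

```python
from collections import deque

def bfs_next_step(grid, start, goal):
    """Plan a shortest path on a 4-connected grid of 0 = free, 1 = obstacle.
    Returns the first cell to move to, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the chain back to find the step right after the start.
            while prev[cell] != start:
                cell = prev[cell]
            return cell
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def navigate(grid, pose, goal, max_steps=50):
    """Answer the three questions on every iteration:
    Where am I? Where am I going? What is the next safe step?"""
    path = [pose]
    for _ in range(max_steps):
        if pose == goal:
            return path
        step = bfs_next_step(grid, pose, goal)
        if step is None:
            return path  # blocked; a real robot would explore or abort
        pose = step      # take one step, then replan from the new pose
        path.append(pose)
    return path

# A tiny mapped environment: a rock wall blocks the direct route.
world = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
route = navigate(world, (0, 0), (2, 0))
```

In a real system the map itself is being rebuilt from sensor data on every iteration, which is why the per-step replanning matters: a wall that appears mid-mission simply changes the next answer.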

John Koetsier: And that’s an interesting question, because one of the things you talk about in the mining scenario, you’ve got mining robots, right, which are also there because it’s too dangerous for people to be there.

And one of the things you talk about is having collaborative drones, right? Collaborative robots. How does that work? How do they communicate? 

Nader Elm: Well, we have communications on board our robots, so they’re constantly communicating while they can. But we have to accept the situation where a robot turns a corner, and it’s hard rock underground, and you’re going to lose communication. While it can communicate, it does communicate.

And when we’re talking about collaboration, that can take many different forms. So, there’s communication after the fact, which is basically what happens right now: you gather the data and then push it to another system, another robot, so they can process it, analyze it, act on it.

But if you jump to the next stage, which is basically collaborative robots where they’re communicating with each other in real time, that has to be opportunistic. Again, when you can get a connection between two robots, then you can start passing salient information.

And it might be, look, here’s the map I’ve built. I’m going to push that to you. What map have you built? You push that to me so I can get a more complete view. And, okay, so what are you going to do? What am I going to do? And then continue the mission from there. So that typically is swarming and we’re building that right now.

So that will be something that we’ll be talking about next year in more detail.
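That opportunistic exchange, "here's the map I've built, push me yours," can be illustrated with a toy merge function. The dictionary map representation and the conflict rule below are assumptions made for this sketch, not Exyn's actual protocol.

```python
# Each robot keeps a partial map: cell coordinates tagged 'free' or 'occupied'.
# When a communications window opens, robots swap maps and merge them, so each
# ends up with a more complete picture than it could build alone.

def merge_maps(map_a, map_b):
    """Union two partial maps {(row, col): 'free' | 'occupied'}.
    On disagreement, keep 'occupied': the safer assumption underground."""
    merged = dict(map_a)
    for cell, state in map_b.items():
        if merged.get(cell) == 'occupied':
            continue  # never downgrade an obstacle to free space
        merged[cell] = state
    return merged

robot_1 = {(0, 0): 'free', (0, 1): 'free', (1, 1): 'occupied'}
robot_2 = {(0, 1): 'free', (2, 2): 'free', (1, 1): 'free'}

shared = merge_maps(robot_1, robot_2)
```

After the merge, both robots can plan over cells neither has personally visited; the conservative conflict rule trades a little coverage for safety.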

John Koetsier: Excellent. How does that function? Obviously, in some of the scenarios you’re talking about, there are no humans present at all. You’ve just done some blasting, it’s a mine environment. It’s a dynamic environment, things are changing all the time. But there are some environments as well where you might want to have multiple robots working and drones flying, and some people involved as well.

How do you foresee that kind of collaboration working over the next few years, where robots and humans and drones need to be all communicating around a common purpose? 

Nader Elm: That is ultimately our goal. Well, first and foremost, we’re here to enable people to be more effective in what they do. So we’re not here to replace surveyors or anything like that; it’s just another tool in the kit that they can use now.

But fast forward and let’s ground it with a real world example that we’re working towards, which is basically think of a first responder situation. There is a building which is too dangerous to go into and search and rescue is there, first responders are there. So how do we enable them to be more effective in what they do while being safe? And that’s where robotics really lends itself well.

Which is basically, in a search and rescue exercise, the search part happens to be the most dangerous part: you’ve got teams of people going into a building which is structurally unsafe, and they’re doing nothing but trying to find people and survivors. And then once they’ve found them, you can send in other teams to go and extract them and do whatever triage is necessary.

However, fast forward to a possible future where you’ve got teams of robots that you can send into a building, collaborating with each other in real time to figure out how to do the search part and identify points of interest, which could be people or other things, so that map comes back to the first responders. And then they can plan a safe path from where they are to where they need to go, and effectively do the rescue. 

So, in that kind of situation, teaming happens in two ways. Number one, teaming between the robots: how do we get them to work collaboratively, cooperatively, effectively together? And then the robots with the human team, so you can actually get them to be effective in what they do.

So we’re currently working on the first one, which is how do we get robots communicating effectively with each other. But that’s not losing sight of the fact that we’ve got to provide an easy way for operators to set up the mission, ingest the data that’s coming from them, and then act on it, and then have that feedback loop — okay, so here’s what we’re planning, here’s what we’re going to do, and how does that adapt the nature of the mission for the robot group, and then individually what each robot needs to do. 

John Koetsier: Very, very interesting. I mean, you can see a scenario where there’s multiple drones and robots communicating, collaborating, going through a building or a disaster zone, or something like that. People as well, tasking and changing tasking as they go, perhaps with heads up displays or smart glasses seeing what the drones are seeing in different scenarios.

Very interesting world and very interesting future. I want to talk — I want to ask you, what do you see as the growth rate over the next decade or so of this kind of autonomous robot, autonomous drone? 

Nader Elm: So, I mean, obviously there’s a universe of applications for these kinds of drones, and let me make it more abstract: autonomous robots in general. I think we’ve only scratched the surface in terms of the kinds of applications we can think of, and they typically tend to be substitutions. So we’re doing something with this tool this way; right now we can do it more effectively and better with autonomous robots. I think that’s going to be the first thing, and that’s big enough.

I mean, whether it’s surveyors, first responders, anything related to inspections, surveying, or mapping, previously you had people putting themselves into precarious situations. Now you can put in robots to do that. But in a lot of ways, I think we’ve got to do that and we will do that.

But I think the more exciting thing is once we’ve actually put these into the field and people have started working with this kind of technology, we’ll start coming up with new ideas of what we could do with it. And the way I compare it is basically when the iPhone first came out, for example, the killer apps on the iPhone were the browser and email. No one had come up with Uber and, you know, OpenTable, and all of these other applications that go on to the same tool.

And I think that’s what we’re going to see. Robots enabling new ideas to be generated for some breakthrough applications that we haven’t even thought of yet. 

John Koetsier: If you think about that future and all the different jobs that might benefit from having drones. I mean, just one that comes to mind is farming. It’s amazing how big AgTech, agriculture tech, is getting, and how we tend to think of farmers as nontechnical for some reason, in spite of the fact that for generations they’ve fixed their equipment, fixed their tractors, all that other stuff.

But farmers using drones is huge right now.

If you think about that future, what’s the missing puzzle piece to unlock hypergrowth in this space, to have autonomous drones and collaborative drones, or collaborative robots, in virtually every sector of the economy?

What’s the missing puzzle piece to enable that? 

Nader Elm: There are a few things, actually. So, again, it’s a new technology. The core enablers, I think we’re beginning to see: on the mechatronic side of things, the sensors are all coming out and the vehicles themselves are getting more and more sophisticated. But then… 

John Koetsier: You mean we have the iPhone, we don’t have the apps. Is that what you’re saying? 

Nader Elm: That’s basically it.

Yeah, I think essentially we’ve got to start thinking about the software that goes onto this and the application we need to apply it to. So that’s the first thing, and I think that’s really limited by our own imagination right now.

But then beyond that, I think there are a couple of other considerations we need to take into account. Number one, I mean you mentioned AgTech, that’s in outdoor spaces. We typically operate indoors, by which I mean to say we’re not in regulated airspace. So, there’s going to be a whole variety of things that need to happen around regulations to allow these systems to operate. And then the other thing that we’ve got to be very mindful of is, again, that the user shouldn’t have to be an expert in these kinds of things.

So we’ve got to think about the interface between the human user and the systems, ’cause it’s difficult enough right now, us interacting with one robot. Now just imagine you’ve got to try and interact with, what, a dozen to a hundred robots.

What does that look like? And how do you simplify it so that you can abstract the human from all the details that need to go into setting up missions for the individual robots? 

John Koetsier: Very, very interesting. Excellent. Thank you so much. It’s been a real pleasure having you on TechFirst. 

Nader Elm: My pleasure, and I thank you so much. 

John Koetsier: Excellent. For everybody else, thank you for joining us on TechFirst as well. My name is John Koetsier. Really appreciate you being along for the show. You’ll be able to get a full transcript of this podcast in about a week at

See the full story on Forbes shortly thereafter, and the full video will be available on my YouTube channel as always. Thank you for joining. Maybe share with a friend. Until next time … this is John Koetsier with TechFirst.