
"Will Computers Revolt?" – Book Interview with Charles Simon


Hi guys, Philip English here from philipenglish.com. Welcome to the Robot Optimized Podcast where we talk about everything robotics related. For our next episode, we have Charles Simon, who will talk about his book "Will Computers Revolt?".

Philip English (00:14):

Hi guys. Philip English here, also known as Robo Phil, robot enthusiast, reporting on the latest business and applications of robotics. My main mission is to get you guys robot optimized and to support industry infrastructure and innovation for the next era. I'm excited today because we've got Charles Simon, who's going to tell us a little bit about his book, and we're going to do a bit of an interview with Charles about the AI side of technology. So, welcome Charles, I really appreciate your time today. Perfect. So could you give us a quick overview, a little bit about yourself and your history, if that's okay, Charles?

 

Charles Simon (01:00):

Sure. I'm a long-time Silicon Valley serial entrepreneur. I started three of my own companies and worked at two other startups, and I spent a couple of years working at Microsoft, doing all kinds of different things. My very first company was about computer-aided design of printed circuit boards, and one of the things we observed is that the way computers designed printed circuit boards in that era was seriously different from the way people did it, and people did a better job. That intrigued me with the question of what makes human intelligence different from artificial intelligence, and I followed through on that. A little background about myself: I've got a degree in electrical engineering and a master's in computer science, so I've got a bit of academic background in the area. But the area we're talking about is the future of computers and artificial intelligence.

 

Charles Simon (01:57):

And that's so cutting edge that nobody would say, "I've got 20 years of experience in that." Along the way, I also did a stint as a developer of a lot of neurodiagnostic software. So if you get a brain injury, you might be hooked up to my software, or if you get carpal tunnel syndrome and all of these other things where you're testing for neural pulses. So I bring to the table a whole lot of interest in how the human brain works and how neurology works, and I try to map that onto the artificial intelligence world too.

 

Philip English (02:37):

Right, I see. So you've got a wealth of experience there, obviously from an academic point of view and from the business side, and you've sort of merged the two together. We could probably jump straight into your Brain Simulator software. Could you explain what the Brain Simulator actually solves?

 

Charles Simon (03:02):

Well, look back at the entire world of artificial intelligence. Back in the 1960s, there was a divergence in artificial intelligence, where there were the neural network guys and the symbolic AI guys, and they kind of went their separate ways. Since then, they've gone back and forth: sometimes one group got a whole bunch of money and the other group faded, and now the neural network guys, who now call it deep learning or deep neural networks, are more or less in charge. They have a very interesting set of solutions, but they are not related to the way your brain works. The idea in the 1970s or early eighties was: we've got this great new neural network algorithm with backpropagation, and if we could just put it on a big enough computer, it would be as smart as a person.

 

Charles Simon (04:06):

Well, in the intervening 50 years, that has proven not to be the case, and so we have to look to some different algorithms. So I wrote the Brain Simulator looking at it from the other point of view: let's start with how neurons work and see what we can build with that. My electrical engineering background says, oh, well, let's build a simulator. If you were building a digital simulator, you'd have basic building blocks of NAND gates, and if you were building an analog simulator, you'd have various electronic components and op amps, but in the Brain Simulator, the basic component is a neuron. The way a neuron works is that it accumulates ions and eventually reaches a threshold, fires, and sends a spike down its axon to distribute more ions to all of the neurons it's attached to through its

 

Charles Simon (05:00):

synapses, and neurons can have lots of synapses, you know, on the order of 10,000, and your brain has got billions and billions of neurons in it. But the neat thing is that neurons are so slow that a lot of the circuitry in your brain is coping with that problem, and the amount of computing power we now have can keep up: I can simulate a billion neurons on my desktop, which I couldn't do before. So we're getting very close to having computers that can match the power of simulated neurons. I've done a lot of explorations, and the Brain Simulator is a community project, so it's all free, you can download it, and you can build your own circuits. Then you will become a lot smarter about what neurons can and can't do, and see why it diverges so much from the AI backpropagation approach.
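(To make the neuron model concrete for readers: below is a minimal integrate-and-fire sketch in Python of the behaviour Charles describes, where a neuron accumulates charge, fires at a threshold, and distributes a spike through weighted synapses. It is not code from the Brain Simulator itself, and the class and parameter names are illustrative only.)

# Minimal integrate-and-fire sketch of the neuron model described above.
# Not the Brain Simulator itself, just an illustration of the idea.

class Neuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.charge = 0.0            # accumulated "ions"
        self.synapses = []           # list of (target_neuron, weight)

    def connect(self, target, weight):
        self.synapses.append((target, weight))

    def receive(self, amount):
        self.charge += amount

    def step(self):
        """Fire if over threshold: send a spike down every synapse, then reset."""
        if self.charge >= self.threshold:
            for target, weight in self.synapses:
                target.receive(weight)
            self.charge = 0.0
            return True              # spiked this cycle
        return False

# Tiny two-neuron circuit: a drives b through one synapse.
a, b = Neuron(), Neuron()
a.connect(b, weight=0.6)
for _ in range(4):                   # inject input until a has fired twice
    a.receive(0.5)
    a.step()
print(b.charge)                      # 1.2 -- charge delivered by a's two spikes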

 

Philip English (06:04):

Wow. So someone like myself could really use it as a learning tool to sort of understand the subject?

 

Charles Simon (06:13):

I mean, like all learning things, you can sit down in front of it, and a novice can too. It's got a bunch of sample neural networks, and you can say, "aha, these are the sorts of things I can do with neurons," and see how you could use them to do many more advanced things as well. And the book kind of draws the surroundings around the software to say: well, if you go down this path, it's pretty obvious that in the next decade or so we will have machines smarter than people. What are the implications of that? What will those machines be like, how are we going to control them, and what are our options? That's what the book is about, in relation to the software. So they kind of work together that way.

 

Philip English (07:04):

Yeah, no, that makes sense. Obviously you've designed and built the software, so you're the perfect expert, really, to look forward and see that if this grows at this rate, this is what we're going to see in the future. And that leads perfectly onto the book. So, "Will Computers Revolt?" is the name of the book, and it covers sort of the when, why and how dangerous it is going to be. So again, give us a brief overview of the book. I noticed that you've got three main sections and 14 chapters, and the first part seems to explain how it all works, and then the next section is obviously what you think is going to happen in the future?

 

Charles Simon (07:58):

Well, in order to talk about making a machine that is intelligent, you need to consider the idea of what intelligence actually is, and you need to think about what it is that makes people intelligent. And this turns out to be not an easy task, saying "this is an intelligent thing to do," because if you start making a list, you'll say, you know, can read a newspaper. Well, blind people don't read newspapers, and yet they seem to be intelligent. Or you could hear a symphony, and there are always these disabilities that coexist with perfectly intelligent people. So you can't just itemize a list and say, if you can do this and this and this, you're intelligent, and if you can't do this and this and this, you're not intelligent, because you always have this problem. But you can see some underlying abilities, like the ability to recognize patterns in an input stream.

 

Charles Simon (08:56):

Now, I've made a huge abstraction jump there, but your senses are continuously pouring data into your brain, and your brain is doing its best to make sense of it: to remember what things are going on, whether things worked out when you made a choice of one action over another action, and then to repeat those things. So in a simple game of tic-tac-toe, you say: well, if I saw this situation and I made this move, I won, or I made this move and I lost. Your brain builds up these memories of things that worked out and things that didn't work out, and intelligent behavior is doing things that worked out. All of this happens within the limits of what you know and what you're learning. And another real problem for your brain is that it's getting so much data that it can only really focus on a tiny percentage of it at any time, and remember even less.
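(A toy illustration of that "remember what worked out" idea, applied to tic-tac-toe. This is a hypothetical Python sketch, not anything from Charles's software: it simply tallies how often each situation-and-move pair led to a win and prefers moves that have worked out before.)

# Toy sketch of "intelligent behaviour is doing things that worked out":
# remember, for each situation, which moves led to wins, and prefer those.
import random
from collections import defaultdict

outcomes = defaultdict(lambda: [0, 0])   # (board, move) -> [wins, losses]

def record(board, move, won):
    wins, losses = outcomes[(board, move)]
    outcomes[(board, move)] = [wins + 1, losses] if won else [wins, losses + 1]

def choose(board, legal_moves):
    """Pick the move with the best remembered win rate; unknown moves score neutral."""
    def score(move):
        wins, losses = outcomes[(board, move)]
        return wins / (wins + losses) if wins + losses else 0.5
    best = max(score(m) for m in legal_moves)
    return random.choice([m for m in legal_moves if score(m) == best])

# After a game, feed every (board, move) the player saw back into memory:
#   for board, move in game_history: record(board, move, won=player_won)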

 

Charles Simon (10:01):

And when you stop to think about what you know, you know a whole lot less than you think you do. You have this perception that you can remember what's next to you, or what's next door, or what your friends look like, but when you actually get down to drawing a picture, you have very sketchy remembrances, and your memories fade and get fuzzy. So for building a computer system that works in this way, we start with the definition of intelligence, so we've got some kind of basic definition. Then you can ask: working with these facets, can you build a software system or a hardware system that does that (software first, because it's easier)? And the answer is yes, and it's not that tough. But there are certain things that we want to talk about in terms of general intelligence.

 

Charles Simon (11:03):

And that is: well, people seem to be able to understand stuff. You can understand stuff, I can understand stuff. What does understanding mean? To some extent, understanding is putting everything you know, and everything your input is receiving, in the context of everything else you already know. You're able to merge all of this together in a multisensory sort of way: you hear words, or you read words, and these may mean the same thing, but they relate to abstract things, objects or physical actions or something. So it's not the words that are the meaning; it is an abstraction that's the meaning, and then you can paste words on top of that. So you can build computer systems to do all of these things, and that's pretty likely. And because it's doing the right thing over and over, you end up with a goal-directed system, because the idea of doing something that worked out versus didn't is entirely arbitrary.

 

Charles Simon (12:17):

It's based on a measurement against some goals that some programmer put into place. And so if your goal is to comprehend the world and explain it to people, that's entirely different from a goal of making a lot of money or taking over the world. So we have a goal-directed system that has these capabilities. Now, in the last section of the book, the question is: what will these machines actually be like? What will they be like when they are equivalent to a three-year-old, or equivalent to an adult, or, unfortunately, only 10 years after they are equivalent to an adult, a thousand times faster than the equivalent of an adult? All of these things map out the future of intelligent machines. So in the final section, I map out a number of different scenarios and kind of rank them by their different likelihoods.

 

Philip English (13:24):

Yeah, well, this is it. I had a look through the chapters, and the connection we have with robotics is obviously a big one. Robotics is all about the physical world; it's all about the sensors that are coming out every year: there are better and better cameras, better and better laser scanners, better LIDAR. But the real intelligence we're seeing in robotics is all on the AI side. So it's about taking the data from the modern cameras and actually using it in an efficient way to get a job done. And with that intersection of technology getting faster and faster, and AI getting faster and faster, we're certainly going to have exponential growth soon with certain technologies.

 

Charles Simon (14:05):

Exactly. And one of the things that I'd like to add to that is that robotics is a key to general intelligence. Because if you start with the ideas a three-year-old knows, that round things roll and square blocks can be stacked up, and things like that, these are things that you might be able to put into words and explain to a computer, or show in pictures and explain to a computer. But that is entirely different from the understanding you get from having played with those blocks, and setting a robot with a manipulator loose to play with blocks will give it an entirely different level of understanding than anything you could train in. So robotics is where the general intelligence has to emerge, because it's the only place that brings together all of these different senses.

 

Philip English (14:58):

Well, this is it. And this is when you get touch senses, smelling senses, tasting senses, and you understand that. Yeah, we're certainly going to see...

 

Charles Simon (15:08):

Some of the real keys are the sense of time, that some things have to happen before other things happen. You know, you have to stack the blocks before they can fall down.

 

Philip English (15:21):

That's great. Yeah, this is really interesting. I think it's a perfect sideline as well, because we usually talk about products and stuff, and it's quite good to have this view.

 

Charles Simon (15:33):

But from a product perspective: now, I happen to have been very fortunate in my professional career, so with these books and the Brain Simulator and stuff, I do not need to make any money, which is a good thing. Because if I went to somebody and said, I need a billion dollars and I'm going to build a machine that's as good as a three-year-old, this is not a winner of a project, because three-year-olds don't do very much. But that is the approach you have to take: you've got to be able to understand what a three-year-old can understand before you can understand what an adult can understand.

 

Philip English (16:10):

Yeah, and that's it, and from there it can grow. So what I was interested in is your four scenarios. I saw that number one was the ideal one, and then there were a few others. Could you take us through your thoughts on those?

 

Charles Simon (16:28):

Sure. You can map out the scenarios of what happens when machines are a lot smarter than us, and there's an interim period where they're smart enough to interact with us, but not so much smarter that we're boring to them, so that's the period that's really interesting. The ideal scenario is that we have programmed computers with goals that match human goals. Now, the good news is that our needs and the computers' needs are divergent. We need land and clean air and clean water and clean food and mates and this and that, and computers don't need anything that we need except energy. So we may have a fight over energy, but mostly they're going to be doing their own thing. And the real, true AGIs don't need spaceships or submarines to do exploration, and they don't need air conditioning to live in the desert, because they can become spaceships and they can become submarines.

 

Charles Simon (17:39):

So they have a different set of standards, and they can go off and do their own thing and learn a bunch of stuff about the universe and hopefully share it with us. Now, the scary parts are more in the early stages: suppose a nefarious human is running these AGIs and directs them to do things that benefit that person or group at the expense of mankind. That is the only scenario that has any relationship with Terminators and all the science fiction where they build machines for the purpose of taking over the world or making themselves rich. I don't see that as a very likely scenario, because it happens in a very small window of opportunity where machines are smart enough to be useful but not smart enough to refuse to do the work, because it doesn't take a genius to say that setting off a nuclear war is bad for everybody.

 

Charles Simon (18:46):

So a computer could easily say, no, I'm not going to participate in that project. And that will be a very interesting moment, when computers start refusing to do the things we ask them to do, but that's a separate issue. A machine going mad on its own is extremely unlikely, because in order to do that, you have to set goals for the machine that are self-destructive to mankind as a whole, and I don't see that as a very likely scenario. So we've got the mad machine and the mad man who does things, and then there is what I call the mad mankind scenario. Let us imagine that humans continue to overpopulate the world at a great rate, and they put themselves in situations where the computers can see: well, this is going to get us into trouble, we need to do something about that.

 

Charles Simon (19:49):

All of the things that computers might do to solve human problems are going to be things that humans are not going to like. If, say, they want to solve the overpopulation problem or the famine problem, you can think of lots of solutions that you're not going to be very happy with. So those are the four scenarios: the pleasant scenario, the mad man scenario, the mad machine scenario, which I think is pretty unlikely, and the mad mankind scenario, which is a concern. And that is what really says it's time for mankind to get its house in order and to solve our own problems, because we won't want machines to solve them for us.

 

Philip English (20:38):

That's it, perfect. That's a great synopsis of those four scenarios, and it's very interesting. And if people want to get hold of the book, I know it's on Amazon and everywhere, is that right?

 

Charles Simon (20:59):

The name of the book is "Will Computers Revolt?", and there is a website, willcomputersrevolt.com. The name of the software is Brain Simulator, and because it's free, it's at brainsim.org.

 

Philip English (21:16):

That's perfect, thanks, Charles. And I suppose the last question I had is, timeframe-wise, we all know about Ray Kurzweil and his 2045-type predictions. Do you think it will fit along that sort of timeframe, or do you think it'll be longer or shorter?

 

Charles Simon (21:34):

Shorter. But the key is that it's not an all-or-nothing situation. When you think of a three-year-old, it's not obvious that that three-year-old is going to become an intelligent adult. And if you look at everything you don't like about your computer systems today, it's mostly because they don't think; they're not very smart. So every step that brings on little pieces of smartness, we'll be so happy to get. Machines' increasing intelligence is inevitable, because all of the little components are things we want, and we'll eventually get to machines that are smarter than us, but it will have happened so gradually that we won't have noticed, and every step along the way, we will have enjoyed it.

 

Philip English (22:35):

Well, this is it, these are the benefits. I mean, I've recently invested in a little health gadget and, you know, it's there to benefit me, really, and us as a species. So yeah, that's great. Well, thanks very much for your time, Charles, it's very much appreciated. What I'll do, guys, is I'll put a link on the YouTube video so you can go and get a copy of Charles's book and have a look at his Brain Simulator software, and then we'll probably do this again in another six or seven months' time. I'm going to get a copy of the book and have a read as well, and for any questions I'll put up Charles's details so you can reach out. So thanks, Charles. Thank you very much.

 

Charles Simon (23:20):

Well, thank you for the opportunity. It’s been great talking with you.

Robot Optimised Podcast #6 – Book Interview with Charles Simon

Charles Simon: https://futureai.guru/

Philip English: https://philipenglish.com/

Sponsor: Robot Center : http://www.robotcenter.co.uk

Youtube:- https://www.youtube.com/watch?v=knlbxEZ6mgA&ab_channel=PhilipEnglish

 

SLAMCORE interview with Owen Nicholson

Hi guys, Philip English from philipenglish.com. Welcome to the Robot Optimized Podcast where we talk about everything robotics related. For our next episode, we have SLAMCORE, led by Owen Nicholson, who will talk about their leading software and robotics.

Philip English (00:14):

Hi guys, Philip English. I am a robotics enthusiast, reporting on the latest business applications of robotics and automation. So today we've got SLAMCORE and we've got Owen, who's going to give us a quick overview of the technology. He's also the CEO and co-founder. And for any of you who haven't come across SLAM before, it stands for simultaneous mapping... no, I've got it wrong, apologies: simultaneous localization and mapping, and SLAMCORE develop the algorithms that allow robots and machines to understand the space around them. So as more robots come out, they have a sense of where they are and obviously they can interact with our environment. So yeah, Owen, could you give us an intro and an overview about yourself to start with, if that's okay?

Owen Nicholson (01:20):

Sure. Awesome. Well, thanks a lot for the opportunity, Phil, and thanks for the intro. So just to play back: I'm Owen, I'm the CEO at SLAMCORE, and I'm also one of the original founders. We've been going for about five years now. We originally spun out from Imperial College in the UK, one of the top colleges in the world, founded by some of the absolute world leaders in the space. And it's been an incredible journey over the last five years, taking this technology and turning it into a real commercial product, which I'd love to tell you all about today.

Philip English (01:55):

Alright, thank you for that overview. So you're saying you're one of the co-founders. I saw on the website, is it four main co-founders?

Owen Nicholson (02:04):

So yes, two academic co-founders and then two full-time business founders as well. From the academic side, we have Prof. Andrew Davison and Dr. Stefan Leutenegger; between them, they're two of the most respected academics in this space. Probably most notably Prof. Davison, who is one of the original founders of the concept and one of the real pioneers of SLAM, particularly using cameras. We'll talk more about that, but our particular flavor of SLAM uses vision, and he's been pushing that for nearly 20 years now, so it's incredible to have him as part of the founding team. Then Dr. Leutenegger, who's now at the Technical University of Munich, is another one of the real pioneers of vision for robotics. And then myself and one other were the full-time business side of things when we founded the company.

Philip English (03:01):

Right, fantastic. So you've got quite an international group of co-founders there, and it sounds like, particularly from the academic point of view, you've got a team that has been studying this and working on this technology for years. So that's interesting.

Owen Nicholson (03:19):

Absolutely. And it's one of those things: when you start with strong technical founders, you can attract other great people into the space. So one of our first hires was our CTO, Dr. Pablo Alcantarilla, who came from iRobot. It was great to be able to bring in someone of that quality, and he's another one of the absolute real leaders in the space, but also with real experience in industry. He's been at Toshiba, he's been at iRobot, and he knows all about how you get this stuff to work on low-cost hardware in the real world, at a price-to-performance point that really makes sense. We've now reached our 33rd hire, so it's been incredible, and we've still got a big technical team, about 18 PhDs. I think about three quarters are still technical, so they either have a PhD or extremely in-depth experience in software engineering, particularly embedded software engineering. But we've also been growing out the commercial and business side of the company over the last year and a half.

Philip English (04:25):

Fantastic. Yeah, it sounds like phenomenal growth over the five years to have such a strong team there. And we were chatting about it before: so obviously you're based down in Borough in central London, but you've also got another branch a bit further out now. Where was that again?

Owen Nicholson (04:43):

Chiswick, sorry, kind of west London. So we have a couple of offices for the team to work from. We have, I think on the last count, 17 nationalities now represented within the company. We sponsor a lot of international visas; we bring a lot of people into the UK to work for SLAMCORE from all over the world, with most continents represented now. But I think this is just the way it goes: the type of tech we're working on is very specialist and we need the best of the best. The challenge is that these guys and girls could go and work at DeepMind or Oculus if they wanted to, so we need to make sure we attract them and retain them, which is something we've been very successful with so far.

Philip English (05:28):

That's right. And that's what it's all about, getting the best team around you, like a sports team: you want to get the best players to do the work. I mean, how did you find your team to start with? Do you regularly advertise, or do you do a bit of headhunting for the guys you've got?

Owen Nicholson (05:47):

A mixture of both. Having great people on the founding team means we've got good access to the network, so we do have a lot of inbound queries coming in, and we have very rigorous interview processes. But we do use recruiters, and we use headhunters, particularly for some of the more commercial hires; we've used high-end headhunters to find those people because they're very hard to find. Once you do bring them in, we need to make sure that this is a vision they really buy into. So we have multiple ways in which we've attracted people over the years, but I'd say because we have such a good quality team, it attracts other great people. That's one of the real benefits.

Philip English (06:35):

That's right. So talent attracts talent, and I expect those are the sort of guys who all know each other within the space as well. And you mentioned vision there, and we'll get into vision a bit later on, but I suppose I'm interested in the problem or the issue to start with. Could you talk about the fundamental problem that you guys are trying to address?

Owen Nicholson (07:02):

Sure, sure. I think, at the heart of it, we exist to help developers give their robots and machines the ability to understand space. That's quite high level, but let's start there. Ultimately we break this down into the ability for a machine to know its position, know where the objects are around it, and know what those objects are. So it's coordinates, it's a map, and "is it a person, is it a door?" Those are the three key questions that machines, and particularly robots, need to answer to be able to do the job they've been designed for. And the way this is normally done is using the sensors onboard the robot and combining all these different feeds into a single source of truth, where the robot creates essentially a digital representation of the world and works out where it is within that space.
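(As an aside for readers, those three questions map naturally onto three kinds of output a spatial-understanding layer could expose. The Python below is a purely hypothetical sketch of that shape; the type and field names are illustrative, not SLAMCORE's actual SDK.)

# Hypothetical sketch of the three outputs Owen describes. Not SLAMCORE's API.
from dataclasses import dataclass, field

@dataclass
class Pose:                  # question 1: where am I?
    x: float
    y: float
    z: float
    yaw: float               # heading in radians

@dataclass
class DetectedObject:        # question 3: what is that thing?
    label: str               # e.g. "person", "door", "pallet"
    position: tuple          # (x, y, z) in the map frame
    is_dynamic: bool         # moving things get treated differently

@dataclass
class SpatialUnderstanding:
    pose: Pose                                        # 1: position
    occupancy: list = field(default_factory=list)     # 2: map of free/occupied cells
    objects: list = field(default_factory=list)       # 3: semantics

# A navigation stack would plan on `occupancy`, localize with `pose`, and treat
# any `objects` flagged as dynamic (people, pallets) as temporary obstacles.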

Owen Nicholson (08:01):

And the problem has really always been that the number one cause of failure for a robot isn't the wheels falling off or it falling over, although there are some funny videos on the internet about that. When it comes to real, hardcore robotics development, the challenges that are faced are mainly around the discrepancies between the robot's understanding of the space and reality. That's what causes it to crash into another object; it's what causes it to get lost on the way back to the charging station and therefore not get there in time, so it runs out of juice and just dies. So the high-level problem we are trying to address is giving developers the ability to answer these questions without having to be deep, deep experts in the fundamental algorithms that allow you to do that.

Owen Nicholson (08:53):

Because there's an explosion of robotics companies at the moment, and it's super exciting seeing all these different new applications coming out, really driven by lower-cost hardware and modular software, which allow you to quickly build POCs, demos and early-stage prototypes. But when you really drill into it, probably well over 90% of those machines today would not scale to a commercially viable product as they stand, if they literally went and tried to sell that hardware and that software right now. So this is where most of the energy is focused by those companies: trying to modify what they have to be more accurate, more reliable, or to reduce the cost, and actually, most of the time, all three. They're nearly always trying to increase the performance and reduce the cost.

Owen Nicholson (09:43):

And this is phenomenally time consuming. It's very expensive, because it's lots of trial and error, especially if you have, say, a service robot where you need to shut down a supermarket to even do your testing. You might only get an hour a month to do that with your client, and this is such a critical time. If you spend the entire time just trying to get the thing from A to B, you never get to what it should actually do when it gets there. This is really what's holding back the industry as a whole.

Philip English (10:13):

Right, I see. So if I'm a manufacturer and I want to build a solution for, say, retail or education or a hospital, then basically SLAMCORE is one of the components that I can bring into the product I'm building. It's got all the expertise, it's got everything it needs to make sure it does a brilliant job on the vision side. So that helps with the cost of manufacturing a new product, and then it's easier for the customer to launch the product, knowing that it's got a proven and obviously safe way of doing localization.

Owen Nicholson (10:54):

Absolutely, it shortens time to market for a commercially viable system. You can build something within a month; in fact, at the end of a master's project you quite often have a robot which is able to navigate and get from A to B. But doing it in a way that holds up when the world starts to get a bit more chaotic, that's probably the real challenge. When you have people moving around and structures changing, the standard systems today just don't work in those environments. They don't work well enough, especially when you have a hundred, a thousand, 10,000 robots. If your mean time between failures is once every two weeks, that's okay for your demo, but it doesn't work when you've got 10,000 robots deployed across wide areas.

Philip English (11:40):

Yeah, this is it. I mean, from what I've seen, it's all about movement. As you said, you can do a demo with a robot and show it working in an environment that's half empty with no one really around, but once it's a busy environment, busy retail, lots of people, lots of movement, it's very easy for the robot to get confused: is that a person, is that a wall, where am I? And then that's it, it loses its localization. So I suppose the question I had was around the technology. I saw on one of your videos you were using one of the Intel cameras, but can you link it with any laser scanner, any LIDAR scanner? Is there a certain tech or product range that you need to integrate for SLAMCORE to work best?

Owen Nicholson (12:30):

Great question. I think this is one of the really interesting "when does the technology become a commercial product" questions. Because the answer is, if you lock down the hardware and you work on just one specific hardware and sensor combination, then you can build a system which works well, particularly with vision. If you look at some of the products out there already: the Oculus Quest, I know it's not a robot, but ultimately it's answering very similar questions, where is the headset and what are the objects around it? Same with the HoloLens, the iRobot Roomba and a number of others. They've all successfully integrated vision into their robotic stacks, and they work very well on low-cost hardware. The challenge has been if you don't have those kinds of resources, if you're not Facebook or Microsoft or iRobot.

Owen Nicholson (13:32):

So then a lot of companies are using much more open-source solutions, and they quite often use laser-based localization; that's the very common approach in this industry. And we are not anti-laser at all. LIDAR is an incredible technology, but you shouldn't need a $5,000 LIDAR on your fleet of robots just for localization, and that's currently where we are in this industry. The reality is there are cheaper ones, absolutely, but to get ones that actually work in more dynamic environments, you need to be spending a few thousand dollars on your lasers. So at the heart of our system, we process the images from a camera. We extract the spatial information: we look at the pixels and how they flow to get a sense of the geometry of the space.

Owen Nicholson (14:21):

So this gives you your coordinates. It gives you the surface shape of the world, your floor plan and where the obstacles are, irrespective of what they are: is there something in my way? That's kind of the first level our algorithms operate at. But then we're also able to take that information and use our proprietary machine learning algorithms to draw out the higher-level spatial intelligence, which is the object names, segmenting them out, looking at how they're moving relative to other parts of the environment. And that all means we're able to provide much richer spatial information than you can achieve with even the high-end 3D LIDARs available today. Just to address your question directly on portability between hardware: this is one of the real challenges, because we could have decided three years ago to just lock it down to one sensor.

Owen Nicholson (15:22):

The Intel RealSense is a great sensor, they've done a really good job, and if we'd decided to work only with that and optimize only for that, today we would have something extremely high-performing, but you wouldn't be able to move it from one product to another. If another sensor was out there at a different price point, it wouldn't port. So we've spent a lot of our energy taking our core algorithms and then building tools and APIs around them, so that a developer can integrate them into a wide range of different hardware options, using the same fundamental core algorithms but interacting with them through different sensor combinations. Because the one thing we know in this entire industry (there are a lot of unknowns, but probably the one thing we all know) is that there's no one robot which will be the robot that works everywhere, just like in nature.

Owen Nicholson (16:10):

There's no one animal, although, as an aside, nature uses vision as well, so there are clearly some benefits, given evolution has chosen vision as its main sensing modality. But we need variety, we need flexibility, and it needs to be easy to move from one hardware configuration to the next. That's exactly what we're building at SLAMCORE. Our approach at the moment is to optimize for certain hardware: the RealSense right now is our sensor of choice, and it works out of the box; you can be up and running within 30 seconds with a RealSense sensor. But if you come along with a different hardware combination, we can still work with you. It might just need a bit of supporting work, but we're not talking blue-sky research; we're talking a few weeks of drivers and API design to get that to work.
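(For readers, here is a rough illustration of that kind of hardware abstraction: the core algorithms consume a common frame format, and each camera or sensor combination gets a thin driver that adapts to it. The Python below is purely a hypothetical sketch; the class names and frame fields are not SLAMCORE's SDK.)

# Hypothetical sketch of decoupling core algorithms from specific sensors.
from abc import ABC, abstractmethod

class SensorDriver(ABC):
    """Thin adapter: each hardware option implements the same interface."""
    @abstractmethod
    def read_frame(self) -> dict:
        """Return a common frame format: images, IMU samples, timestamp."""

class RealSenseDriver(SensorDriver):
    def read_frame(self) -> dict:
        # ...talk to the camera's own SDK here...
        return {"stereo": ..., "depth": ..., "imu": ..., "t": ...}

class CustomRigDriver(SensorDriver):
    def read_frame(self) -> dict:
        # ...a different camera/IMU combination, same output shape...
        return {"stereo": ..., "depth": None, "imu": ..., "t": ...}

def run_slam(driver: SensorDriver):
    """The core algorithms only ever see the common frame format."""
    while True:
        frame = driver.read_frame()
        # track / map / label using `frame`, independent of the hardware
        ...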

Philip English (16:59):

Right, fantastic. And I suppose every year you also get a new version of a camera coming out as well, a new version of the Intel RealSense, which is normally a more advanced version, the best version. And I suppose that helps with your three key levels, which is what I was just wanting to go over, because obviously you touched on them there. So you had three levels: tracking, mapping and semantics. That's basically what you were saying. So your algorithm stage, then, is that level three, the semantics, or is that level two?

Owen Nicholson (17:38):

So we actually call it full-stack spatial understanding. We provide the answers to all three, but within a single solution, and this has huge advantages. There are performance advantages, but also you're not processing the data in lots of different ways, and it means you can answer these questions using much lower-cost silicon and processors, because each level essentially feeds into the next. So, for example, our level one solution, tracking, gives you very good positioning information, and level two is the shape of the world, but we can feed the position into the map so that you get a better quality map. Then we can use the semantics to identify dynamic objects and remove them before they're even mapped, so that you don't confuse the system. And this actually improves the positioning system as well, because you're no longer measuring your position against things which are non-static. So there's this real virtuous circle in taking a full-stack approach, and it's only really possible if you understand the absolute fundamental mathematics going on, so that you can optimize across the stack and not just within the individual elements.
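(A simplified, hypothetical sketch of that feedback loop, with stubbed function names rather than SLAMCORE's real API: tracking uses the map, semantics filters out dynamic objects before they are mapped, and the cleaned map in turn improves tracking.)

# Simplified sketch of the full-stack feedback loop (hypothetical names, stubbed).

def track(frame, static_map, last_pose):
    """Level 1: estimate pose by matching the frame against the static map."""
    return last_pose  # stub: a real tracker would refine this estimate

def segment(frame):
    """Level 3: label points and mark which belong to dynamic objects."""
    return [(point, label) for point, label in frame]  # stub: frame is (point, label) pairs

def process_frame(frame, static_map, last_pose):
    pose = track(frame, static_map, last_pose)            # tracking uses the map
    labelled = segment(frame)
    static_points = [p for p, label in labelled
                     if label not in ("person", "pallet")]  # drop dynamic objects
    static_map.extend(static_points)                      # level 2: map only static geometry
    return pose, static_map

# Because dynamic objects never enter the map, the next call to track() is
# measured only against things that stay put: the "virtuous circle".
pose, static_map = (0.0, 0.0), []
frame = [((1.0, 2.0), "wall"), ((1.5, 0.5), "person")]
pose, static_map = process_frame(frame, static_map, pose)
print(static_map)   # [(1.0, 2.0)] -- the person was filtered out before mapping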

Philip English (19:01):

Wow, okay. And then within the algorithm package, is it a constantly learning system? So say we've developed a mobile trolley to go around a factory and someone puts a permanent post there, a permanent obstacle: will it learn to say, okay, that obstacle is there now, and include it in the map?

Owen Nicholson (19:29):

Absolutely, that's one of our core features, which is what we call lifetime mapping. Currently, with most systems, you would build your map (this is how a lot of LIDAR localization works) with essentially a master run. You'd save that map, maybe pre-process it to get it as accurate as possible, and that becomes your offline reference map, which everything localizes against. Right now we provide that functionality using vision instead of LIDAR, and you already get a huge amount more tolerance to variation within the scene, because we are tracking the ceiling, the floor, the walls, which are normally a lot less likely to change. So even if that post appeared, it wouldn't actually change the behavior of the entire system.

Owen Nicholson (20:20):

But later this year we'll also be updating our release to be able to merge maps from different agents and from different runs into a new map. So every time you run your system, you can update it with the new information. This is something very well suited to a vision-based approach, because we can actually identify, okay, that was a post, or, probably more interestingly, maybe a pallet that gets left in the middle of the warehouse. During that day you want to communicate to the fleet that there's a pallet here, so you don't plan your path through it, but then the next day you might want to remove that information entirely, because it's unlikely to still be there. Ultimately we don't provide the final maps and the final systems; we provide the information that developers can then use with their own strategies. This is key: some applications might want to keep all the dynamic objects in their maps, and some might want to ignore them entirely. So we really just provide the locations, the positions of those objects, in a very clean API so that people can use it themselves.
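(To illustrate the kind of policy choice he is describing, here is a toy, hypothetical Python sketch, not the SLAMCORE SDK: persistent structure gets merged into the long-term reference map, while short-lived obstacles like a stray pallet are shared for the day and then allowed to expire.)

# Toy sketch of a "lifetime mapping" policy: merge persistent structure,
# let short-lived obstacles expire. Hypothetical, not the SLAMCORE SDK.
from dataclasses import dataclass

@dataclass
class Observation:
    position: tuple     # (x, y) in the shared map frame
    label: str          # e.g. "wall", "post", "pallet"
    day_seen: int

PERSISTENT = {"wall", "post"}   # structure worth keeping in the reference map
EXPIRY_DAYS = 1                 # how long a transient obstacle stays relevant

def merge_run(reference_map, transient, observations, today):
    for obs in observations:
        if obs.label in PERSISTENT:
            reference_map.append(obs)    # becomes part of the long-term map
        else:
            transient.append(obs)        # shared with the fleet for now
    # Drop transient obstacles (e.g. yesterday's pallet) that have gone stale.
    transient[:] = [o for o in transient if today - o.day_seen <= EXPIRY_DAYS]
    return reference_map, transient

ref, tmp = [], []
ref, tmp = merge_run(ref, tmp, [Observation((3, 4), "pallet", day_seen=0)], today=0)
ref, tmp = merge_run(ref, tmp, [Observation((8, 1), "post", day_seen=2)], today=2)
print(len(ref), len(tmp))   # 1 0 -- the post persists, the pallet has expired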

Philip English (21:40):

Right, I see. And then, on merging with other mapping tools: I've seen the old classic where someone has a laser scanner on a pole and walks around the factory or the hospital to create a 3D map. Can you take that data and merge it in with your data to get a more accurate map, or maybe not?

Owen Nicholson (22:07):

At the moment, we don't fuse maps created by other kinds of systems into our system. We would ultimately want to consume the raw data from that laser and fuse it into our algorithms. Right now, our support is for visual-inertial sensors and wheel odometry. LIDAR support will come later in the year; it's just a matter of engineering resource at the moment. Algorithmically it's all supported, but from an engineering and API point of view, that's where a lot of the work is. That last 10% a lot of people will tell you about is quite often 90% of the work; it's meant to be 80/20, but I think in robotics it's more like 90/10. So we don't support that sort of setup at the moment, but the answer is you shouldn't need to do that, because with those systems you need to be very accurate and quite often careful about how you move the LIDAR, you need a lot of compute, and you often need offline post-processing. Whereas our system is all real-time on the edge. It runs using vision, and you can build a 3D model of the space in real time, as you see it being created in front of you on the screen, so you can go back and say, oh, I missed that bit, I'll scan there. So this is really a core part of our offering.

Philip English (23:32):

Right, fantastic. I suppose the last question I have in regards to the solution (and we've seen what you've been going through, which is fantastic) is: is this just for internal use, or can you go external as well? I've seen SLAM-based systems have issues with things like sunlight and rain and weather conditions. Is it indoor-only at the moment and looking to go external eventually, or whereabouts does it sit?

Owen Nicholson (24:01):

We tend not to differentiate internal and external; it's more to do with the type of environment. So as long as we have light (we won't operate in a lights-out factory, because we need vision) and as long as the cameras are not completely blinded. The rough rule of thumb we normally give our customers is: could you walk around that space and not crash into things? If the answer to that is yes, then we will work there. We may have to do some tuning for some of the edge cases around auto-exposure and the way in which we fuse the data together, but we already have deployments in warehouses which have large outdoor areas and indoor areas, so they're transitioning between the two. We are not designing a system for the road or for city-scale autonomous-car SLAM, which really is a different approach, and that's where a lot of those more traditional problems you just talked about, rain and those types of conditions, really start to become an issue. But we support indoor and outdoor; whether it's a lawn mower or a vacuum cleaner, the system will still work.

Philip English (25:12):

Right, fantastic. And I think this is it: we're starting to see a lot more outdoor robots coming to market, probably more over in the US, but that's going to be the future, so the whole market's there. I suppose it sort of leads onto the bigger vision for you guys. Where do you see the company in five years' time, technology-wise? And what's the ultimate goal? Is it to get perfect vision, similar to humans? I quite liked your animal analogy there, actually, because obviously vision is one of the core things. But what's the why for you guys, and the next steps?

Owen Nicholson (25:58):

Yeah. I think, really, we founded the company because the core technology being developed has so much potential to have a positive impact on the world. It's essentially the ability for robots to see, and that can be used for so many different applications. The challenge has always been doing that flexibly whilst keeping the performance and cost at a point that makes sense. And we're now demonstrating that through our SDK. The SDK is publicly available if you request access and you're able to download it; we already have over a hundred companies running it and about a thousand companies waiting as we start to onboard them. So we've demonstrated that it's possible to deliver this high-quality solution in a flexible and configurable way.

Owen Nicholson (26:50):

And this means we are essentially opening up this market to people who, in the past, would not have been able to get their products to that commercial level of performance to be successful. Having a really competitive, and also collaborative, ecosystem of companies working together, trying to identify new ways to use robots, has got to be good for us as an industry, because if it's just owned by a couple of tech giants, or even states, then that's going to kill all of the competition. And this will drive some of the really big applications for robotics we see in the future. In five years' time, I believe there'll be robots maintaining huge renewable-energy infrastructure at a scale which would be impossible to manage with people driving machines around, and enabling sustainable agriculture in a way that means we can really target water and pesticides so that we can feed the world as we grow.

Owen Nicholson (27:55):

And yeah, you've seen all of the great work going on on Mars with the rover up there now, Perseverance, which is using visual SLAM. Ultimately it's not ours, unfortunately, but in the future we would like our systems to be running on every robot on the planet, and beyond. That's really where we want to take this. We have to make sure that these core components are available to as many people as possible so that they can innovate and come up with those next-generation robotic systems which will change the world. And we want to be a key part of that, but really sitting in the background, living vicariously through our customers. I quite often say I want SLAMCORE to be the biggest tech company that no one's ever heard of: having our algorithms running on every single machine with vision, but never having our logo on the side of the product.

Philip English (28:56):

Yeah, well, this is it. And this is the thing that excites me about robotics and automation. If you think about the IT industry, you have a laptop, a screen and a computer, and obviously there are lots of big players, but you're pretty much getting the same thing. With robotics you're going to have all sorts of different technologies, different mechanical, physical machines, and it's going to be a complete mixture. Some companies will build similar things to do one job, while you may have different robots doing different jobs. And yeah, I think that sounds great. If you can solve that vision issue, it makes it a lot easier for start-ups and for businesses to take on the technology and get the pricing down, because you don't want robots costing hundreds of thousands of pounds; you want them at a level where they're well priced, so they can do a good job and, in the end, help us out with whatever role the robots do.

Philip English (29:54):

So yeah, that sounds really exciting, and I'm looking forward to keeping an eye on you guys. What's the best way to stay in contact with you, and what's the best way to get involved?

Owen Nicholson (30:06):

Genuinely, just head to the website and click on the request-access button if you're interested in actually trying out the SDK. We're currently in beta rollout at the moment, focusing on companies with products in development. So if you are building a robot and are looking to integrate vision into your autonomy stack, then request access and we can onboard you within minutes. It's just a quick download, and as long as you have hardware we support today, you can be up and running with the system. We have a mailing list as well, where we keep people up to date as exciting announcements come. So that's probably the best way: just sign up to either our mailing list or our waiting list.

Philip English (30:57):

Right, perfect. Thank you, Owen. What I'll do, guys, is I'll put a link to all the websites and everything, and some more information about SLAMCORE. So yeah, it was great interviewing you, many thanks for your time. I'm looking forward to keeping an eye on you guys and seeing your progression. Thank you very much.

Owen Nicholson (31:15):

Absolutely.

SLAMCORE interview with Owen Nicholson

Slamcore: https://www.slamcore.com/

Philip English: https://philipenglish.com/slamcore/

Robot Score Card:- https://robot.scoreapp.com/

Sponsor: Robot Center: http://www.robotcenter.co.uk

Robot Strategy Call:- https://www.robotcenter.co.uk/pages/robot-call