Conversations in the Park

Will AI Make Our Roads Safer or Riskier?

• Y-Mobility • Season 3 • Episode 3

Our host, Timothy Papandreou, is joined by Dr. Paula Palade, a renowned expert in automotive ethics and safety with contributions to the European Commission's AI ethics standards, and Michael Weisinger, a key figure in autonomous vehicle innovation, leading the charge at Kodiak.ai in revolutionising AI-powered transportation.

🚨This episode covers the future of road safety and a deep discussion of whether AI saves lives or creates new risks 🚨

From ethical dilemmas to groundbreaking innovations, this discussion challenges the status quo and explores whether AI can truly make our roads safer—or if we’re playing a dangerous game with technology.

This podcast is powered by Y-Mobility.

Tim Papandreou  0:00  
Paula, welcome. Michael, welcome. So good to have you both join us today. Really excited to cover the areas that we've been hearing a lot about in the news lately: AI, the different versions of AI, the governance structures around AI, the ethics around it. And, you know, in real time, we're having to commercialize products that are in the real world. We're not talking about software or ChatGPT; we're talking about embodied AI, which is physically in a vehicle, robots on wheels, etc. That's the world that we all are part of. And I'm just so glad that you both were willing to come on and join a Conversation in the Park about this. But before we go into the full-on question side, I'd love for you both just to give me a little bit of your own introductions about yourselves and why you're interested in this space. So why don't we start off with, sorry, just this one time, Dr. Paula; I'll call you Paula from now on. Paula, why don't you give us a quick introduction of where you're coming from and what interests you about this space?

Paula  1:19  
Thank you so much for having me. I'm really excited to be here with you. So the doctor comes from actually having a PhD in electrical engineering; that's how I began my research career. I worked in research for a couple of years, and then I decided to join the automotive industry, mostly attracted by the fast developments that were happening in moving from internal combustion to electric vehicles and ADAS and all these cool features that you are obviously very well acquainted with. I joined the automotive industry working on hardware, so electronics, electronic smart systems, and later on, automated driving systems. And as I was looking at the future technology space, I discovered a gap, effectively a gap in the ethical issues and ethical implications when developing these highly automated systems, and with my research hat on, I started to investigate this topic. And I was surprised that actually many engineers such as myself are not equipped with the ethical training to tackle some of these critical questions when we are developing any new technology; it doesn't even have to be automated vehicles. So then I was privileged enough to be selected and invited by the European Commission to work on their position paper on ethics for connected and automated vehicles. And later on, I started working on a standard which is now published, ISO 39003, guidance on safety and ethical considerations for connected and automated vehicles. I do other work on standardization and regulation, and I am very pleased to work in this industry at this moment in time.

Tim Papandreou  2:59  
Thank you, Paula, that was a great introduction. And I know Jaguar Land Rover is a really exciting place to work, so this is going to be a lot of interesting conversation. Now, my next guest, for me personally, doesn't need an introduction, but we should have him introduce himself. Mike Weisinger, amazing career trajectory; I've followed him all the way from the early, early BCG days to now, basically running huge parts of the stack at Kodiak. And Michael, over to you. Give us your introduction a little bit, and why you're so interested in this space.

Michael  3:37  
Yeah. First of all, thanks so much for having me, Tim, and it's a pleasure to talk to you and Paula today. I started my career in the automotive industry, actually at Detroit Diesel. I'm an industrial engineer by training, and worked at Detroit Diesel in Detroit. Then I moved back to Austria, where I'm originally from, and started at Boston Consulting, really focused on automotive projects. You know, I consulted for every German automotive OEM at some point. I also consulted, I cannot say names, for an OEM in the UK for quite some time; there's certainly one, at the very least. And so I was really excited about the automotive space. But then also, I did some projects in network optimization, of course in strategy, but also network optimization and supply chain optimization, and got really, really excited about the combination of automotive as well as transportation. Then I moved to the US, still with BCG, and at some point decided to, you know, leave the consulting space and really go into the industry. And to me, autonomous driving was just absolutely fascinating, right? It's gonna change how people move, it's gonna change how goods move. It's just a new modality, and will give us so much efficiency, will make things so much safer than they are today. And so now, with Kodiak, for nearly five years at this point, I'm running all of commercial product and our operations. It's just an extremely exciting space where we are today. Autonomous driving has been researched for a long period of time, and now we're really moving into the commercialization and the deployment stage. I think it's an extremely exciting space and time to be in.

Tim Papandreou  5:31  
That's a fantastic rounding up of your introduction. The first question is to think about the sustainability opportunities and implications of AI ethics. What are the sustainability implications that we should be looking at when we're trying to understand AI ethics in this autonomous vehicle technology space?

Paula  5:57  
That's a very good question. Okay, so you touched upon a couple of points there. So I think when we think about ethics, we have a range of ethical principles that usually we talk about, and one of the first ones is: do no harm. And actually, when I think about sustainability, I think of it as part of the wider umbrella of ethics, right? Do no harm to the environment. You could conceptualize it like that. And AI in automated vehicles obviously has a lot of implications for the future of work, for sustainability and many other things, right? And obviously it's around how you drive the vehicle, and energy efficiency. But what I think is an interesting paradigm change, and we didn't touch upon that very much, is this move to software-defined vehicles, right? And how software-defined vehicles play with electrification and how the two merge together. Because that change happens simultaneously, right? You have the move to electrification, but at the same time you have the move to AI and automated vehicles, or SDVs, software-defined vehicles, and how do they merge together to serve the same purpose? That's where we're going to start to see synergies and the system effects of that. So we'll have benefits to sustainability wider than we can imagine at this point in time.

Tim Papandreou  7:20  
Do you have a sense of this? This is where I've been challenged myself. When we go from an internal combustion engine to a battery-electric vehicle, we go from thousands and thousands of parts to just hundreds of parts. And so that itself is a massive reduction in materials costs and physical equipment, which unfortunately is offset by the need for the batteries, which are a whole bunch of rare earth minerals, etc. I've been asked the same question about the software-defined vehicle architecture: when it's married to a battery-electric vehicle, we shouldn't have the 10 kilometers of wiring anymore, we should have much less, because now the architecture is more integrated. But does the AI autonomous vehicle technology overlay recomplicate things, or does it actually make it even more efficient when it's all fused together? Because I've actually been asked several times: when you take a commercially available private passenger vehicle and retrofit it for AV technology, you add a lot more weight to it because of the sensors, the wiring, etc. But when it's built from the ground up, the theory at least is that it should be fused, which means a lot less materials, and it shouldn't be heavier, and it should have a lot less cabling. Is that your thought as well? Are you thinking in that same way of how to bring all that together to reduce its impacts?

Paula  8:43  
I think the whole industry is. But as you said, when you are going to add AI on top of SDV, and SDV doesn't mean AI at the moment, that complexity is going to increase, right? But what we need to also consider and think about, which is an aspect we haven't touched upon, is: when you're adding AI on top of hardware, or retrofitting, or actually updating your software, what happens to the hardware? These vehicles need to operate in the field 10 years, 15 years into the future. How do we make sure that they are still fit for purpose? Then you have degradation, right? Hardware can degrade; you might have cameras degrade. So whilst the AI or the software is still very capable or robust, the hardware can degrade. So there are different aspects here that we need to think about. Because when you have AI embedded in a cyber-physical system, such as a car, that's where the problems arise. If you have AI in the cloud running some sort of app, it's a totally different application. But when AI is embedded in hardware that actually runs on the road 10 to 15 years into the future, that's where the different degrees of complication and product life cycle do appear as well.
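A toy sketch of the point Paula is making here, that a still-capable software stack may have to stand down as its sensors age. The decay model, threshold, and names below are all invented for illustration; no manufacturer's actual degradation policy is being described.

```python
# Hypothetical sketch: gating an automated-driving feature on sensor health.
# The linear decay model, threshold, and function names are invented for illustration.
from datetime import date

SENSOR_MIN_HEALTH = 0.8  # assumed minimum acceptable health score (0.0 to 1.0)

def camera_health(installed: date, today: date, annual_decay: float = 0.02) -> float:
    """Toy linear degradation model: health drops a fixed fraction per year."""
    years_in_service = (today - installed).days / 365.25
    return max(0.0, 1.0 - annual_decay * years_in_service)

def autonomy_available(installed: date, today: date) -> bool:
    """Even robust software must refuse to engage on degraded hardware."""
    return camera_health(installed, today) >= SENSOR_MIN_HEALTH

# A camera fitted in 2020 still clears the bar after 5 years, but not after 12:
print(autonomy_available(date(2020, 1, 1), date(2025, 1, 1)))  # True
print(autonomy_available(date(2020, 1, 1), date(2032, 1, 1)))  # False
```

The design choice mirrors the 10-to-15-year fleet lifetime Paula mentions: the software's capability is constant, but the availability decision depends on the aging hardware underneath it.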

Tim Papandreou  9:53  
And that's where I think we have the issue of fleet-owned versus personally owned, for that specific reason you just said. Owning your own AI-enabled AV seems very difficult unless it's got modular pieces that, as part of your warranty, must be replaced as they become obsolete. That just becomes a very complex, very expensive exercise, versus a fleet of autonomous vehicles that are managed by a fleet manager and a company, where it's just part of their depreciation cycles: they basically replace all the hardware, swap it out for new hardware, and they get updates and upgrades. Because, as you said, well, we don't know yet, actually, if you can do any of the over-the-air upgrades or updates on physical hardware. I don't think that's actually possible yet, although maybe it is. I don't know. What are your thoughts on that?

Paula  10:46  
So we do use over-the-air updates; I think that's quite common. However, it's about the capability of the hardware that you're installing in the vehicle at this point in time. Obviously redundant systems and all those things that we talked about are important, but it's also very complicated. So I just want to reiterate that, because in Europe, you know, you cannot sell AI on the vehicle; it's not type-approved. You cannot type-approve those types of vehicles; regulation doesn't allow you to do that. There will come a point where we will actually be able to type-approve these types of vehicles, which have AI on board, but we are not there yet, and I don't think we'll be there in the next, you know, few years. Regulation is under development, standards are under development; however, it's a very complex thing to do, and that's why we need to do it use case by use case.

Tim Papandreou  11:36  
That's a really good way to frame it, because the way that I usually talk about it to audiences outside our internal nerd group that we're all in here, right, is that a lot of things in transport are complex, and a lot of things in transport are complicated. And AI in autonomous vehicle technology is complex and complicated at the exact same time, and yet it needs to move from A to B and do it safely. And there's nothing more, I guess, rewarding and exciting and kind of concerning than moving large vehicles over large distances versus passenger vehicles in urban areas. And you know, Mike, in the Kodiak space, what you guys have done truly is extraordinary, because it's gone from sci-fi to sci-fact. It's no longer just an idea; it's happening. But not only that: once you commercialize, once you enter your first contract with your first customer, you now have a legal obligation, and you're liable, to make sure that it gets there on time and on budget, but also to get there safely, in one piece, without causing damage. As the head of commercialization, walk us through some of the how. I mean, you could almost not sleep, right? Because there are so many things that could go wrong, and yet Kodiak is delivering day in and day out. They're on the road. They've got customers, satisfied customers; I think they're almost taken aback at how amazingly it's working. So walk us through those big, big decision points: how safe is safe enough, how reliable is reliable enough? You know, some of those are huge ethical areas; that's a huge, huge space. How are you guys thinking through that?

Michael  13:26  
Yeah. I mean, you have to go back and think about what you're actually comparing yourselves to, right? Every year, in the United States alone, there are more than 40,000 fatal accidents. That is insane, if you think about that number, right? 94% is caused by human error. I mean, that's even more unacceptable, right? And so we said, well, this is a huge problem, and not only we but the industry said, we want to solve that problem. So that's essentially how you have to think about this. Okay, we are really here to make things safer. And of course, we want to make things more efficient and more productive for our customers as well, but safety is kind of the initial thing we want to solve for. And so what you have to do is, of course, many, many things to get to the bar. First of all, you have to set yourself the bar, right? What is your safety bar that you initially want to hit? And that is obviously being safer than the average human driver. Now, again, that bar is pretty low, because, you know, there are those 40,000 fatal accidents every year, right? But that's kind of the minimum bar, and then, of course, from there, you increase it. At the same time, as you mentioned, you need to make sure your customer stays happy, stays informed, and not only your customer, but also the public audience, right? So when you are testing autonomous vehicles or deploying autonomous vehicles, you want to inform people, you want to inform your customers, you want to tell them: you know, this is what we do, this is the roadmap, we currently operate with the safety driver on public roads, right? And these are the next steps we are doing until we can actually remove the safety driver from the vehicle.
And one thing that we realized is, well, there is the public-road use case for autonomous trucking, but we also realized, really last year, that there is an interesting use case of private-road driving, where the environment is much more controlled, right? There are way fewer, you know, actors, way fewer things happening. And so we actually worked with a company, Atlas Energy Solutions; we announced this last week, where we deployed driverless vehicles on private roads, which, of course, makes the safety case a different kind of beast, if you will. It just looks a little bit different. Now, you still do all the same analysis, but it just has different factors, different statistics and so on that go into it. So it's really about setting yourself a bar, knowing which environment you're operating in, and then executing against that, while communicating exactly what you're doing and telling everyone who is involved: this is what we have done, this is our analysis, this is what it shows, this is why it is safer than what is out there on the road today, and this is why we are comfortable actually launching.
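To make the "safer than the average human driver" bar concrete, here is an illustrative back-of-the-envelope comparison using rough public US figures (on the order of 40,000 deaths over roughly 3.2 trillion vehicle-miles a year). The numbers, function names, and the margin parameter are assumptions for this sketch, not Kodiak's actual safety methodology.

```python
# Illustrative only: rough public US figures, not any company's actual safety case.
US_FATALITIES_PER_YEAR = 40_000      # approximate annual road deaths
US_VEHICLE_MILES_PER_YEAR = 3.2e12   # approximate annual vehicle-miles traveled

def fatalities_per_100m_miles(fatalities: float, miles: float) -> float:
    """Normalize a fatality count to the standard rate per 100 million miles."""
    return fatalities / (miles / 1e8)

human_baseline = fatalities_per_100m_miles(US_FATALITIES_PER_YEAR, US_VEHICLE_MILES_PER_YEAR)
print(f"Human baseline: {human_baseline:.2f} fatalities per 100M miles")  # 1.25

def meets_safety_bar(av_rate: float, baseline: float, margin: float = 1.0) -> bool:
    """Minimum bar: the AV's rate, scaled by a safety margin, must not exceed the baseline."""
    return av_rate * margin <= baseline

# An AV fleet estimated at 0.5 fatalities per 100M miles clears even a 2x margin:
print(meets_safety_bar(0.5, human_baseline, margin=2.0))  # True
```

Raising `margin` above 1.0 captures Michael's point that matching the human baseline is only the minimum bar, and the target keeps moving upward from there.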

Tim Papandreou  16:18  
Yeah, I really like that. So, you know, the topic of guardrails is always a big discussion when we're doing anything around AI governance: what are the guardrails? What is the safety tolerance? What are we willing to work within and work around? The beauty of AI, though, is that it can ingest reams and reams of data, volumes and volumes of rules, laws and regulations, and understand them, frankly, better than a lot of us can, because we can't recall all the pieces of code or transportation regulation, but it can basically understand, based on this situation or that situation, etc., because we can do a lot of rules-based focus in how we develop these AI systems. But, you know, this is the challenge that we have: there is so much regulation, there's so much complexity around this space, there's so much nuance. And, Michael, you mentioned before that there is a historical reference that 94% of all crashes are caused by human error. But even that report has been debunked lately; it turns out that figure was actually for a specific situation, and there's been more and more understanding that it's actually about road design. The way that we design our roads and their intersections really does put people in some very precarious situations. And to Paula's point, vulnerable road users are the most affected by that. But the beauty of this AI technology, when it's tied to lidar, radar, etc., is that it can see much further than we can, it can understand the world much better than we can, and it can basically predict things much better than we can.
Can you talk a little bit, Michael, about your technology? Because it is unique in the way that you guys have designed it, and there are some specific things about it that make it, at least in my view, and correct me if I'm wrong, more commercializable, if that's a word; more scalable, I should say, because it's so adaptable. Can you talk a little bit about that, and then share your views on what makes Kodiak so unique?

Michael  18:26  
Yeah, I'll answer the tech question in a second. The one thing that I want to say before is: if you think about a human driver, and we all do it, right, and I'm not just talking about other people, I'm talking about myself as well, we get distracted. That's the reality. There's not a 360-degree view at any given point in time; there are things that we are looking at, and at the same time we are not looking at other things. If you think about an autonomous vehicle, be it a car or a truck, with sensors creating an accurate picture of the environment at any given point in time, having a 360-degree view, it obviously allows you to react to things quicker, to see things quicker, and to never miss anything, right? That's really the point: never be distracted. So this is critically important when you think about why an autonomous vehicle can even be safer than a human driver. I think that is one of the most important things. Now, when it comes to the technology you are referencing: when we started Kodiak in 2018, we said, okay, we are focusing on the trucking use case. And what does the trucking use case actually mean, right? It's mostly highway driving. Of course, there's some surface-street driving, but we're not focusing on city-street driving, right? So it's really highway, and driving at highway speeds. And in order to do that, you need to first of all start with a proper field-of-view analysis and really say, okay, which sensors do I actually need? Which ones make sense for my specific use case, right? That's how you start. And so we came up with a design that really has all the sensors in what we call sensor pods, meaning cameras, lidars, radars, everything is in a structure, and we have that structure fully redundant, basically replacing the mirror.
So the sensor pods replace the mirrors and house the sensors, and why that is so important and critical to our customers is because it keeps the truck up and running. All they care about is uptime; all they care about is productivity. Essentially, they want to operate that truck 24/7; of course, it needs to get fueled and everything, right, but that's what they want. And so we have designed it in a way that's super easy to change. If there were an issue with any of the sensors, you could just take off the sensor pod and put a new one on; we'll figure out afterwards which sensor exactly failed, but you can basically be on the road again. And that, you know, just gives people the sense: oh yeah, that's exactly what I want, because I know it allows me to actually run two times, three times as many miles as a human driver could run today. So that's really one thing. The other thing, from a tech perspective, is really that when you think about AI, you need to think about what parts of AI actually make sense, right? It needs to be verifiable; it's not a full black box that you can use. You just mentioned there are some rules, right? The road has some rules. There are speed limits; it's pretty simple, right? There are certain things we have to do, so of course our truck needs to follow those rules. But then, of course, there are other things that you can allow the truck and the system to learn and be trained on, right? So you really need to figure out where it makes sense to put algorithms behind it, to make it AI-driven and not rules-based. And the important thing is, you need to understand what you did, so you can afterwards validate and verify it. I think that is a very critical point in the deployment of autonomous vehicles.
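A minimal sketch of the split Michael describes, where hard, verifiable road rules sit on top of whatever a learned component proposes. The class, function names, and numbers are hypothetical illustrations, not Kodiak's actual stack.

```python
# Hypothetical rules-over-learning split; names and values invented for illustration.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Action:
    target_speed_mph: float

SPEED_LIMIT_MPH = 65.0  # a hard rule the system must never learn its way around

def learned_planner() -> Action:
    """Stand-in for a trained model's proposed action (here it over-asks)."""
    return Action(target_speed_mph=72.0)

def apply_hard_rules(proposal: Action, speed_limit: float) -> Action:
    """Verifiable rule layer: clamp the learned proposal to the posted limit."""
    if proposal.target_speed_mph > speed_limit:
        return replace(proposal, target_speed_mph=speed_limit)
    return proposal

safe = apply_hard_rules(learned_planner(), SPEED_LIMIT_MPH)
print(safe.target_speed_mph)  # 65.0: the rule layer overrides the learned proposal
```

The point of the structure is verifiability: the rule layer is small and auditable, so it can be validated independently of whatever the learned planner was trained to do.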

Tim Papandreou  22:09  
Yeah, that's great. Related to what Michael was saying about how safety is very, very front and center to what they do: we have this ethical dilemma, and it's not the trolley problem, because we know that if we slow things down enough, we should be able to see far enough ahead to deal with the trolley-problem issue. But we have another thing, which is that right now we have 1.5 million deaths on the road worldwide. And even within the EU, not every country is the same; the road networks are very different in terms of their quality, their robustness, etc., and Africa, Asia, Australia, all the countries are very different in their approaches. But one thing that's for sure is that vulnerable road users worldwide are getting more and more exposed to serious and fatal crashes, and there's a really terrible trend happening worldwide where the cars are getting bigger and bigger and bigger, and the safety on board is getting better and better, but the safety outside the vehicle is a real ongoing problem. If an AV technology came forward, and I want you to put your standards hat on and your ethical-researcher hat on, that said: we can guarantee that we will reduce fatal crashes across the world by 20%, but there will still be the other 80%, is that good enough? Because we have this push that always says AV technology should go to zero from day one, while our current system doesn't do that even with brand-new vehicles. How do we accept a better fatality rate while still acknowledging that it's going to cause fatalities? That's a huge ethics question; no government has been able to answer it for me. And I was wondering, have you thought about it in that way? Is there a way that we can step this in, or do we have to wait until it's perfect before we start?

Paula  24:14  
Well, I think again, it will depend use case by use case, but I think it's also about the metrics and the benchmarks we use, right? How do we measure this improved road safety and the net positive effect these vehicles will have on the road? I think that's a question, because obviously, as a minimal requirement, as manufacturers and deployers of these vehicles or fleets of vehicles, you need to make sure that you decrease, or at least do not increase, the amount of physical harm. But besides decreasing that by 20%, as you said, there is the question of how you split that 20% across different road users. And in the European report, we made the recommendation that no category of road users, whether pedestrians, cyclists, vehicle passengers or whatever, should be at an increased risk of harm from these vehicles, which is in line with the principle of justice. So you can reduce it by 20%, but that doesn't mean you can increase it in certain categories, such as vulnerable road users. Also, we talked about sustainability a little bit: there are no other possible benefits of the systems being introduced, such as environmental impact or whatever it would be, that could compensate for an increase in road harm. So you asked me about sustainability, and I didn't talk about this aspect, effectively, but there is no other benefit that could beat safety. So when we actually have to trade off the ethical principles or the ethical questions we have to address, I think there is a clear one that rises to the top, which is safety. Obviously, the other principles are equally important; however, we cannot compromise safety for other benefits, such as environmental impact. I hope we agree on that.
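The "no category of road users at increased risk" recommendation can be sketched as a simple acceptance check. The harm figures below are invented for illustration; they are not data from the European report.

```python
# Illustrative check of the "no category worse off" principle; numbers are made up.
baseline_harm = {"pedestrians": 100, "cyclists": 80, "vehicle_occupants": 300}

def acceptable_deployment(baseline: dict, proposed: dict) -> bool:
    """Total harm must fall AND no single road-user category may get worse."""
    overall_reduced = sum(proposed.values()) < sum(baseline.values())
    no_category_worse = all(proposed[k] <= baseline[k] for k in baseline)
    return overall_reduced and no_category_worse

# A uniform improvement passes:
better = {"pedestrians": 70, "cyclists": 60, "vehicle_occupants": 250}
print(acceptable_deployment(baseline_harm, better))  # True

# A 20% overall reduction that shifts risk onto pedestrians still fails:
shifted = {"pedestrians": 120, "cyclists": 50, "vehicle_occupants": 214}
print(acceptable_deployment(baseline_harm, shifted))  # False
```

The second case is exactly the trade-off the principle of justice rules out: the headline 20% reduction is met, but pedestrians are left worse off than before.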

Tim Papandreou  25:59  
No, it's a big ethical question, because we actually have three: there's safety, sustainability, and then mobility and access. And I have a lot of people in my camp who say safety is important, but what about congestion? You know? And congestion affects the economy, and the economy affects jobs. But I'm like, well, if everybody's going to be disfigured from crashes, it doesn't matter, frankly. So we need to make sure of that. And I keep saying that even if we could reduce traffic fatalities by 50% and still have congestion and still have air pollution, that is a massive, massive win for society, right?

Paula  26:38  
Absolutely, and that's what we believe in the European report as well. We actually say no other benefit, environmental impact, congestion reduction, we call them out, could compensate for an increased risk to safety. So that is not acceptable. But then, and I think we're on the same page here, it's about this continuous monitoring and improvement of the safety performance, right? Because as you introduce these vehicles, these benchmarks need to change and adapt. And also, what I think is important to conceptualize and think about, and we encourage that from the European Commission point of view, is: how do we encourage companies to share data, not only about collisions, but also about near-collisions, or near misses, with independent agencies for crash investigation? How do we make that data more available, so we don't have to drive millions of miles to be able to collect it, but actually share it more openly, so everyone can benefit from that learning? And one final point I want to make is that we have an opportunity, I would say, with the introduction of this new technology, to readdress some of the inequalities in vulnerability among road users. What do I mean by that? As we introduce these vehicles on the road, can we address some of those inequalities, such as, you know, pedestrians or cyclists that have borne a disproportionate amount of harm relative to their road exposure? So let's say we decrease harm by 20% in all categories; can we actually proactively decrease it more in certain categories, so give more safety, or more prevention, or more space, to vulnerable road users? That is also an ethical question to answer.

Tim Papandreou  28:19  
It's interesting you said that. So just to wrap up what we were talking about: there are some very technological things that AI can help us do, which is basically what we've been talking about, improving safety, improving a lot of the different elements of what we're doing wrong right now, but there are also some very policy-oriented measures we can take. For example, I always give the example of the difference between Paris's and London's approaches to mobility. London has put in pricing, signals, technology, etc.; Paris has just been slowly making it harder and harder to drive a personal car in the city center, right? It's made it easier and easier to walk, to bicycle, to take public transport. That's a very policy-oriented approach; London's is a very technology-and-pricing approach, and they're both seeing very interesting, different outcomes. Do you foresee a world where AI can only do so much, and the majority of the work still to be done in this space is going to be policy?

Paula  29:18  
I think policy has a huge role to play. And mainly I think that because in policy we're all engaged, right, in a democratic society. We all have a say, we all have an opinion that is equally valid. So my opinion as an engineer is no less valid than an ethicist's or a lawyer's or, you know, an average person's from the street; their opinion is equally valid. So that's how, I think, in a democratic society, we engage with policy and policymaking, through our vote mainly, but through other mechanisms as well. And I think that has a huge role to play. So we see in Brussels we have the EU AI Act, which is very rules-driven, whereas in the UK you have a more open kind of environment that's more fostering of open innovation, but at the same time you have the future Automated Vehicles Bill that's going to regulate these vehicles. So whilst those are totally different, where they would converge is around the standards that are set by technical and industry experts, and that's where we, as a community of engineers, collaborate for the greater good to establish the standards. The problem with standards is that it's quite complicated to reach a conclusion, but when you reach it, it's a collaborative effort that we all work on together and we all, in a way, approve. It's very complicated and complex to navigate, but it's the bedrock of future development.

Tim Papandreou  30:44  
Well, one thing I'll say about engineers, when we talk about technology and engineering and policy, is that the best place to put an engineer is to solve a problem. Engineers are, in my opinion, and I've worked with thousands of them, the best problem solvers in the world. They're not good at making policy decisions; that is not their strength, right? So other people, you know, you mentioned people in the different spaces, but together, those groups do actually come up with very good standards, and we've seen it now with batteries, with electric vehicles, with charging, the whole discussion on standardizing charging infrastructure. Our entire world is built on standards. That's how we industrialized and became where we are today, and this shouldn't be anything different. We need to figure out the right set of standards that don't hinder innovation but also don't let things run away like they did with social media. We really didn't standardize and regulate social media enough, in my opinion, and that's what led us to the problems we have today. And so AI is right there again. We've learned the lessons from under-regulating and from over-regulating, and hopefully we'll get it right this time. So I really appreciate that point you made, Paula.

Paula  31:53  
I'll give a quote from Henry Ford, which he gave in 1926. He said that today's standardization is actually the necessary foundation on which tomorrow's improvement will be based. And I think that is accurate and correct even today. So, obviously, I'm biased.

Tim Papandreou  32:10  
No, it's good. It's very good. Okay, I'm going to go to the next section, because I have my own very spicy version of this: how do we get to scalability? Because that is going to be a perennial problem, that we can't scale. A lot of the criticism in the AI space around mobility says that you just can't scale it, because it needs constant supervision, whether there's basically someone in the vehicle, or behind it, or in a room somewhere doing teleoperation, etc., so it's not really, you know, automated vehicle technology. But there are companies that are actually saying, well, no, we actually can do that, and I believe Kodiak is one of them. Michael, as the head of commercialization, what are the big obstacles you need to overcome to get to that point of scalability? Because some, like myself, may argue you're already doing that, but others from the outside might not see it. Is there a way you can explain, without giving away your secret sauce, how you are going to approach that?

Michael  33:09  
Yeah, I think you first of all have to look at the differences between Europe and the US. In the US, in fact, the regulations already allow autonomous vehicles, right? I mean, Waymo has commercialized and is driving without a driver in all of San Francisco, right? So there is actually no regulation that prevents anyone from doing this. Now, the same is true for trucking: 24 states have actually passed official bills that allow deployment of fully driverless trucks. So from a regulatory perspective in the US, you actually have everything in place to commercialize, right? And that's, of course, for the public-road use case. Now, if you think about other use cases that we are working on, be it the defense use case or the industrial use case, there is actually no regulation. Of course, there are sometimes speed limits on private roads, but there's no regulation, right? And so it's really about the safety case: proving to yourself that you actually fulfill your safety bar. You kind of define that yourself, and then you say, okay, with statistical methods, with on-road driving and the data from that, with simulation, using scenario databases to simulate things that you might not have encountered on the road, using AI to create new scenarios that you might not have actually encountered on the road but, of course, want to iterate on, right? That's really the way you build out your safety case and then prove that you hit your safety bar. And that needs to be very openly communicated: what does that look like, right? What is the analysis you did? We even go back a step: when we talk about our safety case framework, it's not just the analysis, but also the culture, how you interact as a company, right? How do your engineers, but also your operations people, actually give you feedback?
How are they empowered to speak up if they see something that they don't think is, you know, perfectly right, when it still needs to be improved? So it's really, we call it, as I mentioned, the safety case framework, it's really about building a whole company culture, a whole mentality around safety, and about people saying, yeah, I'm signing up for this because I know we did everything the right way. So I can also tell our customers: yep, this is what we did, we did everything the right way, we did meet our bar, so we are actually ready to commercialize. And that's exactly what we did for the industrial use case, right? We actually already operate driverless out there on those private roads. And we informed everyone who was involved, right? We showed them: this is how we do the testing, first of all, before it was driverless; then, this is what we do to actually take out the driver. So being extremely open, extremely transparent with everyone that's involved. It might not only be the customer; there's also an ecosystem around it, right? Informing your suppliers and everyone is a critical thing to gain this public acceptance. And of course, there's more to be done, especially when we talk about public roads, but this is the process that we have started on private roads, and I think it has really helped us, you know, making everyone feel involved and really making everyone feel comfortable eventually.

Tim Papandreou  36:41  
No, I really like that. So Paula, based on what Michael said, can you please share your thoughts?

Paula  36:48  
I really like the piece that Michael shared with us around the company culture, and I think that's so important. Actually, in the standard that I referred to earlier, ISO 39003, we have a whole section that looks at that: how the company enables engineers to be trained in ethics, how that safety culture and sharing actually happens from way up at the executive level down to the engineering level, and what training is in place. We talk about four layers of organizational structure in that standard, and about how those communications and that culture are enabled to actually make this technology a reality. So we do acknowledge that it's not only about the technology itself; culture plays a huge role, and organizations need to enable that to happen. So thank you for that point.

Tim Papandreou  37:35  
That's really important, super important. Culture makes the company, culture makes the opportunity happen, but culture also makes the safety. And when you have a safety-first culture, you know, a lot of the AV technology companies have had mixed results because safety has not really been at the forefront of their approaches. And yet the ones where safety is clearly number one, whether it's Kodiak, Waymo and a few others, for example, they're criticized for going too slow, right? On one hand, you should go faster, you're not going fast enough; that's the VC mentality, right? On the other hand, the regulators are saying, whoa, you're going too fast, it's still not safe enough. And there's a real dilemma there. I always say we're not going at the speed of light, we're going at the speed of safety, right? We should go at the speed of safety, and what makes sense. You know, both of you are working on different form factors. Paula, you're working on the passenger vehicle side; Michael, you're working on these very large vehicles, some of them big trucks. But can you also talk a little bit about your entry into the defense space?

Michael  38:47  
Yeah, absolutely. So, as you said, we started with trucks, Class 8 trucks, long-haul vehicles. And one thing that we actually thought of from the beginning is that we need to build a system that's as modular as possible, not only for the maintenance point that I already mentioned before, but also because, for example, some of our carrier customers use different trucks, right? They might use a Kenworth vehicle, they might use a Freightliner vehicle, they might use a Volvo vehicle. And so you need to be able to integrate with all of them, and not just with one specific thing. We sometimes call it overfitting, right: overfitting to a certain platform, overfitting to a certain lane. And you don't want to do that. And so we have built our stack to be super flexible from a hardware perspective, but also from a software perspective. One example is that we do not use high-definition maps; we use something different, which is way easier to create, way easier to update and way easier to maintain. And so when we started talking to the Department of Defense, they were basically interested in a system that works off-road, right? And off-road, you know, high-definition maps just don't make any sense, because, yeah, there's no road. Things constantly change. The grass grows, the trees change, and whatnot.

Tim Papandreou  40:13  
The sand drifts along the dunes, right? So, yeah, exactly.

Michael  40:17  
So it just doesn't make sense to constantly update your maps in, like, a big effort, right? And so we were selected amongst 33 other companies to actually participate in the program called RCV, Robotic Combat Vehicle, because they were attracted to how we have designed our system, how flexible it is and how capable it is of handling different use cases. And this has been super successful, working with the Department of Defense. And eventually, when you think about the why, it's about keeping people out of harm's way, right? Everything that we want to do in over-the-road trucking, or in our industrial use case, which is making things safer for everyone, not only for the driver but really for everyone, is the same with the Department of Defense: you just try to protect people. And that's really one of the main reasons why we did that. The other one is, of course, that it makes total sense to get real proof points of commercialization, of customers actually wanting this, right? And, of course, the Department of Defense has a very good reputation, and them saying, I trust the Kodiak system, I want the Kodiak system, is of course a huge proof point for us as a company.

Tim Papandreou  41:28  
Yeah, that's a really, really good point, you know, because most of the discussion, most of the news, has been about robotaxis, right? That's what takes up all the news, because that's what, I guess, is the most exciting for people; it's all about storytelling, and it's the thing they feel most in touch with. What I realized, though, is that e-commerce is global and massive, and everything that you want, everything that comes to you, is coming by ship, train or truck or some sort of robotic vehicle. And that is, frankly, the best and most focused use case there is. And there are these other areas off-road, whether it's agriculture, mining, construction, all the different things that we've been talking about; they all make our life livable every day, our modern way of life. But for some reason, the news is all about robotaxis. And there's this other narrative that I think is really misplaced, and I would love to hear your two perspectives on it: that one day I can own my own autonomous vehicle and have it drive me around. I personally don't think it's going to happen; I think it's very, very far away. Paula, Michael, what are your thoughts on that? Is that really where we should be going with this eventually? Because I'm just not sure it makes sense, and I have a hundred reasons in my mind why it doesn't. Do you foresee this eventually getting there? Or is it just the culture, that, you know, the new Generation Z doesn't want to own a car? How does this all make sense to you both?

Michael  42:55  
Yeah, I'm happy to give my perspective initially, and then maybe have Paula lean in. So, to me, from a technical perspective, I don't see an issue, right? Why would not every vehicle be a fully autonomous vehicle? I think it's a given that we will get there. But from a business case perspective, let's say I, as a person, buy a fully autonomous vehicle equipped with all these sensors, right? And then I use it, what, single-digit percent of the time? So I'm like, why would this make sense to me personally, right? So, in my opinion, it can only make sense as something that's deployed in a fleet, something that is shared, used so that it's not one person benefiting from it but many, many people benefiting from it, also having further societal effects. Like, you know, less parking might be needed when we actually talk about the robotaxi use case, right? Less parking, fewer vehicles that actually need to be maintained, and all of those things. So I fully agree with you: to me, fully autonomous vehicles are not really something that's personally owned, but mostly because I don't think the business case makes sense.

Michael  44:10  
And that's, by the way, one of the reasons why we focus on trucks, right? It isn't the sexy thing, as you said, but it totally makes sense, because all a fleet, or anybody who wants to transport goods, cares about is keeping those trucks on the road running 24/7, right? That's all they care about.

Tim Papandreou  44:26  
So I'm gonna bring in all my Tesla fans. What about what Elon Musk said, that Teslas are going to become fully autonomous and you can basically rent your car out so it makes money for you? It may come back with, like, damage and vomit, etc., but who cares, right? Because you can rent it out. That, to me, doesn't make sense, because even when Paris tried the Autolib' car-share program, people were really upset that the vehicles were coming back dirty; they weren't cleaned properly. Maintenance is a big, big deal, and that's why fleet ownership, to me, makes the most sense. But that's also an issue as well. I just don't see a person giving their car out to the public and getting it back in a condition it wasn't in when they sent it out. Once, twice, three times, and that's it, it's off the market, right? That's not how it works for most people. I know Turo does car sharing, and they have figured out some of this, but the cost is so much higher as well.

Paula  45:24  
So, from my perspective, it's use case by use case, right? There are already use cases, or there will be use cases, where it actually makes business sense to develop this. And I'm talking generally, right? Like, I worked on AVP, an automated valet parking system. People might actually want their car to just park itself: you leave the car, and the car parks itself. You don't have to struggle to get out of the car; you know, you might have a baby, and you have to take all the things out. So that's actually a use case that's being developed. I worked on the standard for that; the standard is very robust and has a lot of use cases to test against. And, like many other companies, we did demos of this technology on various occasions. So that's already a Level 4 system that's actually being developed, and there are others as well. So I think it's use case by use case. Now, obviously, with mobility, a lot of other questions do emerge, some of which you touched upon. For me, I always think about how equitable mobility will become, right? Will these technologies, or these vehicles, be so expensive that they will not be adopted, or will they actually create even more inequity in the mobility sector? So there are a lot of things to consider, a lot of ethical aspects. Obviously, you'll enable people to use vehicles, or to have mobility, who currently don't have it for various reasons. But how will you make sure it's equitable and everyone can access it? That's also a question. So there are many different ethical aspects to think about, even for a simple question like that. I do think the technology can be developed, but it's not going to be overnight; it will take some time.
And I think that's all right, too, because then we make sure that we develop it with trust and confidence, and that what we are developing is actually robust and reliable for road users in general. Because it's not only about car occupants. We make big cars, right? People are inherently safe in our cars; that's maybe why many people buy our products. However, what about the rest of the people, you know: cyclists, pedestrians, children on the road? How do you manage that? We have a wider responsibility than just the occupants of our vehicles, a responsibility to society as a whole as we develop this new technology. All of us, I think, regardless.

Tim Papandreou  47:46  
Absolutely. Well said. Final thoughts, Michael? Looking to the future, this technology is not coming, it's here, it's on the road right now. Look a few years ahead for us: what are you excited about?

Michael  48:03  
Yeah, I mean, I think this is one of the most important takeaways for listeners who are not super familiar with the state of the technology: this is real. Those vehicles, those trucks, are already on the road. Waymo is operating driverless in San Francisco. We are operating driverless in this industrial environment already. This is happening today. Now, what's going to happen over the next few years is, of course, that you expand the use cases, and then you get more reach from a commercial and customer perspective. And essentially, people will understand and realize how much safer those vehicles actually are, right? It will make not only roadways safer but also transportation more efficient. I mean, it's funny if you think about it: supply chains, supply chain constraints, weren't a topic for most people before the pandemic, right? Goods were just magically appearing at your store. And then suddenly people started thinking about it: why are the goods not there? There's a driver shortage and things like that, right? So there's a big safety aspect that autonomous vehicles can, of course, help solve. But the second one is actually providing more capacity, providing things that people want. Everyone wants one-day, two-day delivery, right? But the current supply chain isn't even equipped to do that. So autonomous trucks will be a new modality that will enable a lot of things and actually make transportation more efficient, safer and more sustainable (which we haven't touched on today, but maybe we can at some point) for everyone.

Tim Papandreou  49:43  
Yeah, absolutely. Final words, Paula?

Paula  49:48  
Absolutely. I think the future is exciting for all of us, for engineers, for new people coming into the industry. And I would just want to make a call to action for people to upskill themselves in these new technologies, whether it's AI, or the safety implications, or AI safety, or anything related to that. Because, whether we like it or not, all of us are going to be touched by this technology in one way or another. So I think it's an exciting time to join the automotive industry to work on these topics, but also to upskill yourself, for sure.

Tim Papandreou  50:22  
I love that. I love finishing on a high note of upskilling, because that's really the big thing right now. When a lot of people talk about AI, they're worried: oh, AI is either going to take my job, kill us, do something dangerous. No, that's actually all fear, uncertainty and doubt. AI is going to change the roles that we have. It's going to improve our roles, just like email did, and just like the internet did 25 years ago; it really just changed and augmented roles so we could do higher-level, more interesting things. So I'm personally really excited about that. And Michael, to your point, if we do this right, we're going to see a lot of opportunities for reducing our emissions, improving operations, improving efficiency, reducing fatalities, reducing costs, which means we're going to open up brand new opportunities for the economy, brand new ways to think about things, things that we take for granted now that were novel 20 years ago. Imagine when your friend, or my partner, complains that the Wi-Fi is not working on the plane. I say, there wasn't Wi-Fi on planes 10 years ago, and now your complaint is that it's not fast enough to watch your Netflix. That's how we normalize things. And I think we're going to go through that same process, right, where we're going to expect transport to be clean, green and efficient. My baby daughter, when she grows up, is going to say: you did what? You used fossil fuels? What are you talking about? They won't even be able to imagine it, just like we can't imagine using a feather and ink to write something: why would you do that? So, you know, those things are happening, it's going to change, and I'm really excited about that. I just want to once again thank you both for your time. Really appreciate you joining us, and I look forward to continuing these conversations in the park in the future. Thank you for being part of this.

Michael  51:57  
Thanks for having us.

Paula  51:57  
Thank you.

Tim Papandreou  52:03  
Do you have an opinion about what's been spoken about, or an experienced view on a hot topic in the mobility space? If so, join the conversation. Go to www.citppod.com (CITP stands for Conversations In The Park) and apply to be a guest speaker, or let us know about a topic that you'd like us to cover. Thanks again for listening. Hit the subscribe button on Spotify or Apple Podcasts to keep up with the latest and hottest topics in mobility. And once again, I'm Tim Papandreou, your host for this season. I look forward to having these conversations with you in the park.

Transcribed by https://otter.ai