Last night, Cruise issued a statement on ex-Twitter-now-X saying the company intends to “proactively pause driverless operations across all of our fleets while we take time to examine our processes, systems, and tools and reflect on how we can better operate in a way that will earn public trust.” While apparently not “related to any new on-road incidents,” the decision comes right after Cruise lost its operating license in California over an incident where one of its robotaxis dragged an injured woman underneath itself for 20 feet. Cruise will still operate and develop its vehicles with a human safety driver in the driver’s seat, ready to take over. I think what’s important here is something that’s maybe not so obvious. If you look at Cruise’s most recent incidents, something becomes apparent: one of the biggest hurdles for automated vehicles isn’t the obvious problem of solving the mechanical tasks of driving, but rather how to emulate the blurrier, vaguer general sense of surroundings that humans innately have.
First, here’s Cruise’s full statement:
The most important thing for us right now is to take steps to rebuild public trust. Part of this involves taking a hard look inwards and at how we do work at Cruise, even if it means doing things that are uncomfortable or difficult.
In that spirit, we have decided to proactively pause driverless operations across all of our fleets while we take time to examine our processes, systems, and tools and reflect on how we can better operate in a way that will earn public trust.
This isn’t related to any new on-road incidents, and supervised AV operations will continue. We think it’s the right thing to do during a period when we need to be extra vigilant when it comes to risk, relentlessly focused on safety, & taking steps to rebuild public trust.
So, I have to hand it to Cruise for voluntarily realizing it’s important to take the time to really solve its problems, instead of shoving those problems aside and just barreling ahead, solving nothing. This feels prudent and careful, which is what you want from a company building 5,000-pound robots and releasing them into cities. What Cruise isn’t telling anyone, at least not yet, is what it plans to do to improve the safety and usefulness of its AVs.
Tellingly, Cruise hasn’t reached out for my opinion, but if they do, I promise to deliver a really satisfying spit-take and then offer them this advice: Focus on improving the broad and general situational awareness of the cars, beyond what’s required for the specific driving task. This won’t be easy.
Here’s what I mean; let’s look at the three significant Cruise failures we’ve written about over the past year. Most recently, there was the tragic incident where a woman, hit by another car, was run over by a Cruise AV, which then proceeded to undertake a procedure to get out of the active traffic lane (a good idea) while the woman was still trapped under the car (a bad idea). From Cruise’s statement on Twitter:
In the incident being reviewed by the DMV, a human hit and run driver tragically struck and propelled the pedestrian into the path of the AV. The AV braked aggressively before impact and because it detected a collision, it attempted to pull over to avoid further safety issues. When the AV tried to pull over, it continued before coming to a final stop, pulling the pedestrian forward.
Then, in June we had an incident where a Cruise EV was in an area where (yet another) mass shooting was taking place, and appeared to be getting in the way, causing frustration for police officers and emergency workers, as can be seen in this video:
Fellow Mission friends. Please stay away from 24th/Folsom. Gunshots fired; reckless Cruise cars. pic.twitter.com/fICRtS6e05
— Paul Valdez 🚲🏳️🌈 (@paulvaldezsf) June 10, 2023
And then in January, we had a strange incident where six Cruise EVs all converged on one intersection for 20 minutes, for apparently no good reason, and the July before, 20 Cruise AVs converged on an intersection close to that one. Here’s a video from the six-car incident:
A friend of mine took a video in San Francisco tonight of 6 @Cruise self-driving cars stopped at an intersection for 20 min. Traffic came to a standstill + people didn’t know what to do. Two of the cars were on wrong side of the road. Doesn’t seem particularly safe. #Autonomous pic.twitter.com/tKZrHgrdEi
— Jose Fermoso (@fermoso) January 21, 2023
So, what do all of these incidents have in common? They share a common failure mode, and it has nothing to do with the mechanics of driving, or sensory systems, or not understanding a traffic sign, or anything like that. It has to do with a general sense of knowing what the hell is going on around you, a sense these machines completely lack.
Of course they don’t know what’s going on around them – they’re machines! This is also why I’m not worried about some grand AI uprising or whatever; artificial intelligence is simply incapable of that sort of awareness. These systems aren’t self-aware, and they don’t have consciousness like we do. They’re capable of incredible acts of categorization, combing through data, generating text and images, and interpreting camera and sensor data, and while it may seem like these systems have real intent, and while it feels like they’re thinking, the truth is they’re not. They have no idea what the hell they’re doing.
You can ask MidJourney to make you a photorealistic image of what a Volkswagen Beetle would have looked like if instead of metal we built them out of gouda cheese, but it has absolutely no idea what it’s actually doing. No computer does. They execute programs, they can’t come up with original ideas or have any self-awareness at all. And yet, to some degree, that’s what AVs need to be able to do if they’re to successfully operate in the human world.
Think about those three incidents I mentioned: each is about the car not understanding the situation it’s in. For the convergence of the 20 Cruise AVs, if you or I showed up to a Taco Bell and immediately found ourselves surrounded by 19 other people all dressed exactly like us, we’d know some manner of shit was going down. Something isn’t normal, and our behavior needs to adjust accordingly.
With the Cruise AV in the area with the active shooter, any human would have seen the police lights and caution tape, heard the sounds, and just felt that something wasn’t right. It wouldn’t take much for a human driver to encounter such a situation, and decide that this is an area best avoided (to be fair, emergency vehicles were eventually able to get around the stopped Cruise AV). Even if you kept driving into the situation, at some point a cop or EMT or someone would probably yell at you to get the hell out of there, or perhaps ask what your problem is, and if you’re not leaving at that point, you’re probably part of whatever is going down.
For the most recent incident: barring some unusual circumstances, you know if you hit somebody and they’re trapped under your car. You’d have felt and heard things that would feel very, very wrong. Perhaps you’d panic and drive off or something, but the point is you wouldn’t just be calmly blasé about it.
Sure, there are mechanical/electronic solutions to this particular problem – sensors and cameras underneath the car that stop it if any object is detected there, for example – but that’s not currently implemented on any deployed AV I can think of.
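To be clear about what I mean, the logic itself is trivial; here’s a toy sketch, where every sensor name and clearance number is invented for illustration and doesn’t describe any real AV’s systems:

```python
# Hypothetical sketch: gate any post-collision "pull over" maneuver on an
# undercarriage check. All sensor names and thresholds here are invented
# placeholders, not a description of any real deployed system.

def undercarriage_clear(sensor_readings_m, min_clearance_m=0.15):
    """True only if every underbody sensor reports open space."""
    return all(r >= min_clearance_m for r in sensor_readings_m)

def safe_to_reposition(sensor_readings_m):
    # If anything is detected under the car, stay put and call for help
    # rather than dragging whatever (or whoever) is underneath.
    return undercarriage_clear(sensor_readings_m)

print(safe_to_reposition([0.30, 0.28, 0.31]))  # clear underbody
print(safe_to_reposition([0.30, 0.02, 0.31]))  # something trapped underneath
```

The hard part, of course, isn’t this check; it’s knowing that the check matters in the first place.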
Cruise’s AVs may understand the basics of how to drive, but that’s only part of what driving is. Driving is, on some level, a human social activity, because we do it with and around other people doing the same thing, all reacting to the immediate surroundings while communicating with one another, making decisions based as much on the overarching culture we live in and how we interact with other people as on the fundamental rules of driving. Situations that are identical from a driving perspective – the same speed limits, lighting, stretch of road, weather, and visibility – can be wildly different based on what is happening there that has nothing to do with driving.
Imagine the same intersection on a late Sunday afternoon and the same intersection on a Wednesday morning when an active and heated protest is happening. Imagine that same intersection with construction being done, or the road painted for an art installation, or with a street fair happening or any number of other possible things that humans can do outdoors. The basic traffic rules and driving rules don’t change, but the behavior of how to drive does, sometimes dramatically.
Consider this hypothetical: If you’re driving through the desert in 110° heat and see a broken-down car, you might pull over to help. But you’d assess if that person looked like a threat or not: Are they ranting around, waving a machete, or are they just looking glum and defeated? And once you stopped and talked to them, you’d be assessing their behavior too, innately and immediately – are they just talking about the Night Weasels that follow them everywhere and rat on them to the FBI and NFL agents hiding in every cactus, or are they thanking you for stopping and telling you how they need to get to Barstow for their niece’s karate demonstration? You suss out how creepy they are or aren’t pretty quickly.
If you wanted to program this behavior into an AV, think of all the factors you’d need to assess: the location, how to visually identify a vehicle that’s actually broken, the ambient temperature, comparing the look of the person and their stance and actions to vast databases of behavior, identifying any objects they may hold, transcribing their speech and comparing it to databases of stored psychological tests, and on and on and on. Is it even possible? And do people even want their AVs to help people in distress? I mean, I hope they would?
Exactly how Cruise can pull this off is a huge question. I think it may be possible to create some sort of algorithm that takes in a lot of varied factors of the environment – including visual data, auditory information, references to calendar dates, even referring to local news – and build some sort of basic behavior matrix. Perhaps as a number of markers get flagged, a human is contacted to rapidly assess the situation the car may be in and give instructions – something most humans could likely do almost instantly.
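The marker-flagging idea above could be sketched very roughly like this. Everything here – the signal names, the weights, the threshold – is something I made up for illustration; no actual AV stack is being described:

```python
# A toy sketch of the "behavior matrix" idea: accumulate weighted anomaly
# markers from different environment signals, and escalate to a remote human
# once a threshold is crossed. All names, weights, and the threshold are
# invented for illustration.

ANOMALY_WEIGHTS = {
    "emergency_lights_detected": 3,
    "caution_tape_detected": 3,
    "crowd_density_unusual": 2,
    "many_sibling_avs_nearby": 2,
    "local_news_alert_for_area": 1,
}

ESCALATION_THRESHOLD = 4

def assess_scene(active_signals):
    """Sum the weights of active markers; decide whether to ask a human."""
    score = sum(ANOMALY_WEIGHTS.get(s, 0) for s in active_signals)
    return {"score": score, "escalate_to_human": score >= ESCALATION_THRESHOLD}

# An ordinary scene vs. something like the active-shooter scene above:
print(assess_scene(["local_news_alert_for_area"]))
print(assess_scene(["emergency_lights_detected", "caution_tape_detected"]))
```

The real difficulty isn’t the arithmetic, it’s reliably producing those input signals from cameras and microphones in the first place – which is exactly the “reading the room” problem.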
Of course, I’m far too stupid to know for sure how to solve this. What I do know is that this is a non-obvious and still very unsolved factor of automated driving, and it seems to be one that most AV companies (Waymo, Cruise, Tesla, whoever) have not spent much time or effort focusing on.
Of course, I’ve been arguing with David about this for way too long now: David thinks this concept is obvious, and any AV engineer already knows and understands this. And, while I did reach out to Cruise to ask about this and what else they’ll be doing while they pause driverless operations, I don’t know for sure yet what they’re cooking up in their R&D labs. Maybe they have vast Dynamic Cultural Environment models and algorithms being tested as we speak!
But, I would like to point out that I had a very similar argument with David about Level 3 autonomy. He felt like of course the engineers had the tricky hand-over problem solved! And then I talked to an actual AV engineer. As I wrote at the time:
I asked our automated driving engineer about this, and while he said that “processes are being designed,” he also added that the fundamental parameters of Level 3 are “poorly designed,” and as a result “everyone is just guessing” when it comes to how best to implement whatever L3 actually is.
“There is absolutely a lot of guessing going on,” he added, noting that there simply isn’t any real well of L3 driving data to work with at this time.
So, I’m still going to be skeptical, but I’m also happy to be proven wrong here. Any AV engineers working in this particular field should absolutely reach out to me. That said, I still think it’s time that AVs learn to read the room.
My “Turing test” for a self-driving car is a festival parking lot. The kind that’s set up in a field, where there’s just some tape hung on stakes to mark the rows and a volunteer in a high-vis jacket waving at you to tell you where to go. Humans manage this just fine. Current “self-driving” cars would have no hope in hell.
You’re giving some of us too much credit.
Several years ago I had an incident on I-81 which turned my opinion on Level 5 autonomy to “not bloody likely in my probable lifetime.” In an F450 stake body on a bridge under construction (so no shoulder), I saw some weirdness at the other end: a leaf or construction flag-tape or similar light object acting oddly. I had a moment of puzzlement, then decided there must be a vortex from under the tall bridge. I had time to slow a bit and, importantly, hug the barrier on my side. It still moved me several feet and (IIRC) into the other lane.
We are pretty damn good at pattern-recognition. Sometimes too good (pareidolia much?), but we’ve all had those moments when, for no conscious reason, we slowed down or changed lanes only to have a car blow a red light or a ball bounce into the lane we were in.
To be fair, millions of vehicles sharing data might well have noticed my situation.* But I have driven that route —often in that truck—pretty regularly and never noticed it before or since. Much as I would love to be able to tell my work van “Take me home”, I do not want to share the road with the currently available autonomy.
*I previously posted this on (I think) Jason’s article at the old site about how the various levels of autonomy were complete BS, and got some decent feedback about what various sensors could and couldn’t do. On that note, someone here earlier mentioned GM’s 27-cent ignition switch murders. Car manufacturers are primarily about making a profit; the cars are just (sorry) the vehicle for that profit. Forget the cost of various sensors on each profit-unit; think about the volume of data needing to be collected and the cost of gathering, processing, storing, and sharing it to vehicles so equipped. Ain’t gonna happen without a mandate, in my uninformed opinion. And you first have to guess at what needs to be collected, figure out how to meaningfully process it, etc. Color me highly pessimistic.
The last issue is probably solvable, the second issue might be sped up a little but ultimately the same response, and I don’t think there is a “solution” to the first one.
Well, there is no way to tell. I have a FB account. Sometimes I get fed a link to AITA (Am I The A##hole). If I respond and say “you are the a##hole,” my account gets blocked for bad words, and there is no way to contact a human to solve the issue. They have a page rated R, and it is funny and filthy and approved, but respond as a fan using the same words? Your account is shut down. Disagree? It’s just a computer that won’t approve a PG word on an R-rated site. You think idiots can program a safe system? NEVER
This has been the basic problem for self-driving cars from the beginning: at the limit, self-driving is a general intelligence problem, not a limited one, and that’s a difference in category, not degree – you can’t bootstrap yourself to AGI by making progressively better cruise control.
The other problem is all these companies are staffed by software engineers of the mold that brought you Facebook and Doordash, so there’s some blind spots around things like normal human behavior and the kinds of stuff one would expect to see while walking around outside.
If the police start shooting at a self driving car will it stop?
Depends on what they hit. Can’t aim for the driver’s position and expect results.
“ Cruise will still have operations and development with a human safety driver in the driver’s seat, ready to take over.”
Sounds like a severely dreary job for a Dickensian sci-fi character.
And yet here we are folks. It’s reality.
And it won’t help, because Uber had a person in the driver’s seat when their car ran someone over in Arizona, but, as you can imagine, after several hours of sitting in the car doing nothing, that person’s attention was not fully focused on the task.
Exactly. Might as well just let them drive the damn thing at that point.
Maybe, going forward, we don’t really need this self driving nonsense to begin with.
Yeah but she was watching HBO, she was deliberately not paying attention to the job
What bothered me at first, turned to smoldering anger, and has now devolved into boredom, is the fact that they’re spending all of this time and money developing shittier buses. And buses are already shittier trolleys. Just build the fucking trolley lines, cowards. All of this time and expertise they’re wasting on stupid percentile-chance bullshit could be better put to use solving the logistics of running a fully automated public transit service that we SHOULD have had twenty fucking years ago. When even the Czech Republic beats you to something, you need to step back and reassess whether you’re being a moron.
It’s like some guy building a huge pneumatic machine driven by a 2HP motor and a 7,000PSI hydraulic tube to put a nail in a board. And then after missing three whole times some guy comes over and just uses a hammer in two taps to do the job.
There’s a name for this specific effect, where all urban transport planning eventually boils down to making shittier versions of trains. Google is failing me without my coffee this AM, but it’s definitely a real thing.
Herbie never had any of these problems. If he hit someone it was because he wanted to. So the solution is easy: just invent the equivalent of Herbie the Love Bug and everything will be fine. Too difficult you say? Well, that’s not my problem. That’s the only kind of “self driving” car I’d ever consider riding in and the only kind that should be allowed to operate in real-world traffic situations.
I read most of the article and started thinking of something I don’t know if AI can handle: the wave-through. I’m pulling onto a through street from a stop sign during rush hour, and as the traffic lines up and stops in front of me (backed up), the next guy, who would normally stop right in front of me, stops and waves me in. I don’t have room to pull out completely, but I can nose into the lane and pull forward when traffic starts moving again. Not the most dangerous situation in the world, but frustrating if AI can’t take the hint from the other driver that it’s OK to pull out.
I look forward to the inevitable future article which takes a deep dive into what the automotive industry would’ve been like if cheese was the primary building material.
That already exists:
https://en.wikipedia.org/wiki/Automotive_industry_in_France
This reminds me of the cream cheese mirage
Just imagine a Ferrari peeling away from the stop light, leaving two steaming stripes of Parmigiano Reggiano on the pavement.
Ah, who am I kidding – you’d never be able to grind off that rind with a burnout.
Sorry to go off on a tangent, but lately I have just gotten sick and tired of every mention of X including “ex-Twitter” or “formerly Twitter” or something similar. Can we please agree on some shorthand to save myself and countless others the 3.2 msecs spent reading some form of “X, it used to be Twitter.”
I nominate “XTX” and suggest getting the Elon-gated one to throw some Musky-Bucks at Matt’s (not-so) secret crush Charli to rebrand herself Charli XTX for a month to promote this change.
We now return to your regularly scheduled self-driving virtual bitch session.
We should just keep calling it Twitter.
That or just X. No one cares either way and both take less time to type/read
Most of Reddit seems to have settled on Xitter (pronounced “shitter”).
AVs just need to operate in a closed loop with other AVs. Adding sentient meat puppets just throws too much variance into the operation.
AVs are well suited for an environment free of unexpected variables. Unfortunately, a vehicle that can only work in spaces like that is of little use to most people because those aren’t the places we like to live/work/exist.
So, a rail network.
Basically, but I was thinking more Minority Report. Mass rapid individual transit where all the vehicles are autonomous, with the ability to exit the “loop” and drive on your own. Think more of a HOV for autonomous cars.
You could take it a step further and physically ensure the cars remain on the correct route by forcing the wheels to travel along a defined path and OH MY GOD I INVENTED TRAINS AGAIN
I always thought it would require years, if not a decade, of non-interfering deployment, just recording when it would have intervened. Continual review, and improvement, then maybe reach a point of NHTSA consideration. How the NHTSA justified non-consensual public beta testing is baffling!
Government agencies love to encourage and rubber stamp this stuff because it 1). makes them look progress-friendly to constituents and 2). lets them outsource their jobs (in theory) to Silicon Valley.
I’m with you. I don’t see why they aren’t paying millions of people to drive around with a bunch of sensors on their cars, collecting data to feed to the learning algorithm.
No need to pay; there are plenty of enthusiastic supporters willing to pay. Cruise is now paying for “always ready to take over” drivers.
Tesla already has a gajillion miles of driving data of both good and bad behavior driving. They are alone in this, and the way their car computing is designed, the car can edit the data of interest and only upload that to the cloud. AI has surprised a lot of experts and will surprise us all in how fast it gets to the point that it’s far safer than a typical driver. There’s huge potential for good–imagine that the car knows that a certain road area is prone to deer crossing at a certain time of day and certain seasons. Imagine the car communicates with other cars up ahead to know how slippery the road is.
Imagine all the people … living life this way! And none of them know how to drive anymore. I believe in separation of church and state, and robots and Drivers.
You assume the NHTSA can regulate something that most people don’t understand. They can’t. Regulation comes after everything blows up, not before it.
Apparently you have a point. Inaction should have kept it at bay. Why didn’t it require a new regulation (after they prove safety) to allow a new questionable technology to “play” in the public space?
Because what do you measure? What’s important? How does it work? Worse: humans drive by *breaking the rules* all. the. time. How do you decide when that’s okay and when it’s a violation? Tesla measures the “what-if” and monitors whether the system gets in accidents, and they seem to believe you get in fewer accidents with their system. What if they’re right? Is that the wrong metric? I don’t agree that dragging someone who’s injured is good behavior, but did anyone see this coming (a pedestrian is knocked under the car by another car hitting them)? Sure, we can look at it now that it happened, but what regulation would have prevented this, given we’ve seen this happen with human drivers?
The state of human driving skill is getting worse, and that *is* regulated. Regulation isn’t the golden cure. We’re going to see a lot of stuff happen before this gets nailed down, because they don’t even know what to look at, really.
“but what regulation would have prevented this,”
A regulation that states none of these assisted or full autonomy systems can go “live” till the companies developing them can demonstrably prove they will reliably perform flawlessly. I don’t think they will ever be able to, and can’t understand the logic of letting them loose on the public. As people get used to assisted driving features, they dull their senses. This is a Drivers enthusiast site! Yes there are many awful drivers, and they should have their license suspended, but having lane keep assist, and emergency braking is producing more of them.
Impossible. Humans can’t do it. Worse than that, if we program a system that obeys all laws, it will actually *cause* accidents because of the speed delta between it and non assisted drivers. Oops. Or the powers that be will have to define a new standard for acceptable driving, which will only happen after the heat death of the universe. Gotta get that enforcement revenue!
There will be issues, but will it prevent more than it causes? Using your metric means no progress, anywhere. You’re also positing a government agency that can produce a test that would prove it (The Standard). These are the people that can’t even approve new headlight technology that’s been proven elsewhere. Not happening. You’re going to have to settle for “good enough”, which brings us back to the metrics question again.
Citation needed on your thesis about driver aids. Seriously, what studies have shown that? Is it worse or better to have these systems now that everyone has a cell phone? That said, I’m an enthusiast. I’ve also driven across the USA many times, the first few times without even cruise control. I’ve seen people fall asleep driving. Hell, it’s happened to me! I’ll take the future with assists, and turn them off on the fun roads.
It would be nice to be able to summon Jeeves to take over for a long trip. Wanting something to be capable does not make it so.
In your opinion, because no metrics. I have both FSD/Autopilot and BlueCruise in my fleet. In my opinion, FSD is getting really close. I’ve put thousands of miles on both systems. BlueCruise has a nasty habit of cutting out randomly. FSD can get uncertain on surface streets but does really well on the highway. My only real gripe with it is that I’d like them to add the radar back in — I want something better than the human eye, not equivalent to it.
I’m not anti-tech/progress, far from it. Driving is a full attention requiring task that humans are remarkably capable of. I don’t trust or desire a computer to countermand control of my vehicle. I still get pissed when my recently acquired automatic pauses and downshifts when the increase asked for didn’t need it. Previous 40 some years of manual trans. cars. Good to hear you haven’t had any serious issues with your experience.
The interesting thing on both these systems is that you can override them with the brake, the accelerator, or the steering wheel. I can’t help you on the auto transmission I’m afraid… I’ve got the 10 speed on the powerboost and sometimes it makes me annoyed but usually behaves. Is it time to remove the overrides yet? I’d argue that it isn’t, but a case can be made that a human grabbing things at the wrong time is BAD. I recall the stories from Mountain Home AFB, and the pilots flying terrain guided approaches in the Rockies with FB-111s. They’d see the mountain coming at near Mach 1, panic and take over control. And auger into the mountain, because the computer can make the turn but a human isn’t fast enough. Hmmm.
My 7-speed automatic has been fine 99% of the time for the past two years. Twice the computer control module acted unexpectedly: while I was coasting/slowing as my lane was backing up, looking for a gap in the other lane, I got the gap, and its programming decided a downshift was needed. It has plenty of hp and torque, and it would have been smoother without that. This makes me think that until AI develops way beyond its current status, and can be incorporated into autonomous vehicle decision making beyond just following programs, these systems are not ready. You have experience with two systems, and I respectfully ask: has it ever tried to dodge a squirrel? Can it distinguish one from a small dog? An errant wind-borne plastic bag? Will it politely move over to the outside edge of the lane when an oncoming vehicle will need the room to clear a parked car? These are situational awareness issues that humans are innately good at, and that I suspect these systems struggle with.
I haven’t had a squirrel moment on FSD, but I have had a deer moment. The system proactively dodged, slowed, and urgently required that I take over. I was impressed. One of the newer updates has it more obviously adjusting its lane position away from intruding vehicles; it moves away from semis drifting over and away from stopped vehicles on the shoulder. I haven’t had an animal encounter with the Ford system, so I can’t comment on that, but it’s not as good at lane positioning as the Tesla system is, in my experience. Perhaps they’ll get there though.
Thanks. I expected the move away from intrusion, I don’t expect it to anticipate a likely need to preemptively do so in the scenario of a parked car on the other side, and an oncoming.
Also, Chevy may have realized that stopping an expensive experiment when there is no logical solution is cheaper than continuing it just so the computer geeks backing it can keep running their search for more information. If sci-fi teaches us anything, the scientists are as irresponsible as the management, just for different reasons.
Sci-fi by its very definition isn’t reality though. That would be non-fiction. How do the decisions of scientists and management stack up in that genre?
Yeah, that crazy Asimov, writing about sentient robots, going to the moon, tiny communications devices you can carry with you and use to contact anyone else who has one. Or Jules Verne with his imaginary submarines? Boats that can go underwater for a long time, and made of metal. Metal would sink. Or that Galileo character saying the Earth is round. Where do they get these crazy ideas? I hear the new sci-fi vehicle is a giant metal bird that can fly hundreds of people across the country in a few hours.
Had you actually read those stories, you’d know they include distant-future galaxy-spanning empires, FTL hops in that distant future calculated with slide rules, “Multivac,” a Univac-derived vacuum-tube computer (also in that distant future) at least half a mile long (~800 meters) and three stories high, human-like automatons, para-universe matter-swapping entropy-dumping perpetual-motion schemes, coal-fueled submarines, and a whole host of other very made-up, very unfeasible plot points, aka “fiction.”
As brilliant a writer as Asimov was (yes I am a fan) even he didn’t see the pocket calculator coming.
As to that “Galileo character,” the earth was well known to be round by the time he rolled onto the scene. Magellan AND Drake had already circumnavigated their way around it. Hell, the ancient Greeks not only knew, they managed to accurately determine its diameter. That the earth was a sphere wasn’t in doubt by anyone with half a brain cell. The moon is clearly a sphere, the sun is round, so why should the earth not be? Need more proof? The fact a “horizon” exists. Ships go over it; they come back with the people on board none the worse for wear.
No, Galileo got in trouble for publishing his work on heliocentrism, which his Catholic persecutors didn’t admit was real until 1992:
https://www.nytimes.com/1992/10/31/world/after-350-years-vatican-says-galileo-was-right-it-moves.html
“Where do they get these crazy ideas?”
I dunno, maybe by looking up? Heliocentrism is obvious to anyone who bothers to look with even a crappy telescope and (closer to the truth) whose business model isn’t dependent on it not being true.
Submarines? Already a thing.
Massive, power hogging vacuum tube computers? Oops! TBF those DID seem to be the wave of the future.
The rest of it? Fever dreams, weed, absinthe and LSD.
This is the right take, and not one that I had fully considered. I have 3 teen drivers in my house, with many of their friends around. One thing that most people are aware of is that teens have a higher number of accidents than more experienced adults. However an interesting observation I have made (no scientific evidence to corroborate) is that they get in a higher number of accidents that are not their fault than more experienced drivers and I think it is for a very similar reason to the one you highlight here. They don’t know when to not follow the rules and do something different (slam your brakes on a green light because another car is running the red light, for example). They don’t have the experience to handle non-standard situations. Unfortunately, while AI is very promising, it can’t currently gain experience, learn, and adapt as quickly or in the same manner as a teen driver can.
The part about wrecks not always being the teen’s fault is a very interesting take and I think there’s some merit to it. I almost got wiped out a month or two ago by a driver running a red light. Fortunately I saw them coming and realized they weren’t going to stop. A less attentive driver would have gotten t-boned right in the driver’s door.
That’s not entirely accurate. The software is really good at learning. But it’s not really good at driving, because it doesn’t pay attention to every little thing humans do, even unconsciously, and it has no instinct or intuition.
“That thing over there, IDK what it is, but it looks like bad news” is a very human reaction. It’s hard to build AI that emulates this.
I beg to differ. AI cars (theoretically) have the advantage of communicating with each other and with the grid on a level humans never can. As such, AI cars can (again, theoretically) get a far clearer picture of “that thing over there” that humans will never be able to achieve.
Whereas a human driver simply sees a wreck in the road, AI already knew all about it a few miles back: it knows the license plate, make, and model of the truck that dropped the couch; has all the information from the AI car that (unavoidably) hit the couch, becoming the wreck; and has the information from all the cars that saw the wreck happen, plus the preceding cars that are now moving around the wreck while also mapping the debris field. AI even notified the correct emergency responders with a complete description of the situation, as well as the insurance companies involved, just after the wreck happened.
As a bonus, the AI DOESN’T “pay attention to every little thing humans do, even unconsciously, and it has no instinct or intuition.” AI isn’t going to be a selfish jackasshole by holding up the line, taking its sweet time to gawk at the carnage, maybe even whipping out a phone to call its friends and get some snaps for the ’gram. Instead our friendly AI is going to get all the pictures it needs from its own cameras as well as everyone else’s cameras while getting through as fast as the situation allows, so everyone gets where they’re going as quickly as safety and economy allow.
All of that is years away though.
Hence “theoretically”
But no one is actively working on such a system. It’s a cool theory that’s primarily pushed by Tesla-stans on Xitter, but while we theorize the actual companies working on self-driving algorithms are still revamping their systems to add the concept of object permanence. Just a couple of years ago I was shocked to learn that most of these “self driving” systems did not store memory of obstacles in the sensors’ field of vision – they reclassified every obstacle (and recalculated their expected path) with each sensor scan.
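To make the object permanence point concrete, here’s a purely illustrative toy sketch (my own invention, not any AV vendor’s actual code) contrasting a stateless per-scan classifier, which forgets everything between sensor scans, with a minimal tracker that keeps stable obstacle IDs across scans via nearest-neighbor matching:

```python
import math

def stateless_scan(detections):
    """Stateless approach: every scan starts from scratch, so each
    detection gets a fresh ID and nothing persists between scans."""
    return {i: pos for i, pos in enumerate(detections)}

class PersistentTracker:
    """Toy tracker: obstacles keep the same ID across scans if a new
    detection appears within match_radius of their last known position."""
    def __init__(self, match_radius=2.0):
        self.match_radius = match_radius
        self.tracks = {}   # track id -> last known (x, y) position
        self.next_id = 0

    def update(self, detections):
        assigned = {}
        unmatched = list(detections)
        for tid, last_pos in self.tracks.items():
            if not unmatched:
                break
            # Match each existing track to its closest new detection.
            best = min(unmatched, key=lambda p: math.dist(p, last_pos))
            if math.dist(best, last_pos) <= self.match_radius:
                assigned[tid] = best
                unmatched.remove(best)
        for pos in unmatched:
            # Genuinely new obstacles get brand-new IDs.
            assigned[self.next_id] = pos
            self.next_id += 1
        self.tracks = assigned
        return assigned
```

With the tracker, a couch detected at (10.0, 0.0) on one scan and (10.5, 0.2) on the next keeps ID 0 both times; the stateless version would happily treat it as a brand-new object every scan. Real systems use far more sophisticated matching (Kalman filters, learned association), but the gap this illustrates is the same one.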
Maybe I’m missing something but it sure sounds like Hitachi is on it:
“With regard to vehicle safety applications, Hitachi has also been developing other innovative technologies for CAVs, which includes the update of a digital map by automatic detection of road anomaly as well as control of suspension component using connected map data. As shown in a press release [5], with an advanced AI algorithm embedded inside of the stereo camera, road hazards such as pothole and road debris can be detected in real-time and shared with other vehicles. And once a vehicle receives the information of an imminent road hazard, a real-time warning could be sent to the driver via navigation system where speed and steering of the vehicle can be controlled automatically or suggestion can be provided to the human driver.”
https://www.hitachi.com/rd/sc/aiblog/023/index.html
To be possible it seems like this idea would have to be government mandated. There’s no way a dozen or more proprietary systems could communicate like that quickly and without error.
If that’s the case so be it. Government mandates aren’t inherently a bad thing.
No, but they’re not popular with the corporations to whom they would apply and who coincidentally are major campaign contributors to the politicians who would write and enact such legislation. If we get what you’re talking about there’ll be a fee for every “out-of-network” communication. Why build something and not make it a walled garden where you can monetize every possible use of the technology?
What out of network communications are you talking about? These are basically automatically generated text messages. Most local text messages are unlimited regardless of the carriers involved.
I mean that manufacturers would find a way to charge for every interaction with a vehicle made by another company. I was using “out-of-network” as an analogy. Or one of them will build out the fixed-infrastructure side and charge high fees for the competition’s vehicles to use their equipment.
Maybe. I can still text someone else on another network as much or as little as I want for $15/mo though. Chances are they’re not paying anything extra to get my texts either.
Good points, but I think it’s more likely that, with limited experience, they only know “if X then Y.” After a few close calls they learn “if Y, then look out for A, B, and C.” Similar points, but my money is on the teenager over AI. Too many dead bodies before AI has a large enough sample size, and I don’t know if the coders are even able to codify that human safety is a priority, or if the AI can change that because it’s independent. It’s like sports video games where you prioritize speed, height, strength, etc. What percentage is survival versus minimizing vehicle damage?
I just wish AVs were developed for mass transport first. Fixed-route buses, especially the ones with segregated lanes, are much more predictable and better for testing them, and it could help with the real demand for more frequency in public transportation.
ha ha public transportation! Are you a commie? Why squander such awesome tech on the public good?
You almost got me! Before your last sentence, I thought you were serious.
Time for more coffee, apparently.
I have had these same thoughts all along. In a perfectly designed road system you can design an AV to drive perfectly. But an AV is just a computer that adheres to its programs: if A happens, do B. And in all of the previously described events, I bet no one programmed the events. “Hey, you hit a body that was flung in the way by another vehicle: do X.” You can’t foresee every eventuality, let alone predict the proper response. However, we have had both drunk and irresponsible drivers drag bodies and cars for miles, so AVs aren’t any worse. The only thing that could solve this is a real AI program that could actually consider all options and arrive at a solution for a non-programmed situation. Given how poor AI is in its current state, I doubt it will ever happen, and I also doubt a true robot revolution will ever happen. Now, an evil villain creating a virus to take over an AI to destroy the stock market, or interrupt food or medical delivery or emergency response, for a fortune is the most logical problem we will encounter. And it is far more deadly and harder to fight, especially since no one is attempting to fight it.
We already do. Driving tech isn’t just decision trees. It can handle unknown conditions, just not as well as we do.
Hey I agree 100%
“Are they ranting around, waving a machete… talking about the Night Weasels that follow them everywhere and rat on them to the FBI and NFL agents hiding in every cactus…?”
So are you saying you wouldn’t stay to help me with my car trouble?
I’m going to give the short version of a story: I got my truck stuck in a National Forest, about 1/4 mile from a road with a chance of someone driving by. So I loaded up some equipment, stood next to the road, and built a small fire. The equipment consisted mainly of a MAXX Axe. You know, a GIANT axe. I stood there for about 15 minutes, then started thinking. No one in their right mind is going to stop for a single person standing on the side of a Forest Service road whose only possession is a GIANT axe (the truck wasn’t visible from where I was standing). So I put the axe back in my truck, and about 30 minutes later a nice couple stopped and gave me a ride outta there, even though it was opposite to the direction they were traveling.
Perhaps focus on applications that don’t require the high levels of situational awareness that comes with navigating a dense city? Mining trucks, long haul semis, agricultural equipment, shuttle buses seem like the better starting points that people are working on. Personally, I am really looking forward to semi trucks going autonomous, given the amount of reckless and dangerous driving I see from class 8 semis with human drivers.
As far as a human driver stopping immediately and not dragging the hit pedestrian, though, I disagree on this one. I’ve certainly heard stories about human drivers hitting people and not realizing it or not caring at all (such as the hit-and-run driver who caused the situation in the first place), so I can’t fault the AV for this latest incident. At least it slammed on the brakes before hitting the person; I wouldn’t be surprised if a human driver would have simply kept going and killed the person, or at the very least not had the reaction time to minimize the impact.
Great, but once you have a computer-controlled vehicle with no human oversight, you have a weapon easily taken over to do anything the hacker wants. A 40-ton truck driven into a crowd or building? Twenty 40-ton trucks slamming into a building? Destroyed as easily as the twin towers on 9/11. Ensuring basic operational safety requires more computer equipment, which translates into more opportunities to hack the vehicles and create chaos.
Your fear has some basis, but relative to other hazards that we don’t really think about too much, it’s a very high bar for the bad guys to achieve. Take a look around you and see how vulnerable much of our infrastructure is for some pathological idiot to exploit. Far easier to attack (fill in the blank) than some James Bond genius running autonomous cars into crowded areas. And there’s an upside to autonomous driving, in that the presently unsafe drivers might be mitigated. I’m not in favor of drunk driving, but maybe autonomous cars could cut down on that slaughter we seem to tolerate every year.
There will always be the drunks and poor drivers, but we are talking a few thousand dollars to overcome the facts. But a hacker making tens of thousands, as they are already demanding, is a realistic next step. We need to capture or destroy any hacker.
Who’s waiting for AI to do that? Those planes were flown into those buildings by HUMAN pilots. Finding a suicidal someone who can drive a truck is even easier than finding someone who can fly a plane.
Hell old people drive into banks all the time! Just wire up a Buick and let Grandma do the rest.
True, but consider if you will an entire city, state, or country with millions of thousand-pound bombs at your control, so precisely controlled you can put each one on the exact location you want, without worrying that anyone loses their nerve or is overtaken by a mob of brave people. You can take out cops, hospitals, fire departments, and use others on gas stations, power plants, cell towers, anything you want.
Depends on the target.
Those big concrete planters out front of a lot of buildings aren’t just to look pretty. They are specifically there to thwart exactly the situation you are describing, except with a human driver.
True but a big honking electric semi could make a mess of one of those.
Overpass support columns. Drop a few highway overpasses, and watch the chaos. Just one of those paralyzed a good chunk of Pittsburgh (I think) and it was a genuine accident.
That said, planning is hard. Conspiracy theorists seem to depend on two contradictory assertions:
1. The powers-that-be are too incompetent to achieve their desired ends by legitimate means.
2. The same powers-that-be are able to plan complex operations, maintain perfect operational security, and execute those complex plans to perfection, without leaving a trace of evidence.
Sorry, rant complete! (I should really save that one as a text file; it would cut down on repeated typing.)
Again depends on the building. Some of those building were designed to resist armed attack by tank forces.
High-profile, important ones built within the past 70 years or so tend to have flights of stairs, little to no run-up to the front door (e.g. a downtown roadway grid), flagpoles, trees, bollards, big decorative rocks, maybe even a sculpture or historic fighting vehicle on a pedestal in the courtyard out front. That’s no accident. Even a security-controlled parking lot filled with employee cars and/or perpetual gridlock would be an impediment to a suicide semi.
And that’s what you can see.
If I were given a free hand to protect an important building I’d do all the above as well as layers of recessed, instantly deployable tire spikes, layers of hidden recessed bollards, ex-SF security armed with RPGs or anti-tank weapons, tank traps, security cameras everywhere to monitor all incoming vehicles, radiation sensors, truck scales, chemical sniffers including dogs, and that’s just what I, a layperson, can think of. Some of those have been SOP for hardened targets like the Diablo Canyon nuclear power plant for decades.
Feeling lucky punk? Well do ya? Go ahead, make my day!
Speaking of nuclear power plants attack from the air was also kept in mind. US nuclear power plants in the 1960s were built with a 9-11 scenario in mind, only with a 707 as the hijacked airliner because that was the biggest non military thing in the air at the time.
There’s a reason Al Qaeda didn’t use a fleet of suicide semis. They certainly had the means, and that would have been a LOT easier to arrange than hijacking a few airliners, so why didn’t they? They could have done that as a precursor to the attack from the air to sow even more confusion and delay. The damage could have been a lot more widespread too, coast to coast.
IMO the reason was it wasn’t worth the trouble.
Very well-reasoned argument. I like the employee parking lot point. It made me realize: take over the AV cars driven by employees after they get passed through security.
To do what exactly? An employee’s car is even less likely to be able to damage even a basic hardened building beyond chipping some concrete and bending a staircase handrail.
As far as I can see the worst you could do is turn the lot into a demolition derby. Sure you might take out a few unpaid interns before the parking lot clears and wreck a bunch of cars while trapping everyone else inside but not much more. Security will call for help on hack proof hard lines, the national guard will be deployed to create an evacuation corridor and/or airlift out the executives from the roof while the rest of the employees are used as bait to keep the cars distracted.
Eventually someone will bring a weaponized EMP generator or manage to get a kill command through to end the fun. All in all the plot of a B grade thriller but not much of a terrorist’s master plan.
Unless of course I’m missing something. You seem to have thought this out more than I have.
So friend, what’s YOUR evil plan?
Aah yeah, place a few bricks of C4, or better yet thermite, in a zombie employee’s car overnight. Even with company security, they are not doing a deep-dive security check of upper management’s cars every day.
My plan, if I was a disgruntled person, would be to take over a few AVs and test by having them block an intersection. Then program a variety of cars and public transportation to stop on the tracks and get hit by BART. Crash several with bombs into a mall or sporting event or cell towers. Have a few dozen go crazy on the beltway. They don’t all have to work, just enough to convince most people it isn’t an accident. All of a sudden commerce shuts down and people stay at home crapping their pants. After a month many businesses shut down. Maybe even a few banks default. It doesn’t matter if it doesn’t work the first time, just do it again randomly. If TV shows taught me anything, it’s that hackers get caught by tracking the IP address. So: preprogrammed events, the evil code on a thumb drive, computers bought used or built yourself. Destroy each computer after use, like a burner cell phone. And this is thinking about it for 10 minutes after binge-watching NCIS, the original, not the copies.
“Aah yeah place a few bricks of C4 or better yet thermite in a zombie employee’s car overnight”
Why? To put a crater in some rando parking lot? Seems like a waste of good explosive.
“Then program a variety of cars and public transportation to stop on the tracks get hit by BART”
Well good luck with that. There are no level crossings across BART tracks. The third rail disallows them.
https://urbanrail.net/am/snfr/san-francisco-bart.htm
As for the rest of your plans I have a better idea:
Are you ready? Are you sure?
OK.
Just one word:
Sideshows!
Save yourself a lot of hacking hassle and talk a few brainless mouthbreathers into doing sideshows all over town. No intelligence, artificial or otherwise needed at all! You can shut down the entire Bay Bridge for hours! Throw downtown SF into gridlock! Tie up 880 till the cows come home!
Buuahhaaa!!
Oh wait, that’s just another weekday in the Bay Area….
Yawn.
“If TV shows taught me anything
…
And this is thinking about it for 10 minutes after binge-watching NCIS, the original, not the copies.”
Ah now I see the problem.
As someone who has been professionally annoyed by the CSI effect, I can assure you such TV shows have taught you nothing. If anything, they’ve done you wrong. Try NOVA instead, maybe a few Great Courses. Not as sexy, but a lot more accurate.
Better yet, try interning in a DA’s office or a crime lab for a while. I can assure you what you see on TV is very different from the crushingly boring, chronically frustrating, underpaid, far less hottie-dense, very non-sexually-charged realities of those jobs. Make sure to bring your antidepressants, you’ll need them!
(If it makes you feel any better an uncomfortable number of people who really should know better don’t because of chronic brainwashing by such awful TV shows.)
A lot of those require a great deal of situational awareness. A lot of these problems stem from the fact that the AV boosters underestimate the number of variables that crop up at a moment’s notice even in theoretically “simple” environments.
But you’re not wrong. I thought one of the best early implementations of autonomy was the Local Motors (RIP) Olli. It was a low speed people mover that operated along fixed and easily controlled routes. Basically a train with lower infrastructure demands and more flexibility.
And probably, as a low-level, simple vehicle, it has some level of secure-area access?
As with many problems similar to AVs and autonomous driving, the challenge isn’t the first 80%; that’s relatively easy: highway miles with no traffic, basic grid streets with little traffic, or back roads where it’s simply following the lane. The challenge is entirely in the remaining 20%, the edge cases. Exactly like Torch is saying: how do you program a car to say “oh crap, that Altima just threw this woman under my front tires, what do I do?”
To elaborate a touch more on the remaining 20% (or 10%, or 5%, depending on who you ask): the issue is these AV systems are all largely machine-learning driven. They have massive pools of data on which to draw, but at the same time, the vast, vast majority of these pools of data that tell the car what to do cover your everyday normal-weather, regular-traffic situations. They don’t include large samples of data on a hit-and-run, or someone or something being run over, because we want to avoid those things, and therefore an AV with no training on what to do WILL behave differently than a human most of the time.
I suspect (off assumption; I have no experience with AV companies’ operating procedures) that this is compounded somewhat by these cars still being in their infancy and, as we can see, on a short leash legislatively. As such, AV companies like Cruise don’t send their cars out trying to find the edge cases, so they have no training for them, and thus the cycle continues. The cars get trained on normal driving situations, they see fewer bizarre things, and as such the ML reinforcement has a harder time overriding that standard driving programming, causing extremely unfortunate behavior when things go sideways.
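Purely as a toy illustration of that data imbalance (the numbers here are made up, not anything from Cruise’s actual training pipeline): with a pool that’s overwhelmingly normal driving, a degenerate model that labels every frame “normal” looks nearly perfect on overall accuracy while catching exactly zero edge cases. One standard mitigation is inverse-frequency class weighting, so rare examples count proportionally more in training:

```python
# Hypothetical frame counts in a training pool: almost all routine
# driving, a tiny sliver of edge cases (hit-and-runs, debris, etc.).
frames = {"normal_driving": 999_000, "edge_case": 1_000}
total = sum(frames.values())

# A useless model that predicts "normal" for every frame still scores
# 99.9% accuracy on this pool...
majority_accuracy = frames["normal_driving"] / total

# ...while its recall on the events we actually care about is zero.
edge_case_recall = 0.0

# Inverse-frequency class weights (the "balanced" heuristic): each
# edge-case example ends up counting ~999x more in the training loss.
weights = {cls: total / (len(frames) * n) for cls, n in frames.items()}
```

This is why raw accuracy is a misleading metric for safety-critical classes, and why AV stacks need to deliberately oversample or upweight the rare, ugly situations rather than just collecting more ordinary miles.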
Don’t even get me started on Tesla’s “vision only sensing” goals and how absolutely absurd and unrealistic that is; at least most AV-focused companies have the sense to use LIDAR among other standard L2 sensing hardware.
I agree with the 80% because of the iceberg principle. But I totally disagree about it being machine-learning driven. Yeah, letting them operate allows the dangers to display themselves, but then it’s a roomful of computer and management people arguing over which one solution solves the problem. But as we humans know and computers don’t, there is seldom one answer that works, and even after a dozen if-thens, sometimes there is no right answer. And then there are also hackers taking advantage. Until we can identify and 100% stop every hacker (death, prison, etc.) in every country, this won’t work. Because it is the new cheap version of a WMD.
The reality is these systems are genuinely all trained via Machine Learning and Neural Networks. They send the cars out to gather data because it’s extremely important for an AV’s functionality to not only know what is happening, but what caused that thing to happen. This can really only be done when having a full model of all inputs/outputs of a vehicle mapped, recorded and then fed back into the ML algorithm.
AFAIK hacking is very much a concern and a possibility, but there have not been any major accidents/injuries/deaths, or even any cases of hacks occurring. There have been several deaths and accidents due to inattentive drivers running Autopilot/FSD/other OEM L2 systems.
The goal of AVs is noble, and very understandable, a fully automated vehicle grid WOULD absolutely result in a near elimination of traffic deaths of both motorists and pedestrians, but the reality is there is an insane amount of time, money and energy that is still needed to get there.
I agree with you on most of it. But it is not machine learning, it is machines gathering information for humans to interpret and analyze and possibly create the perfect solution. And most likely there are dozens of different options, none perfect. Then hacking: if we can’t shut down currently operating hackers, we can’t stop them from the easier pickings here.
And sorry, I don’t believe you can create a 100% safe system. You have human programmers; they make mistakes. You have human builders; they make mistakes. You have bean counters who, history shows, will accept deaths and millions in lawsuits as acceptable to save 27 cents per car. You have weather, you have computers failing, bad production processes, products going outdated or flaws found after implementation. Remember when nuclear power was the answer to cheap, safe power that would solve all the world’s problems? It just doesn’t work that way. But I admire and appreciate the pure-hearted who are working toward that goal; we just can’t blind ourselves to people who will cheat, lie, and steal to help themselves.
Remember, even God, who is perfect, created the devil and kept him alive. So either we are imperfect, or we are meant to have more challenges in our lives than unidirectional problems.
“Remember even God who is perfect created the devil and kept him alive”
Pretty sure “the Devil” is just a straw man God uses to be an asshole:
Hey don’t blame me, it was that other guy!
Programming the routine driving is easy. It is the off-nominal situations where computers fail and humans just deal with it via TLAR: That Looks About Right.
Sometimes TLAR is breaking the rules (crossing the double yellow to go around an accident). Sometimes TLAR is everyone collectively deciding what ‘right’ is (freshly paved roads with no lane markings).
Humans are really good at “eh, that’s about right”. Computers can’t do that (yet).
They need to focus on learning how to interpret sarcasm. Bard and ChatGPT both suck at this, and I wouldn’t trust them to drive my car. It appears that Cruise and Waymo and Tesla don’t get sarcasm either.
I mean, look at KITT, dude: not only did he understand sarcasm, he was absolutely dishing it out in every other line, and he almost never got into any wrecks or traffic jams!!!
This may sound cold, but I don’t. Alert me, and I may choose to override the system, pull over and help someone, but I don’t want a machine that I paid for to have any interests in mind except for my own (within the bounds of the law).
Anything else risks a slippery slope to the far-fetched but plausible trolley scenario where the car determines it can save the lives of more people in an inevitable collision by driving itself and me off a cliff.
Yep. Even the “Three Laws” fail on that one. If the car decides the person by the side of the road will die if it does nothing, it’s stopping, no matter what. Hell, the machete-wielding nutbag from Torch’s example could probably just hold up a sign that says “I’m in danger” and fool it.
Once the AI AVs interpret the 0th Law of Robotics the trolley problem becomes less obvious. The robot could determine that you’re a harm to humanity in general and yeet you off a cliff and you’d never see it coming. I mean, if you’re an absolute dickhead everyone else will see it coming and shrug when it finally happens, but my experience with absolute dickheads is that they never realize that they’re absolute dickheads so it will be a surprise to them.
I see where you are going and think it is idealistic. I was living in California when there was a moderate protest after Charles Manson was denied parole. Now, in a world where “Hitler was Right” has been posted tens of thousands of times per year (using low estimates), you just have too many morons to count on common sense. So you need consensus. But consider: while consensus marginalizes the fringe, it also creates wacko groups of conspiracy theorists, and foreign governments exploit them to generate chaos.
And if you are the person dying in the middle of the road, then how are you feeling? Most likely the vehicles will be used to create a block to stop traffic until emergency crews arrive. It’s not likely we ask Billy the mail room clerk to deliver a baby or perform a tracheotomy.
I don’t fully understand the question, but it’s not as if a selfish AV will be any worse for other drivers or pedestrians than a human driven vehicle is now.
Well, the question was: if you are hurt in an accident, how would you feel as everyone drives by, ignores your dying body, and considers it a roadblock to getting to dinner? You are bleeding out, slowly dying, and no one stops. Or maybe people take pictures of you trapped under a car? You slowly die as people pass by, not wanting to get involved, but taking pics of you and your car accident. I am sure you are willing to die because you don’t want to interfere with other people’s dinner reservations. Or you are fine with bypassing injured people but expect others to stop for you.
How is giving an AV guidance not to stop without driver input any different than every car on the road now???
If someone is dying on the side of the road, my car isn’t going to stop for them now without me telling it to, so I fail to see how asking for the same from a hypothetical self driving car is worse?
You’re forgetting the fact that the human can call for responders who are actually qualified to help, a lot more easily (and safely), while the AI is navigating the carnage. If the human is in a position to help, the car can drop off the good Samaritan and either park itself in a nearby safe place or act as a roadblock, maybe even as an ambulance if the situation calls for it.
And where is the AV car that can do this? Can it also fly? Oh, I know, it is stored in a top-secret warehouse with the car that runs on water and gets 100 mpg. Of course, with the scarcity of potable water, it might still cost the same as gas to operate.
“And where is the AV Car that can do this?”
Ask Honda:
“Honda’s Driver Emergency Support System aims to assist drivers who become incapacitated. It uses a camera to monitor the driver and see if their eyes are open and if their head is up.
If the driver becomes incapacitated, the system will keep the car centered in its lane while also giving audible and visible alerts to get the driver’s attention. If the driver fails to respond, the alerts will get louder and the accelerator will be disabled to prevent sudden, unintentional acceleration if the driver responds in a panic. If that doesn’t get the driver’s attention, the car’s horn and hazard lights will be activated as the vehicle slowly comes to a stop. The system can also call emergency responders as the incapacitated driver could be suffering a medical emergency such as a heart attack or stroke.”
https://www.carscoops.com/2022/11/honda-unveils-next-gen-driver-assistance-tech-will-embrace-hands-free-driving/
https://www.motortrend.com/news/honda-sensing-360-elite-active-safety-features-preview/
It’s not much of a stretch to extend this concept to have the car automatically plan a route to the nearest hospital and drive an injured person there, if that’s the best option.
Oh and self parking cars have been around for quite a while now:
https://cars.usnews.com/cars-trucks/advice/best-self-parking-cars
But it isn’t doing what was mentioned.
Which part specifically?
The post says there is an AV that can drop off a person who can help, then park itself. That does not exist.
“The post says there is an AV that can drop off a person who can help then park itself. That does not exist”
Vehicles that can drop off humans and park themselves do indeed exist:
https://m.youtube.com/watch?v=juQw8q_NRa0
That’s fine by me, if me and mine are the ones your trolley is otherwise going to hit.
If the decision was yours to make, I’m pretty sure the self-preservation of human nature would have your trolley take me and mine out, even if the situation was 100% the fault of your negligence.
This is exactly why autonomous vehicles need to be an all or nothing endeavor. Either ALL cars are autonomous or none of them are. Human drivers and computer drivers cannot, and should never, share the road because one of those two sets of drivers will always have far more situational awareness and far more intuition than the other set, no matter how exactly that other set follows the rules of the road.
We, as a society, are pushing these so-called AIs and autonomous cars out into the public sphere far faster than they are ready to be.