Don't Believe the AI Hype!

fubar929

Well-known member
Somehow, I don't see some mesh network actually working unless there is some standard for valuing the data and how it is interpreted. These are all proprietary systems; which one, Apple, Google, Ford... is going to give up their own approach to go with one that isn't theirs?

Human drivers have been successful for decades without a mesh network. Why would an AI need one?

What an AI driver needs most is "big data": observations and sensor data from thousands or millions of human drivers. And that's going to be trivially easy to acquire! Major manufacturers like Honda will eventually wake up and start putting sensors in every vehicle they sell, even the ones that don't have any autonomous driving capability. Then, in exchange for giving you a free watered-down cellular connection or free Google Maps, they're going to beam all the data those sensors collect to a huge central server farm, where it will get processed, incorporated into their AI driving algorithms, and then downloaded to all Hondas with autonomous driving capability. No need to share data with anyone: Honda sold 5M cars worldwide in 2017, so they could easily be collecting data on a scale that Waymo and Tesla can only dream about.

As far as networks in general go, it's important to realize that standards are possible. They're the reason that an Android device can make a phone call to an iPhone and that my Amazon Echo Dot can tell my Samsung SmartThings hub to turn off the GE smart switch that controls the lights in my driveway. Standardization and technology licensing happen all the time!
 

berth

Well-known member
AIs are about reaction, not prediction.

As a "general rule", we humans have organically decided that a "2 second" following distance is a good rule of thumb that gives us enough margin to react to the unexpected.

This margin, roughly, covers the physics of the situation, the driver's intrinsic mental reaction time, plus the driver's mechanical reaction time once they have decided what action to take.

So, simply, 2 seconds should be enough for an attentive driver to see something bad, move their foot off the accelerator, and bury the brake pedal into the floor, at which point the passengers are in the hands of the car's braking and ABS systems.

Obviously there are driving, vehicle, physiological, and other conditions which can affect the wisdom of the 2 second rule.

One of my favorite little enhancements from Mercedes (and perhaps others now) is a heuristic in the car that measures all sorts of factors, a key one being how fast you get off the accelerator. How quickly we shut the throttle differs noticeably between panic braking and just slowing for a stop. When the system detects what it judges to be a potential panic stop, it pre-loads the brake system during the moment between your foot leaving the accelerator and burying the brake pedal, so the car stops faster. The brake system reacts quickly enough to actually do this in that window, and it's supposed to reduce braking distances by 10-15%, which is not nothing.
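In rough pseudocode, the idea is something like this (the threshold and the interfaces are made up purely for illustration; I have no idea what Mercedes' actual numbers or logic look like):

    # Sketch of a brake pre-load heuristic: if the driver snaps off the
    # accelerator far faster than in normal driving, assume a panic stop may
    # be coming and take up the slack in the brake system before the brake
    # pedal is even touched. All numbers here are invented.

    PANIC_RELEASE_RATE = 400.0   # % of throttle travel per second (hypothetical)

    class Brakes:
        def precharge(self):
            print("pre-loading brake system")

    def on_control_cycle(throttle_prev_pct, throttle_now_pct, dt_s, brakes):
        release_rate = (throttle_prev_pct - throttle_now_pct) / dt_s
        if release_rate > PANIC_RELEASE_RATE and throttle_now_pct < 5.0:
            # Full pressure arrives sooner once the pedal is actually pressed.
            brakes.precharge()

    # Driver went from 60% throttle to 0% in 0.1 s -> looks like a panic stop.
    on_control_cycle(60.0, 0.0, 0.1, Brakes())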

An AI's 2-second rule would be different from a human's. It can observe, decide, react, and activate the car's systems much faster than the meat sack in the driver's seat. So it might have a 1.5-second rule instead of a 2-second rule.

The goal of the car's driving system is to keep the car properly positioned so that it's always following its internal "1.5-second rule". That is, to ensure that the vehicle is surrounded by enough buffer that it will be able to react in time. With that capability, it has less of a need to be "predictive".
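The buffer math itself is trivial; the hard part is the sensing. A minimal sketch (the 1.5 s figure is my hypothetical AI rule from above; everything else is just illustration):

    # Keep-your-distance arithmetic: the gap you need is just the time-gap
    # rule converted to distance at the current speed.

    def required_gap_m(speed_mps, time_gap_s=1.5):
        return speed_mps * time_gap_s

    def gap_margin_m(current_gap_m, speed_mps, time_gap_s=1.5):
        """Positive -> buffer is big enough; negative -> back off."""
        return current_gap_m - required_gap_m(speed_mps, time_gap_s)

    # At 30 m/s (~108 km/h) a 1.5-second rule needs 45 m; a human 2-second
    # rule would need 60 m. A 40 m gap is 5 m short of the AI's rule.
    print(gap_margin_m(current_gap_m=40.0, speed_mps=30.0))   # -5.0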
 

295566

Numbers McGee
The goal of the car's driving system is to keep the car properly positioned so that it's always following its internal "1.5-second rule". That is, to ensure that the vehicle is surrounded by enough buffer that it will be able to react in time. With that capability, it has less of a need to be "predictive".

I don't think anyone in this thread has argued against the tech that exists, or will exist, that will reduce AI reaction time to much less than that of a human. On that, everyone seems to be in agreement.

However, the point that seems to be uncertain for many, myself included, is the AI's inherent lack of an ability to be, as you state, "predictive." The best way to avoid an accident is to avoid risky situations. This is especially true while on a motorcycle. When I'm riding and see a driver eating a burger with both hands, staring down at their phone, or turned around reaching for something in the rear seat, I am able to recognize that and determine that I want to distance myself from that car/driver, and brake or accelerate accordingly.

AI simply cannot do this. It will read the car's behavior as "OK" until it's not. While nine times out of ten there may be enough safety cushion to avoid a collision, or room on either side to swerve or maneuver out of the way, the downside is still there.

Ultimately the tech, even with this drawback, may be safer than human drivers behind the wheel (especially with the constant distraction of cell phones). I'd like to again point out that I think AI and self-driving cars will likely make the roads much safer... however, it's an interesting conversation to have regarding the comparison between a human's ability to make decisions and judgements with only partial information, and an AI's ability to make decisions much more quickly but only with a "complete data set."
 

Marcoose

50-50
When I'm riding and see a driver eating a burger with both hands, staring down at their phone, or turned around reaching for something in the rear seat...

I'm jealous. You evidently have far, far better sight than me. I only notice these things within 1-2 car lengths.
 

berth

Well-known member
However, the point that seems to be uncertain for many, myself included, is the AI's inherent lack of an ability to be, as you state, "predictive." The best way to avoid an accident is to avoid risky situations. This is especially true while on a motorcycle. When I'm riding and see a driver eating a burger with both hands, staring down at their phone, or turned around reaching for something in the rear seat, I am able to recognize that and determine that I want to distance myself from that car/driver, and brake or accelerate accordingly.

The reason you're making distance is so that you can react appropriately (especially if you're slowing down, rather than overtaking). Much like the AI "won't speed", it won't get impatient, and it has no ego. It has no problem being 3 seconds behind the car in front of it.

As riders, we overtake to put the problem behind us. We merge lanes to put more space between us. But everyone knows it's better to be ahead, or far behind, a crazy car than next to them.

Now, we can all visualize the scenario of passengers in an AI car yelling at the car to pull away as it "safely" follows the semi truck with bridge parts teetering on the trailer as the webbing straps come undone. The AI has to "wait" until the parts start hitting the ground before it can "react" to them. The passengers see the disaster waiting to happen; the AI is confident in its margin of safety and ability to react.

AI simply cannot do this. It will read the car's behavior as "OK" until it's not. While nine times out of ten there may be enough safety cushion to avoid a collision, or room on either side to swerve or maneuver out of the way, the downside is still there.

But we also know that most people do not leave enough cushion. Most people drive too close, and too fast. AIs arguably will be much better "behaved". I'm waiting for the post about someone caught at a two-lane stop light behind a pair of AI cars, both of which, on green, proceed to accelerate at the same rate up to the same speed and simply end up blocking both lanes. Ideally there's some "don't drive next to people" logic that will let the AIs stagger, but even then they may not leave a big enough gap to let someone pass. They'll be those Sunday drivers everyone hates. There will probably come a time when you can spot an "AI train" that naturally formed on a freeway, all lined up, evenly spaced.

Ultimately the tech, even with this drawback, may be safer than human drivers behind the wheel (especially with the constant distraction of cell phones). I'd like to again point out that I think AI and self-driving cars will likely make the roads much safer... however, it's an interesting conversation to have regarding the comparison between a human's ability to make decisions and judgements with only partial information, and an AI's ability to make decisions much more quickly but only with a "complete data set."

If and when the AIs start becoming dominant on the road, then the risk of sudden events should go down. Fewer red light runners. Fewer "3 lane changes to the off ramp" on the freeway. Fewer cars bolting out of driveways, fender benders, etc. etc. If there's anything I've learned over the years, apparently the only things people will risk their lives over are pulling children out of a burning building, and a left turn.

Mind, I should also mention that I'm an AI skeptic. I don't believe that it's "just around the corner". I think it's plateauing at high-tech cruise control and, perhaps, slow, neighborhood automated taxis -- in good weather. The hard problems are still hard, and bulk data is not necessarily the hammer to fix them. Modern AIs are still more high-tech slot cars than autonomous sensing bots. They still cannot "see".
 

NeilInPacifica

Well-known member
AIs are about reaction, not prediction.

AI will predict behaviors on the road, just like it does in other venues. Predictive analytics have been used for decades to understand who is going to buy what and when, by marketers and quants alike. Machine learning to understand when someone is about to change lanes isn't much different from the way you do it yourself. There are myriad "features" unfolding in the scene before you... the driver to your right is looking in his mirror, his car is drifting ever so slightly toward your lane. These are all features that an AI can detect and factor into its decision making. If someone is angry and yelling, jamming out to music, honking their horn, swerving through traffic, etc., all are features of the landscape that the AI will capture and associate, over many data points, with specific outcomes. It's a matter of collecting all the data and feeding it into the model.
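As a toy example of what "features in, prediction out" looks like (the features, data, and numbers here are all invented; a real system would learn from millions of recorded scenes, not six rows):

    # Toy lane-change predictor: a handful of hand-labeled examples just to
    # show the shape of the approach, not a real model.
    from sklearn.linear_model import LogisticRegression

    # One row per observed car: [checked_mirror, lateral_drift_m_per_s, turn_signal_on]
    X = [
        [1, 0.30, 1],
        [1, 0.25, 0],
        [0, 0.05, 0],
        [0, 0.02, 0],
        [1, 0.40, 1],
        [0, 0.10, 0],
    ]
    y = [1, 1, 0, 0, 1, 0]   # 1 = that car changed lanes shortly afterwards

    model = LogisticRegression().fit(X, y)

    # Driver just checked his mirror and is drifting toward our lane, no signal:
    print(model.predict_proba([[1, 0.20, 0]])[0][1])   # estimated probability of a lane change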
 

fubar929

Well-known member
However, the point that seems to be uncertain for many, myself included, is the AI's inherent lack of an ability to be, as you state, "predictive."

Mind-reading doesn't exist, nor does a "spidey sense" developed as a result of being bitten by a radioactive spider. So how do you, as a rider, predict what a driver is going to do? You observe that driver's behavior. You might do things like:

- Watch the driver's position within their lane; are they holding a steady line or weaving back and forth?
- Watch the driver's speed: are they maintaining a constant speed or is their speed erratic? Are they going faster than the flow of traffic, the same speed, or slower?
- Watch the angle of the front tires: are the tires pointed straight ahead or are they at an angle indicating an imminent lane change?
- Watch the following distance: is the driver tail-gating you or the driver they're following or are they leaving a reasonable gap?

Good news! An AI driver can do all of these things as well or better than a human.
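Strung together, those cues are just signals you can score. Here's a crude sketch of the idea (all the weights and thresholds are invented for illustration; a real system would be far more sophisticated):

    # Crude "how worried should I be about this car" score built from the
    # same cues listed above. Every threshold here is made up.

    def risk_score(lane_weave_m, speed_variation_mps, tire_angle_deg, time_gap_s):
        score = 0.0
        if lane_weave_m > 0.5:           # wandering within the lane
            score += 1.0
        if speed_variation_mps > 2.0:    # erratic speed
            score += 1.0
        if abs(tire_angle_deg) > 5.0:    # front wheels cocked toward another lane
            score += 1.0
        if time_gap_s < 1.0:             # tailgating
            score += 1.0
        return score

    # A car that's weaving, surging, and tailgating scores 3.0 -> give it room.
    print(risk_score(lane_weave_m=0.8, speed_variation_mps=3.0,
                     tire_angle_deg=1.0, time_gap_s=0.7))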

Of course, I'm sure you (or Schnellbandit) will mention some crazy scenario: what happens, for example, when a blind one-armed driver turns around to offer a bite of the hamburger they're eating to the 6-foot iguana sitting in the back seat?!?!?

Yes, if the car is directly in front of you, and it's during the day, and the windows aren't tinted, and the windows aren't dirty, and the back of the car isn't piled full of crap, and there isn't a rack full of bicycles mounted to the trunk, and you have perfect vision, it's possible you'll know that there's a blind one-armed driver feeding hamburgers to an iguana. Here's the thing, though: the computer doesn't need to know exactly what the driver is doing; it just needs to know that something inside the car has changed. Computing the differences between two images or two frames of video is already possible. Game systems like the Xbox Kinect were doing a reasonable job of recognizing head and limb positions (albeit within limited distances) 8 years ago, so being able to recognize the positions of the passengers in a car isn't out of the question. Not that it really matters: once the AI driver knows that there are "significant" changes happening inside the vehicle, it can "predict" that the driver may not be paying attention and take appropriate action... just like you would.
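Frame differencing really is that simple at its core. A minimal sketch with synthetic "frames" standing in for camera input (thresholds invented; a real system would do far more filtering):

    # Flag "something changed inside the cabin" when enough pixels differ
    # between two consecutive frames. Synthetic arrays stand in for real video.
    import numpy as np

    def cabin_changed(prev_frame, curr_frame, pixel_thresh=25, frac_thresh=0.02):
        diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
        changed_fraction = np.mean(diff > pixel_thresh)
        return changed_fraction > frac_thresh

    prev_frame = np.zeros((120, 160), dtype=np.uint8)    # quiet cabin
    curr_frame = prev_frame.copy()
    curr_frame[40:80, 60:100] = 200                      # driver turned around

    print(cabin_changed(prev_frame, curr_frame))   # True -> driver may be distracted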
 

clutchslip

Not as fast as I look.
Wow. Some of us are really enamored with Artificial Intelligence. I am guessing it might take several dozen gigabytes of calculations to do what a human does to make many driving decisions. Humans ignore irrelevant information, something a computer cannot do, because it does not know what is irrelevant. Humans can be predictive. It is not mind reading, but it is "spidey sense", because it is based on an enormous wealth of disjointed information that forms an opinion of action based on trillions of bits of information. Computers have no idea what is relevant and what is not relevant. Humans give them limited criteria to make decisions. And that is why machines suck at driving.
 

rudolfs001

Booty Hunter
Wow. Some of us are really enamored with Artificial Intelligence. I am guessing it might take several dozen gigabytes of calculations to do what a human does to make many driving decisions. Humans ignore irrelevant information, something a computer cannot do, because it does not know what is irrelevant. Humans can be predictive. It is not mind reading, but it is "spidey sense", because it is based on an enormous wealth of disjointed information that forms an opinion of action based on trillions of bits of information. Computers have no idea what is relevant and what is not relevant. Humans give them limited criteria to make decisions. And that is why machines suck at driving.

Humans can't drive when they're born, nor do they know what information is relevant or irrelevant. They learn over a long lifetime.

AI is the same way; the difference is that it learns much, much faster than a human and has a much more stable memory.

We're now in the very early stages of AI. Our best AI isn't even as intelligent as a toddler, but they are learning and growing, and fast. It won't be too long until they are smart enough to drive a car more safely than a human, and not too much longer until they are more intelligent than the brightest human.

It's no longer a question of "if", but a question of "when".
 

NeilInPacifica

Well-known member
Wow. Some of us are really enamored with Artificial Intelligence. I am guessing it might take several dozen gigabytes of calculations to do what a human does to make many driving decisions. Humans ignore irrelevant information, something a computer cannot do, because it does not know what is irrelevant. Humans can be predictive. It is not mind reading, but it is "spidey sense", because it is based on an enormous wealth of disjointed information that forms an opinion of action based on trillions of bits of information. Computers have no idea what is relevant and what is not relevant. Humans give them limited criteria to make decisions. And that is why machines suck at driving.
These views were valid when everything was programmed by people. That isn't how machine learning works: the model learns from data, and we can feed it way more data than any human will ever experience in a single lifetime. These models are not "true general intelligence", but they are very, very good at the narrow problem set being targeted... like driving a vehicle, finding tumors on a scan, or matching the bad guy's face from CCTV imagery.
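The difference in a nutshell: nobody writes the rule, the rule falls out of the data. A toy example (the dataset is invented just to show the shape):

    # Old way: a programmer hard-codes "brake if closing speed > X".
    # ML way: the threshold is extracted from labeled examples.
    from sklearn.tree import DecisionTreeClassifier

    # Feature: closing speed to the car ahead (m/s). Label: 1 = brake, 0 = don't.
    X = [[0.5], [1.0], [2.0], [4.0], [5.0], [6.5]]
    y = [0, 0, 0, 1, 1, 1]

    model = DecisionTreeClassifier(max_depth=1).fit(X, y)

    # The tree has learned a cutoff (around 3 m/s here) that no one wrote down.
    print(model.predict([[2.5], [4.5]]))   # -> [0 1]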
 