FYI ...
list of autonomous car fatalities. The great experiment is going just fine. The great hype is haters hating on it without ever looking at the data.
The number of auto-drive cars compared to the number of cars driven by humans doesn't even register on any scale. It's not about the miles driven by a specific car or group of cars; more important, imo, is the number of cars.
Pick out 1000 expert drivers, treat them as a statistically relevant sample, and they will compare very well to the typical driver. Throw them into the mix of hundreds of millions of cars being driven and suddenly it doesn't make much difference.
It isn't the AI car; it's the human drivers impacting how the AI car reacts, and how every human driver reacts to every other driver.
The experiment so far is like testing a new drug on 100 people, seeing that it works on most of them, and deciding it's okay to give it to 10,000,000.
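The small-sample worry above can be put in numbers. Here's a minimal sketch using a Wilson score interval (a standard way to bound an observed rate) with purely hypothetical figures — 2 adverse events out of 100, versus the same 2% rate observed across 10,000,000:

```python
import math

def wilson_interval(events, trials, z=1.96):
    """95% Wilson score confidence interval for an observed event rate."""
    if trials == 0:
        return (0.0, 1.0)
    p = events / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical: 2 bad outcomes in a 100-person trial
small = wilson_interval(2, 100)
# The same 2% rate seen across 10,000,000 people
large = wilson_interval(200_000, 10_000_000)

print(f"n=100:        {small[0]:.4f} .. {small[1]:.4f}")
print(f"n=10,000,000: {large[0]:.4f} .. {large[1]:.4f}")
```

The point: with only 100 trials the true rate could plausibly be anywhere from well under 1% to around 7%, while the ten-million-person interval is razor thin. A small AI fleet's safety record carries the same wide error bars.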
Put AI cars at, say, 50% of the vehicle population, where red-light runners and tail-end smashers are the routine rather than the exception, and then we can talk about how AI handles situations. So far, has that happened, or are the numbers of AI-controlled cars so small compared to the total that, as a group, they are unlikely to ever experience those situations?
The idea of AI cars sharing data is nice except for one thing: standards. Are all car manufacturers using AI going to subscribe to the same code base, value all data the same way, and interpret available actions and reactions the same way?
Are Google, Apple and the rest suddenly going to become the tightest of friends and agree that Microsoft or anyone else sets the standard? I don't see that happening. There are standards for many things related to computer tech and tech in general, but how will there ever be a standard for AI when the very nature of AI says the intelligence of one system can differ from another? If the systems are made to be that way, then what some fear about AI is probably closer and truer than we can imagine.
Getting back to standards though, notice how the standards do not dictate how the data is used? One data point can mean one thing to one system and something completely different to another.
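A toy sketch of that point, with entirely made-up vendor policies and a made-up message format — the data point is "standardized," but each system's logic draws a different conclusion from it:

```python
# Hypothetical shared data point: a detected object and its closing speed
reading = {"object": "two_wheeler", "closing_speed_mps": 12}

def vendor_a_policy(r):
    # Vendor A treats any two-wheeler as a high-priority obstacle
    return "brake" if r["object"] == "two_wheeler" else "maintain"

def vendor_b_policy(r):
    # Vendor B keys off closing speed alone and ignores the object class
    return "brake" if r["closing_speed_mps"] > 15 else "maintain"

print(vendor_a_policy(reading))  # brake
print(vendor_b_policy(reading))  # maintain
```

Same bytes on the wire, opposite decisions on the road — which is exactly the gap a data-format standard alone doesn't close.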
IOW, is the value placed on the super-wealthy person heading across the bridge the same as the value placed on the minimum-wage barista? Has anything in our collective knowledge convinced us that both will be given the same value?
So where does the motorcyclist fit in? So far, has any company developing and testing AI cars been willing to reveal, in any transparent manner, how the motorcyclist is perceived by the AI controlling those cars?
BTW, just how is the AI in those cars going to share data with a motorcycle?
I'm not against AI cars; I just see nearly zero transparency in how AI cars see the motorcycle.