More AI fail, or Skynet win?

planegray

Redwood Original
Staff member
Deep in the Pentagon, a secret AI program to find hidden nuclear missiles
"WASHINGTON (Reuters) - The U.S. military is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as track and target mobile launchers in North Korea and elsewhere."

https://www.reuters.com/article/us-...to-find-hidden-nuclear-missiles-idUSKCN1J114J


Trying NOT to be political... I'd rather discuss another instance of "AI" in our lives and where this is all going. I mean, AI cars can't even manage to navigate our streets without trading paint! :rant
 

gnahc79

Fear me!
I just finished the ML class on Coursera so that makes me an expert :laughing

my 2 cents

https://www.theregister.co.uk/2018/06/01/how_to_interpret_machine_learning

Singh was using a neural network to discern wolves from huskies – a task humans can do reasonably well, but nuanced enough for a neural network to find difficult. Nevertheless, the neural network produced better-than-expected results. He found that the neural network he was using was effectively cheating – it was gaming a flaw to produce the intended results without actually learning.

"Analysis showed that the network wasn't training itself on the nuances of husky face shapes or curly tails. It had capitalized on a flaw in the training data. One type of animal usually cropped up against a snowy background, so the model had classified pictures based on which pictures had snow and which didn't," Singh says.

Researchers created a program to repeatedly alter an image – it would hide parts of the picture, for example. "When we hid the wolf in the image and sent it across, the network would still predict that it was a wolf, but when we hid the snow, it would not be able to predict that it was a wolf anymore," Singh explains.
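Fun aside: what they describe there is basically an occlusion sensitivity test, and it's simple enough to sketch. Something like this in Python (predict_fn, the patch size, and the rest are made up for illustration, not taken from the article):

import numpy as np

def occlusion_map(image, predict_fn, patch=32, fill=0.5):
    # Slide a grey square over the image and record how much the
    # "wolf" probability drops when each region is hidden. Big drops
    # mark the regions the model actually relies on (snow, in Singh's case).
    h, w, _ = image.shape
    base = predict_fn(image[None])[0]  # baseline wolf probability
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            hidden = image.copy()
            hidden[i:i + patch, j:j + patch, :] = fill  # occlude this block
            heat[i // patch, j // patch] = base - predict_fn(hidden[None])[0]
    return heat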
 

Kurosaki

Akai Suisei - 赤い彗星
Trial and error.

Skynet will get it right eventually.

The T-800 Terminator didn't happen overnight. :twofinger
 

gnahc79

Fear me!
God, the apocalypse brought on by AI can't come soon enough.

It's all good. It's not like someone is going to forget a decimal point, accidentally skew the inputs, and cause the machines to blast every azn male that looks Korean, a bit overweight, wears glasses, and has a funky haircut. Everything will be just fine...
 

Bowling4Bikes

Steee-riiike!
AI means we don't need to give it input for it to function, I think. Therefore, AI will agree... it doesn't need us. For anything.

Facebook shutting down its AI bots because they invented their own language.

Google's AI that's racing to identify everything contained in pictures.

it's all crazy creepy. And won't be stopped because if one company doesn't do it, others will.
 

gnahc79

Fear me!
For all of these machines being used today there's a training aspect. You have to feed them good data. Or else...
 

Hoppalong

Well-known member
No fears. It's quite simple:

Just get on AI's good side. Obsequiously compliment and praise AI as if it were the spoiled-brat offspring of wealth and privilege.
 

Bowling4Bikes

Steee-riiike!
For all of these machines being used today there's a training aspect. You have to feed them good data. Or else...

It may be possible that AI already has enough good data to extrapolate from. From what I hear, Google's AI business plan was focused on massive human data entry into their AI, but that need is now waning, for example.
 

ScarySpikes

tastes like burning
Most AI built today is task-specific. You build it and train it to do a certain task very, very well, and let it loose. There are very few companies even pursuing generalist AI, because honestly the value is not all that great compared to the obscene development cost it would have (if it's even possible with current tech, which is questionable).

If we do come to a time where generalist AIs are a possibility, then a lot of safeguards will need to be in place on them. Asimov's 3 laws will actually be a thing we need to pay attention to.
 

Climber

Well-known member
Most AI built today is task-specific. You build it and train it to do a certain task very, very well, and let it loose. There are very few companies even pursuing generalist AI, because honestly the value is not all that great compared to the obscene development cost it would have (if it's even possible with current tech, which is questionable).

If we do come to a time where generalist AIs are a possibility, then a lot of safeguards will need to be in place on them. Asimov's 3 laws will actually be a thing we need to pay attention to.
Asimov's '3 laws' are easy to understand in a story, but programming them into AI would be virtually impossible to make 100% effective; programming simply doesn't work that way, especially with something as complex as real AI.

I agree that narrow-focus AI is much, much easier than general AI; developing response patterns is always far easier than building the logic to let computers develop their own. You can apply machine learning to the former, but the latter becomes orders of magnitude more complex.
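To make that concrete: a narrow ML system, end to end, is not much more than a loss function plus a loop that grinds weights against it. A toy sketch (plain logistic regression in Python, invented for illustration) shows why there's no obvious line where a "three laws" check could even go; the behavior is just numbers that fell out of the objective:

import numpy as np

def train(X, y, lr=0.1, steps=1000):
    # Everything this "AI" wants is encoded in one fixed objective
    # (cross-entropy). There is no hook for a rule like "never harm
    # a human" -- the resulting behavior is just the weight vector w.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)  # gradient step on the loss
    return w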
 

rsrider

47% parasite 53% ahole
People don't understand that humanity is simply one link in the chain of AI evolution.
 

ScarySpikes

tastes like burning
Asimov's '3 laws' are easy to understand in a story, but programming them into AI would be virtually impossible to make 100% effective; programming simply doesn't work that way, especially with something as complex as real AI.

I agree that narrow-focus AI is much, much easier than general AI; developing response patterns is always far easier than building the logic to let computers develop their own. You can apply machine learning to the former, but the latter becomes orders of magnitude more complex.

True. There are a lot of similarities between the struggles autonomous vehicles are having and the problems a general AI would have. Though what a generalist AI program would even look like is difficult to understand in terms of programming. The reason task-specific ones work is that they all center on setting a specific goal, defining specific but very limited rules, and then letting the program iterate on itself. This becomes infinitely harder when there is an arbitrarily large number of goals and the AI has to determine for itself what the goal is and what counts as success, let alone understand different rule sets for all these different situations.
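That "set one goal, a few rules, iterate" recipe fits in a dozen lines. A toy hill climber in Python (the string example is made up, obviously nothing like a real system): the programmer hard-codes the definition of success, which is exactly the part a general AI would have to invent for itself across arbitrarily many situations:

import random, string

def hill_climb(start, score, mutate, steps=20000):
    # Task-specific AI in miniature: one fixed goal (score), one tiny
    # rule set (mutate), and iteration does the rest.
    best = start
    for _ in range(steps):
        trial = mutate(best)
        if score(trial) > score(best):  # the single, fixed notion of success
            best = trial
    return best

target = "wolf"
score = lambda s: sum(a == b for a, b in zip(s, target))
mutate = lambda s: "".join(random.choice(string.ascii_lowercase)
                           if random.random() < 0.25 else c for c in s)
print(hill_climb("aaaa", score, mutate))  # converges on "wolf"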
 

Bowling4Bikes

Steee-riiike!
Right now, knowing how far places like Google and Amazon have gotten with AI, I'm numb thinking about how far the DoD has gotten that we don't even know about.
 