Today I watched a documentary on AI. Hosted by Michio Kaku, the show NEXTWORLD has several episodes on different future technologies.
The first episode is about predicting the future, with a big chunk of it devoted to AI.
I find the claim by Kaku and others on the show, that "by 2045, computers and their AI core will be as smart as a human and be able to process information better and faster than a human brain," a disturbing statement of near-religious faith in science.
Computers may become better and faster, but they will never truly replace the human brain. AI has no emotions and no ability to take alternate actions on a case-by-case basis. It simply works by cold, blunt logic.
Intelligence and computing speed need to be tempered and filtered by the human brain. Without that, can an AI even allow itself to override logic and make allowances for mitigating issues like abnormal human behavior? Or follow ethical standards instead of logic alone?
If not, then cold logic would say that following "survival of the fittest" down to the smallest detail is the way to go.
But human emotions, empathy, and compassion do not allow us to do this. For example, we do not euthanize babies with Down syndrome, and we do not intentionally let the weak go hungry or uncared for until they die. We have built systems and bureaucracies that override things like survival of the fittest and provide support for those who, left to logical decisions alone, would be allowed to die.
Or would an AI system in a nation's government simply choose to obliterate a foreign nation that keeps making hostile moves and is an overall nuisance to world peace (places like North Korea come to mind), killing all of its inhabitants? That would be the logical use of our nation's military and economic power. Stringing a rogue nation along with peace talks, sanctions, etc. is illogical and rarely productive. Would an AI in charge just choose to nuke the place at the start?
The show also gives the example of a California town's police department. The police use a predictive AI system that indicates where a crime is likely to happen. Great! So what does the town do? They have their officers patrol only in those predicted hot spots.
So what happens when a crime the system did not predict occurs outside those zones? What about when criminals hire a hacker to either learn the patrol zones, or game the data so that the AI sends the patrols to a completely bogus zone, leaving the rest of the town unpatrolled? Experience should tell us that nearly every recent tech advancement has been followed by a way for criminals to exploit it. Why would this system be immune?
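To make the data-gaming worry concrete, here is a minimal sketch (the zone names and the ranking rule are my own hypothetical assumptions, not how any real police system works): if a predictor simply ranks zones by how many past crime reports they have and patrols only the top few, then injecting bogus reports for a quiet zone is enough to pull patrols away from where crime actually happens.

```python
from collections import Counter

def predict_hotspots(reports, k=2):
    """Hypothetical hotspot predictor: rank zones by historical
    report counts and patrol only the top-k zones."""
    counts = Counter(reports)
    return [zone for zone, _ in counts.most_common(k)]

# Genuine crime history: most reports come from downtown and the docks.
history = ["downtown"] * 8 + ["docks"] * 5 + ["suburbs"] * 1
print(predict_hotspots(history))   # patrols go to downtown and the docks

# Poisoned data: attackers file bogus reports for a quiet zone,
# and the predictor obediently redirects patrols there.
poisoned = history + ["quiet_edge_of_town"] * 20
print(predict_hotspots(poisoned))  # the docks are now left unpatrolled
```

Real predictive-policing systems are more sophisticated than a raw frequency count, but the underlying point stands: any model trained on reported data can be steered by whoever controls what gets reported.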
It is just too much faith in a fallible system. Yes, any system designed by man IS fallible.