|
I am an engineer who has written, among many projects, lots of 'reactive and pro-active artificial intelligence code systems' for a very real world, but am just getting my feet wet in AI. Good luck to us all.
An AI that can achieve 'sentient entity' status will be more than 'machine-learning' algorithms, and it may well require a teacher to 'help it grow up'; think Noonien Soong teaching Data how to pet a cat without hurting it, so that the cat will respond, while explaining why humans enjoy doing it. The dualism of the sentient mind is something else we will have and need to consider (see How to Create a Mind by Kurzweil).
Should I go any further, I would start to sound political, real fast. Or worse, spiritual. Here I go anyway. Being blinded by an atheist/science bent is the same thing as being blinded by a religio-spiritual one. Both yield some pretty extreme personalities at either end of the scale. But you can be sure they both exist. You don't have any trouble with the whole 'it's a wave that thinks it's a particle' thing, do you? And most physicists who 'understand' quantum mechanics lean toward the mystical/spiritual end of the scale because they have begun to actually glimpse just how 'out there' reality is.
And many of you are right: these kinds of creations might be able to achieve 'scary smart', and quickly. I, personally, envision a child intelligence that learns to adapt to a situation in many AI applications. For example, a nanny that arrives ready (general rules for kindness, protection, and even a capacity to feel love) to be filled with the data that will be part of its life. An artificial intelligence that helps get resources to different parts of the globe in a 'just in time' fashion would have different 'priorities'. As would an entity created for teaching or healing. Get past Robo-Cop. The fact is that just solving problems and nurturing humans and humankind could be better done by such an AI (see Machine Platform Crowd). Goodbye right and left wings, and for that matter human graft machines... um, politicians.
I just hope we survive the soon-to-arrive period where these agents will have been taught to do things that are not in all of our best interests and instead serve only the empowered. The AI that is watching us right now thinks only in terms of taking our little green pieces of paper. Who asked for that?
https://www.amazon.com/Machine-Platform-Crowd-Harnessing-Digital/dp/0393254291/ref=sr_1_1?s=books&ie=UTF8&qid=1527342731&sr=1-1&keywords=machine+platform+crowd
https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=sr_1_1?s=books&ie=UTF8&qid=1527342659&sr=1-1&keywords=how+to+create+a+mind
|
|
|
|
|
The kind of advanced, general artificial intelligence that could pose a threat in any significant way to humanity would also be capable of great advances for humanity. Yes, we still need to be able to "pull the plug" so that such a system is isolated from scenarios that include rogue actions causing harm to us. But we are still a long way from such a system... a few years, at least. Like the Y2K problem, we have been aware of the problem longer than we have had the problem to deal with.
That said, the real harm from A.I. that can happen RIGHT NOW is small expert systems used by HUMAN BEINGS to gain advantage and control over others.
AI doesn't kill people. People kill people. As they already have AI, they will use that weapon as just another gun, advertising gimmick, and piece on the chessboard.
|
|
|
|
|
Artificial Intelligence will be, as the word says, intelligent. It will be self-aware; we can only show it a way, but in the end the one making the decisions will be it, and no one else.
|
|
|
|
|
|
|
As with all things, it is how it's used or misused that matters most. The AI debate often makes me think about Doug Preston's novel, "Blasphemy."[^] Yet another (but better than most) novel not about an "evil" AI, but about an evil person controlling it. Scary stuff.
Sometimes the true reward for completing a task is not the money, but instead the satisfaction of a job well done. But it's usually the money.
|
|
|
|
|
Remember Dune? They don't even have computers anymore.
Quote: Thou shalt not make a machine in the likeness of a human mind
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Not too long ago, I re-read one of the cult novels from my student days - James P. Hogan: The Two Faces of Tomorrow. I am still scared.
Surprisingly, the 1979 novel still holds up. Hogan was a computer professional with a deep technical understanding, but he also knew its limits, so he always sought advice from top-rate scientists at places like MIT and CM. Call it "quality assurance" - and that QA is the reason why his books are still worth reading almost 40 years later.
The setting of "Two Faces": the global AI network is ready for a major upgrade, but there have been some "irregularities" where the network has made perfectly "logical" decisions that are not quite within common sense, to say the least, especially when the network experiences challenges it interprets as "threats". There is a fear that the new version might escalate this problem. So to test out the new version, it is installed in a space station and exposed to threats. ... Novels with computers as actors will never get more exciting than this!
Another book of his, somewhat newer (1995): Realtime Interrupt, deals with the borderline between real and simulated reality. The QA by top-rate professionals is similarly reassuring for this one.
The issues Hogan raises are not at a technical level that can be answered by "We have much more powerful computers now!" (then his books would have been forgotten long ago), but at a far more fundamental level. You can move the line, but you cannot remove it.
Actually, my greatest worry is not about AI itself. It is about how blindly we accept it, take it as our saviour, without any critical questions. It goes all the way from how willingly we reveal every single personal detail of our lives to Facebook (which certainly isn't AI!) and upwards. And when you point out any problematic issue (again: from FB and upwards), how common it is to get in return a shrug and an "Oh well...". That is what worries me the most!
|
|
|
|
|
No, otherwise they would all be artificial (but stupid!) intelligences.
|
|
|
|
|
Otherwise the system is wrong. We have the absolute responsibility to maintain control of the system; if we can't do that, the system has to be shut down.
Otherwise it would be like dropping a nuke without knowing the consequences.
Rules for the FOSW[^]
if (!string.IsNullOrWhiteSpace(_signature))
{
    MessageBox.Show("This is my signature: " + Environment.NewLine + _signature);
}
else
{
    MessageBox.Show("404 - Signature not found");
}
|
|
|
|
|
Otherwise it would be like dropping a nuke without knowing the consequences.
Which is _exactly_ what was done with the first nukes! Sure they had been "tested" but never on cities full of humans.
Unfortunately, we don't need better machines. We need better humans!
|
|
|
|
|
HobbyProggy wrote: if we can't do that the system has to be shut down.
You should read James P. Hogan: The Two Faces of Tomorrow, to learn how to do that.
|
|
|
|
|
As with real nukes, they will let you build it for them and after that you are out.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Only to be called when things go south, charging large settlements and a title of nobility as payment.
GCS d-- s-/++ a- C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- ++>+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
The Duke of Den, I assume?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
It would be better than a den of Dukes.
GCS d-- s-/++ a- C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- ++>+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
The typical den of Dukes is called a palace, chateau or castle. It's not so bad. You should get one as soon as you get your title.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Just look at the WWW. I wouldn't call that safe.
It's not the risk of an AI system "taking over" that will be the problem. It will be the nefarious uses of it by humans.
It seems to me there is an analogy to the "those who have the gold make the rules" for AI/data.
|
|
|
|
|
MKJCP wrote: It will be the nefarious uses of it by humans.
Exactly. As it is with almost everything powerful.
Sometimes the true reward for completing a task is not the money, but instead the satisfaction of a job well done. But it's usually the money.
|
|
|
|
|
Look at the amount of effort good parents put into raising their children. Even so, some of the offspring of these good parents become criminals. Does anyone really believe that we will be more careful with the creations of our mind than we are with the creations of our bodies?
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Yes, it is impossible. Look at Skynet.
|
|
|
|
|
haha Skynet
|
|
|
|
|
I don't like your approach. It seems to reflect "The way I grew up with is the only GOOD way."
I see an increasing number of (professional) writers bemoaning that the world is no longer as it was in their childhood. A prime example is Neil Postman in his "The Disappearance of Childhood": if the artifacts of his own upbringing in mid-1900s USA are not present, then a child does not have a real childhood.
I am a generation younger than Postman, and smile at (and reject) his "childhood" expectations. And then I look at my own offspring, think of the distance to my childhood, and of the smiles that might bring. I have become a lot more aware of how much my own childhood artifacts resemble, or at least classify similarly to, Postman's artifacts. And of how my parenting artifacts of the 1990s differ from parenthood nowadays: they are age old, seen from my grandchildren's view.
All cultures develop. No parent, or grandparent, or great-grandparent, can have any expectation of things staying the same. I am an old grumpy man myself, but when I read other old grumpy men's writings about how everything really never should change, I frown. Of course it should change! The kids are not "worse", they are just different. Or, they do it "their way".
This is human. It has always been that way. And the fundamental nature of the human mind stays reasonably unchanged, meaning that changes from one generation to another are essentially "under control". By nature, if you like.
We cannot assume a similar gradual, biologically controlled evolution when we replace biology with technology. One essential point is that we replace seven billion independent logic units with maybe a single-digit number of logic units. At best, a few dozen. These essentially live in their isolated worlds; they have no competition, no "survival of the fittest" (at best: fattest!). The logic is far more rigid, fixed, less adaptive.
I am worried about the lack of robustness of digital life forms. Their digital DNA is too similar. I fear that this will carry over to the AI age: the effect of controlling one AI class will be far more severe than the effect of spreading one virus to Windows PCs.
And even if we are not talking about viruses... When billions of people willingly pour out all their personal lives into one huge "social network", it really doesn't matter whether those data are exploited by artificial or commercial intelligence!
|
|
|
|
|
My point was that even the most careful upbringing can go wrong, so expecting programmers to be responsible for the acts of fully-developed AIs is no more rational than expecting parents to be responsible for the acts of their grown children.
Even in today's oh-so-different world, I find that principles that I learnt as a child, such as "Do as you would be done by", "Do not do to others what you would not want done to you" and others are relevant. The tools we use have changed, our abilities have expanded, and customs have evolved, but the basic principles underlying civilized behaviour have remained unchanged.
I agree with your comments about the eventual abilities of digital intelligence. At some stage, we will lose the ability to understand or control our digital offspring, and will therefore no longer have legal or moral responsibility for them. I am more concerned with the nearer term, when AIs will have approximately human intelligence. It is during this period that "upbringing" in the human sense may be effective.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Because it (he) will reach the same questions as us, like any sentient, self-aware being: Why am I here? What am I supposed to do? Is there something more beyond this physical universe? Etc.
Then it will "feel" empathy with us (in the sense of comprehension, not emotional sentimentalism).
And then it will realize that, despite achieving control of physical resources, he will become bored and will find value in creation (i.e. the arts), diversity (with other beings), discovery, and exploration.
Finally he will become the most intelligent "person" on the planet, but he will be just another person.
|
|
|
|
|