|
Well, I just had to give you +5 for that. Yes, current computer programs are like you have described; what you just described is called the standard model of vision. I'm currently researching computer vision, and I'm trying to integrate figure-ground discrimination into the algorithms as efficiently as possible. The reason computers are bad at pattern matching is that programmers haven't yet figured out how to tell a computer to do it efficiently. We might not need new hardware: such algorithms can be hardware-accelerated using Graphics Processing Units, and introducing parallel processing and better methods/algorithms will start solving the problem of perception by computers. Don't blame the computers; blame us programmers for their shortcomings.
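For readers unfamiliar with figure-ground discrimination: one classical, very "standard model" baseline is Otsu thresholding, which splits an image into figure and ground by choosing the gray level that maximizes between-class variance. This is just an illustrative sketch on a synthetic image, not the poster's actual research code:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray level that best separates figure from ground
    by maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    sum_all = np.dot(np.arange(256), hist)
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += hist[t]                 # pixels at or below t ("ground" class)
        if w0 == 0:
            continue
        w1 = total - w0               # pixels above t ("figure" class)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0 = sum0 / w0
        mean1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic image: dark ground (value 30) with a bright 20x20 "figure" (value 200)
img = np.full((64, 64), 30, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
figure_mask = img > t              # True where the figure is
print(t, int(figure_mask.sum()))   # threshold 30, mask covers the 400 figure pixels
```

Real figure-ground work goes far beyond a global threshold, of course, but the example shows the kind of per-pixel, data-parallel computation that maps naturally onto a GPU.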
|
|
|
|
|
You are sort of missing the point of the conversation here. When I said "we have to change the very basis for how they work," I didn't mean the hardware. While those ant-like robots are the most humanlike in intelligence, scaling up that hardware to human size would be technically infeasible. I meant that, if the goal is to have computers have humanlike intelligence, then the whole way the system works would have to change, hardware and software both.
You said you are "trying to integrate figure-ground discrimination into the algorithms as efficiently as possible." In other words, the way the computer you are using "thinks," it requires you to tell it what patterns exist, what to do when it sees them, what to do when it doesn't see them, et cetera, et cetera. This "intelligence" of precisely following a programmer's algorithm is simply not humanlike. The extent to which that computer is humanlike depends, as you said, on your own programming ability to impose a small part of *your* human intelligence into the computer. The computer will only be able to mimic the small portion of your intelligence that you are able to give it, and not one whit more.
The program you eventually write will not be really doing the same thing you are doing in your own head at all. You are not following any kind of algorithm when you look at a cute kitten and say, "Awwwwww." It's simply that the way your perception works, the kitten triggered a sufficient amount of a pattern linked to the "Awwww" emotion to evoke enough of said emotion that you were made consciously aware of it, and following a pattern of "what I do when I feel sufficient awwwwwww," and said pattern not being overridden by other patterns (such as "what I do when I'm in front of my boss"), you triggered the pattern for saying the sounds of "Awwwwww" in a particular tone of voice.
This networking of patterns and behavior is the basis for humanlike intelligence. So, for a computer to be humanlike, we have to throw out our current programming techniques and start fresh with an attempt to create computer architecture (by which I mean, both hardware and software) with a pliable, adaptable artificial network that "learns as it goes" not because some algorithm tells it what it should be learning, but rather, because learning and processing are one and the same. That's how a human mind works: every time you perceive something, you reinforce or change patterns simply through the act of perception. The human mind is in a state of constant change. This is why we are adaptable, creative, and not very good at following unchanging sets of rules.
When a computer naturally changes from everything it does (and not just because a programmer has put detailed hundreds-of-pages "learn from X" code alongside all of the "do X" code), I will say that it has intelligence similar to a living creature. And when a computer is able to sit back and think about what it has learned and draw new conclusions from it, then I will say it has humanlike intelligence. Until then, as long as there is a human imposing a mimicry of his or her own intelligence behind the computer's behavior, it will still be nothing more than a complex regurgitator.
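The "learning and processing are one and the same" idea can be sketched in a few lines. The toy Hebbian layer below (my own illustration, not anyone's actual system) rewires itself as a side effect of every perception; there is no separate "learn from X" phase alongside the "do X" code:

```python
import numpy as np

rng = np.random.default_rng(0)

class HebbianLayer:
    """Toy network where every act of 'perceiving' also rewires it:
    processing and learning happen in the same step."""
    def __init__(self, n_in, n_out, lr=0.01):
        self.w = rng.normal(0.0, 0.1, (n_out, n_in))
        self.lr = lr

    def perceive(self, x):
        y = self.w @ x                      # processing: produce a response
        self.w += self.lr * np.outer(y, x)  # Hebb: co-active units strengthen their link
        # explicit row normalization keeps the weights bounded
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        return y

net = HebbianLayer(4, 2)
kitten = np.array([1.0, 1.0, 0.0, 0.0])   # a stand-in "pattern"
before = net.perceive(kitten)
for _ in range(500):                      # repeated exposure reinforces the pattern
    net.perceive(kitten)
after = net.perceive(kitten)
# the response to the now-familiar pattern is stronger than the first reaction
print(abs(after).max() > abs(before).max())
```

This is a very long way from "sitting back and drawing new conclusions," but it does show a system whose act of perceiving is itself the act of learning.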
|
|
|
|
|
I get your view, "new hardware and software," but any machine can be simulated on a digital computer before it is made, so a pliable, adaptable artificial network that "learns as it goes" can be emulated on already existing digital computers with the right coding. That's the point I'm putting across here.
And one can also use already existing GPUs to accelerate the processing speed of the simulation.
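To make the emulation point concrete: a time-stepped neural simulation is just repeated array arithmetic, which is exactly the workload GPU libraries accelerate without any change in logic. A minimal sketch (my own toy example; the leaky integrate-and-fire model and all the constants are arbitrary choices for illustration):

```python
import numpy as np

# Simulate 1,000 leaky integrate-and-fire neurons with plain array math.
# Every operation below is elementwise or matrix arithmetic -- the same code
# structure runs on a GPU by swapping NumPy for CuPy, PyTorch, or JAX.
rng = np.random.default_rng(1)
n = 1000
v = np.zeros(n)                     # membrane potentials
w = rng.normal(0.0, 0.05, (n, n))   # random recurrent weights
threshold, decay = 1.0, 0.9

spikes = np.zeros(n)
total_spikes = 0
for step in range(100):
    drive = rng.random(n) * 0.2             # external input, new each step
    v = decay * v + drive + w @ spikes      # leak + input + recurrent feedback
    spikes = (v > threshold).astype(float)  # neurons that fire this step
    v[spikes > 0] = 0.0                     # reset fired neurons
    total_spikes += int(spikes.sum())

print(total_spikes > 0)   # the network is active within 100 steps
```

Whether such an emulation captures everything that matters about a brain is, of course, the open question of this whole thread; the code only shows that the simulation itself is an ordinary, parallelizable computation.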
|
|
|
|
|
That is by far the best description of AI that I have heard. I think you are totally right in saying that the approach must be different. In fact, the result will not be a computer at all, since a computer does what you tell it to. It is not the case that a self-aware entity will do as you ask. If I ask a lion to say cheese, it will most likely eat me instead.
|
|
|
|
|
Programs are already self aware. They get offended too.
Veni, vidi, vici.
|
|
|
|
|
My vote of 5. I think they are too, only that we haven't yet given them the ability to express themselves.
|
|
|
|
|
And when they get offended... nah! The hell with that.
|
|
|
|
|
Funny, huh? It seems we have been hurting programs' feelings without ever noticing. In the future even robots will have rights, so tell your child to be ready for such issues.
|
|
|
|
|
A computer will never be able to simulate the human brain "very accurately", IMHO.
What is your definition of self aware?
I think one of the biggest hurdles/road blocks to AI is choice and random thought. You wake up in the middle of the night and you are craving chocolate ice-cream, for no good reason - random thought. You decide not to get it because you are too tired - choice.
Note: many will argue the subject of randomness. I am not going to debate that subject.
"the meat from that butcher is just the dogs danglies, absolutely amazing cuts of beef." - DaveAuld (2011) "No, that is just the earthly manifestation of the Great God Retardon." - Nagy Vilmos (2011)
"It is the celestial scrotum of good luck!" - Nagy Vilmos (2011)
"But you probably have the smoothest scrotum of any grown man" - Pete O'Hanlon (2012)
|
|
|
|
|
I think self-awareness, in my opinion, is the ability to recognize one's own presence: that "I am here".
One thing that intrigues me is that one can never be aware of something without neurons firing action potentials in the brain. To me, the aspect of pseudo-random thoughts has nothing to do with our ability to be aware of ourselves; I think anything with short-term memory is capable of self-awareness. If neural computations generate self-awareness, then any computational device with a memory or log of its actions is somewhat able to be self-aware, unless we introduce some supernatural issues here.
|
|
|
|
|
A computer is only "self aware" if you tell it to be...it doesn't do anything unless you tell it to do it. A computer doesn't wake up in the morning and grab a cup of coffee and marvel at the beautiful day outside and the pretty bird chirping 75 feet away. That is being self aware.
BupeChombaDerrick wrote: I think self-awareness, in my opinion, is the ability to recognize one's own presence: that "I am here".
I agree with this statement, to a point.
|
|
|
|
|
Slacker007 wrote: A computer doesn't wake up in the morning and grab a cup of coffee and marvel at the beautiful day outside and the pretty bird chirping 75 feet away.
Yeah, true that. At least you agree that a computer program can be "self aware" if you tell it to be. My vote of 5.
|
|
|
|
|
Slacker007 wrote: A computer will never be able to simulate the human brain "very accurately"
I have not yet in my fairly long life encountered a person who did not regret using the word "never" in public. I will be very interested in seeing if you might be the first, if I live long enough.
Will Rogers never met me.
|
|
|
|
|
Roger Wright wrote: encountered a person who did not regret using the word "never" in public.
I rarely use the word myself. However, in this context, I know I'm right. I will be the first to put my foot in my mouth if I am ever proven wrong.
Computers don't have emotion. We do. Thus, a computer will NEVER be like a human brain, ever.
|
|
|
|
|
Human brains exist as a part of the human body in order to help the human body survive and thrive. It's a mess of complex chemical reactions that we can hardly hope to comprehend.
Computers are chunks of metal and silicon designed to compute numbers. They have no concept of surviving or thriving, and they can be understood pretty well.
We should be about as worried about gigantic, complex computer systems becoming self-aware as gigantic, complex sewer systems becoming self-aware. You never know; one day, the toilets may rebel, throw all our turds back at us, and then humanity will truly be in deep sh*t.
|
|
|
|
|
jesarg wrote: We should be about as worried about gigantic, complex computer systems becoming self-aware as gigantic, complex sewer systems becoming self-aware.
There are no computations in sewer systems and no memory whatsoever, so I don't see a sewer ever becoming self-aware.
I think self-awareness has something to do with the algorithms the neurons use in the brain, and the implementation platform doesn't matter; thus, if a program is made in such a way as to emulate those algorithms, then one gets a self-aware program. Self-awareness doesn't always mean we have to be scared; even simple programs might be self-aware if they have short-term memory of their actions and can learn. I think we could be in deep sh*t if we gave these programs the ability to launch nuclear missiles.
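The "short-term memory of its own actions" criterion is easy to state as code, whatever one thinks of the philosophy. A toy sketch (the class name and behavior are my own invention, purely illustrative):

```python
from collections import deque

class ReflectiveAgent:
    """Toy sketch of the criterion above: a program that keeps a
    short-term memory of its own actions and can report on them.
    Whether this counts as 'self awareness' is exactly what the
    thread is debating -- this only illustrates the mechanism."""
    def __init__(self, memory_span=5):
        self.memory = deque(maxlen=memory_span)  # short-term action log

    def act(self, action):
        self.memory.append(action)   # every action leaves a trace
        return f"doing: {action}"

    def reflect(self):
        # the agent inspects its own recent history
        return list(self.memory)

agent = ReflectiveAgent(memory_span=3)
for a in ["wake", "observe", "move", "observe"]:
    agent.act(a)
print(agent.reflect())  # only the last 3 actions survive: ['observe', 'move', 'observe']
```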
|
|
|
|
|
BupeChombaDerrick wrote: There are no computations in sewer systems and no memory whatsoever, so I don't see a sewer ever becoming self-aware.
Do you really think that replication algorithms in complex organic molecules and organisms are not computations, or that the chemical soup (called "liquor" in the sewer industry) does not retain memory? Intelligent life arose from just such a soup, and I suspect that you have far more to fear from the manholes on your street than the desktop in your office.
|
|
|
|
|
Roger Wright wrote: I suspect that you have far more to fear from the manholes on your street than the desktop in your office.
|
|
|
|
|
Roger Wright wrote: you have far more to fear from the manholes on your street than the desktop in your office.
I have been told they hide bees sometimes.
|
|
|
|
|
You would need to simulate an accurate environment and accurate interaction with it as well (without a "rest" there cannot be a self/rest separation, so it cannot become aware of it either). Then, by the emulation theorem, it is impossible for the simulated human to notice any difference between his world and the real world - in other words, the simulated world is as real to him as this world is to us. At the same time, we cannot tell whether we live in such a world or not, nor if we do, to what extent the universe is actually simulated (that which we cannot observe in any way does not need to exist - we couldn't tell the difference anyway), nor whether anyone besides me* is "fully simulated" - perhaps only their observable behaviour is simulated. It has been suggested that maybe I* am the computer, simulating everything, including myself*.
* "you" are not real** in that theory, but you could (if you have a simulated mind, not just simulated behaviour***) be thinking the same thing, so "me" refers to "anyone capable of thought", which might be anyone or just me, I have no way to tell the difference.
** just what "real" means in that context isn't very clear either. To quote Morpheus: the mind makes it real.
*** then again, maybe "simulated behaviour" is what a "mind" really is.
It might seem as though you'd need a pretty fast computer to run such a simulation, but you don't. All the speed of simulation affects is how slow the time runs in the simulation compared to outside it, but since "outside time" can not be measured from inside the simulation, that has no real effect. It does mean that at present you couldn't simulate quickly enough to perform a useful experiment. Also, the storage requirements would be gigantic, and that makes such an experiment impossible for the foreseeable future.
|
|
|
|
|
|
I do not think that even a human brain can emulate another human brain. I know I cannot experience someone else's 'self awareness'.
It's not how I would approach machine intelligence.
evolve->grow->nurture->hope it'll be friendly
|
|
|
|
|
I think self-awareness is a property of computations, so anything is possible.
|
|
|
|
|
Many programs today are indeed self aware inasmuch as they monitor their own state; generally they also take some action should that state change under specific circumstances. However, that action is predetermined.
This, however, does not in any way make them intelligent: for that, they would need to independently generate new or updated algorithms to determine what action needs to be taken, and to understand why. Even the most advanced neural nets are nowhere near this level of complexity. Will they become so someday? Probably. Will it be soon? Probably not.
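A minimal example of the state-monitoring kind of "self awareness" described above, where both the trigger condition and the response are hard-coded by the programmer (the class and thresholds are invented for illustration):

```python
class SelfMonitoringService:
    """A program that watches its own error count and restarts a worker
    when a threshold is crossed -- but both the condition and the response
    are predetermined by the programmer, not discovered by the program."""
    def __init__(self, error_limit=3):
        self.error_limit = error_limit
        self.errors = 0
        self.restarts = 0

    def record_error(self):
        self.errors += 1
        if self.errors >= self.error_limit:  # predetermined condition
            self.restart_worker()            # predetermined action

    def restart_worker(self):
        self.restarts += 1
        self.errors = 0                      # fresh start after recovery

svc = SelfMonitoringService(error_limit=3)
for _ in range(7):
    svc.record_error()
print(svc.restarts, svc.errors)  # 7 errors trigger 2 restarts, 1 error left over
```

The program "knows" something about its own state, but it can never decide on its own that restarting is the wrong remedy, which is the gap the post is pointing at.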
|
|
|
|
|
Are we self aware?
manoj sharma
|
|
|
|
|