|
BupeChombaDerrick wrote: a word processor is aware if a keyboard button is pressed
Not really "self aware" in that sense, but it is "aware" of key presses when the system signals that one has occurred. Those are two completely different definitions. It just sits there idling, waiting for a keyboard press event to be triggered.
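That idle-until-signaled behavior can be sketched as a minimal event loop. This is purely illustrative; the names (`Event`, `run_event_loop`, the `"keypress"` kind) are made up for the sketch, not taken from any real word processor.

```python
from collections import deque

class Event:
    """A minimal event record: a kind plus an optional payload."""
    def __init__(self, kind, payload=None):
        self.kind = kind
        self.payload = payload

def run_event_loop(queue, handlers):
    """Sit idle, popping events as the system delivers them,
    and dispatch each one to its registered handler."""
    while queue:
        event = queue.popleft()
        handler = handlers.get(event.kind)
        if handler:
            handler(event)

# The "word processor" does nothing on its own; it only reacts
# when key-press events arrive in its queue.
typed = []
handlers = {"keypress": lambda e: typed.append(e.payload)}

queue = deque([Event("keypress", "h"), Event("keypress", "i")])
run_event_loop(queue, handlers)
print("".join(typed))  # -> hi
```

The point of the sketch is that all the "awareness" lives in the dispatch table someone else wrote; the program initiates nothing itself.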
My idea of "self awareness" is one in which something can take care of itself without necessarily having interaction with something else. A word processor cannot and will not do anything until the user interacts with it.
BupeChombaDerrick wrote: to be self aware a program only needs to have some self monitoring capability
Which still boils down to someone coding that capability.
Looking through this entire thread, there are a lot of interesting points made by everyone...
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
Paul Conrad wrote: My idea of "self awareness" is one in which something can take care of itself without necessarily having interaction with something else.
But we interact with our environment and with our own internal states, so we respond to environmental stimulation as well as to feedback from some of our internal mechanisms. Without this interaction we could be completely oblivious to our own existence.
Paul Conrad wrote: Which still boils down to someone coding that capability.
Even humans are hardwired to be self aware following some form of design.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
Cogito ergo sum ("I think, therefore I am") proposed by René Descartes.... I know this will probably really open it up some...
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
Well it seems you're already pretty convinced of what "self awareness" is (you defined it for us in an earlier post), so I won't comment on that.
What I found most compelling is your signature: "...every year find you a better .. woman."
SWEET! A better (new?) woman every year! I might actually enjo
NO WAIT, HONEY, I DIDN'T WRITE THAT.
PUT THE BAT DOWN!
OW. OW. OW.
David
---------
Empirical studies indicate that 20% of the people drink 80% of the beer. With C++ developers, the rule is that 80% of the developers understand at most 20% of the language. It is not the same 20% for different people, so don't count on them to understand each other's code.
http://yosefk.com/c++fqa/picture.html#fqa-6.6
---------
|
The "better man or woman" part is merely to be gender inclusive, not that I am a woman; I am a man. I just want women to see that they are being considered in the statement. So you can only "HONEY" me if you were a woman, and I can only "HONEY" a woman.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
Your "Self Awareness" sounds to me more like "Self Sufficiency"; under your definition, anyone under the age of, say, 12 is not "Self Aware".
Self awareness has to do with comprehending that the item/animal/person is the one doing/sensing/thinking something at that time. If the entity in question can understand and modify its actions based on what it receives from its senses, then it can act accordingly and modify its behavior, and its FUTURE behavior (however long or short that may be, based on memory length), to either repeat or avoid the stimuli it received when it performed said action.
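That repeat-or-avoid loop can be sketched in a few lines. This is a toy illustration of behavior modification from remembered stimuli, not a claim about real self-awareness; the class, action names, and stimulus values are all invented for the example.

```python
class Agent:
    """Remembers the stimulus each action produced and prefers
    actions whose remembered stimulus was positive."""

    def __init__(self, actions):
        # One remembered stimulus total per action; 0.0 = no experience yet.
        self.memory = {a: 0.0 for a in actions}

    def choose(self):
        # Future behavior: repeat the action with the best remembered outcome.
        return max(self.memory, key=self.memory.get)

    def observe(self, action, stimulus):
        # Update memory so this action is repeated or avoided later.
        self.memory[action] += stimulus

agent = Agent(["touch_stove", "eat_snack"])
agent.observe("touch_stove", -1.0)  # painful stimulus: avoid in future
agent.observe("eat_snack", +1.0)    # pleasant stimulus: repeat in future
print(agent.choose())  # -> eat_snack
```

The "memory length" in the definition above corresponds to how long entries persist in `memory`; here they persist forever, but a real model might decay them.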
Thanks,
T
|
Excellent point. Yes, self sufficient is more what I am thinking. I like your definition of self aware. A +5 for that
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
That person would still be aware of self. He or she would just be confused.
|
Yeah, but I think memory has something to do with self awareness.
|
Would that mean that people with Alzheimer's disease aren't self aware?
|
No, they are self aware, because Alzheimer's disease affects long-term memory, but short-term memory may be responsible for self awareness.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man.”
|
Well, at least they are aware of what they recognize.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
Nice robot. Any further advancement in this field will definitely lead to a completely independent robot (of course not one equipped with a deadly machine gun) that can go around and learn about its environment. Nice link.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
I would argue that in this case Qbo did NOT pass the "mirror test"[^].
To do that Qbo would have to be shown a picture of "himself" and learn it and be able to differentiate between the picture and the reflection. (The training would have to be done very carefully so that a response indicating self-awareness could not be constructed just from a phrase assembly algorithm.)
There's nothing in the video of Qbo's response that indicates an understanding that what "he" saw was not a "representation" of him (picture), but was actually the instance of self.
|
I agree. Qbo can now (after this training) understand or recognize that he is seeing a "Qbo" in front of him; the verbal output "This is me" is nothing but a string variable they used during compilation.
If the robot could see that the object in the mirror was performing the exact same actions as it was, at the exact same time, and made the leap to understand that it was itself it was seeing, THEN it would have reached a major milestone toward self awareness. A baby isn't told that what it sees in a mirror is itself; you can actually watch the comprehension wash over the baby's face as it makes this realization.
This, I believe, is the major difference between how a computer learns and how a human learns. Humans have the ability to make these "leaps" in their learning and understanding of stimuli; computers lack that ability at this point in time.
|
Binding 100,000 items to a list box can be just silly ... no way, that can be only silly!
|
Just trying to see what that has to do with thinking outside the box.
|
Asking if computers think is like asking if submarines swim.
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
I like that thought, but it's not computers but computer programs that are in question. If our creativity is the result of neural computations, can't we give computer programs the same creativity by emulating those computations? The brain must use some algorithm, or set of algorithms, to generate what we call intelligence and self awareness. Though I'm not sure about that.
|
We can give computers similar creativity. By their nature, computers may require a different approach to achieve intelligence and creativity.
While computers are currently serial in nature (with limited parallelism), the brain is massively parallel, with many millions of neurons working simultaneously.
The brain also seems to employ both discrete and continuous forms of knowledge representation and processing. Neurons fire at various frequencies (continuous) and with continuous impulse levels from other neurons and continuous thresholds. However a single neuron firing is a discrete event.
With such radically different architectures, it's natural to expect different algorithms may be appropriate to produce intelligence.
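The mix of continuous and discrete processing described above can be illustrated with a toy leaky integrate-and-fire neuron: input levels and the firing threshold are continuous quantities, but a firing itself is a discrete event. The parameters (leak factor, threshold) are arbitrary choices for the sketch, not biologically calibrated values.

```python
def simulate(inputs, threshold=1.0, leak=0.9):
    """Accumulate continuous input with leaky decay; emit a
    discrete spike whenever the potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # continuous accumulation
        if potential >= threshold:              # discrete firing event
            spikes.append(t)
            potential = 0.0                     # reset after firing
    return spikes

# A steady weak input makes the neuron fire periodically: the firing
# frequency (spikes per unit time) encodes the input strength.
print(simulate([0.4] * 10))  # -> [2, 5, 8]
```

Stronger input would cross the threshold sooner, producing a higher firing frequency, which matches the frequency-coding idea above.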
Whatever approach turns out to be successful, we can expect computers to eventually be millions of times faster than humans, since their hardware is extensible. A future society may need to build limitations into intelligent computers in positions of power to prevent them from ruling us, sort of like Isaac Asimov's Three Laws of Robotics.
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
Now that I like.
|
Our intelligence is not a result of computations. We are consummate pattern-matchers. Here is a really simplified version of how it goes: When we perceive something, it causes a certain bunch of sensory neurons to fire, which correspond directly to that perception. The neurons connected to those sensory neurons fire in turn if they recognize a pattern there -- for example, some neurons only fire if they see a vertical bar traveling from left to right, or other specific patterns like that. Then the next level of connected neurons fire if they recognize a particular pattern in the level before them, and so forth. We learn by building up patterns of patterns. The match to a pattern pops up automatically, or in other words, perceiving and recalling a matching previous pattern happen because the perception and the recall are linked by sharing the same set of neurons in the middle.
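A crude sketch of that layered idea: first-level "neurons" fire on a local feature (a pixel with another lit pixel directly below it), and a second-level "neuron" fires only when enough of them agree, recognizing "vertical bar". The two levels and the threshold are invented for the illustration; real cortical feature hierarchies are vastly more complex.

```python
def vertical_pairs(image):
    """Level 1: fire wherever a pixel and the pixel below it are both on."""
    fires = []
    for r in range(len(image) - 1):
        for c in range(len(image[0])):
            if image[r][c] and image[r + 1][c]:
                fires.append((r, c))
    return fires

def sees_vertical_bar(image, threshold=2):
    """Level 2: fire if enough level-1 neurons fired."""
    return len(vertical_pairs(image)) >= threshold

bar = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
dot = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(sees_vertical_bar(bar))  # -> True
print(sees_vertical_bar(dot))  # -> False
```

Stacking more such levels, each matching patterns in the level below, is the "patterns of patterns" structure described above.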
For an example of how this works, take driving. When you first got behind the wheel as a kid, everything seemed very unfamiliar. All the knobs were confusing, and you probably had to concentrate to remember which pedal was which. You probably had trouble recognizing following distances and when to turn to fit into a parking space and that kind of thing. But with practice, your brain began to recognize and store the patterns of driving, until almost all of driving became subconscious pattern-matching -- the lines on the road should be at particular distances, the feel of the brake matches to how quickly or how slowly the car comes to a stop, et cetera -- we don't have to think about any of these things because they match stored patterns in our minds. We don't have to consciously think about anything unless it breaks our expectations. Unexpected or unknown things draw our attention because they defy the patterns we know.
In contrast, a computer is terrible at pattern-matching. Many, many man-years went into the Google search algorithm, but really, what it's doing is trying to mimic the natural human ability to glance over a list and recognize what you are looking for out of it. This very basic ability has to be painstakingly coded into the computer. If you lined up a bunch of toys and asked a preschooler to hand you the "meanest one," the preschooler will be able to match his or her idea of "meanness" to the various traits of the toys and decide which one is the most mean. The computer, on the other hand, has no ability to take the concept of "mean" and expand it to apply to a toy, *unless a human writes an algorithm designed to do just that.* In other words, computers are 100% dependent on programmers for their pattern-matching intelligence. Even computers that "learn" things only learn whatever it is the programmer told them to learn. They don't independently gather data about the world around them and apply it creatively, using thought and consideration; instead, they simply follow a strict set of rules determining what data they will gather, how they will gather it, how they will interpret it, and how they will regurgitate it later.
Computers have an opposite kind of intelligence from humans. Their intelligence is related to their ability to perfectly remember exact things. Humans are terrible at remembering exact things -- most people can't even remember the rules of grammar for writing their own native language, and we can even forget things that are very important to us, such as the phone number of that hot (chick|dude|not applicable) we met last night. Computers are good at exactly remembering and carrying out particular algorithms and equations; most humans struggle with algebra. In other words, we are not good at computing, we are good at recognizing things. Computers are not good at recognizing things, but they are good at computing.
So, to answer your original question, if we want to make computers creative in the same way that humans are creative, we have to change the very basics of how they work. The closest we have to a humanlike intelligence is in "neural networks," which mimic the neural-connectivity pattern-matching I was describing, sometimes in robots and sometimes in virtual worlds. Most current neural networks are about as smart as insects. This is because building an artificial neural network of the complexity of the human brain is currently not feasible, given the state of current technology.
|
Well, I just had to give you +5 for that. Yes, current computer programs are as you have described; what you described there is called the standard model of vision. I'm currently researching computer vision, and I'm trying to integrate figure-ground discrimination into the algorithms as efficiently as possible. The reason computers are bad at pattern matching is that programmers just haven't figured out how to efficiently tell a computer how to do that. We might not need new hardware: such algorithms can be hardware accelerated using Graphics Processing Units, and introducing parallel processing and better methods/algorithms will start solving the problem of perception by computers. Don't blame the computers; blame us programmers for their shortcomings.
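For readers unfamiliar with the term, the very simplest form of figure-ground discrimination is intensity thresholding: label each pixel as figure or ground by comparing it to a cutoff. This is a minimal sketch for illustration only (real research methods are far more sophisticated, and the heavy per-pixel work is exactly the kind of thing GPUs accelerate well); the function name and threshold value are made up.

```python
def figure_ground(image, threshold):
    """Label each pixel 1 (figure) if brighter than threshold, else 0 (ground)."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

# A dim background with one bright vertical stripe (the "figure").
image = [[10, 200, 12],
         [ 9, 210, 11],
         [ 8, 205, 10]]
mask = figure_ground(image, threshold=128)
print(mask)  # -> [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Because each pixel's label depends only on that pixel, every comparison can run in parallel, which is why this family of algorithms maps so naturally onto GPU hardware.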
|