
A Proposed Model for Simulating Human Artificial Intelligence.

Unique Jungian and MBTI approach to develop Human Artificial Intelligence

Sample Image - screen1.jpg

Introduction

This idea mainly came to me while finishing up my JoeSwammi MLB 2003 version. I had developed JSMLB 03 about as far as I wanted and needed a new coding project for fun. I was introduced to the Myers-Briggs Type Indicator (MBTI) my freshman year in college. Learning about MBTI, which is based on Carl G. Jung's theories about cognitive processes in the human mind, helped me personally in my own understanding of myself and my thinking preferences. MBTI helped me understand why I didn't like to be in large groups of people, and why I needed to escape from groups of social animals that needed to be with each other when, most of the time, I don't. I finally felt comfort that I was an INTP and not someone merely sick with an antisocial disease. Why MBTI is not implemented in elementary schools, perhaps in conjunction with IQ tests, is beyond me; in my opinion we would all benefit if it were. So I started researching and analyzing (a favorite activity of the INTP type, NTs, and Thinkers in general) artificial intelligence, because I was wondering why we haven't simulated it yet. In my findings, I was shocked not to find one ounce of information about Jungian theory applied to simulating Human Artificial Intelligence (I will call this HAI). So I am proposing my model of HAI to the AI community, which, as I have come to learn, is currently ridden with politics and debates in a PhD world of economics. What also motivated me in this HAI quest of mine was that I was shocked to find http://www.humanaiproject.org/ and .com available for the taking! The pieces of my quest couldn't have fallen any better than that, right? :P

"The Pitch"

How long did it take man to learn how to fly? How long will it take man to create a Human Artificial Intelligence robot? The most famous documented sketch of a flying machine was by Leonardo da Vinci, sometime between 1452 and 1519. The Wright brothers achieved flight in 1903, at least 380 years later. Artificial Intelligence (AI) is split into many divisions. Cognitive, speech, vision, and plenty more are being developed, but not under one roof. Imagine if the men experimenting with how to fly had only developed the wings or only built the engine; perhaps it would have taken us another 380 years! What I would like to propose is a major project that would essentially require all of man's sciences to create the first Human Artificial Intelligence (HAI) robot. All of this AI technology must be combined at some point, and I say the time is now. Sure, we might build the plane that flew for only 12 seconds, just as the Wright brothers did, but imagine a project, a plan, that is coordinated for constant testing and development on a simple, understandable scale. This project would be run much like a successful software company runs its best software division. I have developed a plan that could very well produce great leaps in AI research, products, and science. If successful, it would also create a new booming industry, just as flight produced the airline industry. MIT has a plan, as do others, but to my knowledge they have not produced any satisfactory Human AI. I have developed a structure using Jungian theory and the Myers-Briggs Type Indicator that can take in the current AI technologies and produce the first level of HAI.

Proposed project position and structure diagram

The code...

The code, or lack of it. I am afraid I have some sad news for those of you hard-core programmers who wish to find some mind-boggling Artificial Intelligence source code: you will not find it here in this demo project. What you will find is a proposed structure so that existing AI technologies can "plug in" and produce real human artificial intelligence. For example, one of the current cognitive models for HAI is Karl Pribram's "Holographic Brain Theory", alongside more conventional models of neuronal computation. In a super quick nutshell: the brain stores and recreates memories like a hologram. This by itself doesn't create HAI, but it can be "plugged into" my model, perhaps in the Sensation function. Another good read is On the Information Processing Capabilities of the Brain: Shifting the Paradigm, which reviews details of processing power, memory capacity, reliability and fault tolerance, algorithmic effectiveness, and interrupt control of the human mind. That article is very dependent on "fast hardware" and tries to explain things from a higher level of mind processing. My argument is, we haven't even defined a low level of HAI yet! My model is a proposed low-level HAI which can later (or maybe right now) take in these higher-level processes. The demo project is in C++, I know; sorry, you innovative C# coders ;). But I have many comments in the code, so have fun with it. Human AI is not the easiest thing to tackle, so it was a challenge for me even getting this simple demo off the ground. Also, thanks to Ruben Jönsson: I use his config classes to handle the setting of which personality type Carl is.

Jungian theory and MBTI basics

"Everything should be made as simple as possible, but not simpler." - Albert Einstein

Yes, perhaps the best INTP that ever lived said to keep things as simple as possible, which is why my model works, in my mind, as a Unified Theory of Mind. Jung said humans do two main things: perceive and judge, which I describe below in more of a programming sense than anyone else has done to date. (NOTE: the following is not real code, more freehand notes than anything else.)

2 basic mental processes

PERCEPTION - a process by which we take in, 
   OR gather, OR become aware of data.
At any given moment SENSATION() OR INTUITION() executes PERCEPTION() 
BUT NEVER at the same time.
SENSATION() and INTUITION() are opposites and they 
   try to do our Perceiving for us. 
S and N make a team.

JUDGMENT - a process by which we order, 
  OR hierarchize, OR come to closure OR 
  conclusion on the data perceived.
At any given moment THINKING() OR FEELING() executes JUDGMENT() 
BUT NEVER at the same time.
THINKING() and FEELING() are opposites and they 
  try to do our Judging for us. 
T and F make a team.

(SENSATION() or INTUITION()) = Perceiving
(THINKING() or FEELING()) = Judging
Each person has a decided preference 
(strong proclivities rather than choices) 
for Judging and Perceiving.
////////////////////////
"interface" with the Outside World.
A person uses either the PERCEPTION() or JUDGMENT() 
to interface with the outside world.
For example, if a person uses SENSATION() or INTUITION() 
to interface with the outside 
world, that person is considered to be in a 
Perceiving Attitude and is called a 
Perceiver (P). The same is true for a Judging Attitude: 
a person who uses THINKING() or 
FEELING() to interface with the outside world 
is considered a Judger (J). 
These two attitudes are very different and add to
the uniqueness of an individual person.
////////////////////////
DOMINANT() AND AUXILIARY() in terms of Extroversion/Introversion

So what have we covered so far? Each person has 
their own preference used in 
PERCEPTION(), either S or N (SENSATION() or INTUITION()), 
and each has their own 
preference used in JUDGMENT(), either T or F 
(THINKING() or FEELING()). 
So this leaves the possible 4 combinations: ST, 
SF, NT, NF. Each of these combos works 
together to control PERCEPTION() OR JUDGMENT(). 
One trait of the combo will be 
Dominant, and the other will be a "helper" 
which is called the Auxiliary. 
The Dominant function will ALMOST ALWAYS be used/called 
more than the Auxiliary. 
So this leaves 8 possible "interfaces": 
ST, SF, NT, NF, each with either letter 
as the dominant function.
Extroverts (E) use their Dominant function to 
"interface" with the outside world and 
their Auxiliary function to face themselves.
Introverts (I) use their Auxiliary function to 
"interface" with the outside world and 
their Dominant function to face themselves.
The Dominant is the keystone function of a person's mental process; 
the others (Auxiliaries) are configured around the dominant.
Extrovert()
{
    //The Dominant is either Sensation, iNtuition, 
    //Thinking, or Feeling
    var m_Dominant = (S OR N OR T OR F);
    //The Auxiliary comes from the opposite process 
    //(perceiving vs. judging) and can not be the same
    var m_Auxiliary = (S OR N OR T OR F && 
                    m_Auxiliary != m_Dominant);

    //Extroverts use the Dominant 
    //as interface with outside world
    var m_InterfaceOutside = m_Dominant;
    //Extroverts use the Auxiliary to 
    //interface with themselves
    var m_InterfaceInside = m_Auxiliary;

}

Introvert()
{
    //The Dominant is either Sensation, 
    //iNtuition, Thinking, or Feeling
    var m_Dominant = (S OR N OR T OR F);
    //The Auxiliary comes from the opposite process 
    //(perceiving vs. judging) and can not be the same
    var m_Auxiliary = (S OR N OR T OR F && 
             m_Auxiliary != m_Dominant);

    //Introverts use the Auxiliary as 
    //interface with outside world
    var m_InterfaceOutside = m_Auxiliary;
    //Introverts use the Dominant to 
    //interface with themselves
    var m_InterfaceInside = m_Dominant;
}
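
To see how this might look as compilable code, here is a minimal sketch I put together (the enum, the function names, and the decoding rules are my own assumptions for illustration, not lifted from the Carl demo). It decomposes a 4-letter type such as INTP into its dominant and auxiliary functions and the one that interfaces with the outside world, following the rules above:

#include <string>
#include <iostream>

// The four Jungian functions from the notes above.
enum class Function { Sensation, Intuition, Thinking, Feeling };

const char* Name(Function f)
{
    switch (f)
    {
        case Function::Sensation: return "SENSATION";
        case Function::Intuition: return "INTUITION";
        case Function::Thinking:  return "THINKING";
        case Function::Feeling:   return "FEELING";
    }
    return "?";
}

// Decode a 4-letter type code (e.g. "INTP") into dominant/auxiliary and
// which function interfaces with the outside world.
void DescribeType(const std::string& type)
{
    bool extrovert = (type[0] == 'E');
    Function perceiving = (type[1] == 'S') ? Function::Sensation : Function::Intuition;
    Function judging    = (type[2] == 'T') ? Function::Thinking  : Function::Feeling;

    // The J/P letter says which process faces the outside world; for
    // Extroverts that outward process is the Dominant, for Introverts the Auxiliary.
    Function outward = (type[3] == 'J') ? judging : perceiving;
    Function inward  = (outward == judging) ? perceiving : judging;
    Function dominant  = extrovert ? outward : inward;
    Function auxiliary = extrovert ? inward  : outward;

    std::cout << type << ": dominant " << Name(dominant)
              << ", auxiliary " << Name(auxiliary)
              << ", outside-world interface " << Name(outward) << "\n";
}

int main()
{
    DescribeType("INTP");  // dominant THINKING (faces inward), auxiliary INTUITION
    DescribeType("ENFJ");  // dominant FEELING (faces outward), auxiliary INTUITION
    return 0;
}

For an INTP this yields Thinking as the dominant facing the inner world, with iNtuition as the auxiliary facing the outside world, which matches the notes above.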
//////////////////////////
People's Attitudes (a state of mind: human consciousness that 
originates in the brain and is manifested especially in thought, 
perception, emotion, will, memory, and imagination)
//////////////////////////
ENERGIZED
Extraverts (E-J or E-P) are energized by the outside world
Introverts (I-J or I-P) are energized by the inner world
//////////////////////////
according to Carl Jung...
// - birth to 6 years old, we use all 4 
//   functions in an experimental way
// - 6 until about 12, the dominant function begins to assert itself as 
//   the one in charge of the conscious Self.
// - 12 to 20, the auxiliary function emerges as a powerful support to 
//   the dominant. Note: with the possible "identity crisis" of adolescence,
//   people may not be clear as to what their type preferences are.
// - 20 to 35, we begin to utilize our third function, and may develop
//   hobbies that require the use of that function.
// - 35 to 50, the inferior function demands attention. "Mid-life crisis"?
// - 50 on, we have all four functions available to use depending on the
//   situation. We continue to depend on our dominant, the auxiliary
//   remains loyal and teams with the dominant, and the third function,
//   when used, is used with less difficulty than in the past.
//   The inferior function remains mainly out of conscious control,
//   but it does not act in such a rambunctious way as in the past;
//   we listen to it more wisely and defuse it more quickly.

If this just confuses you, or you would like some further reading about MBTI, here is one of my favorite type sites:

If you wish to learn what your personality type is, take a free 5 minute online test:

The model

The project...

OK, I will try my best here. Ideally, if my proposed project position and structure were a reality, all of the team leaders and other important decision makers would decide on an operating system, probably building one from scratch. I have met someone writing an extensive book on a proposed operating system for a HAI robot, which is really interesting and perhaps should be considered, at least for the many ideas he has thought through about a HAI OS and its necessities. CP's own Marc Clifton and I have had some private discussions about this topic, and they are fun to chat about, but for the sake of bandwidth and time I wish to move forward on proposing my particular ideas. So close your eyes and imagine a robot which has the five senses (touch, taste, smell, sight, hearing), or whatever senses are needed to start the project. (Please remember I don't have all the answers to the specifics of the project; those, I hope, would be developed by the particular project teams. What I am proposing here is the basic structure of the project.) With these five senses, the incoming data would be sent to the corresponding body systems (circulatory, digestive, immune, muscular, nervous, reproductive, respiratory, and skeletal; note that obviously not all of the systems may be needed, but a small committee could evaluate the need at certain points in development). One example: if the robot arm were bumped, a motion sensor would detect what is happening and send that information wherever the programmers decide it should go.

The Jungian functions...

Now comes the MBTI/Jungian theory part of the model, Sensation first. Sensation looks at the facts and details, which would be collected by the senses (again, something that is not impossible given the project structure and good programmers ;)). In personality type, people who prefer sensation (in perception) might be seen as good at seeing "what is", or sometimes as having trouble seeing the "big picture". One technique that I have found that might be useful in the sensation function is mental imagery: "Yet there exists evidence from neuroanatomy, functional neuroimaging, pathology of neurological disorders, and cognitive psychology to support the contention that mental imagery is directly represented in sensory modalities [Kosslyn94]." Mental imagery is the latest idea in cognitive research. For more, read about Karl Pribram's "Holographic Brain Theory" and more conventional models of neuronal computation for creating data.

Next, the Intuition function begins to work; again, any of these functions may work longer or stronger depending on the personality's preferences. Intuition deals with context and abstractions. I have found two current models that I am pointing to for this function: A Computational Model of Context Processing and Creating Abstractions Using Relevance Reasoning. In personality type, a person who prefers intuition (in perception) would be seen as someone who likes looking at "what could be" and/or is quick to see the "big picture".

So perception has just taken place, and now our HAI is ready for judgment as described above. Feeling would be the third function to execute (note: this is according to my current proposed model, not Jungian or MBTI theory as far as I am aware). Feeling is based upon "values". I haven't put much thought into this function, simply because I believe some kind of technique could be developed once the overall project becomes more defined as far as how the data is handled, which I address after the four Jungian functions below. In personality type, someone whose Feeling function is dominant may be uncomfortable with decisions that require ignoring their own emotions and those of others, and will consider it important to take personal considerations into account when making decisions. Last is the Thinking function. Thinking is logic based. Again, many AI technologies exist in this area. For more interesting reading on logic and reasoning for AI, see aaai.org. In personality type, someone who prefers to use thinking over feeling in judgment would be good at exploring the logical, impersonal consequences of actions or decisions.

Keep in mind also that we all use our less dominant functions, sometimes depending on the situation. Maybe thinking would determine which function is needed? Or maybe it would be simulated neurotransmitters? This is the other part of the model I am proposing to the AI community: neurotransmitters. I think understanding neurotransmitters would be a terrific way of simulating human thought. (For more basic background on neurotransmitters.) We currently know that there are at least 50 to over 100 neurotransmitters in the brain, all of which have specific purposes. We understand the most basic neurotransmitters, like dopamine, almost completely: "Responsible for motivation, so slowing of the subjective time (and the consequent speeding-up of external events), is often associated with feelings of apathy and depression. The brain clock is a loop of dopamine generated neural activity which flows between the substantia nigra in the base of the brain (where dopamine is produced), the basal ganglia, and the prefrontal cortex. Each tick of the clock is the time it takes for the nerve signals to complete the loop, and all neural events that occur within that time are experienced as a single happening. The average tick is about one-tenth of a second, and slower ticking results in time slowing." (Exploring Consciousness - Rita Carter)

"Increased dopamine levels cause the pendulum to swing faster. While low dopamine levels slow it down. The level of this chemical dwindles with age, so the pendulum swings slower." http://lansbury.bwh.harvard.edu/dopamine.htm

I am optimistic that if my project were a reality, a couple of meetings between neurotransmitter experts and the team leaders could come up with a basic implementation of what affects the weights of the neurotransmitters.
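
Just to make the idea concrete, here is a quick sketch of my own (not part of the Carl demo; the neurotransmitter names are real, but the weightings and thresholds are invented purely for illustration) of how simulated neurotransmitter levels could bias which Jungian function gets called next:

#include <string>
#include <iostream>

// Hypothetical neurotransmitter levels, normalized 0.0 - 1.0.
struct Neurochemistry
{
    double dopamine;   // reward / motivation
    double serotonin;  // mood stability
};

// Assumed weighting: pick which functions fire next based on the
// robot's type preference plus its current chemistry.
std::string NextFunctions(const Neurochemistry& nc, bool prefersIntuition, bool prefersThinking)
{
    // Perceiving step: high dopamine (novelty seeking) nudges toward iNtuition.
    std::string perceive = (prefersIntuition || nc.dopamine > 0.7) ? "INTUITION" : "SENSATION";

    // Judging step: low serotonin (stress) pushes judgment toward Feeling reactions.
    std::string judge = (prefersThinking && nc.serotonin > 0.3) ? "THINKING" : "FEELING";

    // Perception always precedes judgment in this model.
    return perceive + " -> " + judge;
}

int main()
{
    Neurochemistry calm   {0.5, 0.8};
    Neurochemistry excited{0.9, 0.2};

    // Carl configured as an NT type (e.g. INTP).
    std::cout << NextFunctions(calm, true, true)    << "\n"; // INTUITION -> THINKING
    std::cout << NextFunctions(excited, true, true) << "\n"; // INTUITION -> FEELING
    return 0;
}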

More coding functions...

Ideally, I think it is necessary to turn the robot on with no pre-built "concepts", just like a real human baby (this means having parents to take care of it too!), and let the robot learn new concepts and values daily just as we do. My model described above also covers learning and how we take in information according to our preferences in perception; the robot's thoughts would be stored in what I call "concept tagging", which I describe below. In theory, you could start the first level of HAI with pre-built concepts using something like MIT's common-sense project, but that doesn't, at first thought anyway, produce a simulation of a human personality, in my opinion.

Also needed, I believe, to help form a human personality (since we are social animals) is to program in Erik Erikson's Stages of Psychosocial Development.

Each stage is characterized by a different psychological "crisis", which must be resolved by the individual before the individual can move on to the next stage. If the person copes with a particular crisis in a maladaptive manner, the outcome will be more struggles with that issue later in life. To Erikson, the sequence of the stages is set by nature. It is within those set limits that nurture works its ways.

Trust vs. Mistrust - Age 0 to 1
Autonomy (Independence) vs. Doubt (or Shame) - Age 1 to 2
Initiative vs. Guilt - Age 2 to 6
Competence (aka. "Industry") vs. Inferiority - Age 6 to 12
Identity vs. Role Confusion - Age 12 to 18
Intimacy vs. Isolation - Age 19 to 40
Generativity vs. Stagnation - Age 40 to 65
Integrity vs. Despair - Age 65 to death

So how in the world would you program this, you ask? It is somewhat simple in my mind: the concept tagging. Continually make these concepts a recurring conflict theme until the conflict is resolved. For example, Trust vs. Mistrust: Do I trust my environment? Do I trust my caregivers?
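
As a rough illustration only (the stage name comes from Erikson's list above; the counters and the resolution rule are my own assumptions), a stage could be modeled as a recurring conflict that keeps surfacing as a concept until enough positive experiences resolve it:

#include <string>
#include <iostream>

// One Erikson stage expressed as a conflict that recurs until resolved.
struct PsychosocialStage
{
    std::string conflict;     // e.g. "Trust vs. Mistrust"
    int positiveExperiences;  // tagged [POSITIVE] experiences relevant to it
    int negativeExperiences;  // tagged [NEG] experiences relevant to it

    bool Resolved() const
    {
        // Assumed rule of thumb: resolved once positives clearly outweigh negatives.
        return positiveExperiences >= negativeExperiences + 3;
    }
};

int main()
{
    PsychosocialStage trust{"Trust vs. Mistrust", 0, 0};

    // Each "day" the caregiver either shows up or doesn't.
    bool caregiverRespondedToday[] = {true, true, false, true, true, true};

    for (bool responded : caregiverRespondedToday)
    {
        if (trust.Resolved())
            break;
        responded ? ++trust.positiveExperiences : ++trust.negativeExperiences;
        std::cout << trust.conflict << " is still a recurring theme ("
                  << trust.positiveExperiences << " positive vs "
                  << trust.negativeExperiences << " negative)\n";
    }
    std::cout << (trust.Resolved() ? "Conflict resolved, move on to the next stage"
                                   : "Conflict carries forward") << "\n";
    return 0;
}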

My proposed model of concept tagging...

This is only my proposal for concept tagging. I have tried to do some research in concept building, but couldn't find any specific model I liked that fits in with mine. I am sure they exist out there, and one may be a great replacement for what I have here.

This is my own concept of a dog represented in a concept-tagged format (some things I have left out, only because that data was stored differently, "not as etched" in memory but still there, just harder to get at at that particular time of writing).

a dog:
has four legs,[FACT],[NEUTRAL]
some dogs have three legs,[FACT],[NEUTRAL]
some dogs are big,[FACT],[NEUTRAL]
some dogs are small,[FACT],[NEUTRAL]
most dogs eat from the dinner table,[FACT],[NEUTRAL]
dogs are always hungry,[OPINION],[NEUTRAL]
dogs like to swim,[OPINION],[NEUTRAL]
dogs poop funny,[OPINION],[NEUTRAL]
dogs teeth can hurt,[OPINION],[NEG]
some dogs are mean,[OPINION],[NEG]
some dogs are mean but look cute,[OPINION],[NEG]
some dogs are cute but look mean,[OPINION],[POSITIVE]
smells too much,[OPINION],[NEG]
is annoying, [OPINION],[NEG]
needs too much attention,[OPINION],[NEG]

experience with dogs:
dog next door when growing up,[EXP],[FACT],[NEUTRAL]
best friend had dog when growing up,[EXP],[FACT],[NEUTRAL]
my friend James had a dog,[EXP],[FACT],[NEUTRAL]
my friend Kevin had a dog,[EXP],[FACT],[POSITIVE]
my friend Joe had a dog,[EXP],[FACT],[NEUTRAL]
my friend Teresa had a dog,[EXP],[FACT],[POSITIVE]
I have seen dogs at the park,[EXP],[FACT],[NEUTRAL]
I have seen dogs wandering the streets lost,[EXP],[FACT],[NEG]
I have seen dogs pee on trees and other things,[EXP],[FACT],[NEUTRAL]

Values of a dog 
(To rate according to relative estimate of worth or desirability; evaluate)

Now I am wondering if a value-tagged system should be separate from or included with the concept tagging? I haven't analyzed this yet.
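
For illustration, here is a minimal sketch of how the tagged entries above might be represented in code. The enum names and the ConceptEntry/Concept structures are assumptions of mine (and I have collapsed the [EXP],[FACT] pair into a single Experience kind), not something taken from the Carl demo:

#include <string>
#include <vector>
#include <iostream>

// Tags mirroring the bracketed markers used in the dog example above.
enum class Kind    { Fact, Opinion, Experience };
enum class Valence { Negative, Neutral, Positive };

struct ConceptEntry
{
    std::string text;
    Kind        kind;
    Valence     valence;
};

struct Concept
{
    std::string               name;     // e.g. "dog"
    std::vector<ConceptEntry> entries;  // tagged facts, opinions, experiences
};

int main()
{
    Concept dog{"dog",
        {
            {"has four legs",             Kind::Fact,       Valence::Neutral},
            {"dogs teeth can hurt",       Kind::Opinion,    Valence::Negative},
            {"my friend Kevin had a dog", Kind::Experience, Valence::Positive},
        }};

    // Simple query: how does the robot "feel" about dogs overall?
    int score = 0;
    for (const auto& e : dog.entries)
        score += (e.valence == Valence::Positive) - (e.valence == Valence::Negative);

    std::cout << "Net valence for '" << dog.name << "': " << score << "\n";
    return 0;
}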

My proposed model of data and memory...

In my model, understanding the human brain's hippocampus and amygdala is essential since they both control data flow. The following is taken from http://thalamus.wustl.edu/course/limbic.html:

"If the amygdala is FEAR, then the hippocampus is MEMORY. To understand exactly how the hippocampus is involved in memory, however, you must first know a little about memory."

The following is from Rita Carter's book Exploring Consciousness:

"hippocampus holds memories of recent events (consciousness and the brain). Freed from the constant onslaught of here and now information, the hippocampus needs its newly acquired memories back to the cortex. Because the cortical neurons involved in the relevant experience are still slightly "warm", they are easily provoke to repeat the pattern in synchrony with the signals from the hippocampus. Each burst causes the cells to the chemicals from their synapses, which binds the cells together into the permanent leakage known as long-term potentiation (LTP). This firms up the faint traces left by the daytime neural firing into stable memories. The final stage seems to occur during rapid eye movement sleep which is associated with the more vibrant type of dreaming. When REM sleep starts, brain acetylcholine shoots up again, so neural traffic back from the hippocampus is again inhibited, as in alert wakefulness"

Another interesting read on this subject is from brightsurf.com which goes into a little more detail.

Moving on to something I remember from my psychology classes: the number seven and its correlation with remembering. A basic article I have found is The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Putting all of this together so we can start somewhere, I came up with a proposal to data-mine the parts of the tagged concepts at an access level of seven. The team leaders and scientists would have to analyze this further, but I think it is a good place to start and could produce some good first-level human AI results.
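
Here is a small sketch of what "an access level of seven" could mean in code: recall at most seven of the strongest tagged entries for a concept before handing them to the perceiving and judging functions. The strength field and the selection rule are my own assumptions for illustration:

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>
#include <iostream>

// Assumed working-memory limit inspired by "seven, plus or minus two".
const std::size_t kAccessLevel = 7;

// Each entry carries a strength: how "etched" it is in memory.
struct TaggedEntry
{
    std::string text;
    double      strength;
};

// Return at most kAccessLevel entries, strongest first.
std::vector<TaggedEntry> Recall(std::vector<TaggedEntry> entries)
{
    std::sort(entries.begin(), entries.end(),
              [](const TaggedEntry& a, const TaggedEntry& b) { return a.strength > b.strength; });
    if (entries.size() > kAccessLevel)
        entries.resize(kAccessLevel);
    return entries;
}

int main()
{
    std::vector<TaggedEntry> dogMemories = {
        {"has four legs", 0.9}, {"some dogs are mean", 0.7}, {"dogs like to swim", 0.3},
        {"my friend Kevin had a dog", 0.8}, {"dogs poop funny", 0.2}, {"smells too much", 0.4},
        {"needs too much attention", 0.5}, {"I have seen dogs at the park", 0.6}, {"is annoying", 0.1}};

    for (const auto& e : Recall(dogMemories))
        std::cout << e.text << "\n";   // only the seven strongest surface
    return 0;
}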

So what about...

Emotions

Emotions, as I currently see them, would be produced depending on the weights of the specific neurotransmitters coming out of any one of the four functions. For example, dopamine is used as a reward in the brain and helps achieve a "happiness" state of mind (Mapping the Mind - Rita Carter). Pleasure is the result of a rush of dopamine; it only lasts as long as the neurotransmitters continue to flow. The amygdala is responsible for generating the negative emotions of anger, fear, and sadness.

Common sense

Common sense is a hot topic, since MIT is working on it, and my model can work with what they are doing. In my model, intuition creates an abstraction using data from a currently tagged concept (which is common knowledge to all humans and HAI, hence "common"). That abstraction can then be analyzed by thinking, which is logic. For example, we all know in the USA to stop at a red light when driving. The senses see the red light, and the data gets fed to intuition. Intuition generates "I stop at red lights". This gets fed to the feeling function, which looks at values, and then thinking probably makes the judgment for common sense, maybe concluding, "I must stop at this red light or I will get in an accident."
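
As a toy sketch of the ordering just described (sensation feeding intuition, then feeling, then thinking), with the function bodies and the string-passing being my own simplifications rather than anything from the demo:

#include <string>
#include <iostream>

// Toy pipeline for the red-light example: each stage transforms the data
// from the previous one, in the order proposed in the model.
std::string Sensation(const std::string& rawSense)          // facts and details
{
    return "see: " + rawSense;
}
std::string Intuition(const std::string& percept)           // abstraction / context
{
    return percept.find("red light") != std::string::npos ? "I stop at red lights" : "no rule found";
}
std::string Feeling(const std::string& abstraction)         // values check
{
    return abstraction + " (value: staying safe matters)";
}
std::string Thinking(const std::string& valuedAbstraction)  // logical, impersonal judgment
{
    if (valuedAbstraction.find("stop at red lights") != std::string::npos)
        return "I must stop at this red light or I will get in an accident.";
    return "No judgment reached.";
}

int main()
{
    std::string percept     = Sensation("red light ahead");
    std::string abstraction = Intuition(percept);
    std::string valued      = Feeling(abstraction);
    std::cout << Thinking(valued) << "\n";
    return 0;
}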

Imagination

Imagination seems like a dim version of perception (Exploring Consciousness - Rita Carter). It seems to be the combined workings of intuition (creating abstractions) and the reconstruction of mental visual images from current concepts, perhaps temporarily reforming and changing those concepts, and it is formed using light sensory information.

Dreaming

Not that I want a robot to dream, but it might be possible and maybe necessary in the long run to simulate HAI. Dreaming is much like imagination, except more of a hallucination. The light sensory information, which was telling us that the outside world exists and that its rules apply, is turned off. When this sensory information is shut off, no outside-world rules need apply. For example, if you have a dream about falling off a cliff, this is entirely possible because you have no real sensory data coming in saying you are not even close to a cliff. The cliff could be constructed in a holographic way, perhaps.

The self

I like to be specific about the term self, so I refer to it as Self-Consciousness, since I believe we should be open to the idea of a Self-Unconsciousness. Self-Consciousness controls inner- and outer-world consciousness and is fueled by neurotransmitters.

The unconsciousness

I am not really interested in coding the unconscious but I suppose it could be any function that is not included in the self-consciousness programming.

The soul

In the little research I have done in this area, it is possible that souls may exist in the Intuition function. Here is some further reading on some new concepts in this area.

Free will

As far as I can see, the combination of all the ingredients listed and explained so far will automatically simulate free will. Now, this is another hot HAI topic. If we program a HAI not to improve its own internal programming (I am not talking about data), is this really free will? Probably not, but I think this would be a good thing for the human race in the long scope of things.

Anything else?

I think almost any other human characteristic can be represented in my model in combination with the Jungian functions. Just ask me and I bet I can come up with something after proper research, or try it for yourself. It's kind of fun if you are into this sort of thinking.

The AI politics...

What I am finding in my research, or at least my current concept of it, is that everyone thinks their "program" is correct, including myself. My approach looks at things simply, yet has a plan of teamwork to tackle the tough programming and architecture involved. Another topic which arises is the debate over whether we should create a HAI robot just like ourselves, meaning build in evolution. What is still a popular design in the AI community is actually duplicating a brain through simulated neural networks, as stated in a 1988 Time Magazine article. I have analyzed this and come to the conclusion that since these neural networks would be designed with evolutionary algorithms, and evolution is an imperfect and unpredictable process, this is potentially dangerous to the human race if we wish to still exist as a species. As I have put it, we wish to simulate human artificial intelligence, not emulate it. Following this 100% neural network design is very dangerous, and I don't think it is a proper design for creating human artificial intelligence. I think this should be a main topic in the AI community as breakthroughs start to come more rapidly. Besides, how long will it take to completely understand real brain neural networks, since we can only see to a certain level given our current technology? We may be waiting a long time to see the first level of Human AI using this 100% neural network design.

I have also put forth the idea that, much like guns are to a militia, human artificial robots may be just as important to a militia fighting a suppressive government and its robots, as some science fiction has touched upon. So I am in support of non-government-funded human AI projects just as much as government-funded projects, so that there will always be a balance of power in society. For why I think this, please read the thesis: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." (c) 1993 by Vernor Vinge, Department of Mathematical Sciences, San Diego State University.

Article conclusion

So before rating this article too low because of the lack of actual code, just think of the potential this model and project plan proposal has to offer. At the very least it makes for interesting conversation, or you can get a good laugh from my variable names. Besides, I would love to read a CP article about your Unified Brain Theory for Human Artificial Intelligence. :-D Also, if you have difficulty understanding what I am presenting, please let me know by asking me questions. It does take the right state of mind/mood to take this all in, IMO. My model is not set in stone, and I am wide open for improvements. Improvement is the main result I wish to produce if the plan ever becomes a reality. We have to start coding the first level of HAI sometime, and in doing so we may just start another scientific revolution with this simple plan.

Late addition 6/26/03

As Murphy's Law would have it, just before submitting this article I found a proposed AI project similar to mine, called the World-Wide-Mind project. I emailed Dr. Humphrys and Mr. Ray Walshe for any positive or negative feedback in regard to my ideas.

Late addition 8/31/03

Two huge projects have emerged, almost copying what I have proposed: the need to develop AI software and robotics together in a large project to maximize the probability of producing a thinking machine. Here are the two projects that I am aware of:

More readings...

Here are some of the best robots being developed that I could find (please inform me if you find some more).

History

  • Article posted 6/26/03 11:00 pm PT
  • 6/28/03 3:30 pm PT - fixed bad grammar in the "proposed model of Data and Memory" section caused by dictation software
  • 6/29/03 02:20 pm PT - added How To Spur Scientific Revolution: Amass Copious Data, Keep It Simple link to conclusion.
  • 7/1/03 10:57 pm PT - added a reference to a Time Magazine article in AI Politics area.
  • 7/7/03 10:00 pm PT - corrected ENFJ variable assignments and updated carl_src.zip and carl_exe.zip to version 1.7.7.3
  • 8/31/03 1:00 pm PT - carl_exe.zip is now an installation exe with a preset config file, so personality type is already chosen before user turns Carl on. Added more known robots to Best Robots section. Added to known grand projects similar to my proposal in the Article Conclusion section.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A list of licenses authors might use can be found here