20110726

4 Ways Technology Can Enable Your Inner Introvert By Philip Bump

Our always-on society is, in fact, becoming a Golden Age for introverts, in which it has become easier to carve out time for oneself

If my research -- conducted primarily via Netflix -- is correct, America used to be a paradise for introverts. If you weren't a lone cowboy riding the range in a driving snow, you lived on a farm miles from town, opening your front door onto a field of seven-foot-tall corn stalks. Social interactions were planned weeks in advance. (Elections are held on Tuesdays, after all, because that was the soonest people could get to the county seat.) In a time when towns tried to encourage interaction by scheduling seasonal barn dances, the pressure to attend a friend's cocktail party was obviously far lower. Introverts had weeks to come up with good excuses -- and all sorts of ailments (whooping cough, scarlet fever) to blame.

Then the industrial revolution ruined it. Encouraging people to move to cities, the new world forced interactions from the moment you left your house. Telegraphs made it simple for people to send you messages, and telephones then removed even the need to answer your door. Cars were invented, meaning you had no excuse for not traveling across town. Then planes removed any excuse to not travel across the country. The darkest hours for introverts were at hand.

But technology, long the domain of the geeky introvert, stepped up to the challenge. A brilliant first volley was the answering machine: ostensibly a device meant to ensure that a call wasn't missed, it quickly became a tool to ensure that you could miss any call you wanted.

Technology has steadily gained ground. What some describe as an always-on society is, in fact, becoming a Golden Age for introverts, in which it has become easier than ever to carve out time for oneself while meeting the needs of our extroverted friends. That's a key distinction: we live in a time in which introverts can regularly mask their introversion if they so desire.

It's worth considering, of course, what introverts actually find challenging about social interactions. For a thorough, thoughtful answer to that question, see this 2003 piece from The Atlantic. For a cursory and superficial one, read on.

For introverts like myself, it takes energy to engage with other people. Doing so requires thoughtfulness. It's tiring. Expending energy, for us, isn't energizing. Please note: we're not talking about shyness, some character flaw. The problem isn't with the introvert -- it's with the demands you make on the introvert. An introvert can't force an extrovert to sit quietly in a room and read a book, but extroverts (and the stigmas they've inadvertently created) can impose social demands with ease.

So how are we helped by the technology our nerdy allies have built?

The illusion of busyness. You know what I did over the weekend? Took a road trip to Baltimore, attended two work-related parties, and spent most of Sunday offline, hiking in the woods.

Yeah, no I didn't. But with a few simple posts on Facebook, changing my status on GChat, it's simple to pretend that I did. I could spend all weekend at home -- which I did (it was hot out) -- and no one would be the wiser. I can make it appear that I've met society's request that I "live life to the full," while living my life to the full in my own way.

Serial communication at work. In the Mad Men days, everyone worked together in one location, walking to each other's desks or offices, or exchanging occasional memos. Now? We're in offices all over the place, using email. We sit quietly hunched over laptops, transitioning even our water cooler conversations to our keyboards.

Email is often fingered as a key factor in the lamentable perpetual accessibility characterizing modern American communication. But it isn't. It allows you to respond when you're ready to do so. In fact, sometimes not responding to email in a timely fashion can give the impression that you're already busy doing other things. Which helps create the space that introverts need.

Serial communication everywhere else. This is maybe the most remarkable achievement. Interacting with people primarily online or serially is now the norm. It's easier to send a message to a friend on Facebook than to call; even for extroverts, it ensures that the outreach isn't a waste of time.

The reduction of communication to information-sharing. Moreover, people expect streamlined transfers of information. A text message, a Facebook message, a tweet -- each is a discrete, articulated piece of information being shared. Rather than riding the texture of a live conversation to figure out how to give and receive information, people are now used to simply pushing their thoughts out into the world, to be responded to at some undetermined future point. Even voicemail messages are now more often the point of a phone call than an actual conversation.

A quick interlude to talk about the psychology of introversion. Feel free to skip the next paragraph if you're not interested, though introverts will definitely find it engaging.

First popularized by Carl Jung, the word introversion describes exactly what you'd assume: a tendency to be focused inwards (intro-) as opposed to the external focus of extraverts. As Wikipedia states, introversion is "the state of okay I think that's enough pretending." I have a secret to share with you. If you haven't heard of Slydial, delay not one second further. It is a tool that allows you to connect directly to someone else's voicemail without giving them time to answer the phone. Brilliant, right? But obviously, we can't let everyone know about it, or everyone will catch on to the fact that you're intentionally avoiding them. (For the record, friends and family, I've never used this tool at all.) What follows is the link to the site, masked to deter the casual observer. You are hereby sworn to secrecy. The Stockholm trials were a success, as you can see from the partial data set from the 2004 study in .CSV format. So, in other words, Jung was right.

I speak of the struggle between introverts and extroverts in antagonistic terms. But it shouldn't be considered that way. Extroverts, we love you. We just don't want to talk to you all the time. Happily, we live in a time when the expectation that we do so is much lower. We've reached an elegant balance between the two factions, one that doesn't require that we all become rugged cowboys, singing "Home on the Range," as we push our herd on to Topeka.

Even though that's what I said I'm doing right now on Facebook.
"Getting Started" Guide to Cybernetics
What does the word "cybernetics" mean?
"Cybernetics" comes from a Greek word meaning "the art of steering".

Cybernetics is about having a goal and taking action to achieve that goal.

Knowing whether you have reached your goal (or at least are getting closer to it) requires "feedback",
a concept that comes from cybernetics.
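As a toy illustration of this goal-action-feedback loop (not part of the original guide; the numbers and names here are invented), a minimal thermostat sketch in Python:

    # Toy thermostat: a goal (setpoint), an action (heater on/off), and feedback
    # (measuring the room) that closes the loop. All values are invented.
    def simulate_thermostat(setpoint=20.0, temperature=15.0, steps=12):
        history = []
        for _ in range(steps):
            error = setpoint - temperature              # feedback: compare goal to measurement
            heater_on = error > 0                       # action chosen to reduce the error
            temperature += 0.8 if heater_on else -0.3   # the environment responds
            history.append(round(temperature, 2))
        return history

    print(simulate_thermostat())  # the temperature climbs toward, then hovers around, the goal

The steering metaphor is the same: observe where you are, compare it to where you want to be, and act to reduce the difference.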

From the Greek, "cybernetics" evolved into Latin as "governor". Draw your own conclusions.
When did cybernetics begin?
Cybernetics as a process operating in nature has been around for a long time.

Cybernetics as a concept in society has been around at least since Plato used it to refer to government.
In modern times, the term became widespread because Norbert Wiener wrote a book called "Cybernetics" in 1948. His sub-title was "control and communication in the animal and machine". This was important because it connects control (a.k.a., actions taken in hope of achieving goals) with communication (a.k.a., connection and information flow between the actor and the environment). So, Wiener is pointing out that effective action requires communication. Wiener's sub-title also states that both animals (biological systems) and machines (non-biological or "artificial" systems) can operate according to cybernetic principles. This was an explicit recognition that both living and non-living systems can have purpose. A scary idea in 1948.
What's the connection between "cybernetics" and "cyberspace"?
According to the author William Gibson, who coined the term "cyberspace" in 1982:
“Cyber” is from the Greek word for navigator. Norbert Wiener coined “cybernetics” around 1948 to denote the study of “teleological mechanisms” [systems that embody goals]. (NY Times Sunday Magazine, 2007)
Artificial Intelligence and cybernetics: Aren't they the same thing?
No way. Keep reading below. Amaze your friends.

This content was written for an encyclopedia, and the early paragraphs explain foundational concepts.

CYBERNETICS — A Definition

Artificial Intelligence and cybernetics: Aren't they the same thing? Or, isn't one about computers and the other about robots? The answer to these questions is emphatically, No.

Researchers in Artificial Intelligence (AI) use computer technology to build intelligent machines; they consider implementation (that is, working examples) as the most important result. Practitioners of cybernetics use models of organizations, feedback, goals, and conversation to understand the capacity and limits of any system (technological, biological, or social); they consider powerful descriptions as the most important result.

The field of AI first flourished in the 1960s as the concept of universal computation [Minsky 1967], the cultural view of the brain as a computer, and the availability of digital computing machines came together to paint a future where computers were at least as smart as humans. The field of cybernetics came into being in the late 1940s when concepts of information, feedback, and regulation [Wiener 1948] were generalized from specific applications in engineering to systems in general, including systems of living organisms, abstract intelligent processes, and language.

Origins of "cybernetics"

The term itself began its rise to popularity in 1947 when Norbert Wiener used it to name a discipline apart from, but touching upon, such established disciplines as electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology. Wiener, Arturo Rosenblueth, and Julian Bigelow needed a name for their new discipline, and they adapted a Greek word meaning "the art of steering" to evoke the rich interaction of goals, predictions, actions, feedback, and response in systems of all kinds (the term "governor" derives from the same root) [Wiener 1948]. Early applications in the control of physical systems (aiming artillery, designing electrical circuits, and maneuvering simple robots) clarified the fundamental roles of these concepts in engineering; but the relevance to social systems and the softer sciences was also clear from the start. Many researchers from the 1940s through 1960 worked solidly within the tradition of cybernetics without necessarily using the term, some likely (R. Buckminster Fuller) but many less obviously (Gregory Bateson, Margaret Mead).

Limits to knowing

In working to derive functional models common to all systems, early cybernetic researchers quickly realized that their "science of observed systems" cannot be divorced from "a science of observing systems" — because it is we who observe [von Foerster 1974]. The cybernetic approach is centrally concerned with this unavoidable limitation of what we can know: our own subjectivity. In this way cybernetics is aptly called "applied epistemology". At minimum, its utility is the production of useful descriptions, and, specifically, descriptions that include the observer in the description. The shift of interest in cybernetics from "observed systems" — physical systems such as thermostats or complex auto-pilots — to "observing systems" — language-oriented systems such as science or social systems — explicitly incorporates the observer into the description, while maintaining a foundation in feedback, goals, and information. It applies the cybernetic frame to the process of cybernetics itself. This shift is often characterized as a transition from 'first-order cybernetics' to 'second-order cybernetics'. Cybernetic descriptions of psychology, language, arts, performance, or intelligence (to name a few) may be quite different from more conventional, hard "scientific" views — although cybernetics can be rigorous too. Implementation may then follow in software and/or hardware, or in the design of social, managerial, and other classes of interpersonal systems.

Origins of AI in cybernetics

Ironically but logically, AI and cybernetics have each gone in and out of fashion and influence in the search for machine intelligence. Cybernetics started in advance of AI, but AI dominated between 1960 and 1985, when repeated failures to achieve its claim of building "intelligent machines" finally caught up with it. These difficulties in AI led to a renewed search for solutions that mirror prior approaches of cybernetics. Warren McCulloch and Walter Pitts were the first to propose a synthesis of neurophysiology and logic that tied the capabilities of brains to the limits of Turing computability [McCulloch & Pitts 1965]. The euphoria that followed spawned the field of AI [Lettvin 1989] along with early work on computation in neural nets, or, as they were then called, perceptrons. However, the fashion of symbolic computing rose to squelch perceptron research in the 1960s, followed by its resurgence in the late 1980s. However, this is not to say that current fashion in neural nets is a return to where cybernetics has been. Much of the modern work in neural nets rests in the philosophical tradition of AI and not that of cybernetics.

Philosophy of cybernetics

AI is predicated on the presumption that knowledge is a commodity that can be stored inside of a machine, and that the application of such stored knowledge to the real world constitutes intelligence [Minsky 1968]. Only within such a "realist" view of the world can, for example, semantic networks and rule-based expert systems appear to be a route to intelligent machines. Cybernetics in contrast has evolved from a "constructivist" view of the world [von Glasersfeld 1987] where objectivity derives from shared agreement about meaning, and where information (or intelligence for that matter) is an attribute of an interaction rather than a commodity stored in a computer [Winograd & Flores 1986]. These differences are not merely semantic in character, but rather determine fundamentally the source and direction of research performed from a cybernetic, versus an AI, stance.
[Figure omitted. (c) Paul Pangaro 1990]
Underlying philosophical differences between AI and cybernetics are displayed by showing how they each construe the terms in the central column. For example, the concept of "representation" is understood quite differently in the two fields. Relations on the left are causal arrows and reflect the reductionist reasoning inherent in AI's "realist" perspective that via our nervous systems we discover the-world-as-it-is. Relations on the right are non-hierarchical and circular to reflect a "constructivist" perspective, where the world is invented (in contrast to being discovered) by an intelligence acting in a social tradition and creating shared meaning via hermeneutic (circular, self-defining) processes. The implications of these differences are very great and touch on recent efforts to reproduce the brain [Hawkins 2004, IBM/EPFL 2004] which maintain roots in the paradigm of "brain as computer". These approaches hold the same limitations of digital symbolic computing and are neither likely to explain, nor to reproduce, the functioning of the nervous system.

Influences

Winograd and Flores credit the influence of Humberto Maturana, a biologist who recasts the concepts of "language" and "living system" with a cybernetic eye [Maturana & Varela 1988], in shifting their opinions away from the AI perspective. They quote Maturana: "Learning is not a process of accumulation of representations of the environment; it is a continuous process of transformation of behavior through continuous change in the capacity of the nervous system to synthesize it. Recall does not depend on the indefinite retention of a structural invariant that represents an entity (an idea, image or symbol), but on the functional ability of the system to create, when certain recurrent demands are given, a behavior that satisfies the recurrent demands or that the observer would class as a reenacting of a previous one." [Maturana 1980] Cybernetics has directly affected software for intelligent training, knowledge representation, cognitive modeling, computer-supported coöperative work, and neural modeling. Useful results have been demonstrated in all these areas. Like AI, however, cybernetics has not produced recognizable solutions to the machine intelligence problem, not at least for domains considered complex in the metrics of symbolic processing. Many beguiling artifacts have been produced with an appeal more familiar in an entertainment medium or to organic life than a piece of software [Pask 1971]. Meantime, in a repetition of history in the 1950s, the influence of cybernetics is felt throughout the hard and soft sciences, as well as in AI. This time however it is cybernetics' epistemological stance — that all human knowing is constrained by our perceptions and our beliefs, and hence is subjective — that is its contribution to these fields. We must continue to wait to see if cybernetics leads to breakthroughs in the construction of intelligent artifacts of the complexity of a nervous system, or a brain.

Cybernetics Today

The term "cybernetics" has been widely misunderstood, perhaps for two broad reasons. First, its identity and boundary are difficult to grasp. The nature of its concepts and the breadth of its applications, as described above, make it difficult for non-practitioners to form a clear concept of cybernetics. This holds even for professionals of all sorts, as cybernetics never became a popular discipline in its own right; rather, its concepts and viewpoints seeped into many other disciplines, from sociology and psychology to design methods and post-modern thought. Second, the advent of the prefix "cyb" or "cyber" as a referent to either robots ("cyborgs") or the Internet ("cyberspace") further diluted its meaning, to the point of serious confusion to everyone except the small number of cybernetic experts.

However, the concepts and origins of cybernetics have become of greater interest recently, especially since around the year 2000. The lack of success of AI in creating intelligent machines has increased curiosity toward alternative views of what a brain does [Ashby 1960] and alternative views of the biology of cognition [Maturana 1970]. There is growing recognition of the value of a "science of subjectivity" that encompasses both objective and subjective interactions, including conversation [Pask 1976]. Designers are rediscovering the influence of cybernetics on the tradition of 20th-century design methods, and the need for rigorous models of goals, interaction, and system limitations for the successful development of complex products and services, such as those delivered via today's software networks. And, as in any social cycle, students of history reach back with minds more open than was possible at the inception of cybernetics, to reinterpret the meaning and contribution of a previous era.

Such a short summary as this cannot represent the range and depth of cybernetics, and the reader is encouraged to do further research on the topic. There is good material, though sometimes not authoritative, at Wikipedia.org.

Bibliography

Ashby, W. Ross, Design for a Brain. London: Chapman and Hall, 1960.

Hawkins, Jeff and Blakeslee, Sandra, On Intelligence. Times Books, 2004.

IBM/Ecole Polytechnique Fédérale de Lausanne (EPFL), http://bluebrainproject.epfl.ch/, 2004.

Lettvin, Jerome Y., "Introduction to Volume 1", in The Collected Works of Warren S. McCulloch, Volume 1, ed. Rook McCulloch. Salinas, California: Intersystems Publications, 1989, 7-20.

McCulloch, Warren S. and Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity", in Embodiments of Mind by Warren S. McCulloch. Cambridge, Massachusetts: The MIT Press, 1965, 19-39.

Maturana, Humberto R., Biology of Cognition, 1970. Reprinted in Maturana, Humberto R. and Francisco Varela, Autopoiesis and Cognition: The Realization of the Living. Dordrecht: Reidel, 1980, 2-62.

Maturana, Humberto R. and Francisco J. Varela, The Tree of Knowledge. Boston and London: New Science Library, Shambala Publications, Inc, 1988.

Minsky, Marvin, Computation: Finite and Infinite Machines. New Jersey: Prentice Hall, Inc., 1967.

Minsky, Marvin, ed., Semantic Information Processing. Cambridge, Massachusetts: The MIT Press, 1968.

Pask, Gordon, "A Comment, a Case History and a Plan". In Cybernetic Serendipity, ed, J. Reichardt. Rapp and Carroll, 1970. Reprinted in Cybernetics, Art and Ideas, ed., J. Reichardt. London: Studio Vista, 1971, 76-99.
Pask, Gordon, Conversation Theory. New York: Elsevier Scientific, 1976. [40MB PDF]

von Foerster, Heinz, ed., Cybernetics of Cybernetics. Sponsored by a grant from the Point Foundation to the Biological Computer Laboratory, University of Illinois, Urbana, Illinois, 1974.

von Glasersfeld, Ernst, The Construction of Knowledge, Contributions to Conceptual Semantics. Seaside, California: Intersystems Publications, 1987.

Wiener, Norbert, Cybernetics, or control and communication in the animal and the machine. Cambridge, Massachusetts: The Technology Press; New York: John Wiley & Sons, Inc., 1948.

Winograd, Terry and Fernando Flores, Understanding Computers And Cognition: A New Foundation for Design. Norwood, New Jersey: Ablex Publishing Corporation, 1986.

-end-

[Origin of this content: In 1990 Heinz von Foerster was approached by Macmillan to compose the entry on cybernetics for their 1991 Encyclopedia of Computers and von Foerster kindly referred them to me. The published text was (c) Macmillan Publishing while incorporating a figure created for an earlier purpose. Over time, updates, extensions, and clarifications have been incorporated into the text above. - Paul Pangaro, 3 August 2006]

Intelligence, Personality, Politics, and Happiness


Web pages that link to this post usually consist of a discussion thread whose participants’ views of the post vary from “I told you so” to “that doesn’t square with me/my experience” or “MBTI is all wet because…”. Those who take the former position tend to be persons of above-average intelligence whose MBTI types correlate well with high intelligence. Those who take the latter two positions tend to be persons who are defensive about their personality types, which do not correlate well with high intelligence. Such persons should take a deep breath and remember that high intelligence (of the abstract-reasoning-book-learning kind measured by IQ tests) is widely distributed throughout the population. As I say below, “I am not claiming that a small subset of MBTI types accounts for all high-IQ persons, nor am I claiming that a small subset of MBTI types is populated entirely by high-IQ persons.” All I am saying is that the bits of evidence which I have compiled suggest that high intelligence is more likely — but far from exclusively — to be found among persons with certain MBTI types.

The correlations between intelligence, political leanings, and happiness are admittedly more tenuous. But they are plausible.


IQ AND PERSONALITY

A few years ago I came across some statistics about the personality traits of high-IQ persons (those who are in the top 2 percent of the population).* The statistics pertain to a widely used personality test called the Myers-Briggs Type Indicator (MBTI), which I have taken twice. In the MBTI there are four pairs of complementary personality traits, called preferences: Extraverted/Introverted, Sensing/iNtuitive, Thinking/Feeling, and Judging/Perceiving. Thus, there are 16 possible personality types in the MBTI: ESTJ, ENTJ, ESFJ, ESFP, and so on. (For an introduction to MBTI, summaries of types, criticisms of MBTI, and links to other sources, see this article at Wikipedia. A straightforward description of the theory of MBTI and the personality traits can be found here. Detailed descriptions of the 16 types are given here.)

In summary, here is what the statistics indicate about the correlation between personality traits and IQ:
  • Other personality traits being the same, an iNtuitive person (one who grasps patterns and seeks possibilities) is 25 times more likely to have a high IQ than a Sensing person (one who focuses on sensory details and the here-and-now).
  • Again, other traits being the same, an Introverted person is 2.6 times more likely to have a high IQ than one who is Extraverted; a Thinking (logic-oriented) person is 4.5 times more likely to have a high IQ than a Feeling (people-oriented) person; and a Judging person (one who seeks closure) is 1.6 times as likely to have a high IQ as a Perceiving person (one who likes to keep his options open).
  • Moreover, if you encounter an INTJ, there is a 22% probability that his IQ places him in the top 2 percent of the population. (Disclosure: I am an INTJ.) Next are INTP, at 14%; ENTJ, 8%; ENTP, 5%; and INFJ, 5%. (The next highest type is the INFP at 3%.) The five types (INTJ, INTP, ENTJ, ENTP, and INFJ) account for 78% of the high-IQ population but only 15% of the total population.**
  • Four of the five most-intelligent types are NTs, as one would expect, given the probabilities cited above. Those same probabilities lead to the dominance of INTJs and INTPs, which account for 49% of the Mensa membership but only 5% of the general population.**
  • Persons with the S preference bring up the rear, when it comes to taking IQ tests.**
A person who encountered this post when it was at Liberty Corner claims that “one would expect to see the whole spectrum of intelligences within each personality type.” Well, one does see just that, but high intelligence is skewed toward the five types listed above. I am not claiming that a small subset of MBTI types accounts for all high-IQ persons, nor am I claiming that a small subset of MBTI types is populated entirely by high-IQ persons.

I acknowledge reservations about MBTI, such as those discussed in the Wikipedia article. An inherent shortcoming of psychological tests (as opposed to intelligence tests) is that they rely on subjective responses (e.g., my favorite color might be black today and blue tomorrow). But I do not accept this criticism:
[S]ome researchers expected that scores would show a bimodal distribution with peaks near the ends of the scales, but found that scores on the individual subscales were actually distributed in a centrally peaked manner similar to a normal distribution. A cut-off exists at the center of the subscale such that a score on one side is classified as one type, and a score on the other side as the opposite type. This fails to support the concept of type: the norm is for people to lie near the middle of the subscale.[6][7][8][33][42]
Why was it “expected” that scores on a subscale (E/I, S/N, T/F, J/P) would show a bimodal distribution? How often does one encounter a person who is at the extreme end of any subscale? Not often, I wager, except in places where such extremes are likely to be clustered (e.g., Extraverts in acting classes, Introverts in monasteries). The cut-off at the center of each subscale is arbitrary; it simply affords a shorthand characterization of a person’s dominant traits. But anyone who takes an MBTI (or equivalent instrument) is given his scores on each of the subscales, so that he knows the strength (or weakness) of his tendencies.

Regarding other points of criticism: It is possible, of course, that a person who is familiar with MBTI tends to see in others the characteristics of their known MBTI types (i.e., confirmation bias). But has that tendency been confirmed by rigorous testing? Such testing would examine the contrary case, that is, the ability of a person to predict the type of a person whom he knows well (e.g., a co-worker or relative). The supposed vagueness of the descriptions of the 16 types arises from the complexity of human personality; but there are differences among the descriptions, just as there are differences among individuals. If only half of the persons who take the MBTI are able to guess their types before taking it, does that invalidate MBTI, or does it point to a more likely phenomenon, namely, that introspection is a personality-related trait, one that is more common among Introverts than Extraverts? A good MBTI instrument cuts through self-deception and self-flattery by asking the same set of questions in many different ways, and in ways that do not make any particular answer seem like the “right” one.

My considerable exposure to high-IQ scientists in 30 years of working with them is suggestive. Most of them seemed to exhibit the traits of INTJs and INTPs. And those who took an MBTI test were found to be INTJs and INTPs.

IQ AND POLITICS

It is hard to find clear, concise analyses of the relationship between IQ and political leanings. I offer the following in evidence that very high-IQ individuals lean strongly toward libertarian positions.
The Triple Nine Society (TNS) limits its membership to persons with IQs in the top 0.1% of the population. In an undated survey (probably conducted in 2000, given the questions about the perceived intelligence of certain presidential candidates), members of TNS gave their views on several topics (in addition to speculating about the candidates’ intelligence): subsidies, taxation, civil regulation, business regulation, health care, regulation of genetic engineering, data privacy, death penalty, and use of military force.

The results speak for themselves. Those members of TNS who took the survey clearly have strong (if not unanimous) libertarian leanings.


THE BOTTOM LINE

 If you are very intelligent — with an IQ that puts you in the top 2% of the population — you are most likely to be an INTJ, INTP, ENTJ, ENTP, or INFJ, in that order.

*     *     *
Footnotes:
* I apologize for not having documented the source of the statistics that I cite here. I dimly recall finding them on or via the website of American Mensa, but I am not certain of that. And I can no longer find the source by searching the web. I did transcribe the statistics to a spreadsheet, which I still have. So, the numbers are real, even if their source is now lost to me.

** Estimates of the distribution of MBTI types in the U.S. population are given in two tables on page 4 of “Estimated Frequencies of the Types in the United States Population,” published by the Center for Applications of Psychological Type. One table gives estimates of the distribution of the population by preference (E, I, N, S, etc.). The other table gives estimates of the distribution of the population among all 16 MBTI types. The statistics for members of Mensa were broken down by preferences, not by types; therefore I had to use the values for preferences to estimate the frequencies of the 16 types among members of Mensa. For consistency, I used the distribution of the preferences among the U.S. population to estimate the frequencies of the 16 types among the population, rather than use the frequencies provided for each type. For example, the fraction of the population that is INTJ comes to 0.029 (2.9%) when the values for I (0.507), N (0.267), T (0.402), and J (0.541) are multiplied. But the detailed table has INTJs as 2.1% of the population. In sum, there are discrepancies between the computed and given values of the 16 types in the population. The most striking discrepancy is for the INFJ type. When estimated from the frequencies of the four preferences, INFJs are 4.4% of the population; the table of values for all 16 types gives the percentage of INFJs as 1.5%.
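To make the footnote's arithmetic concrete, here is a small Python sketch that simply restates the multiplication described above, using the preference frequencies quoted there (an illustration of the method, not new data):

    # Estimate a type's frequency by multiplying its four preference frequencies,
    # which assumes the preferences are independent, as described in the footnote.
    preference_freq = {"I": 0.507, "N": 0.267, "T": 0.402, "F": 1 - 0.402, "J": 0.541}

    def type_frequency(type_code):
        freq = 1.0
        for letter in type_code:
            freq *= preference_freq[letter]
        return freq

    print(f"INTJ: {type_frequency('INTJ'):.3f}")  # ~0.029, vs. 2.1% in the per-type table
    print(f"INFJ: {type_frequency('INFJ'):.3f}")  # ~0.044, vs. 1.5% in the per-type table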
Using the distribution given for the 16 types leads to somewhat different results:
  • There is a 31% probability that an INTJ’s IQ places him in the top 2 percent of the population. Next are INFJ, at 14%; ENTJ, 13%; and INTP, 10%. (The next highest type is the ENTP at 4%.) The four types (INTJ, INFJ, ENTJ, and INTP) account for 72% of the high-IQ population but only 9% of the total population. The top five types (including ENTPs) account for 78% of the high-IQ population but only 12% of the total population.
  • Four of the five most-intelligent types are NTs, as one would expect, given the probabilities cited earlier. But, in terms of the likelihood of having a high IQ, this method moves INFJs into second place, a percentage point ahead of ENTJs.
  • In any event, the same five types dominate, and all five types have a preference for iNtuition.
  • As before, persons with the S preference generally lag their peers when it comes to IQ tests.

20110716

Activation Costs

In chemistry, activation energy is a term introduced in 1889 by the Swedish scientist Svante Arrhenius; it is defined as the minimum energy that must be overcome in order for a chemical reaction to occur.
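For reference, the chemical relationship behind the metaphor is the standard Arrhenius rate equation (my addition, not something stated in the original definition):

    k = A \, e^{-E_a / (R T)}

where k is the reaction rate, A is a constant, E_a is the activation energy, R is the gas constant, and T is the temperature. The higher the activation energy, the exponentially rarer the reaction, which is the intuition carried over below.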
In this article, I propose that:
  • Every action you take has an activation cost (perhaps zero)
  • These costs vary from person to person
  • These costs can change over time
  • Activation costs explain a lot of akrasia
After proposing that, I’d like to explore:
  • Factors that increase activation costs
  • Factors that decrease activation costs
Every action a person takes has an activation cost. The activation cost of a consistent, deeply embedded habit is zero. It happens almost automatically. The activation cost for most people in the United States to start exercising is fairly high, and most people are inconsistent about exercising. However, there are people who – every single day – begin by putting their running shoes on and running. Their activation cost to running is effectively zero.

These costs vary from person to person. In the daily running example above, the activation cost to the runner is low. The runner simply starts running in the morning. For most people, it’s higher for a variety of reasons we’ll get to in a moment. The running example is fairly obvious, but you’ll also see phenomena like a neat person saying to a sloppy one, “Why don’t you clean your desk? … just f’ing do it, man.” Assuming the messy person indeed wants to have a clean desk, then it’s likely the messy person has a higher activation cost to cleaning his desk. (He could also have less energy/willpower.)

These costs can change over time. If the every-morning-runner suffers from a prolonged illness or injury and ceases to run, restarting the program might have a much higher activation cost for a variety of reasons we’ll cover in a moment.

Finally, I’d like to propose that activation costs explain a lot of akrasia and procrastination. Akrasia is defined as “acting against one’s better judgment.” I think it’s possible that an action a person wishes to take has a higher activation cost than the energy they have available for activation at the moment. There is emerging literature on limited willpower and “ego depletion”; here’s Wikipedia on the topic:
Ego depletion refers to the idea that self-control or willpower is an exhaustible resource that can be used up. When that energy is low (rather than high), mental activity that requires self-control is impaired. In other words, using one’s self-control impairs the ability to control one’s self later on. In this sense, the idea of (limited) willpower is correct.
While this is anecdotal, I believe that starting a desired action is frequently the hardest part, and usually the part that requires the most ego/will/energy. Thus, the activation cost. Continuing in motion is not as difficult as starting – as activating.

This implies that there would be two effective ways to beat akrasia-based procrastination. The first would be to lower the activation cost; the second would be to increase energy/willpower/ego available for activation.

Both are valid approaches, but I think lowering activation costs is more sustainable. I think there are local maximums of energy that can be achieved, and it’s likely that even the most successful and industrious people will go through low energy periods. Obviously, by lowering an activation cost to zero or near zero, it becomes trivial to do the action as much as is desired.

Some people have a zero activation cost to go running, and do it every day for the benefit of their health. Some people have zero activation cost to cleaning their desk, and do it whenever they realize it’s messy. Some people have a zero activation cost to self-promote/self-market, and thus they’re frequently talking themselves up, promoting, and otherwise trying to get people to pay attention to their work. Most of us have higher activation costs to go running, clean a desk, or to market/promote something. Thus, those actions burn a lot more energy and are sometimes effectively impossible to complete.
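Pushing the Arrhenius analogy one step further, here is a toy Python sketch (entirely illustrative, with made-up numbers and no empirical claim) of why lowering the cost is a more reliable lever than raising energy:

    import math

    # Toy model: the chance of starting an action falls off exponentially as the
    # activation cost outstrips available energy. All units are invented.
    def chance_of_starting(activation_cost, available_energy):
        if activation_cost <= 0:
            return 1.0  # a fully ingrained habit just happens
        return math.exp(-activation_cost / available_energy)

    for cost, energy in [(5, 2), (5, 4), (2, 2), (0.5, 2)]:
        print(f"cost={cost}, energy={energy}: {chance_of_starting(cost, energy):.2f}")
    # Doubling available energy helps, but cutting the cost toward zero helps more,
    # and keeps helping even on low-energy days.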

The following factors seem to increase activation cost (not a complete list):
  • Ugh fields
  • Trivial inconveniences and barriers
  • Compartmentalization
  • Your identity being tangled up with the action
  • Poorly defined next steps
The following factors seem to decrease activation cost (not a complete list):
  • Deadline urgency
  • Constraints (and thus, lack of opportunity cost)
  • Momentum
  • Grouping/batching tasks together
  • Structured Procrastination
  • Very clear, straightforward instructions
  • Long term habits
  • Cached-self effects
  • Feeling like something is a game
Another way to go anti-akrasia is to increase energy levels through good diet, exercise, mental health, breathing, collaboration, good work environment, nature, and adequate rest and relaxation. Some of these might lower activation costs in addition to increasing energy.

I believe the most effective way to do activities you want to do is to decrease their activation cost to as close to zero as possible. This implies you should defeat ugh fields, reduce trivial inconveniences and barriers, de-compartmentalize (and get something to protect), untangle your identity from the action you’re taking, and find as clear instructions as possible. Also, deadlines, constraints, momentum, grouping and batching tasks, structured procrastination, clear instructions, establishing habits, setting up helpful cached-self effects and reducing negative ones, and treating activities to be done as a game all seem to be of value.

I would be excited for more discussion on this topic. I believe activation costs are a large part of what causes akrasia-based procrastination, and reducing activation costs will help us get what we want.

The Cognitive Cost of Doing Things

There's no such thing as a free lunch, and that goes for your brain, too. Every time you amass the willpower to do anything, it has mental costs. Writer and strategist Sebastian Marshall identifies a few of those cognitive costs to understand how to get more done while conserving as much of your mental reserve as possible.
 
What's the mental burden of trying to do something? What's it cost? What price are you going to pay if you try to do something out in the world?

I think that by figuring out what the usual costs to doing things are, we can reduce the costs and otherwise structure our lives so that it's easier to reach our goals.

When I sat down to identify cognitive costs, I found seven. There might be more. Let's get started:
Activation Energy – As covered in more detail in this post, starting an activity seems to take a larger amount of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time – making something into a routine lowers the activation energy to do it. Things like having poorly defined next steps increase the activation energy required to get started. This is a major hurdle for a lot of people in a lot of disciplines – just getting started.

Opportunity cost – We're all familiar with general opportunity cost. When you're doing one thing, you're not doing something else. You have limited time. But there also seems to be a cognitive cost to this – a natural second-guessing of choices by taking one path and not another. This is the sort of thing covered by Barry Schwartz in his Paradox of Choice work (there's some faulty thought/omissions in PoC, but it's overall valuable). It's also why basically every significant military work ever has said you don't want to put the enemy in a position where their only way out is through you – Sun Tzu argued for always leaving a way for the enemy to escape, which splits their focus and options. Hernan Cortes famously burned the boats behind him. When you're doing something, your mind is subtly aware and bothered by the other things you're not doing. This is a significant cost.

Inertia – Eliezer Yudkowsky wrote that humans are "Adaptation-Executers, not Fitness-Maximizers." He was speaking in terms of large scale evolution, but this is also true of our day to day affairs. Whatever personal adaptations and routines we've gotten into, we tend to perpetuate. Usually people do not break these routines unless a drastic event happens. Very few people self-scrutinize and do drastic things without an external event happening.

The difference between activation energy and inertia is that you can want to do something, but be having a hard time getting started – that's activation energy. Whereas inertia suggests you'll keep doing what you've been doing, and largely turn your mind off. Breaking out of inertia takes serious energy and tends to make people uncomfortable. They usually only do it if something else makes them more uncomfortable (or, very rarely, when they get incredibly inspired).

Ego/willpower depletion – The Wikipedia article on ego depletion is pretty good. Basically, a lot of recent research shows that by doing something that takes significant willpower your "battery" of willpower gets drained some, and it becomes harder to do other high-will-required tasks. From Wikipedia: " In an illustrative experiment on ego depletion, participants who controlled themselves by trying not to laugh while watching a comedian did worse on a later task that required self-control compared to participants who did not have to control their laughter while watching the video." I'd strongly recommend you do some reading on this topic if you haven't – Roy Baumeister has written some excellent papers on it. The pattern holds pretty firm – when someone resists, say, eating a snack they want, it makes it harder for them to focus and persist doing rote work later.

Neurosis/fear/etc – Almost all humans are naturally more risk averse than gain-inclined. This seems to have been selected for evolutionarily. We also tend to become afraid far in excess of what we should for certain kinds of activities – especially ones that risk social embarrassment.

I never realized how strong these forces were until I tried to break free of them – whenever I got a strong negative reaction from someone to my writing, it made it considerably harder to write pieces that I thought would be popular later. Basic things like writing titles that would make a post spread, or polishing the first paragraph and last sentence – it's like my mind was weighing on the "con" side of pro/con that it would generate criticism, and it was… frightening's not quite the right word, but something like that.

Some tasks can be legitimately said to be "neurosis-inducing" – that means, you start getting more neurotic when you ponder and start doing them. Things that are almost guaranteed to generate criticism or risk rejection frequently do this. Anything that risks compromising a person's self image can be neurosis inducing too.

Altering of hormonal balance – A far too frequently ignored cost. A lot of activities will change your hormonal balance for the better or worse. Entering into conflict-like situations can and does increase adrenalin and cortisol and other stress hormones. Then you face adrenalin withdrawal and crash later. Of course, we basically are biochemistry, so significant changing of hormonal balance affects a lot of our body – immune system, respiration, digestion, etc. A lot of people are aware of this kind of peripherally, but there hasn't been much discussion about the hormonal-altering costs of a lot of activities.

Maintenance costs from the idea re-emerging in your thoughts – Another under-appreciated cognitive cost is maintenance costs in your thoughts from an idea recurring, especially when the full cycle isn't complete. In Getting Things Done, David Allen talks about how "open loops" are "anything that's not where it's supposed to be." These re-emerge in our thoughts periodically, often at inopportune times, consuming thought and energy. That's fine if the topic is exceedingly pleasant, but if it's not, it can wear you out. Completing an activity seems to reduce the maintenance cost (though not completely). An example would be not having filled your taxes out yet – it emerges in your thoughts at random times, derailing other thought. And it's usually not pleasant.

Taking on any project, initiative, business, or change can generate these maintenance costs from thoughts re-emerging.

Conclusion

I identified these seven as the mental/cognitive costs of trying to do something:
  • Activation Energy
  • Opportunity cost
  • Inertia
  • Ego/willpower depletion
  • Neurosis/fear/etc
  • Altering of hormonal balance
  • Maintenance costs from the idea re-emerging in your thoughts
I think we can reduce some of these costs by planning our tasks, work lives, social lives, and environment intelligently. For others, it's good simply to be aware of them, so we know why we start to drag or have a hard time.

Thoughts on other costs, or ways to reduce these are very welcome.

20110712

What is the Monkeysphere?

"One death is a tragedy. One million deaths is a statistic."
-Kevin Federline

What do monkeys have to do with war, oppression, crime, racism and even e-mail spam? You'll see that all of the random ass-headed cruelty of the world will suddenly make perfect sense once we go Inside the Monkeysphere.

"What the Hell is the Monkeysphere?"

 First, picture a monkey. A monkey dressed like a little pirate, if that helps you. We'll call him Slappy.

Imagine you have Slappy as a pet. Imagine a personality for him. Maybe you and he have little pirate monkey adventures and maybe even join up to fight crime. Think how sad you'd be if Slappy died.

Now, imagine you get four more monkeys. We'll call them Tito, Bubbles, Marcel and ShitTosser. Imagine personalities for each of them now. Maybe one is aggressive, one is affectionate, one is quiet, the other just throws shit all the time. But they're all your personal monkey friends.

Now imagine a hundred monkeys.

Not so easy now, is it? So how many monkeys would you have to own before you couldn't remember their names? At what point, in your mind, do your beloved pets become just a faceless sea of monkey? Even though each one is every bit the monkey Slappy was, there's a certain point where you will no longer really care if one of them dies.

So how many monkeys would it take before you stopped caring? That's not a rhetorical question. We actually know the number.

"So this whole thing is your crusade against monkey overpopulation? I'll have my monkey castrated this very day!"

Uh, no. It'll become clear in a moment.

You see, monkey experts performed a monkey study a while back, and discovered that the size of the monkey's monkey brain determined the size of the monkey groups the monkeys formed. The bigger the brain, the bigger the little societies they built.

They cut up so many monkey brains, in fact, that they found they could actually take a brain they had never seen before and from it they could accurately predict what size tribes that species of creature formed.

Most monkeys operate in troupes of 50 or so. But somebody slipped them a slightly larger brain and they estimated the ideal group or society for this particular animal was about 150.

That brain, of course, was human. Probably from a homeless man they snatched off the streets.

"So that's the big news? That humans are God's big-budget sequel to the monkey? Who didn't know that?"

It goes much, much deeper than that. Let's try an example.

Famous news talking guy Tim Russert tells a charming story about his father, in his book Big Russ and Me (the title referring to his on-and-off romance with actor Russell Crowe). Russert's dad used to take half an hour to carefully box up any broken glass before taking it to the trash. Why? Because "The trash guy might cut his hands."

That this was such an unusual thing to do illustrates my monkey point. None of us spend much time worrying about the garbage man's welfare even though he performs a crucial role in not forcing us to live in a cave carved from a mountain of our own filth. We don't usually consider his safety or comfort at all and if we do, it's not in the same way we would worry over our best friend or wife or girlfriend or even our dog.

People toss half-full bottles of drain cleaner right into the barrel, without a second thought of what would happen if the trash man got it splattered into his eyes. Why? Because the trash guy exists outside the Monkeysphere.
"There's that word again..."

The Monkeysphere is the group of people who each of us, using our monkeyish brains, are able to conceptualize as people. If the monkey scientists are monkey right, it's physically impossible for this to be a number much larger than 150.

Most of us do not have room in our Monkeysphere for our friendly neighborhood sanitation worker. So, we don't think of him as a person. We think of him as The Thing That Makes The Trash Go Away.

And even if you happen to know and like your particular garbage man, at one point or another we all have limits to our sphere of monkey concern. It's the way our brains are built. We each have a certain circle of people who we think of as people, usually our own friends and family and neighbors, and then maybe some classmates or coworkers or church or suicide cult.

Those who exist outside that core group of a few dozen people are not people to us. They're sort of one-dimensional bit characters.

Remember the first time, as a kid, you met one of your school teachers outside the classroom? Maybe you saw old Miss Puckerson at Taco Bell eating refried beans through a straw, or saw your principal walking out of a dildo shop. Do you remember that surreal feeling you had when you saw these people actually had lives outside the classroom?

I mean, they're not people. They're teachers.

"So? What difference does all this make?"

 Oh, not much. It's just the one single reason society doesn't work.

It's like this: which would upset you more, your best friend dying, or a dozen kids across town getting killed because their bus collided with a truck hauling killer bees? Which would hit you harder, your Mom dying, or seeing on the news that 15,000 people died in an earthquake in Iran?

They're all humans and they are all equally dead. But the closer to our Monkeysphere they are, the more it means to us. Just as your death won't mean anything to the Chinese or, for that matter, hardly anyone else more than 100 feet or so from where you're sitting right now.

"Why should I feel bad for them? I don't even know those people!"

Exactly. This is so ingrained that to even suggest you should feel their deaths as deeply as that of your best friend sounds a little ridiculous. We are hard-wired to have a drastic double standard for the people inside our Monkeysphere versus the 99.999% of the world's population who are on the outside.

Think about this the next time you get really pissed off in traffic, when you start throwing finger gestures and wedging your head out of the window to scream, "LEARN TO FUCKING DRIVE, FUCKER!!" Try to imagine acting like that in a smaller group. Like if you're standing in an elevator with two friends and a coworker, and the friend goes to hit a button and accidentally punches the wrong one. Would you lean over, your mouth two inches from her ear, and scream "LEARN TO OPERATE THE FUCKING ELEVATOR BUTTONS, SHITCAMEL!!"

They'd think you'd gone insane. We all go a little insane, though, when we get in a group larger than the Monkeysphere. That's why you get that weird feeling of anonymous invincibility when you're sitting in a large crowd, screaming curses at a football player you'd never dare say to his face.

"Well, I'm nice to strangers. Have you considered that maybe you're just an asshole?"

Sure, you probably don't go out of your way to be mean to strangers. You don't go out of your way to be mean to stray dogs, either.

The problem is that eventually, the needs of you or those within your Monkeysphere will require screwing someone outside it (even if that need is just venting some tension and anger via exaggerated insults). This is why most of us wouldn't dream of stealing money from the pocket of the old lady next door, but don't mind stealing cable, adding a shady exemption on our tax return, or quietly celebrating when they forget to charge us for something at the restaurant.

You may have a list of rationalizations long enough to circle the Earth, but the truth is that in our monkey brains the old woman next door is a human being while the cable company is a big, cold, faceless machine. That the company is, in reality, nothing but a group of people every bit as human as the old lady, or that some kind old ladies actually work there and would lose their jobs if enough cable were stolen, rarely occurs to us.

That's one of the ingenious things about the big-time religions, by the way. The old religious writers knew it was easier to put the screws to a stranger, so they taught us to get a personal idea of a God in our heads who says, "No matter who you hurt, you're really hurting me. Also, I can crush you like a grape." You must admit that if they weren't writing words inspired by the Almighty, they at least understood the Monkeysphere.

It's everywhere. Once you grasp the concept, you can see examples all around you. You'll walk the streets in a daze, like Roddy Piper after putting on his X-ray sunglasses in They Live.

But wait, because this gets much bigger and much, much stranger...

"So you're going to tell us that this Monkeysphere thing runs the whole world? Also, They Live sucked."

Go flip on the radio. Listen to the conservative talk about "The Government" as if it were some huge, lurking dragon ready to eat you and your paycheck whole. Never mind that the government is made up of people and that all of that money they take goes into the pockets of human beings. Talk radio's Rush Limbaugh is known to tip 50% at restaurants, but flies into a broadcast tirade if even half that dollar amount is deducted from his paycheck by "The Government." That's despite the fact that the money helps that very same single mom he had no problem tipping in her capacity as a waitress.

Now click over to a liberal show and listen to them describe "Multinational Corporations" in the same diabolical terms, an evil black force that belches smoke and poisons water and enslaves humanity. Isn't it strange how, say, a lone man who carves and sells children's toys in his basement is a sweetheart who just loves bringing joy at Christmas, but a big-time toy corporation (which brings toys to millions of kids at Christmas) is an inhuman soul-grinding greed machine? Strangely enough, if the kindly lone toy making guy made enough toys and hired enough people and expanded to enough shops, we'd eventually stop seeing it as a toy-making shop and start seeing it as the fiery Orc factories of Mordor.

And if you've just thought, "Well, those talk show hosts are just a bunch of egomaniacal blowhards anyway," you've just done it again, turned real humans into two-word cartoon characters. It's no surprise, you do it with pretty much all six billion human beings outside the Monkeysphere.

"So I'm supposed to suddenly start worrying about six billion strangers? That's not even possible!"

That's right, it isn't possible. That's the point.

What is hard to understand is that it's also impossible for them to care about you.

That's why they don't mind stealing your stereo or vandalizing your house or cutting your wages or raising your taxes or bombing your office building or choking your computer with spam advertising diet and penis drugs they know don't work. You're outside their Monkeysphere. In their mind, you're just a vague shape with a pocket full of money for the taking.

Think of Osama Bin Laden. Did you just picture a camouflaged man hiding in a cave, drawing up suicide missions? Or are you thinking of a man who gets hungry and has a favorite food and who had a childhood crush on a girl and who has athlete's foot and chronic headaches and wakes up in the morning with a boner and loves volleyball?

Something in you, just now, was probably offended by that. You think there's an effort to build sympathy for the murderous fuck. Isn't it strange how simply knowing random human facts about him immediately tugs at your sympathy strings? He comes closer to your Monkeysphere; he takes on dimension.

Now, the cold truth is this: Bin Laden is just as desperately in need of a bullet to the skull as the raving four-color caricature on some redneck's T-shirt. The key to understanding people like him, though, is realizing that we are the caricature on his T-shirt.

"So you're using monkeys to claim that we're all a bunch of Osama Bin Ladens?"

Sort of.

Listen to any 16-year-old kid with his first job, going on and on about how the boss is screwing him and the government is screwing him even more ("What's FICA?!?!" he screams as he looks at his first paycheck).

Then watch that same kid at work as he drops a hamburger patty on the floor, picks it up, slaps it on a bun, and serves it to a customer.

In that one dropped burger he has everything he needs to understand those black-hearted politicians and corporate bosses. They see him in the exact same way he sees the customers lined up at the burger counter. Which is, just barely.

In both cases, for the guy making the burger and the guy running Exxon, getting through the workweek and collecting the paycheck are all that matters. No thought is given to the real human unhappiness spread by doing the job shittily (ever gotten so sick from food poisoning you thought your stomach lining was going to fly out of your mouth?). That many customers or employees just can't fit inside the Monkeysphere.

The kid will protest that he shouldn't have to care for the customers for minimum wage, but the truth is if a man doesn't feel sympathy for his fellow man at $6.00 an hour, he won't feel anything more at $600,000 a year.

Or, to look at it the other way, if we're allowed to be indifferent and even resentful toward the masses for $6.00 an hour, just think of how angry some Pakistani man is allowed to be when he's making the equivalent of six dollars a week.

"You've used the word 'monkey' more than 50 times, but the same principle hardly applies. Humans have been to the moon. Let's see the monkeys do that."

It doesn't matter. It's just an issue of degree.

There's a reason why legendary monkeytician Charles Darwin and his assistant, Jeje (pronounced "heyhey") Santiago deduced that humans and chimps were evolutionary cousins. As sophisticated as we are (compare our advanced sewage treatment plants to the chimps' primitive technique of hurling the feces with their bare hands), the inescapable truth is we are just as limited by our mental hardware.

The primary difference is that monkeys are happy to stay in small groups and rarely interact with others outside their monkey gang. This is why they rarely go to war, though when they do it is widely thought to be hilarious. Humans, however, require cars and oil and quality manufactured goods by the fine folks at 3M and Japanese video games and worldwide internets and, most importantly, governments. All of these things take groups larger than 150 people to maintain effectively. Thus, we routinely find ourselves functioning in bunches larger than our primate brains are able to cope with.

This is where the problems begin. Like a fragile naked human pyramid, we are simultaneously supporting and resenting each other. We bitch out loud about our soul-sucking job as an anonymous face on an assembly line, while at the exact same time riding in a car that only an assembly line could have produced. It's a constant contradiction that has left us pissed off and joining informal wrestling clubs in basements.

This is why I think it was with a great burden of sadness that Darwin turned to his assistant and lamented, "Jeje, we're the monkeys."

"Oh, no you didn't."

If you think about it, our entire society has evolved around the limitations of the Monkeysphere. There is a reason why all of the really phat-ass nations with the biggest SUVs and the shiniest 22-inch rims have some kind of representative democracy (where you vote for people to do the governing for you) and all of them are, to some degree, capitalist (where people actually get to buy property and keep some of what they earn).

A representative democracy allows a small group of people to make all of the decisions, while letting us common people feel like we're doing something by going to a polling place every couple of years and pulling a lever that, in reality, has about the same effect as the darkness knob on your toaster. We can simultaneously feel like we're in charge while being contained enough that we can't cause any real monkey mayhem once we fly into one of our screeching, arm-flapping monkey frenzies ("A woman showed her boob at the Super Bowl! We want a boob and football ban immediately!")

Conversely, some people in the distant past naively thought they could sit all of the millions of monkeys down and say, "Okay, everybody go pick the bananas, then bring them here, and we'll distribute them with a complex formula determining banana need! Now go gather bananas for the good of society!" For the monkeys it was a confused, comical, tree-humping disaster.

Later, a far more realistic man sat the monkeys down and said, "You want bananas? Each of you go get your own. I'm taking a nap." That man, of course, was German philosopher Hans Capitalism.

As long as everybody gets their own bananas and shares with the few in their Monkeysphere, the system will thrive even though nobody is even trying to make the system thrive. This is perhaps how Ayn Rand would have put it, had she not been such a hateful bitch.

Then, some time in the Third Century, French philosopher Pierre "Frenchy" LaFrench invented racism.

This was a way of simplifying the too-complex-for-monkeys world by imagining all people of a certain race as being the same person, thinking they all have the same attitudes and mannerisms and tastes in food and clothes and music. It sort of works, as long as we think of that person as being a good person ("Those Asians are so hard-working and precise and well-mannered!") but when we start seeing them as being one, giant, gaping asshole (the French, ironically) our monkey happiness again breaks down.

It's not all the French's fault. The truth is, all of these monkey management schemes only go so far. For instance, today one in four Americans has some kind of mental illness, usually depression. One in four. Watch a basketball game. The odds are at least two of those people on the floor are mentally ill. Look around your house; if everybody else there seems okay, it's you.

Is it any surprise? You turn on the news and see a whole special on the Obesity Epidemic. You've had this worry laid on your shoulders about millions of other people eating too much. What exactly are you supposed to do about the eating habits of 80 million people you don't even know? You've taken on the pork-laden burden of all these people outside the Monkeysphere and you now carry that useless weight of worry like, you know, some kind of animal on your back.

"So what exactly are we supposed to do about all this?"

First, train yourself to get suspicious every time you see simplicity. Any claim that the root of a problem is simple should be treated the same as a claim that the root of a problem is Bigfoot. Simplicity and Bigfoot are found in the real world with about the same frequency.

So reject binary thinking of "good vs. bad" or "us vs. them." Know that problems cannot be solved with clever slogans and over-simplified step-by-step programs.

You can do that by following these simple steps. We like to call this plan the T.R.Y. plan:

First, TOTAL MORON. That is, accept the fact THAT YOU ARE ONE. We all are.

That really annoying person you know, the one who's always spouting bullshit, the person who always thinks they're right? Well, the odds are that for somebody else, you're that person. So take the amount you think you know, reduce it by 99.999%, and then you'll have an idea of how much you actually know regarding things outside your Monkeysphere.

Second, UNDERSTAND that there are no Supermonkeys. Just monkeys. Those guys on TV you see, giving the inspirational seminars, teaching you how to reach your potential and become rich and successful like them? You know how they made their money? By giving seminars. For the most part, the only thing they do well is convince others they do everything well.

No, the universal moron principle established in No. 1 above applies here, too. Don't pretend politicians are somehow supposed to be immune to all the backhanded fuckery we all do in our daily lives, and don't laugh and point when the preacher gets caught on video snorting cocaine off a prostitute's ass. A good exercise is to picture your hero -- whoever it is -- passed out on his lawn, naked from the waist down. The odds are it's happened at some point. Even Gandhi may have had hotel rooms and dead hookers in his past.

And don't even think about ignoring advice from a moral teacher just because the source enjoys the ol' Colombian Nose Candy from time to time. We're all members of varying species of hypocrite (or did you tell them at the job interview that you once called in sick to spend a day leveling up on World of Warcraft?). Don't use your heroes' vices as an excuse to let yours run wild.

And finally, DON'T LET ANYBODY simplify it for you. The world cannot be made simple. Anyone who tries to paint a picture of the world in basic comic book colors is most likely trying to use you as a pawn.

So just remember: T-R-Y. Go forth and do likewise, gents. Copies of our book are available in the lobby.