Cognition Don't Cut It No More: The Ideopsychology of Ideology

§1: Cognitive models of understanding political phenomena are popular.

§1.1: Definition: Political phenomena = decision making or decision influencing (A: voting; B: signalling positions; C: signalling values; D: persuading; E: debating; F: conversing, etc. The last three amount to formal variations on the first three in that getting someone other than yourself to vote, signal, or value in the same way as yourself has a possible feedback effect: the person you've convinced goes on to get someone other than themselves to vote, signal, or value the same. ABC=DEF=ABC=DE...).

§1.2: Definition: Popular = more easily found when consuming media, more talked about or utilized in institutions and apparatuses of meaningful and significant social influence (high culture) and in folk-populations (low culture). That is, a mixture of clout, social capital, prestige, etc., coupled with availability and ease of consumption. Supply and demand, though the producer demands that the consumer consume their supply; the consumer has not demanded this supply directly, i.e., a r t i f i c i a l l y  d e m a n d e d  s u p p l y (i.e. Oedipalized desire - when daddy says 'well, this is what you really want, isn't it,' i.e. externally legitimized-mediated 'knowledge').

(We will define cognitive model shortly).

§1.3: Evidence: From high culture influence machines such as Harvard and Yale, to low culture second-order influence-recycler junk websites for 'regular' folk, cognitive models are pushed as being the best - i.e. most scientific - contemporary method for understanding political phenomena.

The main concept employed is 'cognitive dissonance,' which originates in psychologist Leon Festinger's 1957 thesis and has gone virtually unchallenged over the last 60 years. Let that sink in - imagine a scientific concept being used daily yet remaining unchallenged for six decades (i d e o l o g y -or- s c r i p t u r e ?). It's either flawless, or untouchable in the same way Whitey Bulger was.

This is of course consistent with the fact that the holy trinity of academia from which the notion of common sense is legitimated (or de-legitimated) - ideopsychology, ideosociology, and philosophistry (the last to a lesser extent) - took an extreme cognitive turn in the late 50s and early 60s, one which has left us with a mental health system crushed by Cognitive Behavioral Therapy, a sociopolitical philosophical paradigm known as Cognitive Capitalism (and critical diagnoses that reify this paradigm), as well as concepts such as Fredric Jameson's 'cognitive mapping.'

§2: My Position: In addition to simply misconstruing the nature of the human mind, cognitive models fail to grasp the emotional and aesthetic complexity of choice in a postmodern, hypercapitalist landscape.

§2.1: Definition: Cognitive model: Under a cognitive model, things like 'facts' and 'reality,' along with their counterparts 'accuracy' and 'articulation,' are tied up with notions such as 'information' and, ultimately, other archaisms such as 'logic' and 'reason.' Of course, these are not 'bad' or archaic things per se. To disavow them outright would be psychotic; however, the emphasized importance of and heavy reliance on these concepts within the cognitive model are ideological signs that point us to the kind of basic assumptions Rorty would associate with the 'metaphysics of the mirror,' i.e. the old idea that a better position or thought is one that aligns clearly and accurately with a relatively fixed outside world. The internal should match the external so as to be reproducible and communicable to others. Rorty writes:

"The picture which holds traditional philosophy captive is  that of the mind as a great mirror, containing various representations -some  accurate,  some  not...Without the notion of the mind as mirror, the notion of knowledge as accuracy of representation would not have suggested itself.  Without this  latter  notion, the strategy common to  Descartes and Kant-getting more accurate representations  by  inspecting,  repairing,  and polishing the  mirror, so  to  speak-would not have made sense..." (Rorty pg. 12).

The implications of this immense philosophical legacy (naturalism, enlightenment, whatever) upon cognitive science - i.e. positivism - should be self-explanatory.

But we're not about to beat a dead horse...well, maybe a few blows couldn't hurt?

§2.2: Cognitive Dissonance: Let's discuss an article on a concept derived from the cognitive model: cognitive dissonance. Take How to Convince Someone When Facts Fail, published in the reputable and popular magazine Scientific American. Author Michael Shermer (who identifies as a libertarian, though this seems to simply mean maintaining a relatively leftist view with a few conservative-leaning positions sprinkled in here and there) writes:

"...when you present people with facts that are contrary to their deepest held beliefs...people seem to double down on their beliefs in the teeth of overwhelming evidence against them...Creationists...Antivaxxers...The 9/11 truthers...Climate deniers...Obama birthers...In these examples, proponents' deepest held worldviews were perceived to be threatened by skeptics, making facts the enemy...This power of belief over evidence is the result of...cognitive dissonance... or the uncomfortable tension that comes from holding two conflicting thoughts simultaneously...corrections actually increase misperceptions...because it threatens their worldview...For example, subjects were given fake newspaper articles that confirmed widespread misconceptions, such as that there were weapons of mass destruction in Iraq. When subjects were then given a corrective article that WMD were never found, liberals who opposed the war accepted the new article and rejected the old, whereas conservatives who supported the war did the opposite... and more: they reported being even more convinced there were WMD after the correction, arguing that this only proved that Saddam Hussein hid or destroyed them...If corrective facts only make matters worse, what can we do to convince people of the error of their beliefs? ...1 keep emotions out of the exchange, 2 discuss, don't attack...3 listen carefully and try to articulate the other position accurately, 4 show respect, 5 acknowledge that you understand why someone might hold that opinion, and 6 try to show how changing facts does not necessarily mean changing worldviews. These strategies may not always work to change people's minds, but now that the nation has just been put through a political fact-check wringer, they may help reduce unnecessary divisiveness."

What are our takeaways? Judging by his oeuvre, Mr. Shermer is clearly someone who needs to convince others to believe certain things and disbelieve certain other things. Mr. Shermer clearly wants the reader - let's name him John - to believe that nutty conservatives are wrong and liberals are right. This is not too controversial. Who really sympathizes with anti-vaxxers anyways? More important than the content is the structure, as Mr. Shermer also clearly wants John to be able to convince others to believe what John believes, which is ultimately what Mr. Shermer believes.

Let's go line by line (L) now and summarize while also picking out memorable words and phrases:
L1: fact and belief are opposed to one another.
L2: evidence is associated with fact and again opposed to belief.
L3-4: fact / evidence and belief / worldview are opposed to one another once again, which = cognitive dissonance.
L10: "corrective facts" is used, then followed up with a doozy of a question: "what can we do to convince people of the error of their beliefs?"
L11: Shermer then plays therapist in giving us a diagnosis and prescription we didn't ask for (he does this worse in another article, where he relies on folk sayings, Buddhism, and pop psychology to support the disavowal of feelings and the triumph of reason). Doc Sherm says don't be emotional, discuss, listen, accurately articulate, be respectful and understanding, etc.

Isn't the neurotic need to convince others and its counterpart - convincing others that they should be convincing others - all a bit weird and creepy, never mind arrogant and misguided?

§2.3 Facts and Beliefs Diagrammed With a Hint of Kant: Here's some hard facts: Facts and beliefs aren't opposed - they aren't an exclusive disjunction (either/or), but merely disjunctive-conjunctive (and/or). One can believe a fact or not believe a fact. One can believe one fact and not believe another fact. One can believe a fact and act on it or not act on it. One can use a fact to ground a belief or believe out of faith. Beliefs can inform facts; facts can inform beliefs; beliefs can be facts; facts can be believed, etc. (and this is far from any neoliberal subjective relativism...). Belief and fact are synthetic categories of thought that at times mutually influence each other, while at other times remaining distinct but not opposed. This is the genius of Freud's discovery of the unconscious, not to mention the general nature of the prefix 'un.' As Kantian psychoanalysts like Ogden and Bollas point out, there are known knowns, unknown unknowns, unknown knowns, and known unknowns (and if you associate this with Zizek's analysis of Rumsfeld, which is excellent I might add, it's because he comes to a similar conclusion in 2008 via Kant and Lacan as Bollas does in 1987 via more clinically oriented but still Kantian psychoanalysis. Zizek, possibly due to his narrow Lacanian-Marxian philosophical and therefore almost anti-clinical approach to psychoanalysis, was either not aware of Bollas, or, due to Zizek's laissez-faire approach to acknowledging others' ideas, has ignored him and thus pseudo-plagiarized by omission. But who cares anyways). The importance of this is that the 'un' as in unconscious, uncanny, or unknown relates to the psychoanalytical concept of disavowal: knowing something without letting it affect your course of thought or action - i.e., an unconscious fact or belief. A common example is 'undead,' a creature which is neither dead nor alive, but animated dead, i.e. the living dead. 'Un' is "just outside a given category with whose members it shares a salient function...or to a peripheral member of a given category..." (McCawley 2005 pg. 330-331). In other words, it is a lesser version of a category, or a version that shares enough qualities of a category but is not quite that thing.
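
To put the and/or point in the bluntest possible terms, here is a toy sketch - entirely ours, purely illustrative, nothing from Shermer - of the two independent axes (fact/non-fact, believed/unbelieved) that the either/or opposition tries to collapse:

```python
# Toy rendering of the claim above: 'is a fact' and 'is believed' are
# independent axes (echoing the Bollas/Rumsfeld 2x2), not an exclusive
# disjunction. Purely illustrative.
from itertools import product

for is_fact, is_believed in product([True, False], repeat=2):
    # XOR is what the 'facts vs. beliefs' opposition would permit: one or
    # the other, never both, never neither. It rules out half the grid.
    allowed_under_either_or = is_fact != is_believed
    print(f"fact={is_fact!s:5} believed={is_believed!s:5} | "
          f"either/or permits it: {allowed_under_either_or} | and/or permits it: True")
```

Notice that one of the rows the either/or model forbids is the most ordinary case of all: a fact that is also believed.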

Facts and beliefs, along with the 'un' prefix, speak to different kinds of relations inside and between different kinds of categories.

Fact = a triangular relation between internal, external, and other (a third party who shares the same or similar relation between internal and external and thereby feeds back the relation, giving it ontological depth through mimicry). This lines up well with basic entry-level Lacanian theory - the signifier or mediation (castration and the oedipal triangle [un = disavowal or failure to achieve a symbolic sense of sexual difference], the other who reflects back or is deposited into) is required to interact with the world in a way in which fact and non-fact can be meaningfully distinguished (see below).


Belief = a curvilinear relation between internal and internal, with the soft bending part of the relation's vector brushing against a minimal external limit but in a way that does not necessarily require a third party's validation (at least not consciously - this is, in old Freudian terms, a sort of libidinal cathexis to a projected partial object which is imagined and treated as the real thing, i.e., closer to primary process or psychosis, or schizocybernetics [see below]).

This brings us to our next point: It's a transcendental error (Kant-daddy) to treat belief and fact like anything other than synthetic categories of understanding. Phrases like "corrective facts" utilized in response to an other's belief miss the mark completely; it's like saying 'this apple doesn't taste square enough' or 'this circle doesn't smell loud enough.' On a less abstract level - and this is something people do all the time - it's like going to a horror movie and then critiquing it for not having had enough romantic comedy bits. It's a categorical error.

A belief is neither incorrect nor correct, but believed or unbelieved. Likewise, when Shermer writes "what can we do to convince people of the error of their beliefs?" he asks a question that makes the same transcendental error - beliefs cannot be erroneous, only believed or not believed. Beliefs can be constructive or destructive, they can be immoral or moral (insofar as this has to do with human value systems per Nietzsche), but they can't be correct or incorrect, erroneous or nonerroneous, etc. Can one even think of an incorrect belief? An erroneous belief? An error emerges when an internal motivation - a belief - (a real virtual) is actualized into the real, the real being an intersubjective system of values and affects assembled part and parcel with force and power, both corporeal and incorporeal. The error occurs when I try to actualize the belief in a milieu (plane of consistency) that does not support the actualization, or, as they might say in computer lingo, when an operating system does not support a certain program. Error and factual discrepancy emerge on the level of execution, in a universe with others and rules set up between others and categorical limits of objects. In nonjargon: If I believe I can fly and I jump out of a window, sure, it has been a belief with a destructive outcome for my body and mind, but it was not erroneous. It causes my friends and family sadness, and the EMTs frustration, which prompts the general proposition 'this is an incorrect belief,' which is nothing more than a little moralism. A belief is a brief intensity. I believe that if I press the red button a door will open. I pressed the red button. The door did not open. My belief was my belief and it was, in the moment, not erroneous or factually incorrect. Now I need a new belief. Or more buttons. This is all just a diluted way of making the old Freudian - and then Althusserian or Zizekian - point that we grasp the world through and with belief (ideology). Belief is part of the fabric of the real, not something separate from it. We may believe a fact and act on it, or unbelieve a fact and disavow it.
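
Since we've already reached for computer lingo, here is a minimal sketch of that execution-level point - error lives in the encounter between belief and milieu, not in the belief itself. Every name here is ours, invented for illustration; there is no real 'milieu' API:

```python
# A belief modeled as a program; the milieu as the runtime that may or may
# not support it. Illustrative toy only - all names are invented for this sketch.

class Milieu:
    """A plane of consistency: the intersubjective rules a belief runs against."""
    def __init__(self, supported_actions):
        self.supported_actions = set(supported_actions)

    def actualize(self, belief):
        # The belief carries no truth value of its own here; it is simply held.
        # 'Error' only emerges at execution, in this milieu, under these rules.
        if belief.action in self.supported_actions:
            return "actualized"
        return "execution error (not an 'incorrect belief')"

class Belief:
    def __init__(self, action):
        self.action = action  # what the belief moves me to do
        self.held = True      # believed or unbelieved: the only 'values' it takes

# I believe that if I press the red button a door will open.
door_world = Milieu(supported_actions={"press_green_button"})
red_button_belief = Belief(action="press_red_button")

print(door_world.actualize(red_button_belief))  # the run fails; the belief was still believed
```

Nothing in the sketch evaluates the belief itself; only the execution fails.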

Ok, things are getting a little too dense, even stupid. I know I said we would not beat a dead horse...only a few blows...but we need to visit one more popular cognitive model as an example - the Overton window.

§2.4: Closing the Overton Window: As one author writes, it's "A once-obscure poli-sci concept" developed by free-market libertarian think tankers which was "scantily" yet exclusively employed by the right (to the extent that it was dismissed as rightwing rhetoric) and which is just now "having its moment in the sun" with the leftist critiques of Trump - a popularization which has left Overton window discourse so overcrowded that it is hard to tell what is simply white noise and what is signal (a problem we will return to...).

As (polemical) 'proof' of both the concept's shortcoming and popularity, take this short Vox video, How Trump Makes Extreme Things Look Normal, centered on the Overton window. Considering it's a Vox video, it's nearly 'un'watchable, so I will assume the reader did not watch it - or watched it and just didn't retain anything... The Overton window is a political science concept referring to the window of normal ideas the public is willing to accept..."everything outside is radical...unthinkable." And the theory goes - as the video argues - that to move public opinion, you start extreme in one direction, say the right, introduce your ideas and discourses, and then move less extreme but still extreme (a little less right). The public is willing to accept an extreme idea because it seems less extreme when compared to the VERY extreme idea (a toy version of this anchoring mechanic is sketched below). The argument of the almost-8-minute video is that Trump is so crazy he makes crazy conservatives fall into a position of being depicted as progressive or left by media outlets.
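
For the visually unpersuaded, here is that mechanic as a toy model - entirely ours, with invented numbers and thresholds, taken neither from the video nor from Overton's think tank:

```python
# Toy model of the anchoring mechanic the Overton window story assumes:
# an idea's perceived extremity is judged relative to the most extreme
# position currently in circulation. All numbers are invented.

def perceived_extremity(position, anchor):
    """Distance from center (0) judged relative to the current extreme anchor."""
    return abs(position) / abs(anchor)

ACCEPTANCE_THRESHOLD = 0.6  # below this ratio an idea reads as 'normal' (invented)

idea = 4.0          # the still-extreme idea you actually want accepted
old_anchor = 5.0    # yesterday's fringe
new_anchor = 10.0   # the VERY extreme idea that just entered the discourse

for anchor in (old_anchor, new_anchor):
    ratio = perceived_extremity(idea, anchor)
    verdict = "normal" if ratio < ACCEPTANCE_THRESHOLD else "radical"
    print(f"anchor={anchor}: extremity={ratio:.2f} -> reads as {verdict}")
```

Note that the model moves only the comparison point, never the voter - which is exactly the drawback raised below.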

Besides the fact that the concept claims scientific status despite being unfalsifiable and therefore unverifiable (a drafty concept - if we can't disprove it so as to ascertain the concept's limits and therefore its appropriate place of application, how can we tell when we are really experiencing the phenomenon it is supposed to describe, or when to apply the model? [this is the heuristic vs. predictive distinction - see two blogs for more on this]), as one author puts it, "perhaps the Overton Window’s biggest drawback...It tells us more about the handful of activists who supposedly move the window than the voters whose opinions actually change. While Trump has certainly lowered the standard of debate on the right, he didn’t have to move the consensus rightward; he played to a bloc of voters who already found his proposals desirable...To chalk [Trump's] successes to shock tactics is to ignore the long-simmering populism that swept both right and left in the presidential primaries. It is impossible to understand what drives these movements without engaging with the economic and cultural circumstances that underpin them."

Some of this is not super important for what we are doing here. What is important is the following:
A large portion of the country (Trump's populist base) actually emotionally identifies with Trump's inappropriate behavior. So it's not a matter of decent people making an error or mistake via a cognitive distortion (comparisons between scaled qualities or quantities - more extreme and less extreme views creating the illusion of a middle - as indicated by the Overton scale) but of a dormant and internal value system having a space to proliferate in. Voters emotionally identified with (believed in) Trump. Now we're back in the land of beliefs.

§3: Alternative Model: No one has ever formed a complex personal position by comparing the facts. And people don't hold and contrast two ideas together when making a political decision. We, as organisms, are emotional before we are cognitive. Most of us never develop past that. People have gut reactions or emotional knee-jerk responses to experiences. And the experience one reacts to is filtered through character (i.e., the dynamism of gut reactions, emotional responses, and the negative sides of these, i.e., defenses erected around preventing or ejecting intolerable feelings). Cognitive models are scientific abstractions - reductions of the complex emotional and aesthetic experience of the human - extracted from the real, theoretically and artificially (with a heavy and inappropriate reliance on the a priori rather than the a posteriori) enhanced for clarity, reprojected onto the human, and then called 'natural' (from earlier: a r t i f i c i a l l y  d e m a n d e d  s u p p l y [i.e. Oedipalized desire - when daddy-science says 'well, this is what you really want [[thought]] isn't it']). This is not too different from when one political side of the divide claims the other is voting against their interest. There is no voting against one's interest, only voting against others' interests in a way that the other cannot put themselves in the perspective needed to understand the supposed interests concerned.

This isn't anti-science. It's critical epistemology. And it's consistent with a slew of research on various levels (pop economics, psychology, sociology, high culture, low culture, etc.) showing that not only do we vote by our emotions - voting being a particular choice made between marketed products - but we also make very general life choices, like what size coffee to buy, by our emotions. Cognition just doesn't cut it anymore.

So what do we do then? First, we stop pretending that talking it out or debating the facts does anything meaningful (and I am a talk therapist for fuck's sake, so this means something to me!). Deleuze and Guattari remind us that classical philosophy and democracy - both of which are bound up with naturalist-representational and rational-debate / intelligible-discussion assumptions, i.e. cognitive assumptions and models - are bastard abominations of the Greeks:


“The best one can say about discussions is that they take things no farther, since the participants never talk about the same thing. Of what concern is it...that someone has such a view, and thinks this or that, if the problems at stake are not stated? And when they are stated, it is no longer a matter of discussing but rather one of creating concepts for the undiscussible problem posed. Communication always comes too early or too late, and when it comes to creating, conversation is always superfluous. Sometimes philosophy is turned into the idea of a perpetual discussion, as "communicative rationality," or as "universal democratic conversation." Nothing is less exact...it never takes place on the same plane...All these debaters and communicators are inspired by ressentiment. They speak only of themselves...Debate is unbearable...in Socrates, was philosophy not a free discussion among friends? Is it not, as the conversation of free men, the summit of Greek sociability? In fact, Socrates constantly made all discussion impossible, both in the short form of the contest of questions and answers and in the long form of a rivalry between discourses. He turned the friend into the friend of the single concept, and the concept into the pitiless monologue that eliminates the rivals one by one.” (What is Philosophy? pg. 28-29).


It is indisputable that media outlets preach to the choir and friend groups gather around relatively consistent political beliefs; and in the cases in which they don't, no one has ever really been persuaded meaningfully to take up a belief that contradicts their own based on the simple facts. And if someone is convinced, they were already ready to believe. As D and G remind us in Anti-Oedipus, "no one has ever died from contradictions." No more talking about the facts. Who cares. Meanwhile, academic debate - high culture, separated from the low culture of friends and media - lapses into resentful Socratic competition: oneupmanship, showing off, etc. It's no coincidence the Greeks invented the Olympics too.

The mention of Nietzsche's ressentiment here is paramount as it introduces him into the text. For Nietzsche - and for clinical psychoanalysts who study countertransference inductions and projective identifications with patients - an argument, or really any statement, is a 'will to power.' The concept is simple - I either put into the world something I want to see (extending my power), or, if I lack the power to put into the world what I want to see, I identify myself with someone who does have the power to put into the world what I want to see. I exert my power, or team up with someone who exerts a power I agree with. Example from earlier: Shermer writes an article whose function, in basic terms, is to convince others to believe what Shermer is saying. Shermer's will to power is to have people align with his worldview, his will. Shermer makes references and appeals to others and to the field of science. This is Shermer's identification for legitimacy with others' wills. If I share Shermer's article on Facebook, I symbolically communicate an identification with Shermer's will, as he has more power than me (i.e., social capital, clout, etc.).

So, going with Nietzsche and psychoanalysis, focus instead on belief and on what emotional investments people have in those beliefs. Ask what will they want to impose on the world. We will be doing the opposite of what Doc Sherm said earlier. Earlier we only summarized Sherm's prescription - don't be emotional, discuss, listen, accurately articulate, be respectful and understanding, etc. With this in mind, here is an alternative to the cognitive model in the form of a psychopolitical diagnostic tool - a structural questionnaire - one can apply. We will first introduce the model, then expand on its form, content, implications, and assumptions.

Person A should ask person B the following:

1: Are there errors, flaws, or mistakes, of any nature, which you (person B) acknowledge in your own position?

- 1a: If B answers yes, ask: How do you understand or make sense of these errors? Now move on to question 2.
- - - - 1b: If B answers no, ask: Could there hypothetically or theoretically be errors, flaws, etc. in your position?
- - - - - - 1c: If B answers yes to 1b, return to question 1a - how would you or could you understand these hypothetical / theoretical errors?
- - - - - - - - 1d: If B answers no to 1b, ask 'If there are no errors, mistakes, or flaws, and you cannot imagine any, then am I simply to believe what you believe or otherwise be wrong?'
- - - - - - - - - - 1e: If B answers yes to 1d, ask 'How did your position come to be so strong and foolproof?' (At this point, mature discussion between two or more varying perspectives is not currently possible, and the position in question is being bolstered to the level of idealistic perfection out of some sort of defensive insecurity. When people are not threatened but are instead bolstered in return, they will - if they are oppositional / willful - admit to an error or flaw so as to disagree with your agreeing with them [reverse psychology], or be surprised at the event of not being challenged and perhaps be willing to provide a flexible opinion on their position [give a little / take a little]. If this is not the case, the person may not be ready to discuss their position or belief, and no amount of discussion is likely to persuade them.)

Person B has now admitted a flaw, mistake, or error - hypothetically or actually. You should have some data on person B's understanding of this flaw, mistake, or error. Now restate your understanding of their understanding so you are both on a close enough page.

2: Now ask 'Could this error in your position make your position untenable or unappealing for some people while making it more tenable and more appealing to others?'
- - - - 2a: If yes, ask, as in 1a, how person B understands this.
- - - - - - 2b: If no, ask if the appeal / unappeal could be thought about hypothetically or theoretically.
- - - - - - - - 2c: If yes, return to 2a - how do you understand this?
- - - - - - - - - - 2d: If no, ask 'If there exists no population for whom your position may be less appealing or untenable, then am I, or other populations who might find your position untenable or unappealing, simply to conform to your position as it is objectively good for all?'
- - - - - - - - - - - - 2e: If yes, ask how the position became so objectively good...

Here, question 2 and its a-e sub-questions mirror exactly question 1 and its a-e sub-questions. The point is to "state the stakes," as D and G mention. Both parties are coming to an understanding of the same concept, on the same plane. Only when people aren't talking past each other and have a somewhat clear notion of someone's investment in a position can discussion take place. This also helps gauge the maturity of the held position. No mature person will admit to holding a perfect, flawless position. And if there is a flaw in a position, there is the possibility for another to take issue with the flaw. Latent here - and this is where the real utility of this proposed model lies - is the idea that, on the off chance you wish to waste time 'debating' someone (getting them to believe what you believe, act how you act), then you need to debate on their terms and use aspects of their position which they have identified themselves. Turn the other's discourse on them.
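
For readers who want the branching made explicit, here is question 1's flow as a minimal decision tree (question 2 mirrors it with appeal swapped in for error). The code is ours and purely illustrative; ask() stands in for the live exchange:

```python
# Question 1 of the questionnaire as a decision tree. Illustrative sketch only;
# question 2 mirrors this structure with appeal/unappeal in place of error.

def ask(question):
    """Stand-in for the live exchange: returns True on a 'yes'."""
    print(question)
    return input("> ").strip().lower().startswith("y")

def question_one():
    if ask("1: Are there errors, flaws, or mistakes you acknowledge in your own position?"):
        print("1a: How do you understand or make sense of these errors?")
        return "move on to question 2"
    if ask("1b: Could there hypothetically or theoretically be errors in your position?"):
        print("1c: How would or could you understand these hypothetical errors?")  # 1a's move, replayed
        return "move on to question 2"
    if ask("1d: If no errors are even imaginable, am I simply to believe what you believe or otherwise be wrong?"):
        print("1e: How did your position come to be so strong and foolproof?")
    # Per the note at 1e: bolster rather than threaten, or step back entirely.
    return "mature discussion not currently possible"

if __name__ == "__main__":
    print(question_one())
```

The tree never evaluates the position itself; it only maps B's relation to it - which is the whole point.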


Here's why this is different from what Shermer said. We have summarized so far, but here's what he really said:

"If corrective facts only make matters worse, what can we do to convince people of the error of their beliefs? From my experience, 1 keep emotions out of the exchange, 2 discuss, don't attack (no ad hominem and no ad Hitlerum), 3 listen carefully and try to articulate the other position accurately, 4 show respect, 5 acknowledge that you understand why someone might hold that opinion, and 6 try to show how changing facts does not necessarily mean changing worldviews. These strategies may not always work to change people's minds, but now that the nation has just been put through a political fact-check wringer, they may help reduce unnecessary divisiveness." 

We take issue with 1, 3, 4, 5, and 6. 2 is OK.

1: For our model, emotions are key to beliefs, and beliefs are key to how people interpret and utilize facts. Understand the other person's emotional investment, and understand your own emotional investment in your position.
2: Refraining from attacking is good, though be mindful of how your comments or questions could still be interpreted as attacks by someone who is defensive.
3: For our model, there is no need to listen carefully and articulate, but rather to allow oneself to feel how the other may feel - i.e., again, the emphasis is emotional, not cognitive.
4: Respect only works if you really feel it; otherwise it is inauthentic, will be felt as such, and may thus produce a defensive position in the other.
5: "acknowledge that you understand why someone might hold that opinion" - This is worrisome. It displays a fake understanding. It usually amounts to 'I get why you think that, but you're still wrong.' Understanding 'why' someone 'might' think a certain way puts a distance between both debaters; it effectively says, on a symbolic level, 'I don't see it your way, but I can see how you would see it that way.' The point of our proposed model is to actually see it the way the other person sees it - feel it in your bones.
6: This is simply psychotic - Shermer and people like him seem to imply that a worldview should be based on scientific facts, which would imply that 'changing facts' - which I assume means getting the other to see how they may have been wrong or mistaken on the facts - must directly mean a change in worldviews. And where it does not mean this, it is hard to debate someone (debate being a way of getting someone to believe what you believe, i.e., change them) while also claiming they don't have to change (this is disavowal, a double bind, having your cake and eating it too).

Cognitive models of understanding political phenomena are popular, but ultimately, cognition don't cut it no more. If we want to talk politics, we need to talk emotional and personal investments in others' beliefs. And this has good reason to remain unpopular.