The assumption built into everything
In 1990, Peter Salovey and John Mayer published the paper that defined emotional intelligence, a framework later elaborated into four abilities: perceiving, using, understanding, and managing emotions. That paper became the foundation of every EQ assessment, every therapy intake form, every wellness app built in the three decades since.
It carries an implicit assumption that almost nobody questions: that perceiving emotions means being able to name them.
The Toronto Alexithymia Scale (Bagby, Parker & Taylor, 1994) — the most widely used measure of emotional awareness — has twenty items. Most of them test one thing: verbal access to feelings. "I am often confused about what emotion I am feeling." "It is difficult for me to find the right words for my feelings." "I find it hard to describe how I feel about people."
Score poorly, and the clinical conclusion is low emotional awareness. But what if the person has rich emotional data — just not in words?
This is the verbal bias. It runs through the research, the clinical tools, and every app that opens with "How are you feeling today?" It is so pervasive that most people who process emotions non-verbally conclude they are broken, rather than concluding that the question is wrong.
What the research actually shows
Three bodies of research, developed independently over several decades, converge on the same finding: people process emotions through distinct channels, and the verbal channel is only one of them.
Multiple Code Theory
Wilma Bucci's Multiple Code Theory — developed from decades of psychoanalytic process research — identifies three processing modes that operate in parallel (Bucci, 1997):
Subsymbolic processing. Emotional information encoded as body sensations, motor patterns, and visceral responses. The tightness in your chest before you know you are angry. The stomach drop before you know you are afraid. This mode operates below conscious awareness and does not require — or necessarily produce — verbal labels.
Symbolic nonverbal processing. Emotional information encoded as mental imagery. Scenes replaying, faces frozen in expression, spatial metaphors. A person who "keeps seeing" a painful moment is not stuck — they are processing emotionally through imagery, which carries more emotional charge than verbal descriptions of the same content (Holmes & Mathews, 2010).
Symbolic verbal processing. Emotional information encoded as language. Named feelings, narrative structure, fine-grained distinctions between similar emotions. This is the mode that nearly all emotional intelligence tools assume is primary.
Bucci's key insight, which she stated explicitly: these modes operate simultaneously, and individuals differ in which mode dominates their processing. A person who primarily processes through imagery is not less emotionally aware than a person who processes through words. They are aware through a different channel.
The clinical implication: therapists who rely exclusively on verbal report miss most of what is happening for subsymbolic and imagery-dominant patients. The patient is not resistant. They are being asked to report from a channel they do not primarily use.
Levels of Emotional Awareness
Richard Lane and Gary Schwartz (1987) proposed a developmental model of emotional awareness with five levels, from undifferentiated physical sensations to complex blended emotions. The model is hierarchical — higher levels are considered more developed.
But the hierarchy has a problem. The "highest" level involves articulating blended emotions in language: "I feel a mix of resentment and relief, with guilt underneath." A person who experiences emotions as vivid imagery — precise, detailed, emotionally rich — but cannot narrate them fluently scores lower than a person who produces articulate emotional descriptions that may or may not reflect what they actually feel.
The model captures something real. Emotional awareness does develop. But it conflates development with verbalization. A somatic processor who detects micro-shifts in body sensation before they escalate to full emotions has a form of awareness that the scale does not credit. An analytical processor who can map their emotional trigger patterns across months has sophistication the model cannot see.
The alexithymia paradox
Alexithymia — literally "no words for emotions" — was identified by Sifneos (1973) in psychosomatic patients who struggled to describe their feelings. The construct has three components, measured by the TAS-20:
- Difficulty identifying feelings (DIF)
- Difficulty describing feelings (DDF)
- Externally-oriented thinking (EOT)
Roughly 10% of the general population scores in the alexithymic range, with higher prevalence in autism spectrum conditions, PTSD, eating disorders, and chronic pain populations (Taylor, Bagby & Parker, 1997). The TAS-20 is used in thousands of studies. It is, by any measure, a successful instrument.
Here is the problem. The construct treats verbal difficulty as a global emotional deficit. If you cannot describe your feelings in words, you are classified as having impaired emotional processing.
But the research itself shows this is incomplete.
Studies using physiological measures demonstrate that people scoring high on the TAS-20 often have normal or elevated physiological emotional responses — heart rate variability, skin conductance, startle response (Peasley-Miklus et al., 2016; Connelly & Denney, 2007). Their bodies react. Their autonomic nervous systems respond to emotional stimuli. The signal is there.
What is missing is not the emotion. What is missing is the verbal report.
This is the alexithymia paradox: people classified as emotionally unaware are often having full emotional responses — they just cannot access them through language. Their processing happens in channels the instrument does not measure.
Affect labeling — and its limits
Matthew Lieberman's fMRI research (2007) demonstrated that putting feelings into words — affect labeling — reduces amygdala activation. This became a cornerstone finding: "name it to tame it." Therapists use it. Apps use it. The entire mood-tracking industry is built on it.
But the popular interpretation overgeneralizes the finding. The research shows that verbal labeling reduces emotional intensity. It does not show that verbal labeling is the only way to regulate, or that it works equally well for everyone.
For someone whose emotional processing is primarily somatic or imagistic, forcing a premature verbal label may interrupt a more effective processing pathway rather than completing it. Eugene Gendlin's focusing work (1981) points in the same direction: when clients are guided to attend to the "felt sense" — the body's pre-verbal emotional signal — emotional processing often proceeds more effectively than when they are pushed to name the feeling.
The distinction matters. Affect labeling is a real tool. It is not the only tool. And for a substantial portion of the population, it may not be the best first tool.
Three processing channels and one meta-style
These research threads point to a consistent finding: people differ in which channel they primarily use to access emotional information. We identify three channels — each mapping directly to one of Bucci's processing modes — plus an analytical meta-style that describes how some people organize emotional data across channels.
Visual
Emotions are experienced as mental images. When recalling a painful conversation, the visual processor does not think "I felt hurt." They see the other person's face. They replay the room, the lighting, where everyone was sitting. Emotional memories are stored as movies, not memos.
Research grounding: Bucci's symbolic nonverbal mode. Holmes and Mathews (2010) found that imagery carries significantly more emotional charge than verbal processing of the same content — imagery is a "hotter" emotional channel. Individual differences in mental imagery vividness are well-established (Kosslyn, 1994) and plausibly shape emotional processing style.
What it sounds like:
- "I keep seeing the look on her face."
- "When I think about it, I picture myself standing in the hallway."
- "There is this image that keeps coming back."
The right question: "What do you see when you think about it?"
Somatic
Emotions are experienced as body sensations first. Chest tightness, stomach knots, jaw clenching, heat rising, heaviness in the limbs. The body signals before the mind translates.
Research grounding: Bucci's subsymbolic mode. Damasio's somatic marker hypothesis (1994) proposed that body-based emotional signals guide decision-making, often before conscious awareness. Craig's interoception research (2002) links awareness of internal body signals to emotional experience. Gendlin's focusing technique (1981) is built entirely on the premise that the body carries emotional information the verbal mind has not yet formulated.
What it sounds like:
- "My chest gets tight when he talks to me like that."
- "I feel it in my stomach before I know what it is."
- "My whole body gets heavy."
The right question: "Where in your body do you notice something right now?"
Verbal
Emotions are experienced through language. Named feelings, narrative structure, precise distinctions between similar emotions. When a verbal processor says "I feel a mix of resentment and guilt," they are not just reporting — they are processing. The act of finding the right word is itself the emotional work.
Research grounding: Pennebaker's expressive writing paradigm (1997) demonstrated that writing about emotional experiences produces measurable health improvements — but the paradigm is inherently verbal. Feldman Barrett's emotional granularity research (2004) shows that people with richer emotion vocabularies make finer emotional distinctions, perceiving and responding to emotional states with greater precision. This is the style that the entire EQ testing apparatus was designed to measure.
What it sounds like:
- "I think what I am actually feeling is grief, not anger."
- "The word that fits best is 'hollowed out.'"
- "I felt a mix of resentment and relief."
The right question: "What word best captures what you are experiencing?"
Analytical (meta-style)
The three channels above map directly to Bucci's three processing modes. Analytical is different — it is not a channel for accessing emotions but a meta-cognitive layer for organizing them. We include it because it describes a pattern we observe frequently: people whose dominant relationship with emotions is through pattern-mapping and cause-effect reasoning, potentially across any of the three underlying channels. This is our practical extension of the framework, not a claim about Bucci's model.
Emotions are understood through patterns, causes, and systematic reasoning. The analytical processor may not feel anger in real-time, but they can map the pattern: every time their autonomy is threatened, they withdraw. Every time withdrawal happens, it lasts exactly two days. The emotional intelligence operates at the system level — not in the moment, but across moments.
Research grounding: Lane and Schwartz's cognitive-developmental model places systematic understanding at the higher levels of emotional awareness. Gross's cognitive reappraisal research (1998) shows that reframing emotional situations changes the emotional response itself — a fundamentally analytical operation. The "externally-oriented thinking" dimension of the TAS-20 is typically scored as a deficit, but in practice it often reflects an organizational style: the person understands emotions through their causes and patterns rather than through felt experience.
What it sounds like:
- "I can see the pattern — every time he does X, I react with Y."
- "This is the same thing that happened last November."
- "The root cause is not the comment. It is that I was already depleted."
The right question: "What pattern do you notice?"
Note: most people show a primary channel and may also display the analytical meta-style. The framework describes how you tend to access and organize emotional information — your defaults, not a permanent identity. These shift with context, fatigue, and deliberate practice.
Why this is not "learning styles"
This comparison will come up, so let us address it directly.
The VARK model (visual, aural, read/write, kinesthetic) claimed that matching instructional format to a student's preferred style improves learning outcomes. Pashler et al. (2008) conducted a systematic review and found no credible evidence for this claim. The learning styles framework has been widely criticized and is not considered scientifically supported.
The emotional processing styles framework makes a fundamentally different claim.
Learning styles claim: Matching the format of instruction to a preferred modality improves how much someone learns.
Processing styles claim: People differ in which channel they use to access emotional information, and asking questions through an unused channel produces silence — not because the data is absent, but because the access point is blocked.
The evidence for the processing styles claim is the alexithymia paradox itself: high TAS-20 scores (verbal difficulty) coexisting with normal physiological emotional arousal. The body processes the emotion. The verbal system does not report it. This is not a preference — it is a measurement gap.
The practical implication is not "visual learners should see pictures." It is: if you ask a somatic processor "what are you feeling?" and they say "I don't know," try asking "what do you notice in your body?" The emotional data is the same. The access point is different. The first question draws a blank. The second produces data. This is observable, testable, and consistent with decades of research on individual differences in emotional processing.
We are also transparent that analytical is our practical extension of the framework, not a fourth channel in Bucci's model. Three channels are research-grounded. The meta-style is a product design choice informed by clinical observation.
What this means for tool design
The entire emotional wellness app market is built for verbal processors. Consider what these tools ask:
- "How are you feeling today?" (requires a feeling word)
- "Select your mood from this list" (requires matching experience to pre-existing labels)
- "Write about what is on your mind" (requires narrative verbal processing)
- "Name three emotions you experienced today" (requires emotion vocabulary)
For the substantial portion of the population that does not primarily process through language, these prompts produce one of two outcomes: silence ("I don't know") or performance (picking a word that sounds right without connecting to actual experience). Neither outcome produces emotional growth. Both outcomes reinforce the belief that the person is bad at emotions.
An adaptive tool would detect which channel a person primarily uses and route questions accordingly. The same emotional territory, accessed through the right door.
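To make the routing idea concrete, here is a deliberately naive sketch, not Senself's implementation: a toy keyword heuristic stands in for real channel detection (which would require far richer linguistic analysis), and the prompts are the channel questions from the sections above. All names and keyword lists are hypothetical illustrations.

```python
# Hypothetical sketch: route a check-in question by detected processing
# channel. The marker lists and exact-word matching are illustrative only;
# a real system would need stemming, phrases, and much broader coverage.

CHANNEL_MARKERS = {
    "visual":  ["see", "picture", "image", "scene", "replay"],
    "somatic": ["chest", "stomach", "tight", "heavy", "jaw", "body"],
    "verbal":  ["feel", "felt", "word", "resentment", "grief", "guilt"],
}

CHANNEL_PROMPTS = {
    "visual":  "What do you see when you think about it?",
    "somatic": "Where in your body do you notice something right now?",
    "verbal":  "What word best captures what you are experiencing?",
}

def detect_channel(text: str) -> str:
    """Return the channel whose markers appear most often in the text."""
    words = text.lower().split()
    scores = {
        channel: sum(words.count(marker) for marker in markers)
        for channel, markers in CHANNEL_MARKERS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to verbal, the conventional default, when no marker fires.
    return best if scores[best] > 0 else "verbal"

def next_prompt(text: str) -> str:
    """Ask the next question through the user's dominant channel."""
    return CHANNEL_PROMPTS[detect_channel(text)]

print(next_prompt("My chest gets tight and my stomach drops"))
# -> Where in your body do you notice something right now?
```

The point of the sketch is the routing structure, not the classifier: the same emotional territory is reached through whichever door the person already uses.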
The emotion gap
One of the most informative signals in emotional processing is the gap between what someone names and what they express.
A person selects "tired" from a mood list. Their writing describes a situation where they were excluded from a decision, uses language dense with agency and control ("they didn't even ask," "I had no say"), and returns to the topic three times. The expressed emotion is closer to frustration or powerlessness than fatigue.
This gap is not a mistake to correct. It is data. It reveals the boundary of the person's current emotional vocabulary — the edge where their labels end and their experience continues. Over time, as the gap narrows, emotional granularity increases. Not because the person tries harder, but because the right questions surface what was always there.
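One way to operationalize the gap, again as a toy sketch rather than a validated method: compare the mood label the user selected with the emotion most strongly expressed in their free text. The phrase lexicon, alias map, and function names below are hypothetical placeholders for what would really be done with affective-language analysis.

```python
# Hypothetical sketch: flag a gap between a selected mood label and the
# emotion expressed in free text. The tiny phrase lexicon and alias map
# are illustrative stand-ins, not a validated instrument.

from collections import Counter
from typing import Optional

EXPRESSED_MARKERS = {
    "frustration":   ["didn't even ask", "ignored", "over my head"],
    "powerlessness": ["no say", "no choice", "nothing i could do"],
    "fatigue":       ["exhausted", "drained", "tired", "no energy"],
}

# Normalize informal labels to the lexicon's canonical emotion names.
LABEL_ALIASES = {"tired": "fatigue"}

def expressed_emotions(text: str) -> Counter:
    """Count marker phrases for each candidate emotion in the text."""
    lowered = text.lower()
    return Counter(
        {emotion: sum(lowered.count(phrase) for phrase in phrases)
         for emotion, phrases in EXPRESSED_MARKERS.items()}
    )

def emotion_gap(selected: str, text: str) -> Optional[str]:
    """Return the dominant expressed emotion if it differs from the label."""
    selected = LABEL_ALIASES.get(selected, selected)
    top, score = expressed_emotions(text).most_common(1)[0]
    return top if score > 0 and top != selected else None

entry = "They didn't even ask me. They just ignored me. I had no say."
print(emotion_gap("tired", entry))  # -> frustration
```

When the function returns something, that is the gap: the label says one thing, the language says another, and the difference marks the edge of the person's current emotional vocabulary.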
What we are building
Senself is built on this framework. The AI assessment detects your processing style — not through a questionnaire, but through a short conversation that analyzes how you naturally describe experience. Then every interaction adapts: the check-in questions match your channel, the AI reads what you write and proposes emotions you may not have named, and over time the system learns your specific vocabulary and patterns.
The thesis is simple: emotional intelligence tools that assume one processing style fail for everyone else. Adapting to the person is not a feature. It is the entire point.
References
Bucci, W. (1997). Psychoanalysis and cognitive science: A multiple code theory. Guilford Press.
Connelly, M., & Denney, D. R. (2007). Regulation of emotions during experimental stress in alexithymia. Journal of Psychosomatic Research, 62(6), 649-656.
Craig, A. D. (2002). How do you feel? Interoception: The sense of the physiological condition of the body. Nature Reviews Neuroscience, 3(8), 655-666.
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. Putnam.
Feldman Barrett, L. (2004). Feelings or words? Understanding the content in self-report ratings of experienced emotion. Journal of Personality and Social Psychology, 87(2), 266-281.
Gendlin, E. T. (1981). Focusing. Bantam Books.
Gross, J. J. (1998). The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2(3), 271-299.
Holmes, E. A., & Mathews, A. (2010). Mental imagery in emotion and emotional disorders. Clinical Psychology Review, 30(3), 349-362.
Kosslyn, S. M. (1994). Image and brain: The resolution of the imagery debate. MIT Press.
Lane, R. D., & Schwartz, G. E. (1987). Levels of emotional awareness: A cognitive-developmental theory and its application to psychopathology. American Journal of Psychiatry, 144(2), 133-143.
Lieberman, M. D., et al. (2007). Putting feelings into words: Affect labeling disrupts amygdala activity in response to affective stimuli. Psychological Science, 18(5), 421-428.
Pashler, H., et al. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.
Peasley-Miklus, C. E., et al. (2016). Alexithymia and physiological reactivity to emotional stimuli: A systematic review. Biological Psychology, 114, 1-12.
Pennebaker, J. W. (1997). Writing about emotional experiences as a therapeutic process. Psychological Science, 8(3), 162-166.
Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition and Personality, 9(3), 185-211.
Sifneos, P. E. (1973). The prevalence of "alexithymic" characteristics in psychosomatic patients. Psychotherapy and Psychosomatics, 22(2-6), 255-262.
Bagby, R. M., Parker, J. D. A., & Taylor, G. J. (1994). The twenty-item Toronto Alexithymia Scale—I. Item selection and cross-validation of the factor structure. Journal of Psychosomatic Research, 38(1), 23-32.
Taylor, G. J., Bagby, R. M., & Parker, J. D. A. (1997). Disorders of affect regulation: Alexithymia in medical and psychiatric illness. Cambridge University Press.
Want to discover your processing style?
The AI assessment takes three minutes. No questionnaire — the AI listens to how you describe your experience and identifies your natural channels. Free, no sign-up required.