The AxiomCortex™ Scientific Framework 3.0.0
This is the definitive public documentation of the proprietary Cognitive AI engine that powers TeamStation AI's talent evaluation. It outlines the core scientific pillars, the 44 mathematical models and algorithms, and the bias mitigation strategies that allow us to de-risk hiring.
Core Scientific Pillars
1. Neuro-Psychometric Profiling
Utilizes a Latent Trait Inference Engine (LTIE) to quantify traits such as Architectural Instinct and Problem-Solving Agility from conversational data.
2. Advanced NLP & Semantic Analysis
Employs a suite of NLP techniques to analyze language patterns, semantic meaning, and conceptual understanding, independent of jargon.
3. Cortex Calibration Layer (Bias Mitigation)
Identifies and neutralizes sources of bias in the evaluation, such as differences in linguistic fluency, so that scoring reflects conceptual content rather than delivery.
4. Behavioral Deconstruction (Beyond STAR)
Deconstructs behavioral answers and authenticity signals across the entire transcript, going beyond the standard STAR (Situation, Task, Action, Result) method.
Methodology: Self-Governing NLP & Phasic Micro-Chunking
The operational backbone of AxiomCortex is its novel approach to executing complex NLP tasks: a Self-Governing, Self-Learning Phasic Micro-Chunking NLP-based Prompt Engineering technique. This methodology is designed for maximum accuracy, token efficiency, and minimal external dependencies, allowing the LLM itself to perform the core analytical heavy lifting across 44 distinct algorithmic passes.
Comprehensive Review of Core Functions, Formulas, and Algorithms
Function: Transcript Ingestion & Pre-processing
The initial phase where the raw video interview transcript is cleaned, speaker-diarized, and segmented into question-answer pairs.
- Algorithm 1: Utterance Normalization: Removes filler words and standardizes punctuation.
- Algorithm 2: Speaker Diarization: Correctly attributes text to 'Interviewer' or 'Candidate'.
- Algorithm 3: Q/A Segmentation: Identifies and isolates discrete question and answer blocks for individual analysis.
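The pre-processing steps above can be sketched as follows. This is an illustrative approximation, not the production pipeline: the function names, the filler-word list, and the assumption that diarization (Algorithm 2) has already labeled each turn are all hypothetical.

```python
import re

# Hypothetical filler-word list for Algorithm 1; the real list is not public.
FILLERS = {"um", "uh", "er"}

def normalize_utterance(text: str) -> str:
    """Algorithm 1 (sketch): strip filler words and standardize punctuation."""
    words = [w for w in re.findall(r"[\w']+", text.lower()) if w not in FILLERS]
    return " ".join(words)

def segment_qa(turns: list[tuple[str, str]]) -> list[dict]:
    """Algorithm 3 (sketch): pair each Interviewer question with the
    Candidate answer that follows it. Assumes speaker diarization
    (Algorithm 2) has already attributed each turn."""
    pairs, question = [], None
    for speaker, text in turns:
        if speaker == "Interviewer":
            question = text
        elif speaker == "Candidate" and question is not None:
            pairs.append({"question": question, "answer": text})
            question = None
    return pairs
```

Each resulting question-answer pair then flows into the Phasic Micro-Chunking analysis described below.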
Protocol: Phasic Micro-Chunking Analysis
For each Q/A pair, the system performs a multi-pass analysis. This involves breaking the answer down into "micro-chunks" (individual sentences or clauses) and analyzing them for specific signals.
- Algorithm 4: Key Concept Extraction: Uses Named Entity Recognition (NER) to identify technical terms and concepts.
- Algorithm 5: Argument Structure Mapping: Maps the logical flow of the candidate's explanation.
- Algorithm 6: Evidence-to-Blueprint Comparison: Compares extracted concepts against a pre-defined "ideal answer blueprint" for the question.
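A minimal sketch of the micro-chunking and blueprint-comparison steps, under stated assumptions: the clause-splitting rule and the coverage metric below are illustrative stand-ins, since the actual chunking and comparison passes run inside the LLM prompt pipeline.

```python
import re

def micro_chunks(answer: str) -> list[str]:
    """Split an answer into sentence/clause-level micro-chunks
    (assumed rule: break on periods and semicolons)."""
    return [c.strip() for c in re.split(r"[.;]", answer) if c.strip()]

def blueprint_coverage(chunks: list[str], blueprint: set[str]) -> float:
    """Algorithm 6 (sketch): fraction of ideal-answer-blueprint concepts
    mentioned anywhere in the candidate's micro-chunks."""
    text = " ".join(chunks).lower()
    hits = {concept for concept in blueprint if concept.lower() in text}
    return len(hits) / len(blueprint) if blueprint else 0.0
```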
Formulas: BARS (Behaviorally Anchored Rating Scales) Scoring
A suite of algorithms (7-19) scores each answer chunk against multiple behavioral axioms. Each B-Axiom has its own scoring function.
- Algorithm 7 (B_P - Procedural Knowledge): Scores the correctness and completeness of the process described by the candidate.
- Algorithm 8 (B_M - Mental Model): Scores the underlying logic and conceptual soundness of the explanation.
- Algorithm 9 (B_A - Accuracy): Scores the factual correctness of the technical statements.
- Algorithm 10 (B_C - Clarity): Scores the clarity and conciseness of the explanation, after calibration for linguistic factors.
- Algorithm 11 (B_L - Cognitive Load): Measures linguistic markers of cognitive strain (hesitations, restarts) to assess difficulty.
Formula: B-Axiom Score (BAS_q) for a given question `q`:
BAS_q = (w_p*B_P) + (w_m*B_M) + (w_a*B_A) + (w_c*B_C) - (w_l*B_L)
- Where `w` denotes the weight for each axiom.
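The BAS_q formula above transcribes directly into code. The weight values in the usage below are placeholders; the production weights are proprietary.

```python
def bas_q(b_p: float, b_m: float, b_a: float, b_c: float, b_l: float,
          w: dict) -> float:
    """B-Axiom Score for one question: weighted sum of the four positive
    axioms, minus the weighted Cognitive Load penalty (B_L)."""
    return (w["p"] * b_p + w["m"] * b_m + w["a"] * b_a
            + w["c"] * b_c - w["l"] * b_l)
```

For example, with placeholder weights `{"p": 0.3, "m": 0.3, "a": 0.2, "c": 0.2, "l": 0.1}`, a perfect answer with zero cognitive-load markers scores 1.0, and the same answer with maximum strain markers is penalized to 0.9.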
Protocol: Latent Trait Inference Engine (LTIE)
This is the core psychometric engine (Algorithms 20-35) that synthesizes scores from multiple questions to infer the four key latent traits.
- Algorithms 20-24 (Architectural Instinct): Aggregate scores from systems design questions, focusing on B_M (Mental Model).
- Algorithms 25-29 (Problem-Solving Agility): Aggregate scores from novel or unexpected questions, focusing on the ability to adapt.
- Algorithms 30-35 (Learning Orientation & Collaborative Mindset): Analyze behavioral questions and "authenticityIncidents" across the entire transcript.
Formula: Latent Trait Inference Score (LTIS_trait)
LTIS_trait = Σ(w_q * BAS_q) * λ_ccl
- Where `w_q` is the relevance weight of question `q` for the trait, `BAS_q` is the overall B-Axiom score for that question, and `λ_ccl` is the Cortex Calibration Layer coefficient.
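The LTIS_trait formula is a relevance-weighted sum scaled by the calibration coefficient. A direct transcription follows; the weights and λ_ccl value in the usage are illustrative, not the proprietary values.

```python
def ltis(question_scores: list[tuple[float, float]], lambda_ccl: float) -> float:
    """Latent Trait Inference Score for one trait.

    question_scores: list of (w_q, BAS_q) pairs, one per relevant question.
    lambda_ccl: Cortex Calibration Layer coefficient for this candidate.
    """
    return sum(w_q * bas for w_q, bas in question_scores) * lambda_ccl
```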
Protocol: Cortex Calibration Layer (Bias Mitigation)
A set of algorithms (36-41) designed to identify and neutralize sources of bias in the evaluation.
- Algorithm 36: Linguistic Fluency Normalization: Identifies non-native speaker patterns (e.g., grammatical errors, phonetic approximations) and instructs the scoring model to focus on the conceptual content, not the delivery. Generates the `λ_ccl` coefficient.
- Algorithm 37: Authenticity Incident Detection: Flags instances of intellectual honesty (e.g., "I don't know," "I'm not the best at that"). This positively weights the Learning Orientation score.
- Algorithm 38: Jargon vs. First-Principles Detection: Determines if a candidate is using buzzwords without understanding (negative signal) or explaining concepts from fundamentals (positive signal).
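As a toy illustration of Algorithm 37, authenticity incidents can be thought of as honesty-marker matches. The phrase list below is an assumption for illustration only: the real detector is an NLP pass inside the prompt pipeline, not a keyword match.

```python
# Hypothetical honesty markers; the production set is not public.
HONESTY_MARKERS = ["i don't know", "i'm not the best at", "i haven't used"]

def authenticity_incidents(answer: str) -> list[str]:
    """Algorithm 37 (sketch): return honesty markers found in an answer.
    Each hit positively weights the Learning Orientation score."""
    lowered = answer.lower()
    return [m for m in HONESTY_MARKERS if m in lowered]
```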
Formula: Conceptual Fidelity Score (CFS)
CFS = S_sem * (1 - P_jargon)
- Where `S_sem` is semantic similarity and `P_jargon` is a penalty for over-reliance on jargon.
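The CFS formula transcribes directly, assuming both inputs lie in [0, 1]:

```python
def cfs(s_sem: float, p_jargon: float) -> float:
    """Conceptual Fidelity Score: semantic similarity to the ideal answer,
    discounted by the jargon over-reliance penalty."""
    return s_sem * (1 - p_jargon)
```

So an answer that is semantically close to the blueprint (S_sem = 0.9) but leans on buzzwords (P_jargon = 0.2) yields a CFS of 0.72.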
Formulas: Final Synthesis & Risk Analysis
The final algorithms (42-44) synthesize all data into the executive summary and risk mitigation plan.
- Algorithm 42: Metacognitive Conviction Index (MCI): Correlates a candidate's self-assessed confidence with their measured accuracy to gauge self-awareness.
MCI = 1 - |(C_self - A_norm) / (C_self + A_norm)|
- Where `C_self` is self-assessed confidence and `A_norm` is normalized accuracy. An MCI close to 1 is a strong positive signal.
- Algorithm 43: Risk Triangulation: Identifies areas where a candidate's scores fall below the ideal profile for a specific trait and cross-references this with admissions of weakness (authenticityIncidents) to generate a specific, evidence-backed risk factor.
- Algorithm 44: Final Score Aggregation: Computes the final weighted average score based on all latent traits, providing the top-line "Strong Hire / Hire / No Hire" recommendation.
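The final-synthesis steps can be sketched together. The MCI line transcribes the formula above (inputs assumed positive so the denominator is nonzero); the trait weights and the Strong Hire / Hire / No Hire thresholds are illustrative assumptions, not the proprietary values.

```python
def mci(c_self: float, a_norm: float) -> float:
    """Algorithm 42: Metacognitive Conviction Index. Equals 1 when
    self-assessed confidence exactly matches normalized accuracy."""
    return 1 - abs((c_self - a_norm) / (c_self + a_norm))

def final_recommendation(trait_scores: dict, weights: dict) -> str:
    """Algorithm 44 (sketch): weighted average over the latent traits,
    mapped to the top-line recommendation via assumed thresholds."""
    total = sum(weights[t] * s for t, s in trait_scores.items())
    if total >= 0.8:
        return "Strong Hire"
    if total >= 0.6:
        return "Hire"
    return "No Hire"
```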
Read the Full Peer-Reviewed Paper
For a deeper academic analysis of the foundational principles, access the complete research paper on the Social Science Research Network (SSRN).