Methodology
How TerraIQ Detects Greenwashing
TerraIQ uses an NLP-powered analysis engine to cross-reference corporate sustainability claims against independent data sources. Every score is evidence-backed, weighted by category, and reproducible.
Data Sources
TerraIQ ingests and cross-references data from multiple independent sources to ensure findings are grounded in verifiable evidence, not a single dataset.
Analysis Pipeline
Each investigation follows a five-stage pipeline designed to systematically separate genuine sustainability efforts from greenwashing.
Document Ingestion
SEC filings (10-K, 10-Q), ESG disclosures, sustainability reports, and press releases are collected and parsed into structured text. For SEC-registered companies, filings are retrieved directly from EDGAR.
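As an illustration of this stage, the sketch below builds the public EDGAR submissions URL for a company CIK and filters a parsed submissions payload down to the form types listed above. The helper names are illustrative, and the payload field names assume EDGAR's published submissions JSON schema; no network call is made here.

```python
# Illustrative ingestion helpers; not TerraIQ's actual implementation.

def edgar_submissions_url(cik: str) -> str:
    # EDGAR's submissions endpoint expects a 10-digit, zero-padded CIK.
    return f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"

def recent_filings(submissions: dict, forms=("10-K", "10-Q")) -> list:
    # The submissions payload lists filing attributes as parallel arrays
    # under filings.recent; zip them back into per-filing records.
    recent = submissions["filings"]["recent"]
    return [
        {"form": form, "accession": accession, "date": date}
        for form, accession, date in zip(
            recent["form"], recent["accessionNumber"], recent["filingDate"]
        )
        if form in forms
    ]
```

In practice each selected filing would then be fetched and parsed into structured text for the extraction stage.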
Claim Extraction
NLP models identify and extract specific sustainability claims — emissions targets, renewable energy commitments, net-zero pledges, supply chain standards, and marketing language. Each claim is categorized and tagged.
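A greatly simplified stand-in for this stage is keyword-pattern matching over sentences; the production system uses trained NLP models, and the categories and patterns below are illustrative only.

```python
import re

# Toy claim extractor: tag sentences by category via keyword patterns.
# The real pipeline uses NLP models; patterns here are illustrative.
CLAIM_PATTERNS = {
    "emissions_target": re.compile(r"\breduce\b.*\bemissions\b", re.I),
    "net_zero": re.compile(r"\bnet[- ]zero\b", re.I),
    "renewables": re.compile(r"\brenewable energy\b", re.I),
}

def extract_claims(text: str) -> list:
    claims = []
    # Naive sentence split on terminal punctuation.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for category, pattern in CLAIM_PATTERNS.items():
            if pattern.search(sentence):
                claims.append({"category": category, "text": sentence.strip()})
    return claims
```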
Reality Cross-Reference
Extracted claims are compared against independent databases: EPA emissions records, Banking on Climate Chaos financing data, CDP disclosures, and tracked corporate pledges. Discrepancies are flagged automatically.
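The comparison logic can be sketched as follows for a quantitative claim: a discrepancy is flagged when the independently observed figure falls short of the claimed figure by more than a tolerance. The function name and the tolerance value are illustrative assumptions, not TerraIQ's actual thresholds.

```python
# Illustrative cross-reference check for a claimed vs. observed
# emissions reduction (percent); tolerance is an assumed value.
def cross_reference(claimed_reduction_pct, observed_reduction_pct,
                    tolerance_pct=2.0):
    gap = claimed_reduction_pct - observed_reduction_pct
    if gap > tolerance_pct:
        # Return a structured discrepancy record for downstream mapping.
        return {
            "claimed": claimed_reduction_pct,
            "observed": observed_reduction_pct,
            "gap": gap,
        }
    return None  # Claim is consistent with the independent record.
```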
Contradiction Mapping
Each discrepancy is analyzed for severity (Minor, Moderate, Major, Critical) based on the magnitude of the gap between claim and reality, the materiality of the issue, and the availability of counter-evidence.
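One way to picture the severity mapping is as a score combining gap magnitude with a materiality weight, bucketed into the four tiers. The thresholds and weights below are illustrative, not TerraIQ's actual calibration.

```python
# Illustrative severity mapping: gap size times a materiality weight
# (e.g., Scope 1 emissions near 1.0, marketing language much lower).
# Threshold values are assumptions for the sketch.
def severity(gap_pct: float, materiality: float) -> str:
    score = gap_pct * materiality
    if score >= 40:
        return "Critical"
    if score >= 20:
        return "Major"
    if score >= 8:
        return "Moderate"
    return "Minor"
```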
Weighted Scoring
Contradictions are weighted by category importance and severity, then aggregated into a composite Greenwashing Score (0-100). Reports are generated with audience-specific summaries for investors, regulators, boards, and consumers.
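The aggregation step can be sketched as a weighted sum over per-category contradiction severities, capped so no single category exceeds its weight's share of the 0-100 scale. The point values and weights below are illustrative assumptions.

```python
# Illustrative point values per severity tier (assumed, not TerraIQ's).
SEVERITY_POINTS = {"Minor": 1, "Moderate": 3, "Major": 6, "Critical": 10}

def greenwashing_score(contradictions: dict, weights: dict) -> float:
    # contradictions: category -> list of severity labels
    # weights: category -> fraction of the composite (should sum to 1.0)
    total = 0.0
    for category, severities in contradictions.items():
        raw = sum(SEVERITY_POINTS[s] for s in severities)
        # Cap each category at 100 so it contributes at most its weight.
        total += weights[category] * min(raw * 10, 100)
    return round(total, 1)
```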
Scoring Categories
The Greenwashing Score is not a single metric — it is a weighted composite across six categories. Categories are weighted by their materiality to actual environmental impact.
Scoring Rubric
Validation & Limitations
Multi-source verification
Every contradiction requires corroborating evidence from at least one independent source. Claims are not flagged based on a single dataset — cross-referencing reduces false positives and ensures findings are defensible.
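The corroboration rule above amounts to a simple filter: a flagged claim survives only if it is supported by evidence from at least two distinct sources (the originating dataset plus one independent one). The sketch below assumes flags are keyed by claim with a set of source names.

```python
# Keep only contradictions corroborated by the required number of
# distinct sources; structure and threshold are illustrative.
def corroborated(flag_sources: dict, minimum: int = 2) -> list:
    return [
        claim
        for claim, sources in flag_sources.items()
        if len(sources) >= minimum
    ]
```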
Severity calibration
Severity ratings account for the magnitude of discrepancy, the materiality of the claim (e.g., Scope 1 emissions vs. marketing language), and whether the company has disclosed corrections or updates.
Known limitations
Analysis depends on publicly available data. Private companies with limited disclosure may receive incomplete assessments. Scores reflect contradictions in public claims — they do not capture undisclosed initiatives or improvements still in progress.
Team
Built by Jean Lin and Abhi Chundru.