Tools · tool-stack supermatrix · landscape 14 × 16

Fourteen tools, sixteen dimensions: scannable along every axis.

One row per tool, one column per attribute. The first column group describes the tool (category, what is measured, data source, engines, mandatory or mandate-dependent stack). The middle column group is the capability matrix: nine capabilities as a dot grid, so that what each tool covers — and where two tools overlap — is visible at a glance. The last column holds the decisive demarcation sentence. The table scrolls horizontally; the tool column stays sticky on the left.

The methodology is sector-invariant: clusters /01 to /07 measure what they measure, regardless of the mandate sector. Cluster /08 (mandate-dependent) carries the sector-specific application duties, with an empirical anchor in the 1&1 telco mandate and qualitative transfer notes for Finance, Insurance and Commerce.

A deeper treatment follows on the Tools sub-page.
Capability scale: core function · partial / secondary function · not covered · methodologically blind
Stack column: P = mandatory stack · M = mandate-dependent
Columns: Tool · Category / layer · What it measures & does · Data source · Engines / coverage · Stack · nine capability columns (LLM visibility, DE SERP index, AI Overviews, Backlinks, Fact check, Site crawl / schema, Index push, Reporting, Manual QA / draft) · Demarcation: why no other tool replaces it
/01 LLM visibility · multi-engine
Peec AI · LLM visibility · Monitoring
Measures: external LLM answers. Share of voice and citation sources over broad, automatically generated query sets; time series per engine, competitive comparison, source-domain analysis.
Data source: LLM API polling · Coverage: ChatGPT, Claude, Gemini, Perplexity, AI Overviews · Stack: P
Demarcation: the only tool that tracks all four chat engines plus AI Overviews in parallel in one dashboard. Answers "how large is my share of the discourse?", not "do I hit this specific question?" (that is Rankscale's question).
Rankscale · Prompt-set tracking · Monitoring
Measures: defined prompt sets. Position and appearance in user-defined prompt sets; drift detection per prompt, time series per engine.
Data source: controlled LLM API polling · Coverage: ChatGPT, Claude, Gemini, Perplexity · Stack: P
Demarcation: prompt-set-centric: we define the prompts, whereas Peec AI generates them automatically. 1&1 runs both in parallel; the two data models are not convertible into each other. Answers "do I hit this conversion-relevant question?"
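To make the difference between the two /01 data models concrete, the sketch below shows the prompt-set approach in a minimal form: a fixed, hand-defined prompt list is polled against one engine and the brand's appearance is logged per prompt. This is an illustration, not Rankscale's actual implementation; the `ask_engine` helper, the prompt list and the brand terms are all assumptions to be replaced with a real engine API and the mandate's own prompts.

```python
from datetime import date

# Hypothetical prompt set: defined by us, not auto-generated (the prompt-set model).
PROMPT_SET = [
    "Which mobile tariff is best for streaming in Germany?",
    "Is 1&1 a good DSL provider?",
]
BRAND_TERMS = ["1&1", "1und1"]  # illustrative brand spellings

def ask_engine(prompt: str) -> str:
    """Placeholder for a call to one chat engine's API (ChatGPT, Claude, ...)."""
    raise NotImplementedError("wire up the engine API of your choice here")

def track_prompt_set(engine_name: str) -> list[dict]:
    """Poll every prompt once and record whether the brand appears in the answer."""
    rows = []
    for prompt in PROMPT_SET:
        answer = ask_engine(prompt)
        hit = any(term.lower() in answer.lower() for term in BRAND_TERMS)
        rows.append({
            "date": date.today().isoformat(),
            "engine": engine_name,
            "prompt": prompt,
            "brand_mentioned": hit,
        })
    return rows  # appended to a time series, this yields drift detection per prompt
```

The broad share-of-voice model works the other way around: the query set is generated automatically and the metric is aggregated across it, which is why the two outputs cannot be converted into each other.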
/02 Classical SERP & AI overviews · DE
Sistrix Plus · DE SEO + AIO · Monitoring
Measures: the DE SERP index. German visibility index on the deepest commercially available keyword dataset; AI-Overview appearance per keyword as a SERP feature.
Data source: in-house SERP crawl · Coverage: Google DE (primary), additional markets secondary · Stack: P
Demarcation: the only tool with the German keyword depth that is indispensable for DACH. Sees AI Overviews as a SERP feature where Peec AI sees them as an LLM-answer component; same surface, different measurement method, both needed.
/03 Content factual accuracy
Scrunch AI · Accuracy layer · QA
Measures: facts in LLM answers. Factual accuracy of LLM answers against the target values; flags hallucinations and outdated statements, such as incorrect tariff prices.
Data source: LLM polling plus target/actual comparison · Coverage: ChatGPT, Claude, Gemini, Perplexity · Stack: P
Demarcation: the only layer that measures content-level factual accuracy rather than visibility. Mandatory for tariff and product details; visibility tools would count a false statement as a success.
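The target/actual comparison behind this layer can be pictured as a canonical fact table checked against what an engine actually states. The sketch below is a simplified assumption, not Scrunch AI's pipeline: the fact table, the price regex and the tolerance are illustrative placeholders.

```python
import re

# Illustrative canonical facts; in practice sourced from the mandate's product data.
TARGET_FACTS = {"Tarif S": 9.99, "Tarif M": 19.99}

PRICE_RE = re.compile(r"(\d+[.,]\d{2})\s*(?:€|EUR)")

def check_answer(tariff: str, answer_text: str, tolerance: float = 0.005) -> dict:
    """Compare the first price stated in an LLM answer against the target value."""
    target = TARGET_FACTS[tariff]
    match = PRICE_RE.search(answer_text)
    if not match:
        return {"tariff": tariff, "status": "no_price_found"}
    stated = float(match.group(1).replace(",", "."))
    ok = abs(stated - target) <= tolerance
    return {"tariff": tariff, "stated": stated, "target": target,
            "status": "ok" if ok else "outdated_or_wrong"}

# An answer quoting an old price gets flagged instead of counted as visibility:
print(check_answer("Tarif M", "Tarif M costs 14,99 € per month."))
```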
/04 Technical site & crawler directives
Screaming Frog · SEO crawler · Diagnostics · QA
Measures: the site itself. Schema.org validity, llms.txt, robots.txt, meta tags, canonicals, indexability; end-to-end crawl of the site.
Data source: in-house site crawl · Coverage: any domain (technical audit view) · Stack: P
Demarcation: the only tool that inspects the site from the inside. All others measure external effect; Screaming Frog checks the precondition, i.e. what crawlers get to see at all.
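A simplified version of the crawler-directive part of this audit can be scripted with the Python standard library alone: check whether the AI crawlers' user agents are allowed by robots.txt and whether an llms.txt is served at all. The user-agent list and the domain below are illustrative assumptions; a full Screaming Frog crawl checks far more (schema validity, canonicals, indexability).

```python
import urllib.error
import urllib.request
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]  # illustrative list

def audit_crawler_directives(domain: str) -> dict:
    """Check robots.txt permissions for AI crawlers and the presence of /llms.txt."""
    base = f"https://{domain}"
    rp = RobotFileParser()
    rp.set_url(f"{base}/robots.txt")
    rp.read()
    allowed = {agent: rp.can_fetch(agent, f"{base}/") for agent in AI_CRAWLERS}

    try:
        with urllib.request.urlopen(f"{base}/llms.txt", timeout=10) as resp:
            has_llms_txt = resp.status == 200
    except urllib.error.HTTPError:
        has_llms_txt = False

    return {"domain": domain, "robots_allows": allowed, "llms_txt_present": has_llms_txt}

print(audit_crawler_directives("example.com"))  # placeholder domain
```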
/05 Reporting
Looker Studio · Dashboard layer · Reporting
Measures: nothing itself; aggregation only. Pulls data from all tools via connectors into stakeholder dashboards; live URLs, automated e-mail reports.
Data source: connectors (GSC, Sistrix, Sheets, …) · Coverage: configurable per mandate · Stack: P
Demarcation: pure visualisation layer with no data sourcing of its own. Does not replace the source tools, but does replace hand-built status reports and PowerPoint exports.
/06 Index distribution
GSC + IndexNow · Index push · Distribution · Monitoring
Measures: Google + Bing. Indexing status at Google and Bing; instant push via IndexNow into the Bing index, which propagates downstream to ChatGPT Search, Copilot and Perplexity.
Data source: Google + Bing APIs · Coverage: Google, Bing; indirectly ChatGPT Search, Copilot, Perplexity · Stack: P
Demarcation: the only distribution tool in the stack: it actively changes indexing status instead of merely observing it, and it reaches three chat engines that offer no submission APIs of their own.
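The push side of this cluster follows the public IndexNow protocol: a JSON payload with the host, a verification key hosted on the domain and the changed URLs is POSTed to an IndexNow endpoint. The sketch below is a minimal version; the domain, key and URL are placeholders, and the key file must actually be reachable at the stated keyLocation for the push to be accepted.

```python
import json
import urllib.request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def push_urls(host: str, key: str, urls: list[str]) -> int:
    """Submit changed URLs via the IndexNow protocol; returns the HTTP status code."""
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # the key file must exist at this URL
        "urlList": urls,
    }
    request = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status  # 200/202 = accepted; 403 = key not verified

# Placeholder values; replace with the mandate's domain, generated key and changed URLs.
# push_urls("example.com", "0123456789abcdef", ["https://example.com/tarife/"])
```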
/07 Manual QA & draft engines · one per provider
Claude Pro · Anthropic · Production · QA
Measures: Anthropic models. Manual answer sampling in Claude for QA; draft authoring for briefs, method texts and structures.
Data source: direct UI access · Coverage: Claude (current Sonnet / Opus) · Stack: P
Demarcation: sampling against the Anthropic engine. Catches tonality drift, hallucinations and citation patterns that aggregate tracking does not surface.
ChatGPT Plus · OpenAI · Production · QA
Measures: OpenAI models. Manual QA and draft engine analogous to Claude Pro, against OpenAI models; browsing and memory functions for live research.
Data source: direct UI access · Coverage: GPT family (incl. ChatGPT Search) · Stack: P
Demarcation: necessary because OpenAI models have the largest distribution window in DACH; manual sampling here has the highest leverage.
Perplexity Pro · Perplexity · Production · QA
Measures: Perplexity. Manual QA and draft engine; citations are exposed in the UI, allowing direct inspection of the citation pattern.
Data source: direct UI access · Coverage: Perplexity (Sonar and routing models) · Stack: P
Demarcation: the only chat surface that makes citation logic transparent; the primary inspection tool for citation-hook engineering (method phase 03).
/08 Mandate-dependent · where the mandate carries it
Four-sector transfer · Cluster /08 is the sector lever of the stack. Telco carries the empirical anchor, e.g. tariff-price drift in cluster /03 and parallel tracking in the 1&1 mandate. Finance demands cluster /03 for condition and yield fact-checking plus cluster /05 for the BaFin-compliant reporting chain. Insurance sharpens cluster /03 on policy conditions and cluster /07 on VVG-compliant advisory documentation. Commerce points cluster /04 at Schema.org product validity and cluster /06 at index distribution for seasonal pace. Same tools, different sector duties: no invented sector tools are introduced here.
Ahrefs · Backlinks + international · Monitoring · Diagnostics
Measures: global. Backlink profiles, referring domains, anchor texts, international SEO visibility, international AI-Overview visibility.
Data source: in-house web crawl · Coverage: global, all major SERPs · Stack: M
Demarcation: the only backlink layer; Sistrix covers DE keyword visibility, not backlink depth. Joins as soon as the mandate carries hub-and-spoke work or scales beyond DACH.
Surfer SEO (or comparable) · Diagnostics
Measures: per brief. Topical authority and entity coverage of a planned piece against the top SERP results; brief review before production.
Data source: NLP analysis of the top SERP results · Coverage: per brief / keyword set · Stack: M
Demarcation: the only pre-production layer: it takes effect before publication. Replaceable by internal briefing in well-led editorial teams with their own topic map.
Profound · Enterprise GEO · Monitoring
Measures: EU multi-language. LLM visibility across multiple markets and languages; enterprise reporting, role / tenant separation. Functionally comparable to Peec AI, built for multi-country scale.
Data source: LLM API polling (enterprise) · Coverage: ChatGPT, Claude, Gemini, Perplexity; EU multi-language · Stack: M
Demarcation: a replacement for Peec AI on EU-wide rollout, not an addition. Same function, different scaling class; chosen when tracking runs in parallel across more than three languages or markets.
Brandwatch / Talkwalker · Social listening · Monitoring
Measures: social + forums. Brand mentions, sentiment and topic momentum across social, news and forums; volume time series, demographics, crisis alerts.
Data source: social APIs + web crawl · Coverage: Twitter/X, Reddit (limited), news, forums, blogs · Stack: M
Demarcation: covers the discourse layer that LLM tracking does not see: the sources from which LLM answers later draw material. Joins only when the mandate methodologically carries social / PR levers.
02 · Meta · overlaps and gaps

Where do tools overlap deliberately?

Redundancy we do not resolve, because the data models are not convertible into each other.

  • Peec AI ⇄ Rankscale · Broad automatic share of voice vs. position in defined prompt sets. Each answers a different question; 1&1 runs both in parallel.
  • Sistrix ⇄ Peec AI (AI Overviews) · AI Overviews as a SERP feature (Sistrix) vs. as an LLM-answer component (Peec AI). The double measurement reveals whether an AIO carries as a SERP feature or as an answer.
  • Tracking ⇄ manual QA · Peec AI / Rankscale deliver the time series; Claude Pro / ChatGPT Plus / Perplexity Pro deliver the samples. The aggregate does not see tonality drift.
  • Screaming Frog ⇄ GSC · Pre-crawl view (Screaming Frog) vs. post-crawl view (GSC). A site can be technically clean and still not be indexed; both stay.

Where do real gaps remain?

What the stack does not reliably cover, and how it is compensated for.

  • Reddit / community · No tool reliably measures the Reddit discourse that surfaces as a source in ChatGPT answers. GummySearch was discontinued on 30 November 2025. Compensation: manual research per mandate in the briefing phase.
  • Conversational long tail · Search-backed tools are blind to conversational queries. Compensated via manually curated prompt sets in Rankscale plus samples in the three chat engines.
  • Voice surfaces · Alexa, Siri, Google Assistant, in-car: captured by no tool. Methodological answer: name the gap openly rather than compensate. Volume in 2026 is marginal.
  • Citation attribution · None of the tools reliably answers why an LLM picks a particular source. Closed via method (phase 03: citation hooks, front-loading per Indig / Aggarwal), not via tool output.