Northbridge · Study

Retrieval Procurement in the German Telco Market, Methodology and Measurement Logic

How tariff citations in ChatGPT, Perplexity, Claude and Google AI Overview arise in legally compliant form. A methodological position statement, illustrated by Europe's most densely regulated Tier-1 consumer sector.

Published May 2026
As of May 2026

Table of contents

Foreword and executive summary
  1. Foreword and executive summary
Part IV · Sovereign AI and industry standard
  17. Sovereign AI in the European telco sector (3 subchapters)
  18. Four strategic paths in the telco AI market (3 subchapters)
  19. Code of conduct as industry standard (5 subchapters)

Foreword and executive summary

Framing and condensed study core.

Northbridge
As of May 2026
Publication

Only a fraction of a Tier-1 telco's editorial content placements reach the retrieval layer where tariff recommendations in ChatGPT, Perplexity, Claude, Google AI Overview and Copilot are now produced. The rest is classically visible and generatively invisible. This study delivers the methodology that converts this state into an operationally measurable, legally tenable, and management-evidenceable citation strategy, by way of three concrete levers: a procurement standard with 18 criteria, a price-factor matrix as the enforcement mechanism vis-à-vis publishers, and a disclosure logic that simultaneously carries BGH and TKG obligations.

Three observations frame this study. First, ChatGPT, Perplexity, Claude, Google AI Overview and Copilot increasingly co-determine which tariffs a consumer is presented with at all before they reach a provider domain. Second, the citation inventory of these generative answers in the DE telco market is carried predominantly by a small number of comparison aggregators and test publications, not by the provider domains themselves. Third, with BGH I ZR 183/24 (main-presentation duty, October 2025), § 165 (2a) TKG (supply-chain duty) and the NIS-2 wave between December 2025 and March 2026, the regulatory layer has become a precondition of every tariff communication rather than an add-on. From these three observations follows one question: how does a telecommunications provider channel its visibility budget into the generative answer layer without breaking the compliance architecture? This study delivers a methodological answer.

Compliance-GEO, the application of Generative Engine Optimization (GEO) under the conditions of regulated consumer markets, is treated in public expert discourse as either an extension of classical SEO or a legal addendum; both miss the methodological architecture. Providers of classical search-engine optimisation handle the topic as a continuation of established visibility methods; general GEO publications omit the regulatory layer; legal publications focus on individual norms without a methodological architecture. This study positions Compliance-GEO as a discipline in its own right, illustrated by the German telecommunications sector as the most regulatorily dense Tier-1 constellation in Europe.

The substantive cut-off is May 2026. The regulatory wave between December 2025 and March 2026 (NIS-2 Implementation Act, KRITIS Umbrella Act) and the GSMA four-pillar assessment (January 2025, as of Q4 2024) with Ethics & Compliance as an equal-rank value pillar form the substantive frame. Three DE-market specifics, aggregator dominance in tariff queries, regulatory density at the TKG / TTDSG / BNetzA / NIS-2 interface, and a consumer-portal landscape with high trust weighting, make the study's DE-only scope a methodological precondition rather than a geographic narrowing.

The study is laid out in 5 parts plus conclusion: Part I (Chapters 1–5) defines Compliance-GEO as a discipline and unfolds the two parallel three-level structures of disqualification and compliance architecture. Part II (6–8) maps the German market structurally. Part III (9–16) unfolds the operational methodology across 6 retrieval engines, 5 phases, 18 procurement criteria, 6 disclosure variants and the price-factor matrix. Part IV (17–19) frames sovereign AI and industry standard. Part V (20–23) traces mandate practice back to its three pillars. Chapter 24 condenses the methodological position into 5 core positions. The executive summary that follows condenses the study's substance across 4 complementary layers: 8 operational artefacts as the toolkit framework, the GSMA Intelligence 4-pillar framework as external validation, 8 core statements as the operational substance map (what the study methodologically does), and 5 core positions as the strategic position synthesis (why it does so).

Reading path: Two deepening sub-studies unfold authority compounding (Cost of Hesitation) and the impact pyramid (Study Pyramid) as standalone reading units.

Executive Summary · Toolkit framework

8 operational artefacts

The methodological position of the study is carried by 8 concrete artefacts, verifiable, deployable, and evidenceable vis-à-vis management and publishers. Each artefact unfolds in a chapter of the study and is additionally available as a standalone asset.

01

Procurement standard with 18 criteria.

8 binary A criteria decide eligibility; 10 gradual B criteria determine citation probability. Embedded in a two-stage verification workflow that runs before every final invoice.

02

Price-factor matrix as legal lever.

Coupling the final invoice to briefing compliance replaces the classical media-agency logic. Citation-Buy, Mixed-Buy and Mention-Buy as three calibrated price classes with empirical anchors.

03

BGH case-law map.

6 leading rulings between 2020 and 2025 carry the doctrinal foundation for Compliance-GEO, among them BGH I ZR 183/24 (October 2025) with front-loading relevance for generative answer windows. Three counter-lines (UCPD full harmonisation, platform vs. own generation, PAngV special regime) sharpen the substantive state.

04

Cost of hesitation, industrially validated.

GSMA Intelligence (January 2025, as of Q4 2024) and McKinsey (February 2025) position Ethics & Compliance as an equal-rank value pillar; non-implementation has compound effects across quarters.

05

Impact pyramid "The single number".

5 stages from reach to impact, starting from roughly 180 content placements per month. Shows where the lever sits between publication volume and citation quality.

06

Three-dimension measurement logic.

Citation Rate, Citation Persistence and Citation Quality as Share of Model Voice on a weekly cadence. A visibility number without persistence and tonality misses the steering substance, hence three dimensions rather than one aggregate metric.
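The three dimensions can be sketched as a minimal computation, assuming weekly per-engine observations. The field names and metric formulas below (rate as cited runs over total runs, persistence as consecutive weeks above a threshold, quality as a brand-token share) are illustrative assumptions, not the study's normative definitions from Chapter 7.

```python
from dataclasses import dataclass

# Hypothetical weekly observation for one placement on one retrieval engine.
# All formulas here are illustrative assumptions, not the study's definitions.
@dataclass
class WeeklyObservation:
    cited_runs: int      # answer runs in which the placement was cited
    total_runs: int      # total monitored answer runs that week
    brand_tokens: int    # tokens attributable to the client brand
    answer_tokens: int   # total tokens in the generated answers

def citation_rate(obs: WeeklyObservation) -> float:
    """Dimension 1: share of monitored runs that cite the placement."""
    return obs.cited_runs / obs.total_runs if obs.total_runs else 0.0

def citation_persistence(weeks: list[WeeklyObservation], threshold: float = 0.1) -> int:
    """Dimension 2: consecutive weeks, counted back from the most recent one,
    in which the citation rate stayed above a threshold."""
    streak = 0
    for obs in reversed(weeks):
        if citation_rate(obs) > threshold:
            streak += 1
        else:
            break
    return streak

def share_of_model_voice(obs: WeeklyObservation) -> float:
    """Dimension 3 (quality proxy): brand share of the generated answer text."""
    return obs.brand_tokens / obs.answer_tokens if obs.answer_tokens else 0.0

weeks = [
    WeeklyObservation(cited_runs=4,  total_runs=50, brand_tokens=120, answer_tokens=4000),
    WeeklyObservation(cited_runs=9,  total_runs=50, brand_tokens=300, answer_tokens=4200),
    WeeklyObservation(cited_runs=11, total_runs=50, brand_tokens=350, answer_tokens=4100),
]
print(citation_rate(weeks[-1]))     # 0.22
print(citation_persistence(weeks))  # 2 (weeks above the 10 % threshold)
print(share_of_model_voice(weeks[-1]))
```

The sketch makes the text's point mechanical: a single aggregate visibility number would collapse three independently moving quantities; a placement can hold a high weekly rate while its persistence streak resets.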

07

Services in the two-role perspective.

Commissioner and internal approval partner as the shared grammar of the mandate. Relieves day-to-day work from having to renegotiate role conflicts between Marketing, Legal and management on a case-by-case basis.

08

Capstone diagram library.

6 consistent visualisations carry the methodological core statements: three layers of the compliance architecture (Ch. 3), impact pyramid (Ch. 8), compound hero Cost of Hesitation (Ch. 4), 5 phases (Ch. 11), 18 criteria (Ch. 12), price-factor matrix (Ch. 14). Consistent visual grammar, print- and mobile-optimised.

Chapter 3 · 4 · 8 · 11 · 12 · 14
Executive Summary · External validation

GSMA Intelligence 4-pillar framework

01

Financial, the traditional value pillar.

Investment costs, AI-attached revenues, payback periods. The only pillar that maps directly onto the balance sheet and carries the classical economic metrics of AI investments.

02

Business Transformation, depth and productivity.

AI deployment depth across network and product layers, productivity gains. Addresses the structural transformation through AI, not just individual efficiency gains.

03

People & Skills, competence and organisation.

AI talent share of total workforce, AI training frequency, organisational maturity. A pillar that accumulates over years rather than fluctuating per quarter.

04

Ethics & Compliance, AI governance and risk management.

AI governance structures, risk-management model implementation. GSMA Intelligence (January 2025, as of Q4 2024) positions this pillar as an equal-rank value contribution alongside Financial, Business Transformation and People & Skills, not as a hygiene factor. Compliance-GEO is methodologically anchored in this pillar.

Executive Summary · Substance map

8 core statements

01

Compliance-GEO is a discipline in its own right.

Not the stricter variant of GEO, but a methodology with its own phases, its own procurement standard and its own disclosure logic. The GSMA Intelligence 4-pillar framework (January 2025) carries the position from proprietary into the sector-consistent space.

02

Three compliance layers carry the architecture.

Regulatory (TKG with consumer and IT-security strands, BSIG, UWG, MStV, DDG, EU AI Act), contractual (procurement standard with price coupling as a form of supply-chain duty) and ethical (Class 3 dividing line). Not hierarchical, but parallel.

03

Three disqualification levels are orthogonal.

A placement can be excluded legally, technically or substantively, through breach of norm, missing retrieval suitability, or editorial weakness. Exclusion at one level renders the placement worthless, independently of the other two.

Chapter 2 · 12
04

18 procurement criteria, one two-stage workflow.

8 binary A criteria decide eligibility; 3 are publisher pre-checks at the domain-policy level (no booking), 5 are briefing compliance per piece (invoice reduction on FAIL). 10 gradual B criteria determine citation probability. The measurement logic carries three dimensions, Citation Rate, Citation Persistence, Citation Quality, as Share of Model Voice. Legal lever: price coupling, not the contract itself.
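The two-stage workflow and the price coupling can be sketched as follows. The criterion IDs, the assignment of criteria to the pre-check versus per-piece stage, the 20 per cent per-FAIL reduction and the unweighted B-score average are all illustrative assumptions, not the study's actual schedule from Chapters 12 and 14.

```python
# Hypothetical sketch of the two-stage verification workflow described above.
# Criterion IDs, stage assignment and the reduction schedule are assumptions.
def verify_placement(pre_checks: dict[str, bool],
                     per_piece: dict[str, bool],
                     b_scores: dict[str, float],
                     agreed_fee: float) -> dict:
    # Stage 1: publisher pre-checks at domain-policy level. Any FAIL -> no booking.
    if not all(pre_checks.values()):
        return {"booked": False, "invoice": 0.0}
    # Stage 2: briefing compliance per piece. Each FAIL reduces the final invoice.
    fails = [cid for cid, ok in per_piece.items() if not ok]
    invoice = agreed_fee * max(0.0, 1.0 - 0.20 * len(fails))
    # B criteria do not gate payment; they estimate citation probability.
    b_score = sum(b_scores.values()) / len(b_scores)
    return {"booked": True, "invoice": round(invoice, 2),
            "fails": fails, "b_score": round(b_score, 2)}

result = verify_placement(
    pre_checks={"A01_bot_policy": True, "A06_url_persistence": True,
                "A07_indexability": True},
    per_piece={"A02_dom_label": True, "A03_schema": False,
               "A04_byline": True, "A05_front_loading": True,
               "A08_outbound_disclosure": True},
    b_scores={"B01": 0.8, "B03": 0.6, "B05": 0.9},
    agreed_fee=10_000.0,
)
print(result)  # booked, invoice 8000.0 after one per-piece FAIL
```

The design point the sketch illustrates is the one named in the text: payment, not litigation, is the enforcement path, so every check must resolve before the final invoice is issued.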

05

6 disclosure variants V01–V06.

V01 carries maximal legal protection; V06 is prohibited covert advertising. UWG § 5a (4), MStV § 22, DDG § 6 (1) No. 1 form the triad; in the telco sector, the TKG mandatory information from §§ 54–57 applies in addition and must be carried in the main presentation independently of the variant.

06

DE specifics are a methodological precondition.

Three fields make the DE specificity integral: aggregator and comparison-platform dominance in tariff queries with roughly one third of citation volume in generative answers (NB retrieval observation April 2026); regulatory density at the TKG / TTDSG / BNetzA / NIS-2 interface; consumer-portal landscape with high trust weighting. AT and CH are standalone follow-up studies, not a DACH surcharge.

Chapter 6 · 23
07

Compliance-GEO operates parallel to telco AI infrastructure, not within it.

The German Tier-1 telco market contains 4 strategic AI infrastructure paths (Fibre Connectivity, Intelligent Network Services, Space-and-Power, GPUaaS) with budget horizons in the double-digit billions, as identified by McKinsey & Company (February 2025); Compliance-GEO addresses none of these paths but rather consumer visibility in generative answers as a standalone budget line with its own approval stakeholder. The categorical separation decides whether a mandate finds the right sponsor inside the corporation.

Chapter 17 · 18 · 21 · 22 · 23
08

The channel architecture shifts radically.

At a mid-sized Tier-1 operator, around 20 of 100 sales units come from LLM direct citation after 12 months, a channel that practically did not exist 12 months earlier. The sales index rises moderately from 100 to 108, but the composition shifts structurally: Organic Search loses 5 points, comparison portals 5, Paid 2; LLM direct citation gains 20. Citation conversion at roughly 7 versus 5 per cent is structurally higher than on the classical Google SERP.
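The channel arithmetic above reconciles as follows. Only the deltas and the 100-to-108 index are from the text; the baseline channel split is an illustrative assumption.

```python
# Arithmetic check of the channel-shift example (index points, baseline = 100).
# The baseline split is an assumption; the deltas and the 108 are from the text.
baseline = {"organic_search": 40, "comparison_portals": 30,
            "paid": 20, "other": 10, "llm_direct": 0}
delta = {"organic_search": -5, "comparison_portals": -5,
         "paid": -2, "other": 0, "llm_direct": +20}
after = {ch: baseline[ch] + delta[ch] for ch in baseline}

print(sum(after.values()))  # 108: the moderate overall index rise
print(after["llm_direct"])  # 20: units carried by the new channel
```

The net effect is the text's point: the aggregate moves by only 8 points while a fifth of the unit volume migrates to a channel that did not exist a year earlier.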

Conclusion · Position synthesis

5 core positions

The 8 core statements describe what Compliance-GEO substantively is, as a descriptive substance map. The 5 core positions that follow condense which methodological consequences follow from this, as a synthetic closing position statement per study part. Where statement and position touch the same substance (discipline status, parallel operation to telco AI infrastructure), the position carries the synthesis accent, the statement remains substance map; the repetition is reading-mode-separated, not redundant.

I
Discipline status

Compliance-GEO is a discipline in its own right.

Not GEO with a compliance surcharge, but a methodology whose operational phases, procurement standard and disclosure logic are structurally altered by regulatory density. An industry-neutral approach does not work in telecommunications, Financial Services, Insurance and Commerce.

II
Methodology foundation

Two parallel three-level structures carry the methodology.

Three disqualification levels (legal, technical, substantive) and three compliance layers (regulatory, contractual, ethical) behave parallel to each other, not hierarchically and not gradually. Exclusion at one level is not compensated by fulfilment at others; every placement is checked across all 6 fields, not sequentially.

Chapter 2 · 3 · 12
III
Operational enforcement

Price coupling is the actual legal lever.

After publication, corrections to the URL path, DOM disclosure, schema markup or byline are practically not enforceable vis-à-vis the publisher; coupling the final invoice to evidence of fulfilled criteria is the only reliable enforcement mechanism.

IV
Categorical boundary

The Class 3 dividing line is categorical.

Strategic Text Sequences, Prompt Injection and covert advertising fall outside the mandate scope not because of their intensity, but because of their action category. They are not advised on, not documented, not checked; beyond this, the line operates defensively.

Chapter 5 · 21
V
Corporate embedding

Compliance-GEO operates parallel to telco AI strategy.

The three-pillar architecture (two-role perspective, engineering substance, DE market expertise) forms the operational foundation of mandate practice; none of the three pillars replaces the others. The categorical separation from the 4 McKinsey paths and sovereign AI constructions (Core Statement 07) is its precondition.

Chapters 17–18 · 21–23

The legal classifications in this study are supported by Northbridge-internal deepening research. They do not constitute legal advice and do not replace it; for mandate-specific application, the customary case-by-case review by the client applies.

Definition and demarcation: SEO, classic GEO, Compliance-GEO

Three disciplines, separated by objective, measurement logic and regulatory frame.


A study that designates Compliance-GEO as a discipline in its own right must determine three terms in their relation to each other: classical search-engine optimisation (SEO), Generative Engine Optimization (GEO) in its general form, and Compliance-GEO as its shape in regulated consumer markets. The demarcation is not gradual; Compliance-GEO is not the stricter variant of classical GEO, just as classical GEO is not the more modern variant of SEO. The differences lie in objective, addressee, measurement logic and regulatory frame, and thereby in the action category.

1.1 SEO and classical GEO

SEO (Search Engine Optimization) optimises content for the results pages of classical search engines, primarily Google and Bing. Success is measured in ranking position and organic traffic. The methodology has been established since the late 1990s and comprises technical, content and link-related elements. Regulatorily, SEO is affected mainly by advertising-disclosure duties where content is commercially motivated (UWG § 5a (4), MStV § 22).

Classical GEO optimises content for generative answer engines: ChatGPT, Microsoft Copilot, Perplexity, Claude, Gemini, Google AI Overviews (Chapter 9). The term was established in academic literature in 2024 by Aggarwal et al. as part of the KDD 2024 paper on GEO-Bench; Wu et al. specified the GEO / GEU (Generative Engine Utility) pairing in 2025. Success is measured in Citation Rate, Citation Persistence and Citation Quality (Chapter 7). Typical methods are front-loading, citation hooks, entity consistency and schema markup.

1.2 Terminological demarcation, GEO versus GeoAI

The term GEO is used in academic literature in two independent meanings. Generative Engine Optimization denotes the discipline of this study. Geospatial AI (GeoAI) denotes the application of AI methods to geographic and spatially referenced data, a research field of its own at the interface of GIS and machine learning, with no connection to generative answer engines. In retrieval contexts, the confusion produces visibility loss when the short form GEO is used unspecifically. This study writes Generative Engine Optimization out in full at the first occurrence of every main chapter. The short form GEO is used exclusively for the discipline defined here; where Geospatial AI is meant, GeoAI is written out.

A further terminological variant, Answer Engine Optimization (AEO), is preferred by individual industry vendors but denotes the same discipline as classical GEO. This study stays with the academically anchored terminology GEO and GEU.

1.3 Compliance-GEO

Compliance-GEO denotes the application of Generative Engine Optimization in regulated consumer markets, specifically in the sectors of telecommunications, Financial Services, Insurance, and Commerce & Subscription, while observing a three-layer compliance architecture that carries regulatory, contractual and ethical requirements as equal-rank (Chapter 3). The methodology covers Classes 1 and 2 of the influence spectrum: structural optimisation as well as paid placement with legally compliant disclosure. Class 3, covert manipulation of retrieval mechanics or model behaviour, is categorically excluded (Chapter 5).

The discipline is not "GEO for regulated industries" in the sense of a sectoral application. It is a methodology in its own right because the regulatory density across the 4 sectors (EECC, TKG, DSA, GDPR, DORA, EU AI Act) so alters the operational phases, the procurement standard and the disclosure logic that a sector-agnostic GEO approach structurally does not fit. GSMA Intelligence formally recognised, in January 2025 (as of Q4 2024) for the telco sector, that Ethics and Compliance form a value pillar of AI investment in their own right, not a hygiene factor but equal-rank alongside financial return, transformation depth and people build-up (Chapter 4).

1.4 The demarcation at a glance

Dimension | SEO | Classical GEO | Compliance-GEO
Primary addressee | Google, Bing | 6 retrieval engines (Ch. 9) | 6 retrieval engines plus internal approval partners (CISO, Compliance Officer, CDO)
Measurement metric | Ranking position, organic traffic | Citation Rate, Persistence, Quality (Ch. 7) | Citation Rate, Persistence, Quality plus compliance evidence as a value pillar in its own right
Core method | Keyword optimisation, link building, technical SEO | Front-loading, citation hooks, entity consistency, schema markup | Procurement standard with 18 criteria and verification workflow (Ch. 12), disclosure variants V01–V06 (Ch. 13)
Regulatory frame | Advertising-disclosure duty (UWG, MStV) | industry-unspecific | Three-layer architecture: EECC, TKG, DSA, GDPR, DORA, EU AI Act and sector-specific supervision (Ch. 3)
Categorical outer boundary | — | — | Class 3 (manipulation) is categorically excluded (Ch. 5)

The B-criteria list from Chapter 12.3 looks superficially like classical on-page SEO hygiene (schema markup, byline, substance length, front-loading). The difference lies not in the inspected attributes but in the target metric (citation rate in retrieval answers rather than SERP ranking), in the measurement mode (weekly Share of Model Voice per engine rather than position tracking), and in the asymmetric effect of individual criteria (front-loading B 05 carries more empirically than question headlines B 08, supported by Indig 2026).

1.5 Consequence for the study terminology

The following chapters treat Compliance-GEO as a discipline in its own right, not as a special case of classical GEO. SEO and classical GEO are not further deepened in this study; they remain reference points, not subject matter. Terminology follows academic usage throughout (GEO and GEU per Wu et al. 2025); industry-specific alternative terms such as AEO are not used. Where later chapters use "GEO" without qualifier, the term denotes classical GEO; Compliance-GEO is always marked as such.

The three levels of disqualification

Legal, technical and substantive level.


A placement that is to act as a citation carrier can be excluded at three levels: legally through breach of norm, technically through retrieval architecture, substantively through editorial weakness. The three levels are not hierarchically ordered and not gradual; exclusion at one level renders the placement worthless, independently of the degree of fulfilment at the other two. This separation is the conceptual foundation of the procurement standard (Chapter 12) and thus the operational explanation of why Compliance-GEO works through parallel checks rather than prioritisation.

2.1 The legal level, exclusion through breach of norm

At this layer, a placement becomes contestable because it breaches a regulatory norm. The relevant German references are UWG § 5a (4) (prohibition of disguised commercial communication), MStV § 22 (advertising disclosure in telemedia) and the BGH decision I ZR 211/17 (influencer disclosure duty, precedent for visible, not subsequently inserted disclosure). In the telco sector, the mandatory-information rules of the Telekommunikationsgesetz (TKG 2021, §§ 54–57) apply in addition, making any tariff promotion without contract term, termination conditions and price-change mechanism unlawful independently of other checks. As of April 2026, an EU-law preliminary reference is also relevant: on 10 July 2025, the VG Berlin in case 32 K 222/24 referred questions to the CJEU on the country-of-origin principle and the preclusive effect of the Digital Services Act. The outcome is open; a clarification or shift of the application logic of the MStV and the DDG with regard to cross-border services is possible within the Q3/Q4 2026 horizon and is to be calibrated in the next study revisions. The legal layer is the most publicly discussed exclusion category, but it captures only part of the exclusion space: a placement that is legally clean can still be disqualified technically or substantively.

2.2 The technical level, exclusion through retrieval architecture

At this level, a placement fails because it is not reached, not indexed or wrongly classified by retrieval engines. 8 concrete exclusion mechanisms are formalised in Chapter 12 as A criteria. The technical level is empirically anchored across several layers: Indig 2026 demonstrated that entry segments are preferred in RAG chunking through front-loading; the Claude Opus 4.6 retrieval investigation of April 2026 documents that engines parse HTML, JSON-LD and structured data, not pixel layouts; the Profound markdown-versus-HTML experiment of 381 pages over three weeks evidences engine-dependent parser preferences (as of Q1 2026). In the public GEO discourse, the technical level is mostly reduced to classical SEO; for Compliance-GEO it is extended by categories SEO does not address (bot policies per retrieval engine, disclosure markup at DOM level, URL persistence).
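Two of the technical exclusion mechanisms named above, parseable structured data and disclosure markup at DOM level, can be pre-checked mechanically. The sketch below is a minimal illustration under stated assumptions: the disclosure markers it looks for (a `rel="sponsored"` attribute or a visible "Anzeige" label) and the regex-based JSON-LD extraction are simplifications, not the A-criteria tests of Chapter 12.

```python
import json
import re

# Hypothetical pre-check for two technical exclusion mechanisms: parseable
# JSON-LD and a DOM-level disclosure marker. Marker strings and the check
# logic are illustrative assumptions, not the actual A-criteria tests.
def technical_precheck(html: str) -> dict[str, bool]:
    jsonld_blocks = re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
    has_jsonld = False
    for block in jsonld_blocks:
        try:
            json.loads(block)
            has_jsonld = True
        except json.JSONDecodeError:
            pass  # malformed structured data does not count as fulfilled
    has_disclosure = 'rel="sponsored"' in html or "Anzeige" in html
    return {"jsonld_parseable": has_jsonld, "dom_disclosure": has_disclosure}

page = '''<html><body>
<p>Anzeige</p>
<script type="application/ld+json">{"@type": "Article", "headline": "Tarifvergleich"}</script>
</body></html>'''
print(technical_precheck(page))
```

The point the sketch carries is the one from the Claude Opus 4.6 retrieval investigation cited above: engines parse markup, not pixel layouts, so a disclosure that exists only visually, or structured data that does not deserialise, fails at this level regardless of editorial quality.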

2.3 The substantive level, exclusion through editorial weakness

At this level, the placement is legally compliant and technically indexable but is rated by the engines with such low citation probability that it achieves practically no visibility. The empirical anchors come from three study layers. In the KDD 2024 paper, Aggarwal et al. showed across more than 10,000 queries on the GEO-Bench that position-adjusted word count correlates positively with citation rate. Indig 2026 documented that definitive language in cited passages stands at 36.2 per cent versus 20.2 per cent in non-cited ones. The Ahrefs study of 2025 (75,000 brands) evidences that brand mentions in editorial context correlate more strongly with citation rate than classical backlinks. The substantive level is gradual in its effect but binary in its disqualification threshold: below certain substance thresholds (Chapter 12, criterion B 03) the placement falls below the perception threshold of the engines.

2.4 The three levels at a glance

Level | What it excludes | Empirical anchoring | Link to procurement standard
Legal | Breach of norm (UWG § 5a (4), MStV § 22, BGH I ZR 211/17, TKG §§ 54–57 in the telco sector) | Case law and supervisory practice | A criteria A 02 (DOM label), A 08 (outbound-link disclosure)
Technical | Missing indexability, missing structured data, missing bot accessibility, missing URL persistence | Indig 2026, Claude Opus 4.6 retrieval investigation, Profound experiment Q1 2026 | A criteria A 01, A 03–A 07
Substantive | Insufficient substance, missing citation hooks, missing front-loading discipline, missing entity consistency | Aggarwal et al. KDD 2024, Indig 2026, Ahrefs 2025 | B criteria B 01–B 10 (gradual; below substance thresholds effectively binary)

2.5 Operational rule and demarcation from Chapter 3

The three disqualification levels are operational in nature: they describe at which check points a placement can fail. The three-layer compliance architecture (Chapter 3) is conceptual in nature: it describes which layers Compliance-GEO systematically addresses. The two three-layer models stand independently next to each other, not hierarchically. A legal disqualification (level 1) can have a regulatory, a contractual or an ethical cause (Chapter 3); a technical disqualification (level 2) can be remedied by a contractual URL-persistence agreement (Chapter 3). From this independence follows the operational rule: every placement is checked across all three levels before the final invoice, not sequentially. The verification workflow in Chapter 12.4 operationalises this parallel check.

The three layers of the compliance architecture

Regulatory, contractual, ethical layer.


In its methodological core, Compliance-GEO addresses three layers that behave not hierarchically but parallel to each other: a regulatory, a contractual and an ethical one. These three layers stand independently alongside the three disqualification layers from Chapter 2: a legal disqualification can have its cause on any of the three layers, a technical disqualification can be cured by a contractual agreement, and a Class 3 breach simultaneously breaks all three layers. From this independence follows the operational rule that every placement is checked on all three layers before the final invoice, not sequentially. The three sections that follow develop each layer at its normative and operational anchor point.

Diagram · parallel architecture

Three layers, equal-rank and checked in parallel

I
Regulatory
Web of duties under the state legal order
  • TKG §§ 51–67 · 165 ff.
  • BSIG §§ 28, 30
  • UWG § 5a · MStV § 22
  • DDG § 6 (1) No. 1
Phase-00 precondition checks; BNetzA security catalogue under § 167 TKG.
II
Contractual
Procurement relations with publishers, engines, model vendors
  • Publisher agreement
  • Bot policies
  • Price coupling to criteria fulfilment
Procurement standard with 18 criteria and two-stage verification workflow (Chapter 12); Phase-02 contract design.
III
Ethical
Categorical dividing line against Class 3 manipulation
  • NB methodology · manipulation dividing line
  • Aggarwal et al. KDD 2024
Exclusion from mandate scope; two-role perspective (Chapter 21).
Operational rule: Every placement is checked on all three layers, not sequentially. A breach on one layer renders the placement mandate-unfit, independently of the degree of fulfilment on the other two.

3.1 The regulatory layer

The regulatory layer rests, in the German telco context, on the Telekommunikationsgesetz (TKG 2021, BGBl. I 2021, p. 1858, in force since 1 December 2021; current version as of 17 March 2026 following amendment by Article 6 of the Act of 11 March 2026, BGBl. 2026 I No. 66). Since 6 December 2025 the TKG carries two parallel duty strands, after Article 25 of the Act implementing the NIS-2 Directive and regulating essential principles of information-security management in the federal administration (BGBl. I 2025 No. 301, p. 2 ff.) re-enacted §§ 165 ff. TKG.

The first strand is the consumer-protection layer in §§ 51–67 TKG with pre-contractual information duties (§ 55), contract summary before contract conclusion (§ 54 (3)), term-length regulation including the duty to offer a contract of no more than 12 months (§ 56 (1)) and reduction and termination rights upon performance deviation (§ 57 (4) No. 2). This strand is the primary contact surface for Compliance-GEO: where a retrieval model summarises a tariff in an answer window without naming the minimum term, termination conditions or price-change mechanism from § 55 (1) TKG, an abbreviated representation arises whose attribution logic between telco provider, publisher and model vendor is to be clarified juridically on a case-by-case basis.

The second strand is the IT-security layer in §§ 165 ff. TKG. Every DE telco provider falls under NIS-2: the telco-specific threshold under § 28 (1) No. 3 BSIG (50 employees or 10 million euros annual revenue and balance-sheet total) sits below the EU large-enterprise threshold (250 employees / 50 million euros revenue / 43 million euros balance sheet); providers below qualify under § 28 (2) No. 2 BSIG as important entities. § 28 (5) No. 1 BSIG exempts telco entities from §§ 30, 31, 32, 35, 36, 38, 39, 61 and 62 BSIG; the analogous duties are found in § 165 ff. TKG, and the measures catalogue in § 165 (2a) TKG corresponds to the wording of § 30 (2) BSIG. The supervisory authority is the Bundesnetzagentur; operationalisation runs through the BNetzA security catalogue under § 167 TKG, whose revised version, as of April 2026, is in preparation following evaluation of the consultation that ran until 16 January 2026.
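The threshold logic of the second strand can be restated as a small decision rule. This is a deliberately simplified sketch: it encodes only the two thresholds named in the text's parenthesis (50 employees, or more than 10 million euros in both annual revenue and balance-sheet total), while the statutory test in § 28 BSIG carries further conditions and the statutory wording governs.

```python
# Simplified restatement of the telco-specific NIS-2 entity classification
# described above. Only the two thresholds from the text are encoded; the
# actual § 28 BSIG test carries further conditions and exemptions.
def classify_telco_provider(employees: int,
                            revenue_m_eur: float,
                            balance_sheet_m_eur: float) -> str:
    meets_threshold = (employees >= 50 or
                       (revenue_m_eur > 10 and balance_sheet_m_eur > 10))
    if meets_threshold:
        # Above the telco-specific threshold of § 28 (1) No. 3 BSIG,
        # which sits below the EU large-enterprise threshold.
        return "essential entity (§ 28 (1) No. 3 BSIG)"
    # Providers below the threshold qualify as important entities.
    return "important entity (§ 28 (2) No. 2 BSIG)"

print(classify_telco_provider(employees=40, revenue_m_eur=12, balance_sheet_m_eur=11))
print(classify_telco_provider(employees=30, revenue_m_eur=5, balance_sheet_m_eur=4))
```

The sketch makes the text's comparative point visible: because the telco-specific threshold sits below the EU large-enterprise threshold, a provider of 40 employees with 12 million euros revenue already falls into the stricter category.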

For telco corporations with non-telco-centred secondary activities, such as energy, real-estate or data-centre subsidiaries, the negligibility clause under § 28 (3) BSIG comes into consideration. The BSI has named, in its FAQ on NIS-2 (as of April 2026), concrete reference points: employee numbers, revenue and balance-sheet total of the secondary area; articles of association as a counter-indicator. The clause, however, is the subject of EU-law criticism that questions directive conformity (TeleTrusT, July 2025; Hessel/Schneider, RDi 2026, 25), and of a proportionality defence that reads its application as a concretisation of the proportionality principle. Administrative-court or supreme-court clarification is, as of April 2026, not available; the Austrian NISG 2026 (BGBl. I No. 94/2025, in force on 1 October 2026) dispenses with such a clause and thereby marks an EU-closer special path (Schönherr/DORDA). For the telco-core activity under § 28 (1) sentence 1 No. 3 BSIG, the clause does not apply; it operates exclusively for the assignment to additional entity types from Annexes 1 and 2 BSIG.

The regulatory layer touches Compliance-GEO at three specific points. First, § 165 (2a) No. 4 TKG requires supply-chain security including relations with direct suppliers and service providers; publisher suppliers, model vendors and measurement-tool vendors whose content feeds back into the answer paths of clients are part of this supply chain. The classification of model vendors under the criterion "direct suppliers or service providers" is, on the prevailing reading, not to be affirmed flatly: the supply-chain security duty applies to model vendors, on the current substantive state, as soon as and to the extent that a telco provider integrates a language model of a specific vendor as a direct contractual upstream supplier into a security-relevant process of its public network or its publicly accessible telecommunications service. Without a direct contractual relationship or in case of non-security-relevant use, the regulatory duty does not apply; a stand-alone inclusion of model vendors in the procurement standard remains possible as a methodological cyber-resilience measure outside regulatory duty. The line is supported in EU law by the Commission Implementing Regulation (EU) 2024/2690 of 17 October 2024 (telco deliberately excluded; sector-specific regulation in §§ 165 ff. TKG), by the ENISA Technical Implementation Guidance (June 2025, without LLM-specific norming), and by the NIS Cooperation Group Supply Chain Security Toolbox (30 January 2026, actor-agnostic, without LLMs being named); the final BNetzA security catalogue under § 167 TKG is the subject of the ongoing evaluation and may shift the interpretive frame. Second, § 165 (2a) No. 6 TKG requires concepts and procedures for assessing the effectiveness of risk-management measures; the Northbridge Phase-04 citation reporting logic sits within this duty. Third, § 165 (2b)–(2d) TKG addresses management with implementation, supervision and training duties plus personal liability; Compliance-GEO approval thereby touches not only the marketing function but the management board level of the telco client.

The liability architecture of § 165 (2c) TKG is, on the current substantive state, to be read as a subsidiary-constructive provision. Sentence 1 orders the liability of management for culpably caused damage primarily under the company-law rules applicable to the legal form: § 93 Aktiengesetz, § 43 GmbH-Gesetz, § 34 Genossenschaftsgesetz. Sentence 2 limits the TKG-specific liability arrangement to legal forms for which company law contains no liability rule, such as registered associations or foundations; a cumulative basis of claim is, on the prevailing reading, excluded; the Business Judgement Rule under § 93 (1) sentence 2 AktG remains applicable. The NIS-2-Umsetzungsgesetz thereby creates no new basis of liability but a fall-back provision. The operational consequence for Compliance-GEO lies in the supervision duty under § 165 (2b) TKG: documented inclusion of management in the mandate-approval chain carries this duty; the basis of liability itself remains in company law.

Two further regimes touch the regulatory frame without shifting the duty architecture of §§ 165 ff. TKG. The KRITIS-Dachgesetz (KRITISDachG, BGBl. 2026 I No. 66, partly in force since 17 March 2026) implements the CER Directive (EU) 2022/2557 and norms federally uniform minimum requirements for the physical protection of critical installations; on the prevailing reading, cyber protection (BSIG and TKG) and physical protection (KRITISDachG) run alongside each other with clear dimensional separation, with telco-KRITIS operators subject to the KRITISDachG registration duty but largely exempted from the operational evidence duties (notification, prevention, evidence) under § 39 (4) BSIG, because the NIS-2/TKG regime applies as a special regime. The Digital Operational Resilience Act regulation (DORA, Regulation (EU) 2022/2554, applicable since 17 January 2025) displaces, under Art. 4 NIS-2 Directive, the NIS-2 requirements for financial enterprises in the areas of ICT risk management, ICT incident reporting and ICT third-party management; for telco providers with payment functions (Embedded Finance, Mobile Wallet, registered payment service), DORA, on the prevailing reading, applies only to the payment sub-activity, while NIS-2/TKG remain authoritative for the telco-core part. The functional split is established in secondary literature; a final case-by-case demarcation remains required for mandates with a payment component.

Advertising disclosure as such is not regulated by the TKG but follows § 5a (4) of the Act against Unfair Competition, § 22 of the Interstate Media Treaty and § 6 (1) No. 1 of the Digitale-Dienste-Gesetz (DDG); Chapter 13 and Chapter 14 develop this triad of norms in the disclosure variants and in the price-factor matrix.

3.2 The contractual layer

The contractual layer governs the procurement relations to the suppliers of the answer paths. Publisher contracts, engine bot policies, model-vendor terms, and, as the actual legal lever, coupling of the final invoice to evidence of fulfilled criteria are its subject. It is the operational form in which the regulatory layer (§ 165 (2a) No. 4 TKG, see 3.1) is enforced vis-à-vis client procurement. The Northbridge procurement standard (Chapter 12) decomposes every planned placement into 18 verifiable criteria and an eight-step verification workflow that runs before every final invoice; 8 criteria are binary and decide eligibility, 10 are gradual and determine the citation lift.

The methodologically central pivot of the contractual layer lies in price coupling. After publication, corrections to the URL path, the DOM label, the schema markup or the byline are practically not enforceable vis-à-vis the publisher. The only enforcement mechanism that reliably takes hold is coupling the final invoice to criteria fulfilment; not the advertorial contract itself but the price mechanic thereby becomes the carrier of compliance enforcement. Integration of this lever into the publisher agreement in Phase 02 of the mandate workflow (Chapter 11.4) makes the contractual layer a direct continuation of the regulatory layer by operational means.
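The mechanic of the two criterion classes (8 binary eligibility criteria, 10 gradual lift criteria, Chapter 12) and the invoice coupling can be sketched in code; the criterion names, the averaging rule and the factor scale below are illustrative assumptions, not the Chapter 12 definitions:

```python
from dataclasses import dataclass

@dataclass
class Placement:
    binary_criteria: dict[str, bool]    # 8 eligibility criteria, pass/fail
    gradual_criteria: dict[str, float]  # 10 lift criteria, each scored 0..1

def invoice_decision(p: Placement) -> tuple[bool, float]:
    """Return (release_final_invoice, price_factor).

    One failed binary criterion blocks the final invoice entirely;
    the gradual criteria scale the payable share of the agreed price.
    The simple mean used here is an illustrative stand-in for the
    price-factor matrix of Chapter 14.
    """
    if not all(p.binary_criteria.values()):
        return False, 0.0
    lift = sum(p.gradual_criteria.values()) / len(p.gradual_criteria)
    return True, lift
```

A placement with clean disclosure but weak schema hygiene would thus still be invoiced, only at a reduced factor; a missing disclosure blocks payment outright.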

3.3 The ethical layer

The ethical layer draws the categorical dividing line against Class 3 manipulation (Chapter 5). It is not carried by a regulatory norm alone: Strategic Text Sequences, Prompt Injection and covert advertising are partly captured by criminal or competition law, partly only documented academically and technically (Aggarwal et al. KDD 2024). The Northbridge position is therefore identified as a layer in its own right: Class 3 is not advised on, not documented, not checked. The difference from Class 2 (paid placement with disclosure) is not one of intensity but of action category. Class 2 is carried by all three layers: regulatorily through the UWG/MStV/DDG triad of norms (see 3.1) and, in the telco sector, through the TKG mandatory information; contractually through the publisher agreement and the verification workflow; ethically through the transparency of the action itself. Class 3 simultaneously breaks all three layers. The ethical layer thereby also operates defensively vis-à-vis the two-role perspective from Chapter 21: it relieves ongoing mandate work from having to renegotiate class assignments on a case-by-case basis.

3.4 Operational rule and bridge to industrial validation

The three layers are checked in parallel, not sequentially. A breach on one layer renders the placement mandate-unfit, independently of the degree of fulfilment on the other two. The three-layer architecture is, in this, no proprietary Northbridge construction; it finds its industrial counterpart in the four-pillar framework that GSMA Intelligence proposed in the study Telco AI: State of the Market, Q4 2024 (January 2025, as of Q4 2024). There, Ethics & Compliance stands as an equal-rank value pillar alongside Financial, Business Transformation and People & Skills, a categorical, not gradual classification, which Chapter 4 presents as industrial validation of the architecture developed here.

Industrial validation: compliance as an RoI pillar

GSMA Intelligence Q4 2024 and McKinsey 2025 as evidence frame.

Northbridge
As of May 2026
Publication

The methodological position that Compliance-GEO is a three-layer architecture of regulatory, contractual and ethical levels (Chapter 3) can, since January 2025, be anchored outside the academic literature as well. GSMA Intelligence — the data subsidiary of the global mobile industry association and, by its own description, the reference data source for operators, vendors and regulators — proposed in its report Telco AI: State of the Market, Q4 2024 a four-pillar framework for measuring the return on AI investments. One of the four pillars is Ethics & Compliance. The publication is dated January 2025 and rests on two survey waves among global network operators (GSMA Operators in Focus AI Adoption Survey 2024, GSMA Network Security Strategy Survey 2024). The four pillars of the GSMA Intelligence framework can be arranged in equal-rank presentation such that the pillar deepened by this study becomes visible. The other three pillars are sector context, not the subject of the methodological elaboration at hand.

GSMA framework · 4 pillars of telco AI return

4 equal-rank value pillars. One of them is the subject of this study.

01

Financial

traditional

RoI from AI investment, cost savings, revenue lift.

02

Business Transformation

non-financial

Process re-design, automation, scaling at the network and product level.

03

People & Skills

non-financial

Team build-up, AI literacy, hiring, upskilling.

04

Ethics & Compliance

non-financial · subject of this study

Regulatory compliance, reputation, risk management.

GSMA Intelligence, Telco AI: State of the Market, Q4 2024 (January 2025)

4.1 The framework at a glance

Pillar | Character | Sample metrics from the framework
Financial | traditional | Investment costs, AI-attached revenues, payback periods
Business Transformation | non-financial | AI deployment depth (network and product level), productivity
People & Skills | non-financial | AI talent base as share of total workforce, AI training frequency
Ethics & Compliance | non-financial | AI governance present, risk-management model implementation

The report classifies the framework as a high-level proposal that is to be furnished with weightings and scoring in a follow-up publication in 2025 ("roadtesting in 2025", as of Q4 2024). The methodological substance therefore lies not in a finished formula, but in the categorical statement that Ethics & Compliance is an equal-rank value pillar alongside finance, transformation depth and people.

4.2 The sector-maturity basis of the framework

The framework does not stand in a vacuum. The GSMA survey base (as of Q4 2024) yields three findings that evidence the sector maturity of AI implementation in the telco environment. 65 per cent of surveyed operators run a formal AI strategy: 33 per cent as a standalone initiative and 32 per cent integrated into corporate strategy. Leading operators have, on average, established AI in 9 of 13 identified application fields. 49 per cent of operators name cybersecurity as the top barrier to reaching their AI goals; 88 per cent name phishing and smishing as the leading threat. The figures show a sector that already operates AI and in which compliance questions appear as a standing item on the management agenda, not as a downstream control function.

4.3 Consequence for Compliance-GEO

The framework's statement is directly relevant to the procurement standard. If, in the sector consensus, Ethics & Compliance is recognised as a return pillar in its own right, then the three-layer architecture from Chapter 3 is not a proprietary Northbridge construction in need of explanation vis-à-vis commissioner and approval partner, but a sector-consistent position. This works in three directions: first, it eases the argumentation in the initial conversation with the commissioner, because the compliance layer is not discussed as an additional cost factor but as a value pillar. Second, it frames the two-role perspective (Chapter 21), because the approval partner can use the four-pillar framework as reference grammar that is already known in their own house. Third, it stabilises the dividing line against manipulation classes (Chapter 5), because structural optimisation and categorically prohibited manipulation can be cleanly separated on the Ethics & Compliance axis.

Compound hero · opportunity cost of hesitation

A 12-month mandate starting Q4 2026 reaches roughly 32 per cent additional cost compared with a Q4 2025 start, as a mechanics calculation from authority compounding.

1.32×
Cost factor

Hesitation is not cost neutrality, but a bet against authority compounding.

The Ethics & Compliance return pillar from the GSMA framework does not act time-neutrally. Authority signals compound; they do not grow linearly. Established domains accumulate mentions, citations and retrieval signals that late-entering operators have to catch up with through higher volume and longer runtime. The 1.32× figure is a mechanics calculation from published industry evidence, not a market forecast. The same compounding effect scales in a closed cascade: roughly 1.15× over 6 months, roughly 1.32× over 12 months, roughly 7 per cent cost premium per unused quarter. The 24-month calculation yields roughly 1.75× as the strategic outlook for board cycles; the 4 dimension cards below develop this 24-month horizon as a scenario.
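The cascade reduces to one compounding formula. A minimal sketch, assuming the roughly 7 per cent per-quarter premium named above as the only parameter (the function name and the constant-rate simplification are illustrative):

```python
def cost_factor(quarters_delayed: int, quarterly_premium: float = 0.07) -> float:
    """Cost multiplier for the same citation result after a delayed start.

    Constant per-quarter compounding, the simplification behind the
    cascade in the text: 2 quarters ~1.15x, 4 quarters ~1.31x (rounded
    to 1.32x in the text), 8 quarters ~1.72x (text: roughly 1.75x).
    """
    return (1 + quarterly_premium) ** quarters_delayed
```

The per-quarter premium is the single tunable input; re-tuning it quarterly against published industry evidence keeps the curve CFO-ready.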

Dimension 01 · Time-to-visibility

Catch-up time doubles

Start 2026
12 months
to top-30
Start 2028
18–24 months
to top-30

The cited top-30 domains fluctuate monthly, but the authority accumulation of established publishers raises the threshold for late entrants. Between 2026 and 2028, competitors accumulate three years of mention signals.

Indig 2026 · Ahrefs 75,000-brand study · Scrunch cohort analysis
Dimension 02 · Publication volume

More material is needed for the same result

Start 2026
180
articles per month
Start 2028
300–400
articles per month

Monthly production must increase, because work is done against three years of accumulated competitive signals. +67 to +122 per cent publication volume for the same mention-share goal. Cost per article remains procurement-standard-conformant; total runtime cost rises.

Ahrefs mentions correlation · procurement standard Chapter 12
Dimension 03 · Channel value

From differentiation to hygiene

2026
Early mover
differentiated
2028
Mandatory
absent means invisible

Similarweb documents plateauing referral volumes in Q1 2026: growth slows and the seats harden. Anyone not cited in 2028 is invisible; invisibility at a Tier-1 operator costs market share directly.

Similarweb Q1 2026 plateauing report · Conductor AI traffic share
Dimension 04 · Compensation cost

The visibility gap is paid in commissions

2026 with mandate
Own citation
full margin
2028 without
Portal commission
double-digit per contract

Without a built-up upstream influence layer, the operator compensates the gap via comparison-portal commissions. In the DE telco sector, these typically sit in the double-digit percentage range per concluded contract. The margin shifts away structurally, not temporarily.

NB Publisher Research Telco DACH · industry benchmarks of leading DE tariff aggregators
Rational conclusion · Entry is possible at any time. It is cheapest today. This is not urgency rhetoric, but the mathematics of accumulating authority signals: with each quarter, the price for the same result rises, not linearly but compounding.

Deepening · A standalone sub-study unfolds the compound mechanics in full depth: Cost of Hesitation.

What the analysis shows

Authority signals in retrieval engines grow not linearly, but compounding. Established domains accumulate mentions, citations and retrieval signals that late-entering operators have to catch up with through higher volume and longer runtime. The 1.32× figure is a mechanics calculation from published industry evidence for a 12-month mandate starting one year later — not a market forecast. The premium cascades to roughly 1.15× over six months, 1.32× over twelve months, around seven per cent per unused quarter.

How to use it

The compound mechanics is the investment argumentation vis-à-vis CFO and board: hesitation is not cost neutrality, but a bet against authority compounding. If a mandate start is postponed, mandate cost in the next wave does not rise linearly but compounds. The 24-month calculation (roughly 1.75×) is the strategic outlook for multi-year board cycles.

Sector transfer

Authority compounding is sector-agnostic. In the telco sector empirically evidenced via DE tariff aggregators, in the Financial Services sector analogously via finance-comparison platforms and business media, in the Insurance sector via insurance comparison platforms and IDD-conformant advice sources, in the Commerce sector via consumer-protection platforms and reputation carriers. The more established the competitors in the sector, the stronger the compounding mechanics.

Value for you

A clear mechanics calculation instead of urgency rhetoric. The compound logic is CFO-ready, because it is derived from published industry evidence and can be re-tuned quarterly. Mandate start becomes an investment decision with a documented premium curve, not a marketing gut question.

Three manipulation classes and the categorical dividing line

Classes 1 and 2 versus Class 3.


Compliance-GEO cannot be defined without drawing a dividing line that is methodologically protected against gradual softening. Three classes of influence on generative answers form the field in which this line can be fixed. The separation between Class 2 and Class 3 is not a question of scale or intensity, but a categorical one.

5.1 The three classes

Class | Character | Examples | Mandate status
1 · Structural optimisation | open contribution to retrieval suitability | Schema markup, front-loading, citation hooks, entity consistency, author disambiguation (Ch. 12, B criteria) | unrestricted part of the mandate
2 · Paid placement with disclosure | commercial communication with legally compliant disclosure | Advertorials in editorial context with full UWG and MStV disclosure (Ch. 13, variants V02 and V04) | Part of the mandate, provided the procurement standard from Ch. 12 is fully met
3 · Manipulation | covert or technically infiltrating interference with retrieval mechanics or model behaviour | Strategic Text Sequences, Prompt Injection, covert advertising, mass-AI-content farming without editorial provenance | Categorically outside the mandate, neither advised on nor documented

5.2 Why the line between Class 2 and Class 3 is categorical

Class 2 is borne by disclosure, contract and editorial embedding. It can be examined on all three layers of the compliance architecture (Chapter 3): regulatorily via UWG § 5a (4), MStV § 22 and BGH I ZR 211/17, contractually via the publisher arrangement and the verification workflow, ethically via the transparency of the act itself. Class 3 breaks each of these three layers: it operates covertly, circumvents publisher governance, or intervenes in model layers that the principal can neither inspect nor be legally accountable for. The difference is not one of intensity, it is one of action category. From this asymmetry follows the operational rule.

5.3 Operational rule

Northbridge advises on and operationalises Class 1 and Class 2.

The categorical rejection of Class 3 carries an empirical saturation mechanic that supports the ethical position. Originality.AI documented in 2024 that domains with a high share of generated content recorded systematic visibility losses after the Google spam-update cycle; the Grokipedia case in early 2025 shows that this loss is not confined to classical search — AI search engines such as ChatGPT, AI Mode and AI Overviews reduced their citations in sync with the Google visibility shift (Rudzki, Peec.ai, 25 February 2026, with references to the Originality.AI study 2024 and the Grokipedia case Q1 2025; cf. Chapter 9.4 on cross-engine penalty mechanics). A mandate strategy based on mass-AI content thereby damages GEO visibility no less than SEO visibility. Mass-AI content remains methodologically a sub-manifestation of Class 3, manipulation by saturation, alongside the three manipulation by deception variants Strategic Text Sequences, Prompt Injection and covert advertising; the three-class architecture is unchanged.

Class 3 is not worked on, not documented, not checked. If a mandate raises or hints at a Class 3 measure as part of the assignment, the mandate is not executable. The line also operates defensively: it relieves the two-role perspective (Chapter 21) from having to discuss class assignments after the fact. The cybersecurity sensitivity of the telco sector — 49 per cent of operators name cybersecurity as the top barrier to AI goals, 88 per cent name phishing and smishing as the leading threat (GSMA Intelligence, as of Q4 2024) — additionally frames this line regulatorily.

5.4 Perimeter interpretation of § 168 TKG and methodological dividing line

The Class 3 dividing line is, on the current substantive state, methodologically conceived rather than an automatic consequence of the TKG reporting duty. § 168 (1) TKG as in force after the NIS2UmsuCG obliges operators of public telecommunications networks and providers of publicly accessible telecommunications services to report significant security incidents to BNetzA and BSI, with an early notification within 24 hours, a follow-up notification within 72 hours and a final notification within one month. The term "security incident" is not defined in § 168 TKG itself; the interpretation orients on Art. 6 No. 6 NIS-2 Directive, which targets the availability, authenticity, integrity or confidentiality of stored, transmitted or processed data of an entity's own network and information systems.
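The three-stage reporting cascade of § 168 (1) TKG can be sketched as a deadline calculation; the function is a minimal illustration, and the statutory one-month final deadline is approximated here as 30 days, which may deviate from the legal computation of monthly periods:

```python
from datetime import datetime, timedelta

def notification_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Deadlines of the Section 168 (1) TKG cascade relative to detection.

    24 h early notification, 72 h follow-up notification, final
    notification within one month (approximated here as 30 days).
    """
    return {
        "early_notification": detected_at + timedelta(hours=24),
        "follow_up_notification": detected_at + timedelta(hours=72),
        "final_notification": detected_at + timedelta(days=30),
    }
```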

On the prevailing perimeter reading, the security incident relates to the network and information systems of the reporting entity itself. Manipulated citations in answer windows of generative models take place outside the telco infrastructure; the answer engine is not a network or information system of the telco. An automatic qualification of manipulated citations as a § 168 TKG security incident is therefore excluded under line 1; in any case, the materiality threshold of § 168 (3) TKG ("severe operational disruptions", "significant material or immaterial damage") is typically not reached by a single manipulated citation.

A more expansive reading (line 2) cannot be excluded: where systematic manipulations affect telco-related authenticity-relevant data and exhibit a direct causal link with the telco infrastructure, the authenticity dimension of Art. 6 No. 6 NIS-2 Directive could be functionally extended. This reading is not established in secondary literature as of April 2026 and stands in systemic tension with the perimeter focus of the NIS-2 Directive. A definitive classification is open to interpretation and is the subject of the legal validation (Chapter 24).

For the Class 3 dividing line, the consequence is: it is methodologically and ethically grounded, not enforced by an automatic regulatory reporting duty. A regulatory disqualification level (Chapter 2) bites for Class 3 manipulations only insofar as a direct LLM contractual relationship of the telco mandate exists, or other TKG, UWG or criminal-law offences are touched. The categorical rejection by Northbridge is independent of this and follows the ethical layer.

Market reality: aggregators as citation platforms

Comparison portals as a second answer layer in the German telco market.


In generative answer systems, the German telco market is mapped not primarily through provider websites but through two national comparison portals that form their own retrieval layer as citation platforms. Anyone seeking to understand the DE market from the vantage point of a model answer window must know the mechanics of this layer, not because the portals are being rated, but because their structural dominance defines the intervention point for any Compliance-GEO work in the sector. The 4 subchapters that follow describe the market structure, the platform mechanics, the resulting citation geometry, and the economic feedback on the direct-provider side.

6.1 Market structure of the DE provider side, descriptive, without assessment

The German provider side has a 4-tier structure: network operators, MVNOs, bundle providers, resellers and sales partners. These tiers are visible to different degrees in generative answer windows. The description proceeds structurally, without naming corporations and without per-actor maturity statements. The reseller and sales-partner tier (shop and intermediation level) is practically not cited in product queries and is not listed separately in the table that follows.

Level | Typology | Visibility in the model answer window
Network operators | 4 players; three with an established own network, one in roll-out | On average: brand queries yes, tariff queries only via portal citations
MVNOs | Independent and corporate subsidiaries, including discounter brands | Very uneven; strongly coupled to portal indexing
Bundle providers | Triple-play and quadruple-play models with mobile integration | Low without explicit portal placement

The decisive asymmetry is not which tier is stronger but that tariff, bundle and eligibility queries systematically shift to a layer outside the sector: the comparison portals. The provider tier becomes a table row there, not a primary source.

6.2 Platform mechanics of the comparison portals

The structural dominance of the national comparison portals in DE tariff queries is not reach-driven but mechanical. It follows from 5 technical-editorial properties that generative retrieval systems preferentially absorb.

Mechanical element | What makes it retrieval-suitable
Currency of conditions | Daily or hourly updated prices; dateModified consistently maintained
Postcode-resolved prices in static HTML | Regionally differentiated tariffs are indexable and chunkable, not hidden in JavaScript fetches
Schema depth | Product, Offer, AggregateRating, FAQPage consistently marked up
Established domain authority | High backlink density from trade and reach media (Ahrefs, as of Q1 2026)
Editorial comparison tables with question-style headlines | Structural congruence with query patterns in tariff searches
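The schema-depth and currency elements from the table can be made concrete as a markup sketch; the page and tariff values below are invented for illustration and are not any portal's actual markup:

```python
import json

# Illustrative schema.org object of the kind the table describes: a
# page-level dateModified as currency signal, a Product with Offer and
# AggregateRating as schema depth. All values are placeholders.
tariff_page = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "dateModified": "2026-04-01",          # maintained daily or hourly
    "mainEntity": {
        "@type": "Product",
        "name": "Example mobile tariff",   # placeholder name
        "offers": {
            "@type": "Offer",
            "price": "19.99",              # postcode-resolved in practice
            "priceCurrency": "EUR",
            "availability": "https://schema.org/InStock",
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.3",
            "reviewCount": "1287",
        },
    },
}

# Embedded in the page as <script type="application/ld+json">.
json_ld = json.dumps(tariff_page, indent=2)
```

Serving this in static HTML rather than behind a JavaScript fetch is what makes the postcode-resolved prices indexable and chunkable in the sense of the table.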

The major DE aggregators and comparison platforms are not procurable in this layer in the classical advertorial sense: these platforms produce their own content and do not sell editorial inventory. Access to a list row runs via performance contracts, not via content booking, and this does not meet the Class A criteria of the procurement standard (Chapter 12), because the provider appears as a list entry in someone else's table, not as a source in its own right. The international positioning of the DE portal concentration is treated in Chapter 7 (international sector maturity).

The platform mechanics are supported by three publicly documented evidence layers. The Sistrix Promptindex evaluation for Q4 2025 through Q1 2026 documents a growth dynamic of operator domains in AI answers between plus 73 and plus 132 per cent across the 6-month period, with markedly higher growth at operators with an established own blog (Sistrix, sistrix.de Handbuch AI, as of 02 January 2026). The Sistrix domain-share analysis on the same reference date shows blog-share values between 6 and 43 per cent across the 4 Tier-1 operator domains, with a clear correlation between blog share and Promptindex volume. The NB sector survey April 2026 documents at URL level that the operator-owned web presences are substantially represented in the citation aggregate; given clean Level-B hygiene, they survive the retrieval filters and appear as primary evidence alongside the aggregator sources. Operator on-site substance and aggregator geography therefore act as two non-substitutable levers, not as competitors.

6.3 Citation as a table row in someone else’s hierarchy

When a generative system answers a tariff question, in the vast majority of cases it cites a portal page rather than the provider. The provider appears, if at all, as a name in a comparison table on the cited page. This second-order visibility has three structural consequences: the provider text is editorially formulated by others (portal editors decide which features are described how, and that wording is taken up by the model as factual statement); the table ranking is the decisive visibility signal, not the provider's own brand message; currency is coupled to the portal data state rather than the provider product state, which produces distortions for short-lived promotional tariffs that the provider cannot correct itself. The NB Claude Opus 4.6 retrieval investigation (April 2026) evidences this pattern quantitatively; Aggarwal et al. (KDD 2024) underpin it generically through position-adjusted-word-count analyses.

An NB-internal evaluation from the audit corpus (spring 2026) makes the geometry tangible through a concrete measurement frame. Measurement parameters: a multi-week observation window across a low-four-digit URL count in 5 retrieval engines (ChatGPT, Claude, Google AI Mode, Google AI Overview, Perplexity). Copilot is not included; the measurement is therefore to be read as a 5-engine cut in the DE market, not as an overall sector picture. The observed citation distribution at URL level:

Citation category | Magnitude (URL level) | Advertorial reach
Comparison portals | roughly one third; concentrated in a few top platforms; the top-6 aggregator cluster carries the predominant share | Not addressable via advertorial (own content)
Competitor tariff pages | roughly 15 per cent | Structurally outside
Competitor own pages | under 5 per cent | Structurally outside
News, blog and media surfaces | roughly one fifth | Advertorial-eligible, core procurement field
Community (Reddit, YouTube, forums) | under 2 per cent | Sector-specifically deprioritised

Two findings can be drawn from this. First: more than half of the citation volume, comparison portals plus competitor surfaces combined, lies structurally outside classical advertorial reach. A pure advertorial strategy therefore never addresses the majority of the citation space. Second: the community surfaces (Reddit, YouTube, forums) reach a share below 2 per cent in DE telco, markedly less than the widespread GEO narrative of a dominant Reddit role would suggest. The blanket community-investment recommendation from other sectors is not transferable to DE telco and must be re-measured sector-specifically.
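The "more than half" finding follows arithmetically from the table's magnitudes; the point values below are midpoint assumptions for the stated ranges, not measured shares:

```python
# Midpoint assumptions for the magnitudes reported in the table above.
shares = {
    "comparison_portals": 0.33,       # "roughly one third", own content
    "competitor_tariff_pages": 0.15,  # structurally outside
    "competitor_own_pages": 0.04,     # "under 5 per cent", structurally outside
    "news_blog_media": 0.20,          # advertorial-eligible core field
    "community": 0.02,                # "under 2 per cent"
}

# Portals plus competitor surfaces: the part no advertorial can reach.
outside_advertorial = (shares["comparison_portals"]
                       + shares["competitor_tariff_pages"]
                       + shares["competitor_own_pages"])
# roughly 0.52, i.e. more than half of the citation volume
```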

The engine distribution shows no single-engine dominance. All three leading engines lie above the 20 per cent threshold; the 4th engine clearly below:

Engine position (anonymised) | Share (magnitude)
Leading engine | roughly one third
Second engine | just under one third
Third engine | roughly one fifth
Fourth engine | low double-digit per-cent range
Fifth engine | low single-digit per-cent range

The five positions are distributed across ChatGPT, Claude, Google AI Mode, Google AI Overview and Perplexity; the position-to-engine mapping is neutralised in the context of this study. Operational consequence: a ChatGPT-centric mandate structurally under-invests in at least two further engines with at least one fifth citation share each. The model-blended factor from Chapter 14 reflects this distribution in the price calculation.
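One way such an engine distribution could feed a model-blended factor is a share-weighted sum; the numeric shares below are interpolations of the magnitudes in the table, and the weighting rule is an assumption, since Chapter 14's actual factor definition is not reproduced in this excerpt:

```python
# Interpolated shares for the five anonymised engine positions.
ENGINE_SHARES = [0.33, 0.30, 0.20, 0.12, 0.05]

def blended_citation_rate(per_engine_rates: list[float],
                          shares: list[float] = ENGINE_SHARES) -> float:
    """Share-weighted citation rate across engines.

    A mandate optimising only the leading engine caps out at the
    leading share (here 0.33) and leaves the remaining 0.67 of the
    citation space unaddressed.
    """
    return sum(rate * share for rate, share in zip(per_engine_rates, shares))
```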

Within the comparison-portal hubs with substantial brand-mention density, the majority of hub URLs — clearly more than half — cite all three main network operators on equal footing. The hub URLs are structurally additive: an additional provider in the hub table does not displace any other but is listed equally. For mandate practice, this opens an entry point that operates not competitively but cumulatively.

Data basis: NB-internal evaluation of a monitoring dataset collected in audit context; measurement parameters disclosed in neutralised form in the bracketed block above; raw data and client attribution confidential and not citation-fit. Not a controlled randomised sample but a standing-corpus excerpt from ongoing audit work.

6.4 Creeping CAC shift, and why the SEO report does not show it first

The economic feedback of this citation geometry is a shift of customer acquisition costs (CAC) from organic to portal-mediated acquisition. The effect is initially not visible in classical SEO reports because search-engine rankings continue to be measured as a leading indicator while the user's actual decision path increasingly begins in the model answer window. Three mechanisms mask the shift in the early phase: brand equity buffers the effect for established brands for several quarters; the SEO report measures traffic on provider URLs, not failed model answers; leads from portal mediation are often booked as direct traffic in attribution models. The feedback loop in the model answer is shorter than in classical SEO visibility (Aggarwal et al. KDD 2024), which means: by the time the effect becomes measurable, it is already structurally in effect. The operational answer to this is not a higher SEO budget but intervention on the retrieval layer itself, that is, at publisher level in Phase 01 and Phase 02 of the mandate cycle (Chapter 11).

6.5 Operational consequence

From a Compliance-GEO perspective, the DE market is a portal-dominated field in which the visibility of the direct provider is not decided on the provider's own domain. Mandate work therefore begins with treatment of the citation layer between provider and model: national comparison portals, specialist and test publications, consumer-protection publishers. The procurement standard from Chapter 12 is the operational lens; the price-factor matrix from Chapter 14 carries it into the final invoice. Which publishers are addressed in which phase is the subject of the sector-specific source map in the mandate, not of this study; Chapter 7 describes the international maturity contour against which the DE portal dominance is positioned.

Measurement logic: three dimensions instead of one number

Citation Rate, Persistence, Quality.


The measurability of Generative Engine Optimization decides whether the discipline is controllable or merely describable. The market currently works predominantly with aggregated single values: "Citation Index", "AI Visibility Score" and related constructs. This study uses, instead, a three-dimensional measurement logic in which Citation Rate, Citation Persistence and Citation Quality are carried as independent axes that cannot be reduced to one another. The 5 subchapters that follow justify the decision against the single-number logic, define the three dimensions with their empirical anchors, and describe the orthogonality that follows from them.

Measurement logic · capstone diagram

Three orthogonal dimensions, not aggregable to a single index

Dimension 01 · Citation Rate (frequency: how often is something cited) — sub-axes: Share of Model Voice, topical coverage, brand surface in zero-click.
Dimension 02 · Citation Persistence (time: how long does the citation hold) — sub-axes: time-to-first-citation, drift rate across weeks, cross-engine stability.
Dimension 03 · Citation Quality (content: how is the citation phrased) — sub-axes: sentiment tonality, mandatory-disclosure completeness, competitive context.
Orthogonality: not aggregable to a single index.
What the analysis shows

Citation visibility decomposes into three orthogonal dimensions: frequency (Citation Rate), time (Citation Persistence), content (Citation Quality). The three dimensions cannot be aggregated to a single index because they address structurally different steering levers. High frequency with low quality is not the same as medium frequency with high quality; in the mandate report, however, they look identical if only an aggregated score is reported.

How to use it

The mandate report carries three separate curves, not one. Before any steering decision, the sub-axis on which the issue sits is identified: drift in persistence, sentiment shift in quality, volume gap in the rate. Only then is the operational answer chosen. This separation prevents the most common mandate misstep: pulling on a dimension whose deficit was not the problem.

Sector transfer

The three-axis architecture applies in all four regulated consumer verticals. Telco carries the TKG mandatory-disclosure connection in the quality dimension; Financial Services carries the advisory-duty connection (FinDAG, MaRisk, WpHG); Insurance carries the VVG information-duty connection; Commerce carries the UWG and DSA transparency connection. The axis architecture itself remains sector-stable; sector-specific calibration acts within the sub-axes of the quality dimension.

Value for you

A reporting architecture that Marketing, Compliance and the board carry jointly. Early-warning lever on the sentiment sub-axis before reputational risks show up in sales data. Protection against quiet aggregate distortion in which high frequency masks a critical tonality situation.

7.1 Why not one number

An aggregated visibility number condenses different measurement phenomena into a single score. That is reader-friendly but loses operational steerability. A client with high citation rate but low citation quality stands worse than a client with medium rate and high quality, because citation quality decides whether a cited mention is purchase-encouraging or purchase-deterring (Chapter 13.5, Chapter 14.5). Aggregation metrics cannot represent this reversal: they treat each citation as an equivalent point value and leave the CMO with an index lacking diagnostic depth. The three-dimensional measurement logic therefore separates what is operationally separately steerable, and accepts that a mandate report carries three curves, not one. The industry discussion confirms this separation: the Peec.ai KPI framework (Rudzki, Peec.ai, 11 March 2026) lists 5 independent KPI classes — Visibility, Position, Brand Sentiment, Conversions/Revenue and Traffic — as separate measurement axes, with explicit reference to the Eight-Oh-Two AI Search Behavior Study 2026 (37 per cent of consumers start their search with AI; 85 per cent continue to cross-reference traditional search). The NB three-axis architecture and the Peec 5-KPI framework are methodologically convergent: Citation Rate and Position fall under the NB frequency dimension; Brand Sentiment is a sub-axis of the NB content dimension; Conversions/Revenue and Traffic are mandate-success metrics outside the three citation dimensions. The external validation does not bring a competing architecture but a compatible industry view.

7.2 Citation Rate, the frequency dimension

Citation Rate measures the relative frequency at which a client source is cited in retrieval answers to a defined prompt cluster. The operational measurement metrics are Share of Model Voice (citation share per cluster and model over defined time windows) and Mention Frequency (absolute mentions, model-separated, independently of link attribution). The empirical anchor is twofold: Aggarwal et al. (KDD 2024) underpin via position-adjusted-word-count and chunking analyses the retrieval mechanics that make citation rates explainable in the first place; the NB Claude Opus 4.6 retrieval investigation (April 2026) documents the DE-specific baseline for telco tariff queries. Indig 2026 (n = 18,012 verified ChatGPT citations) shows that 44.2 per cent of all citations come from the first 30 per cent of a page; front-loading is therefore the strongest single rate driver on the content side. The three-class assessment of the price-factor matrix (Chapter 14.2) assigns a price class on the basis of A and B criteria fulfilment; the citation rate is the expected measurement consequence of that assessment, evidenced in the Phase 04 control loop (Chapter 11.5). Criteria fulfilment is the assessment input; citation rate is the expectation output.
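The Share of Model Voice measurement described above can be sketched as a per-(engine, cluster) ratio. A minimal sketch, assuming a flat list of observation records; the field names, engine labels and domains are illustrative, not an NB schema:

```python
from collections import Counter

def share_of_model_voice(citations, brand_domain):
    """Share of Model Voice per (engine, cluster): the fraction of answers
    in a prompt cluster whose citation set contains the brand's source.
    Record fields are illustrative, not a fixed NB schema."""
    answers = Counter()   # total answers per (engine, cluster)
    hits = Counter()      # answers citing the brand per (engine, cluster)
    for obs in citations:
        key = (obs["engine"], obs["cluster"])
        answers[key] += 1
        if brand_domain in obs["cited_domains"]:
            hits[key] += 1
    return {key: hits[key] / answers[key] for key in answers}

# Illustrative sample: two ChatGPT answers, one Perplexity answer.
sample = [
    {"engine": "chatgpt", "cluster": "tariff-5g",
     "cited_domains": {"check24.de", "example-telco.de"}},
    {"engine": "chatgpt", "cluster": "tariff-5g",
     "cited_domains": {"verivox.de"}},
    {"engine": "perplexity", "cluster": "tariff-5g",
     "cited_domains": {"example-telco.de"}},
]
somv = share_of_model_voice(sample, "example-telco.de")
# ("chatgpt", "tariff-5g") → 0.5; ("perplexity", "tariff-5g") → 1.0
```

The key point the sketch carries: the ratio is computed per engine and per cluster and never collapsed across engines, matching the engine-separated cadence described in 7.5.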

7.3 Citation Persistence, the time dimension

Citation Persistence measures how stably a citation persists across time and model updates. The market finding is unambiguous: in a 13-week study across more than 100 million AI citations (as of October 2025), Semrush documented that Reddit citations on ChatGPT fell from around 60 to around 10 per cent of prompt answers between August and mid-September 2025 — a shift within a few weeks that structurally altered the citation geometry of the affected prompt clusters. Scrunch (industry research 2025/2026, 3.5 million citation events) evidences that citation half-lives vary engine-, sector- and source-type-specifically; Profound documents citation drift of up to 60 per cent per month between engines. These figures carry a methodological consequence and a transparency obligation. The consequence: a weekly measurement cadence per model is not redundancy but the minimum resolution that makes such shifts visible at all; monthly reports can miss precisely the drift that determines the mandate result (Chapter 16). The transparency obligation: model updates are not deterministically predictable; the persistence of a citation is therefore an observation quantity, not a forecast number. Any guarantee of a fixed citation duration is methodologically untenable.
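Under the weekly-cadence requirement, the persistence sub-axes reduce to simple operations on a weekly citation-share series per engine. A minimal sketch, assuming such a series; the drift definition (mean absolute week-over-week change) is an illustrative simplification, not the NB formula:

```python
def drift_rate(weekly_shares):
    """Mean absolute week-over-week change of a citation-share series.
    A monthly series computed the same way would smooth away exactly
    the fast shifts of interest (cf. the Reddit/ChatGPT collapse)."""
    deltas = [abs(b - a) for a, b in zip(weekly_shares, weekly_shares[1:])]
    return sum(deltas) / len(deltas)

def time_to_first_citation(weekly_shares):
    """Index of the first wave with a non-zero citation share,
    or None if the source was never cited in the window."""
    for week, share in enumerate(weekly_shares):
        if share > 0:
            return week
    return None

# Illustrative series echoing the documented pattern: a collapse from
# roughly 0.6 to roughly 0.1 within a few weeks.
series = [0.60, 0.58, 0.55, 0.35, 0.15, 0.10]
# drift_rate(series) ≈ 0.10 per week
```

On a monthly cadence the same underlying movement would appear as a single data point, which is the methodological argument for the weekly minimum resolution.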

7.4 Citation Quality, the content dimension

Citation Quality measures whether the brand is cited positively, neutrally or negatively, and whether the substantive rendering is complete. It decomposes into three sub-axes. The sentiment axis captures how the model contextualises the brand — whether the mention is purchase-encouraging, neutral or purchase-deterring. Structurally, recurring negative-keyword profiles form per operator in answer texts that persist independently of the individual prompt and thus produce a measurable tonality geometry; the NB sector survey of April 2026 documents such profiles per operator across the 4 measured engines. Methodologically central is the separation of visibility and tonality. A topic with high citation frequency can carry a systematically negative sentiment tonality — for instance when an operator persists prompt-independently in a negative-keyword cloud (complaint vocabulary, contract-dispute vocabulary, service-deficit vocabulary). A visibility metric alone cannot distinguish that case from a positive citation cluster; a separate sentiment measurement axis is therefore not a refinement but a precondition of steerability. In mandate practice, the sentiment sub-axis is the first diagnostic lever at which Compliance functions check the difference between marketing visibility success and a reputationally relevant measurement state. The completeness axis checks whether mandatory content is co-cited: in the DE telco sector this comprises minimum term, 12-month alternative and compensation clause per Chapter 13.5; if any of these is missing in the model answer, the citation is legally problematic, even when high in frequency. The competitive-context axis captures with which competitors the brand is named in the same answer and in which order — brand surface in zero-click answers as a measurement quantity of its own. 
The empirical anchor is threefold: Yext (Search Experience Benchmark Q4 2025, 17.2 million citations) identifies author-entity disambiguation as a top-5 selection factor — the technical precondition for a model to cite the right brand, not a name-similar one; Ahrefs (Mentions vs Backlinks 2025, n ≈ 75,000 brands) evidences that brand mentions without a link produce brand-recall effects in LLM answers and therefore carry a residual value of their own (the basis for the Mention-Buy class in Chapter 14.2); Seer Interactive 2025 documents that around 80 to 85 per cent of AI Overviews citations come from 2023–2025 and that freshness is therefore a quality dimension, not editorial ornament. Quality is the most sector-sensitive of the three dimensions: in the DE telco context it carries the contact point with TKG mandatory-information visibility (Chapter 13.5, Chapter 14.5).
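The completeness sub-axis can be sketched as a screening pass over the model answer text. The keyword lists below are hypothetical placeholders; the authoritative completeness check is the editorial and legal review per Chapter 13.5, not a string match:

```python
# Hypothetical keyword screens for the three mandatory elements named in
# the text; real mandate screens are maintained editorially.
MANDATORY_SCREENS = {
    "minimum_term": ["minimum term", "mindestvertragslaufzeit"],
    "twelve_month_alternative": ["12-month", "12 month"],
    "compensation_clause": ["compensation", "entschädigung"],
}

def disclosure_completeness(answer_text):
    """Flag which mandatory-disclosure elements appear in a model answer.
    Returns element -> bool; any False entry marks the citation as
    legally problematic regardless of its frequency."""
    text = answer_text.lower()
    return {
        element: any(keyword in text for keyword in keywords)
        for element, keywords in MANDATORY_SCREENS.items()
    }

answer = "Tariff X: 24-month minimum term, a 12-month alternative is offered."
flags = disclosure_completeness(answer)
# compensation_clause → False: a completeness deficit despite two hits
```

The design point: completeness is measured per element, so a high-frequency citation with one missing element is visibly deficient rather than averaged away.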

7.5 Orthogonality and operational consequence

The three dimensions are orthogonal, that is, a shift on one axis says nothing about the state on the other two. A high citation rate can coexist with negative quality; a high persistence can sit on a low rate. This orthogonality is the methodological reason not to aggregate them into one number: any weighting in a single score would assert a substitutability that does not exist in measurement reality. For mandate practice the consequence is a reporting logic with three separate axes, engine-separated and cluster-separated, on a weekly cadence per model. The prompt clusters are the second measurement axis alongside the engine; they comprise 200 to 400 purchase-decisive queries per sector and market, derived from category research, sales transcripts and support tickets, and refreshed quarterly. The price calibration from Chapter 14 uses primarily the rate dimension (three-class assessment) and the quality dimension (TKG visibility modifier); the persistence dimension is the subject of the hypothesis validation from Chapter 16 and the Phase 04 control loop from Chapter 11. The three dimensions are working-hypothesis carriers, not measured point values: their mandate-specific recalibration is part of Phase 04, the validation systematics are documented in Chapter 16 and Annex B.
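The reporting consequence, three separate engine- and cluster-separated curves, can be made concrete as a record type. A minimal sketch; field names and scales are illustrative, not an NB reporting schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CitationReportRow:
    """One reporting row: engine- and cluster-separated, weekly cadence.
    The three dimensions are deliberately carried as separate fields;
    the class offers no combined score, mirroring the non-aggregation
    rule from 7.1 and 7.5."""
    engine: str
    cluster: str
    week: int
    citation_rate: float        # frequency dimension, 0..1
    persistence_drift: float    # time dimension, mean weekly drift
    quality_sentiment: float    # content dimension, -1..+1 (illustrative)
    disclosure_complete: bool   # content dimension, TKG sub-axis

row = CitationReportRow("chatgpt", "tariff-5g", 14, 0.18, 0.03, -0.2, False)
```

Any aggregate would have to pick a weighting across the fields, which is exactly the substitutability assertion the text rules out.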

Engine reach in the German market

User distribution, retrieval basis, model-blended factor.


The quantitative distribution of usage across the 6 retrieval engines described in Chapter 9 determines how much visibility weight a single engine citation carries in the German telco market. The chapter describes the DE reach distribution, the retrieval basis per engine, the model-blended factor derived from it, the aggregation discipline by which the per-engine values are combined into a scalar factor, and the feedback effect on price calibration. All distribution figures are sample observations on a publicly published data basis, not official market figures; quarter-on-quarter volatility is substantial and is quantified in section 8.4.

8.1 DE user distribution, an observed distribution, not a ranking

The DE usage distribution across the 6 engines is not officially reported and is approximated through three measurement approaches: prompt-corpus samples (SISTRIX Prompt Research DACH 2025, 62 million questions), referral-traffic analyses (Similarweb 2025 Generative AI Landscape, 2 December 2025) and vendor self-disclosures on market or revenue size. None of these sources delivers DE-specific engine market shares with the precision of classical media measurement; the distribution observations that follow are accordingly to be read as sample patterns, not as a ranking. In the available DE sample material, ChatGPT appears as the most frequently observed answer source, followed by Google AI Overviews (DE launch 26 March 2025) and Google Gemini. Perplexity shows above-average growth; in October 2025 the Perplexity CEO named the DE market the engine’s second-largest revenue market (Reuters, October 2025). Microsoft Copilot and Claude appear in the consumer sample material less frequently than the three engines listed first, with structurally different focal points: Copilot stronger in Microsoft 365 enterprise workflows, Claude stronger in API and long-form research contexts. Similarweb documents globally for consumer AI tools roughly 1.1 billion monthly visits and a referral-conversion range between 5 and 7 per cent depending on engine (as of 2 December 2025); a DE breakdown of these figures is not published. The presentation describes which retrieval layers the DE answer volume in the sector is distributed across on a sample basis — no engine assessment, no market-share statement.

8.2 Retrieval basis and buyer persona

The 6 engines can be distinguished not only by reach but also by their retrieval basis and dominant user context. The mapping that follows is structurally observational, not evaluative.

Engine · Retrieval basis · Buyer persona in the DE context
ChatGPT · Bing index for live queries · broad default use, entity-driven, high Wikipedia/Wikidata coupling
Perplexity · PerplexityBot index + Brave and Google supplement · finance and compliance contexts with a need for traceable source logic
Gemini · Google search index + Knowledge Graph · Google Knowledge Graph linkage, freshness orientation
Claude · ClaudeBot index + Brave Search · long-form research, tolerance for unstructured substance per source
Google AI Overviews · Google SERP + generative filter · mass volume, SERP fusion, the Google-using majority
Microsoft Copilot · Bing index (identical to the ChatGPT live layer) · Microsoft enterprise workflows (365, Windows)

Two asymmetries are relevant for DE telco mandates. First: Bing-based engines (ChatGPT, Copilot) share a retrieval basis — anyone visible on Bing is potentially visible in both. Second: Google-based engines (Gemini, AI Overviews) likewise share a basis, but treat paid third-source placements structurally with reduced trust weighting under the Google Quality Rater Guidelines; the resulting price discount is operationalised in section 8.3.

8.3 Model-blended factor, from reach to price

The model-blended factor translates the reach distribution and the engine-specific advertorial handling into a single multiplicative price factor. It is the second calibration level above the A/B class logic of the procurement standard (Ch. 12) and the three-class assignment of the price-factor matrix (Ch. 14). The working hypotheses per engine (as of April 2026) are: ChatGPT 1.0×, Copilot 1.0×, Perplexity 0.95×, Claude 0.9×, Gemini 0.8×, Google AI Overviews 0.7× (NB Retrieval Study Claude Opus 4.6, April 2026). The figures are working hypotheses derived from documented retrieval architecture plus model self-disclosure, not measured penalty values; empirical validation is possible only via a controlled prompt test (Ch. 16).

Three scenarios follow from these hypotheses:

Mandate prioritisation · Weighting · Blended factor
Whole-market coverage · equally weighted across 6 engines · 0.89×
ChatGPT + Perplexity prioritised · 40 % ChatGPT, 30 % Perplexity, 7.5 % each remaining · 0.94×
Google AI Overviews prioritised · 50 % AI Overviews, 30 % Gemini, 5 % each remaining · 0.76×

The formula for the final invoice of a citation placement reads:

final_price = list_price × criteria_factor × model_blended_factor

The choice of the blended factor is a mandate decision fixed in the Phase-00 initial conversation on the basis of the principal’s engine prioritisation. It is not a negotiation tactic but a price derivation arising from retrieval mechanics.
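The derivation above can be sketched end to end. The per-engine factors are the working hypotheses from section 8.3; the engine identifiers are illustrative, the weighting scheme follows the scenario table, and the full formula depth sits in Annex B.5:

```python
# Working-hypothesis factors per engine (section 8.3, as of April 2026).
ENGINE_FACTORS = {
    "chatgpt": 1.0, "copilot": 1.0, "perplexity": 0.95,
    "claude": 0.9, "gemini": 0.8, "ai_overviews": 0.7,
}

def model_blended_factor(weights):
    """Weighted aggregation of per-engine factors into one scalar.
    `weights` is the mandate's engine prioritisation and must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * ENGINE_FACTORS[engine] for engine, w in weights.items())

def final_price(list_price, criteria_factor, blended_factor):
    """Final invoice of a citation placement per the formula in 8.3."""
    return list_price * criteria_factor * blended_factor

# Scenario 1: whole-market coverage, equally weighted across 6 engines.
equal = {engine: 1 / 6 for engine in ENGINE_FACTORS}
# Scenario 2: 40 % ChatGPT, 30 % Perplexity, 7.5 % each remaining engine.
prioritised = {"chatgpt": 0.40, "perplexity": 0.30, "copilot": 0.075,
               "claude": 0.075, "gemini": 0.075, "ai_overviews": 0.075}

round(model_blended_factor(equal), 2)        # 0.89
round(model_blended_factor(prioritised), 2)  # 0.94
```

Scenario weights that privilege the Google-based layer pull the blended factor down, because the Group II advertorial discount (0.7× to 0.8×) then dominates the weighted sum.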

8.3.1 Aggregation discipline: three sources, one rule

The three scenario values from section 8.3 arise through weighted aggregation of the per-engine values into a single scalar model-blended factor. The aggregation discipline by which this calculation runs follows three methodological rules.

Three-source minimum threshold. A workable model-blended factor is built from at least three independent inference sources. Below three sources the variance is too high; individual engine outliers distort the result. The 6 engines from Chapter 9 meet this threshold structurally; the rule bites where a mandate prioritises only a subset of engines.

Independence clause. Two variants of the same model do not count as two independent engines. Different vendors, no common model-weight basis, separate training corpora. Otherwise the individual values collapse into a result that reflects only one vendor’s architecture.

Fall-back rule on engine outage. If a single engine drops out in a measurement wave or does not respond within the defined measurement window, its single value is replaced by a neutral partial value rather than removed from the wave. The wave thereby remains structurally comparable across all measurement rounds. Formula logic, the measurement window per engine and the worked example for deriving the 0.89× are documented in Annex B.5.
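The three rules can be sketched together. The neutral partial value of 1.0 below is an illustrative choice, not the Annex B.5 formula; the function names are hypothetical:

```python
def wave_factors(measured, neutral=1.0):
    """Apply the three-source minimum and the fall-back rule to one
    measurement wave. `measured` maps engine -> factor, with None for an
    engine that did not respond within the measurement window. A missing
    engine is replaced by a neutral partial value (illustrative: 1.0),
    never removed, so every wave keeps the same shape."""
    if sum(value is not None for value in measured.values()) < 3:
        raise ValueError("three-source minimum threshold not met")
    return {engine: (value if value is not None else neutral)
            for engine, value in measured.items()}

# One wave in which Gemini timed out; the wave stays structurally complete.
wave = {"chatgpt": 1.0, "perplexity": 0.95, "claude": 0.9, "gemini": None}
complete = wave_factors(wave)
# complete["gemini"] is the neutral 1.0, and the wave still has 4 entries
```

The independence clause is not expressible as code alone: whether two engines share a model-weight basis is a vendor-architecture judgment made before any engine enters `measured`.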

8.4 Volatility, why the distribution is re-measured quarterly

The reach distribution is not a stable state. The Semrush 13-week study across more than 100 million AI citations (as of October 2025) documents that Reddit citations on ChatGPT fell between August and mid-September 2025 from roughly 60 to roughly 10 per cent of prompt answers — a shift that occurred within a few weeks and structurally altered the citation geometry of the affected prompt clusters. The NB Retrieval Study (April 2026) has documented further shifts, including the expansion of Google AI Overviews’ reach after the DE roll-out and the connection of Perplexity to the Deutsche Telekom MagentaMoments platform as a distribution lever. For mandate practice this implies a quarterly re-measurement of the DE reach distribution and an annual re-calibration of the model-blended factor; the distribution is mandate input, not study fact.

8.5 Operational consequence

From a citation standpoint, the DE market splits into three retrieval bases (Bing, Google, own AI indices), 6 engines and a distribution that shifts quarterly. The consequence for mandate architecture is not to address all 6 engines equally, but to carry engine prioritisation as a mandate decision and pass it through the model-blended factor into price calibration. Which engines are prioritised in a specific mandate is the subject of the Phase-00 initial conversation and depends on the target-group definition, the product reach and the engine-specific buyer persona. Chapter 9 describes the engine structure within which this prioritisation is positioned; Chapter 14 operationalises the blended factor in the price-factor matrix; Chapter 16 describes the controlled test design for empirical validation of the hypotheses from section 8.3; Annex B.5 documents the formula depth of the aggregation discipline from 8.3.1.

Impact pyramid · Capstone diagram

The single number. 5 stages from reach to impact.

This is not a promise. This is a calculation. All input quantities of this pyramid come from published industry sources (Similarweb, Indig, Scrunch AI, Aggarwal et al., Sistrix) or from the NB-internal retrieval investigation of April 2026. The projected results are a model scenario on the basis of these input quantities, not yet mandate-empirically validated. The validation test with a real DE telco client is the subject of the post-study phase (self-limitation 4).
Stage 01 · Foundation, market reality 2026 — the channel exists and is moving fast: 1.1 bn GenAI visits June 2025 (Similarweb); 44.2 % of citations from the first 30 % of the page (Indig 2026); 60 % monthly drift between engines (Profound).

Stage 02 · Act I, baseline measurement — where we stand today: citation share in 6 engines of 10 % (small operator), 18 % (mid), 22 % (large).

Stage 03 · Act II, measure — 12 months of targeted retrieval work: 180 retrieval-effective articles per month; A-criteria-compliant · B-criteria-optimised · Phase-03 editorial · compliance-cleared · Citation-Buy or Mixed-Buy.

Stage 04 · Act III, measurement — where we stand after 12 months: citation share in 6 engines of 18 % (small, +8 pp), 32 % (mid, +14 pp), 42 % (large, +24 pp).

Stage 05 · Act IV, the single number — 20 of 100: for a mid-sized operator, 20 of 100 sales units come, after 12 months, from a channel that practically did not exist 12 months earlier, LLM direct citation. For small operators 8, for large 32.

Input sources: Similarweb GenAI Landscape 2025 · Indig 2026 · Scrunch AI Half-Life Study · Sistrix Prompt-Research DACH 2025 · Aggarwal et al. KDD 2024 · NB retrieval investigation April 2026 · model calculation without mandate-empirical validation (self-limitation 4).

Deepening · A standalone sub-study on the impact pyramid with channel mix and model scenario: Study Pyramid.

What the analysis shows

The pyramid maps the path from reach to impact in five stages: market reality, measurement basis per engine, measure within the editorial standard, measurement after twelve months, a scalar closing number. The input quantities come from published industry evidence (Similarweb, Indig, Scrunch AI, Aggarwal et al., Sistrix) plus the NB retrieval investigation of April 2026. The projected closing number is a model calculation, not a mandate-empirically validated promise.

How to use it

The pyramid is the board argument for a Compliance-GEO mandate: each stage is its own investment question, each stage carries its own measurement discipline. Before the mandate starts, the stage where the client-specific deficit sits is identified. The pyramid is recalibrated quarterly; shifts at stages 1 or 2 alter the closing number directly.

Sector transfer

The five-stage architecture applies in all four regulated consumer verticals. Stage 1 (market reality) and stage 2 (measurement basis) shift sector-specifically in the engine shares; stage 3 (measure) carries sector-specific mandatory information; stages 4 and 5 remain architecture-stable. The concrete numbers per stage are calibrated sector-specifically at mandate start without altering the pyramid logic.

Value for you

A scalar closing number derived traceably from the mechanics, not from marketing promise. Board and CFO see the input quantities; the closing number is recomputable. Mandate investment becomes a model decision with a documented input trail.

The 6 retrieval engines

ChatGPT, Copilot, Perplexity, Claude, Gemini, AI Overviews.


9.1 Subject and demarcation

Generative Engine Optimization (GEO) deals with the visibility of content in retrieval engines, that is, in systems that deliver synthesised answers to user queries rather than result lists. The chapter names the 6 retrieval engines that are available in Germany and that the operational part of the study (Chapters 11–16) addresses: ChatGPT (OpenAI, San Francisco, US), Microsoft Copilot (Microsoft, Redmond, US), Perplexity (Perplexity AI, San Francisco, US), Claude (Anthropic, San Francisco, US), Google Gemini and Google AI Overviews with AI Mode (both Google/Alphabet, Mountain View, US). The selection is not a market ranking but a demarcation of the subject area; other generative systems (You.com, Brave Search, Mistral Le Chat, DeepSeek) are not observed in the study.

9.2 DE availability

All 6 engines are regularly accessible in Germany. Launch dates range from November 2022 (ChatGPT Web) to October 2025 (Google AI Mode). ChatGPT is listed for Germany on the official OpenAI Supported Countries list (as of April 2026). Claude has been available in Europe since 14 May 2024, including German users (Anthropic, 14 May 2024). Perplexity was globally accessible from the start; the DE market is, per the CEO, the engine’s second-largest revenue market (Reuters, October 2025). A partnership with Deutsche Telekom bundles Perplexity Pro through the MagentaMoments programme. Microsoft Copilot evolved from Bing Chat (February 2023) and was renamed in November 2023; Microsoft 365 Copilot was generally available at the same time. Google Gemini is the Bard service renamed as of February 2024, which had been available in Germany as Bard since July 2023. Google AI Overviews launched in Germany on 26 March 2025; AI Mode followed on 7–8 October 2025.

9.3 Architecture dichotomy, engine ≠ bot

A retrieval engine is not identical with the bots that crawl the web on its behalf. Typically an engine operates several bots with separated functions: a training crawler collects data for foundation-model training, a retrieval crawler feeds the index that the engine queries at runtime, and a user-fetch agent visits pages on user request. The separation is operationalised in Chapter 10.

The 6 engines can be divided into two architecture types:

Group I, dedicated AI-bot architecture covers ChatGPT, Claude and Perplexity. These engines run their own crawler infrastructure with separate user agents for training, retrieval and user-fetch. The three functions are individually addressable via robots.txt.

Group II, search-index-coupled architecture covers Microsoft Copilot, Google Gemini and Google AI Overviews and AI Mode. Here the AI retrieval layer uses the existing search-engine index — Bingbot at Microsoft, Googlebot at Google. Search-index presence and AI-answer presence are architecturally coupled; the mechanisms for decoupling are correspondingly limited.

The consequences of this dichotomy for opt-out options and bot compliance are described in Chapter 10.

9.4 Measurement consequences of the architecture dichotomy, 4 sub-findings

The separation into Group I and Group II has operational measurement consequences that are documented in industry research. 4 sub-findings are relevant for the measurement apparatus of a Compliance-GEO mandate.

First, fan-out doubling in ChatGPT. The average word count per ChatGPT query fan-out doubled between October 2025 and January 2026 from roughly 6 to roughly 12 words, with a single weekly peak of around 16 words. The number of fan-outs per prompt remains constant between 2.4 and 2.8 — ChatGPT makes each individual sub-search more precise, not more sub-searches. Data basis: 20 million query fan-outs across Germany, the United Kingdom, Singapore, Thailand and the USA (Wells/Peec.ai, 12 February 2026). Operational consequence for the measurement apparatus: prompt-cluster definitions must work with longer fan-outs to capture citation hits that the model generates at sub-search level.

Second, engine citation-rate benchmarks. Across more than one million harmonised AI citations, Peec.ai (Wells, 27 February 2026) documents three structurally different citation distributions per engine. ChatGPT cites generously: roughly 31 per cent of measured URLs reach a citation rate of 2.0 or higher and account for roughly 59 per cent of all citations. Google AI Mode is conservative and consistent: more than 90 per cent of URLs sit below citation rate 1.0; sweet-spot recommendation 1.1 to 1.5. Perplexity concentrates its citations strongly: 64 per cent of URLs are never cited, 6 per cent account for roughly half of all citations, with benchmark recommendation 1.5 to 2.0. Operational consequence: a citation-rate value without engine attribution is methodologically not meaningful. The weekly measurement cadence from Chapter 7 must be engine-separated.

Third, English-switch in non-English ChatGPT sessions. In non-English ChatGPT sessions, 78 per cent of sessions perform at least one English sub-search. 43 per cent of all fan-outs for non-English prompts run on the English web. Language-specific switch rates vary from roughly 60 per cent (lowest measured value) to roughly 94 per cent (highest); no non-English language sits below 60 per cent. Data basis: 10 million prompts and 20 million query fan-outs (Rudzki/Peec.ai, 12 February 2026). DACH consequence: a German tariff-comparison search from a German IP can receive English comparison-portal sources as the primary citation layer, with the consequence that the answer ordering overlooks German market leaders. The measurement apparatus of a DE telco mandate must carry English sub-search hits in the citation evaluation.

Fourth, cross-engine penalty as a consequence of the Group II coupling. The Grokipedia case (early 2026) is the first documented case of a synchronous citation reduction across classical search and AI search: the AI-generated Wikipedia clone lost its Google visibility and, in parallel, its citations in ChatGPT, AI Mode and AI Overviews — the latter three grounded on Google. Originality.AI documents in a 2024 study on the Google spam update that 100 per cent of pages affected by the penalty contained AI-generated posts, with half of the affected pages in the 80 to 90 per cent range (Rudzki/Peec.ai, 25 February 2026). Operational consequence: the separation between classical SEO penalty and GEO visibility is, in Group II (Microsoft Copilot, Google Gemini, Google AI Overviews/AI Mode), operationally partly illusory. Mandate decisions that breach the content-integrity line from Chapter 19 (code of conduct) damage GEO visibility no less than SEO visibility.
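The engine-separated reading from the second sub-finding can be sketched as a band check. The band boundaries follow the Peec.ai figures quoted above; the function name and the citation-rate definition behind the input value are illustrative simplifications:

```python
def benchmark_position(rate, engine):
    """Place a per-URL citation rate against the engine-specific
    benchmark bands reported by Peec.ai (Wells, 27 February 2026).
    A rate without engine attribution cannot be placed at all: the
    lookup fails by design, mirroring the methodological point."""
    bands = {
        "chatgpt": (2.0, None),        # generous: 2.0+ is the strong zone
        "google_ai_mode": (1.1, 1.5),  # conservative sweet spot
        "perplexity": (1.5, 2.0),      # concentrated sweet spot
    }
    low, high = bands[engine]  # KeyError without a valid engine label
    if rate < low:
        return "below benchmark"
    if high is not None and rate > high:
        return "above benchmark band"
    return "within benchmark band"

benchmark_position(1.3, "google_ai_mode")  # "within benchmark band"
benchmark_position(1.3, "chatgpt")         # "below benchmark"
```

The same numeric rate of 1.3 lands in different bands per engine, which is the operational content of the rule that a citation-rate value without engine attribution is not meaningful.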

Bot policies of the 6 retrieval engines

Crawler classes, robots.txt behaviour, telemetry.


10.1 Three bot classes

Compliance-GEO addresses retrieval engines via the individual bots that visit the web on their behalf. Three functional classes are distinguished:

  • A · Training crawler, collects data for foundation-model training
  • B · Live retrieval crawler (search-index crawler), feeds the index that the engine queries at runtime
  • C · User-initiated fetch, the engine visits a page specifically on behalf of a user prompt

Each class has its own robots.txt consequence and its own opt-out profile.
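The class separation maps onto robots.txt directives as follows. The user agents are the ones documented in sections 10.2 and 10.3; the paths are placeholders, and the per-class effect noted in the comments is a summary of the engine policies described there, not a guarantee:

```
# Class A · training crawlers: opts content out of foundation-model
# training only (Google-Extended does not affect AI Overviews/AI Mode)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Class B · live retrieval crawlers: removes the path from the index
# the engine queries at runtime (placeholder path)
User-agent: OAI-SearchBot
Disallow: /tariffs/internal/

# Class C · user-initiated fetch: directives can be declared, but per the
# engine policies in 10.2 they are not uniformly honoured
User-agent: ChatGPT-User
Disallow: /drafts/
```

The fragment also shows why the binary SEO-era logic fails: three classes with three different consequences sit behind one file format.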

10.2 Group I, dedicated AI-bot architecture

ChatGPT (OpenAI) operates three published bots: GPTBot/1.1 (training), OAI-SearchBot/1.0 (retrieval), ChatGPT-User/1.0 (user fetch; in server logs also observed as versions 2.0 and 3.0). OpenAI updated the documentation in December 2025 to the effect that ChatGPT-User is not bound by robots.txt to the same extent as the two automated bots (OpenAI Help Center; reports: Search Engine Roundtable, 9 December 2025). GPTBot and OAI-SearchBot share crawl results to avoid duplication.

Claude (Anthropic) has run, since February 2026, three formally separated bots: ClaudeBot (training), Claude-SearchBot (retrieval, newly formalised), Claude-User (user fetch). The Anthropic policy states uniform robots.txt respect for all three, without an exception clause for user fetches (support.anthropic.com/en/articles/8896518, as of April 2026).

Perplexity operates two declared bots: PerplexityBot (index crawler, not for foundation training) and Perplexity-User (user fetch). Perplexity documents that Perplexity-User ignores robots.txt directives once the fetch is triggered by a user query (docs.perplexity.ai, as of April 2026). Cloudflare de-listed Perplexity as a verified bot on 4 August 2025 and documented stealth-crawling patterns — rotation of user agents and ASNs going beyond the declared bot identities (blog.cloudflare.com, updated 29 January 2026). Perplexity disputes this account and points to third-party traffic (BrowserBase). The two accounts are conflicting.

10.3 Group II, search-index-coupled architecture

Microsoft Copilot relies on the Bing index. Bingbot/2.0 serves Bing search and Copilot grounding jointly; a dedicated AI training crawler is not published. Blocking Bingbot removes a website from both channels simultaneously. In addition, Microsoft runs Copilot Actions, an agentic browser feature in Edge and on copilot.com, which appears as a regular Edge/Chromium session without a dedicated bot user agent and without HTTP signature (HumanSecurity, 12 January 2026). A protocol-based distinction from human Edge traffic is not possible; detection runs heuristically over session patterns.

Google separates training and search index across two user agents: Google-Extended (since September 2023; documentation update April 2025) controls the use of content for training and grounding of Gemini and Vertex AI, Googlebot feeds the search index. Google-Extended has no influence on presence in AI Overviews and AI Mode — these features draw from the Googlebot index. For removal from AI answers, three options are currently available, each with side effects: blocking Googlebot (also removes from Google search), nosnippet or max-snippet (also affects classical search-result snippets), and awaiting regulatory developments.
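The second of the three options can be sketched as a robots meta fragment; this is illustrative only, not a Google-endorsed removal recipe, and the side effect on classical search-result snippets applies as described above.

```html
<!-- Illustrative fragment: use one of the two directives, not both.
     Side effect per the paragraph above: classical search-result
     snippets are shortened or suppressed as well. -->
<meta name="robots" content="max-snippet:20">
<!-- or, to suppress text reuse from this page entirely: -->
<meta name="robots" content="nosnippet">
```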

10.4 Regulatory dynamics

On 28 January 2026, the UK Competition and Markets Authority (CMA) demanded opt-out controls for AI search features from Google. Google replied the same day that it was reviewing updates to control mechanisms, without a timetable (Search Engine Journal, 28 January 2026). Cloudflare published a statement on 30 January 2026 with the finding that the existing mechanisms — Google-Extended and nosnippet — do not bite reliably in practice: customers had found content reappearing in AI features despite signals being set (blog.cloudflare.com/uk-google-ai-crawler-policy/, 30 January 2026).

10.5 Methodological implication for Compliance-GEO

The binary robots.txt logic of the SEO era is not sufficient for GEO. Compliance-GEO distinguishes three dimensions and steers them separately:

  • Training opt-out, individually controllable at OpenAI (GPTBot), Anthropic (ClaudeBot) and Google (Google-Extended); not published at Microsoft; not relevant at Perplexity due to absence of foundation training
  • Retrieval opt-out, possible in Group I via robots.txt; in Group II only via blocking the shared search-index crawler, with loss of search presence as a side effect
  • User-fetch opt-out, respected without exception at Anthropic, more loosely formulated at OpenAI since December 2025, officially not respected at Perplexity

The operational part of the study (Chapter 11–16) derives configuration recommendations per website from this three-way split; the procurement standard (Chapter 12, criteria A 09 and A 10) checks external service providers against the same three-way split.
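At robots.txt level, the three-way split can be sketched as follows; a minimal sketch, assuming the bot names documented in 10.2 and a site that opts out of training while staying retrieval- and user-fetch-visible. Whether a directive bites depends on the vendor policies above: Perplexity-User, for instance, ignores these directives by design once a user triggers the fetch.

```text
# Sketch only: training opt-out, retrieval and user fetch allowed.
# Effectiveness depends on the vendor policies described in 10.2.

User-agent: GPTBot              # OpenAI training crawler
Disallow: /

User-agent: ClaudeBot           # Anthropic training crawler
Disallow: /

User-agent: Google-Extended     # Gemini training and grounding
Disallow: /

User-agent: OAI-SearchBot       # ChatGPT retrieval: allowed
User-agent: Claude-SearchBot    # Claude retrieval: allowed
User-agent: PerplexityBot       # Perplexity index crawler: allowed
Allow: /
```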

The 5 phases: from precondition check to reporting

Precondition check, publisher identification, contract, editorial, reporting.

Compliance-GEO is, in the sense of this study, not a one-off project but a mandate cycle of 5 phases building on each other. The phases separate the structural precondition check from publisher build-up, the contractual procurement arrangement from editorial publication, and individual measurement from the feedback loop. The order is not convention but methodological consequence: each phase presupposes the result of the preceding one as state, and each later phase delivers measurement data that flow back into earlier phases as recalibration. The 5-phase frame is the operational counter-design to one-off optimisation and to tool-centred practice without contractual binding. Generative Engine Optimization (GEO) is consistently carried as GEO throughout the study.

11.1 Phase 00 · agent precondition check

Before any mandate kick-off comes the check of technical and structural preconditions for retrieval-effective work in the target market. Three check objects are mandatory. First: bot hygiene and robots.txt check of the principal’s properties and of the relevant publisher environment — missing or blocking entries for the market-relevant crawlers (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, Google-Extended and the further agents described in Chapter 9) make downstream phases ineffective. Second: engine availability in the mandate market — which of the 6 retrieval engines described in Chapter 9 become effective in the target market with which sample weight. Third: reference-baseline collection for the three measurement dimensions from Chapter 7 — Citation Rate, Citation Persistence and Citation Quality — separated per engine and per defined prompt cluster. Without this baseline, no later effect of an intervention can be cleanly attributed.

What measurably shifts in 7 days. Phase 00 is calibrated to a one-week work block that runs without a mandate contract, without publisher procurement and without editorial intervention. The block delivers the baseline on which every subsequent phase rests, and it already produces three board-ready measurements on its own. First: a Citation-Rate baseline across 4 to 6 engines with a prompt sample of 80 to 120 tariff queries from the target cluster — manual measurement run or tool-supported (Rankscale, Peec, Profound); effort roughly 6 to 10 hours for prompt curation plus measurement run. Second: a count of the aggregator share in the LLM answers by citation category (comparison portal, competitor tariff page, trade publisher, community) per engine and cluster; effort roughly 4 to 6 hours, automatable from the second measurement round. Third: an A-criteria check on existing advertorials as a manual pass over the principal’s last 12 published placements with the eight-part A schema from Chapter 12.2; effort roughly 3 to 5 hours per 12 placements. These three measurements are in-house-capable, presuppose no external advisory frame, and deliver in one work block what is sufficient for a board report at weekly cadence. The Phase 00 baseline is not a mandate component but the entry condition; it can be produced Northbridge-externally and introduced into a running mandate as the starting state.
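The first of these measurements, the Citation-Rate baseline, reduces to an engine- and cluster-separated share computation. A minimal sketch; the field names are illustrative, not a Northbridge tool interface:

```python
from collections import defaultdict

def citation_rate_baseline(runs):
    """Aggregate a Phase-00 measurement run into Citation-Rate baselines.

    `runs` is a list of dicts, one per (engine, prompt) measurement:
      {"engine": ..., "cluster": ..., "cited": bool}
    Returns {(engine, cluster): cited_share}. No cross-engine aggregation,
    per the engine- and cluster-separated reporting rule (Chapter 7.5).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in runs:
        key = (r["engine"], r["cluster"])
        totals[key] += 1
        hits[key] += 1 if r["cited"] else 0
    return {key: hits[key] / totals[key] for key in totals}

sample = [
    {"engine": "chatgpt", "cluster": "tariff", "cited": True},
    {"engine": "chatgpt", "cluster": "tariff", "cited": False},
    {"engine": "perplexity", "cluster": "tariff", "cited": True},
]
baseline = citation_rate_baseline(sample)
# baseline[("chatgpt", "tariff")] == 0.5
```

The deliberate absence of a merged total mirrors the single-number prohibition from Chapter 7.5.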

11.2 Phase 01 · publisher identification and source-map build

Following the precondition check comes sector-specific publisher identification. The logic is retrieval-first: which publishers actually appear in the retrieval answers of the prioritised engines for the relevant prompt clusters, which are observed as citation carriers in the principal’s peer group, and which cover the country-specific grounding requirement for the mandate market. The candidate list runs through a pre-check against the procurement standard from Chapter 12 for A and B class fitness; admission to the source map presupposes structural fulfilability of at least the A class. The source map documents cluster assignment, procurement-standard pre-check and observed retrieval presence per engine; it is the working basis for Phase 02 and is regularly updated against the Phase 04 measurement.

Sector clusters of publisher identification in the DE telco mandate. Publisher identification structures candidates into sector-structural clusters that are present in the NB Publisher Research Telco-DACH and in the NB Sector Dossier Telco as a sector inventory map. Three clusters carry methodologically in Phase 01 for DE telco mandates. First, the tariff-comparison cluster with aggregator-dominated retrieval geometry across the leading DE tariff comparison platforms; these aggregators are to be mapped in Phase 01 as competitive reality, not as procurement candidates — they structurally produce in-house content and offer no advertorial inventory. Second, the reputation cluster with network-test publications such as connect, CHIP, ComputerBILD, SMARTPHONE Magazin, Stiftung Warentest as well as crowd-sourced measurement resources such as Tutela and opensignal, and the authority measurement of the Bundesnetzagentur; these publications dominate reputation queries ("which network is most reliable") and are taken into Phase 01 as test-URL slot candidates — the annual test sub-sections of a domain carry higher trust weight than generic news URLs of the same domain. Third, the fixed-line and fibre cluster as an aggregator-thin field with specialised DSL and fibre comparison resources plus the federal availability layers of the BMDV; this cluster is the largest open flank for network-operator publisher work in DE and forms the natural Phase 01 connection point for integrated network-operator portfolios with fixed-line and fibre roll-out.

11.3 Phase 02 · publisher contract design and procurement arrangement

Phase 02 is the contractual anchoring of the procurement standard. It describes methodologically what must find a place in the publisher arrangement so that the final-invoice logic from Chapter 14 bites; it is not a legal recommendation, but the methodological coupling of measurement criteria to contract mechanics. The phase is split into three sub-steps that follow the two-level responsibility separation from Chapter 12.1.1.

Phase 02.1 · publisher pool pre-check (before booking). The three domain-policy criteria of the procurement standard — A 03 domain reputation, A 05 paywall status, A 06 bot policy — are checked once at the publisher domain and entered into a publisher pool list. Re-validation quarterly. Publishers that do not pass this pre-check are not commissioned; the pre-check produces no invoicing consequence because the policy lies outside the briefing frame. Bookings are placed from the pool per mandate.

Phase 02.2 · placement booking. The specific contribution is booked with the publisher. 4 contractual elements are constitutive: first, the coupling of the final invoice to evidence of fulfilled A and B criteria from Chapter 12, operationalised via the three-class assignment of the price-factor matrix from Chapter 14.2 with the briefing-FAIL / pre-check-FAIL differentiation; second, URL persistence over at least 12 months as a measurement precondition for Citation Persistence per Chapter 7.3 (corresponding to A 07); third, the binding assignment of a disclosure variant from the V01–V06 inventory of Chapter 13, including the layer of TKG mandatory information that runs orthogonally in the DE telco sector from Chapter 13.5; fourth, the Phase 02 inventory documentation that transparently records every booking, the chosen publisher, the chosen variant and the agreed criteria list — as the basis for later reporting and for the burden-of-proof documentation described in Chapter 12.5 against the consideration presumption under § 5a (4) sentence 2 UWG.

Phase 02.3 · briefing verification (before final invoice). After publication, the published contribution is verified per the stage-02 workflow from Chapter 12.4 (URL and DOM disclosure, indexing status, schema and byline, word-count hooks front-loading, outbound links and persistence). On briefing-FAIL (FAIL on A 01, A 02, A 04, A 07 or A 08), the final invoice is reduced; on full compliance, invoicing follows the assignment under 14.2. The editorial implementation of the contractually fixed variant takes place only in Phase 03; Phase 02 fixes solely the binding form and the verification mechanics.

11.4 Phase 03 · editorial standard and publication

Phase 03 implements editorially the commitments from Phase 02. The operational substance is set out in Chapter 15: 6 layers — date logic, disclaimer structure, scopes of validity, source prioritisation, exception handling, country-specific grounding — are checked cumulatively per publication; front-loading per B-criterion 05 and definitive language per B-criterion 06 flank the layers as retrieval-effective editorial levers. The disclosure variant chosen in Phase 02 is implemented and verified at the published location in terms of visibility — including, in the DE telco context, the sector-specific visibility check for TKG mandatory information from Chapter 14.5. The three-point release procedure from Chapter 15.3 (editorial first release, compliance release, publisher release) is the operational interface; only after all three releases is the piece published. Phase 03 is thereby the only phase in which content is actually produced; all other phases prepare or evaluate it.

11.5 Phase 04 · reporting and feedback loop

Phase 04 closes the cycle. The weekly measurement cadence from Chapter 7 runs across the three dimensions: Citation Rate (Share of Model Voice per prompt cluster), Citation Persistence (stability across measurement waves per engine) and Citation Quality (sentiment, completeness of mandatory information, competitive context). Reporting is engine-separated and cluster-separated; aggregation into a single number is methodologically excluded (Chapter 7.5). On the basis of this measurement runs the feedback loop: the hypothesis validation from Chapter 16 delivers two to three controlled tests per quarter; the cumulative results flow back semi-annually into recalibration of the working hypotheses from Chapter 8.3, 12.3, 14.2 and 15.1. A re-measurement after at least 4 measurement waves with seasonality correction shows the effect of a single intervention. The phase is not a closing point but a pivot: it shifts the baseline back for the next mandate cycle.
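Of the three dimensions, Citation Persistence can be read as the surviving share of the first wave's citation set across later waves. A minimal sketch, under the assumption that each weekly wave is recorded as a set of cited URLs per engine and prompt cluster:

```python
def citation_persistence(waves):
    """Share of first-wave citations still present in each later wave.

    `waves` is a list of sets of cited URLs for one engine and one
    prompt cluster, ordered by measurement wave (weekly cadence).
    Returns one persistence share per wave after the first.
    """
    if len(waves) < 2 or not waves[0]:
        return []
    base = waves[0]
    return [len(base & wave) / len(base) for wave in waves[1:]]

waves = [
    {"a.de/x", "b.de/y", "c.de/z"},  # wave 1, baseline citation set
    {"a.de/x", "b.de/y", "d.de/q"},  # wave 2: one baseline citation dropped
    {"a.de/x"},                      # wave 3: one survivor
]
shares = citation_persistence(waves)
# shares == [2/3, 1/3]
```

Run per engine and per cluster, never pooled, consistent with the engine-separated reporting rule.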

Weekly dashboard reading as the data antecedent to the feedback loop. The weekly measurement cadence simultaneously carries an in-house operational dashboard reading at a 7-day cadence (section 7.7). Dashboard reading and mandate feedback loop work complementarily: the dashboard reading gives the in-house team weekly visibility on Citation Share, Persistence and publisher stability; the mandate feedback loop integrates the aggregated findings at quarterly and semi-annual rhythm into recalibration of the working hypotheses. The dashboard reading is not a replacement but the data antecedent of the feedback loop. The weekly cadence is not advisory comfort, but minimum resolution: the citation drift documented in Chapter 7.3 of up to 60 per cent per month between engines (Profound industry research), as well as the Reddit citation shift in the Semrush 13-week study (October 2025), show that monthly reports can miss precisely the drift that determines the mandate result. Details on dashboard metrics, trigger thresholds and role allocation are documented in Annex B.7.

11.6 Operational consequence

The methodological point of the 5-phase architecture lies in the strict separation of contract (Phase 02) and editorial work (Phase 03), with simultaneous coupling of both to the measurement infrastructure from Chapter 7. Without this coupling, the final-invoice logic remains agency convention rather than methodology; without the separation, contractually agreed verification mechanics and editorial fulfilment blur into a grey zone that is no longer cleanly evidenceable. The tool stack with which the phases are operationally supported is documented in Annex C.

5-phase timeline · mandate cycle capstone

5 phases, one direction, one feedback loop. The mandate cycle at a glance.

00
Phase · agent check

Precondition check

Check bot hygiene and robots.txt, determine engine availability in the target market, collect a baseline across the three measurement dimensions from Chapter 7, separated per engine and prompt cluster. Without a clean baseline, no later effect of an intervention can be attributed.

Delivers: Baseline values per engine · 6 engines · prompt-cluster weighting
01
Phase · publisher map

Publisher identification

Retrieval-first: which publishers actually appear in the answers of the prioritised engines. Candidates run through an A/B class pre-check per the procurement standard; admission to the source map requires at least A-class fitness.

Delivers: Source map with cluster assignment · A/B pre-check · retrieval presence per engine
02
Phase · contract

Contract design and procurement arrangement

02.1 Pool pre-check per A 03/05/06, quarterly re-validated.
02.2 Booking with price coupling, URL persistence 12 months, variant V01–V06, inventory.
02.3 Verification per Chapter 12.4 before final invoice.
Delivers: Booking inventory · disclosure variant · price factor per Chapter 14.2
03
Phase · editorial

Editorial standard and publication

6 layers from Chapter 15 checked cumulatively. Front-loading (B 05) and definitive language (B 06) as retrieval levers. Three-point release: editorial, compliance, publisher. Publication only after all three.

Delivers: Published contributions · documented three-point release · visible disclosure
04
Phase · reporting

Reporting and feedback loop

Weekly measurement of the three dimensions (Citation Rate, Persistence, Quality), engine-separated and cluster-separated, no single-number aggregation. Hypothesis validation per quarter, recalibration semi-annually.

Delivers: Cluster-separated reporting · feedback-loop input for Phase 00
Feedback loop Phase 04 → Phase 00 · semi-annual recalibration of the working hypotheses from Chapter 8.3, 12.3, 14.2 and 15.1. Phase 04 is not a closing point but a pivot — it shifts the baseline back for the next mandate cycle.
What the analysis shows

Compliance-GEO is not a one-off project but a cyclical mandate in five operational phases: precondition check, publisher identification, contract design and procurement arrangement, editorial standard and publication, reporting plus feedback loop. The sequence is not convention but structural necessity, because each phase checks the output condition of the previous one. Phase 04 closes the cycle through semi-annual recalibration in Phase 00.

How to use it

A mandate starts with the Phase 00 precondition check, not with a briefing. Phase 03 is the only phase in which content is produced; all others check, measure or report. Each phase produces its own artefacts, which serve compliance functions as an audit trail. If Phase 00 disqualifies, the mandate is not opened, and no budget is burned.

Sector transfer

The phase sequence is sector-invariant. The Phase 03 contents are sector-specific: telco calibrates to TKG mandatory information, Financial Services to WpHG/MaRisk suitability declarations, insurance to VVG information duties and IDD advisory documentation, commerce to UWG and DSA disclosure. The phase architecture and feedback loop remain identical.

Value for you

Mandate steerability via clear phase transitions with escalation points. Protection against silent mandate drift that arises when briefings start without a precondition check. An audit-capable documentation trail per phase, presentable to compliance, CISO and board.

18 criteria and a verification workflow

8 A and 10 B criteria, two-stage workflow.

The procurement standard is the operational form in which Compliance-GEO becomes measurable. It decomposes every planned citation placement into 18 verifiable criteria and a two-stage verification workflow that runs before the final invoice. 8 of these criteria are binary; they decide whether a placement qualifies as a citation carrier at all. 10 are gradual; they determine how high the citation lift turns out, once eligibility is given. The separation between eligibility and lift is not semantic but operational: a single briefing-FAIL renders a placement worthless; a missing B property reduces the lever, not eligibility. Orthogonal to this A/B split, the criteria additionally fall along responsibility levels — three criteria belong to the publisher pre-check before booking, 15 to briefing compliance before the final invoice (subsection 12.1.1).

12.1 Why verification before payment is the only lever

After publication, corrections to the URL path, DOM disclosure, schema markup or byline are practically not enforceable vis-à-vis the publisher. A publisher who has served an advertorial under a /sponsored/ path will not change this path after invoice approval. The only enforcement mechanism that reliably works in practice is coupling the final invoice to evidence of fulfilled criteria. The verification workflow therefore runs before each final invoice, not after, and the price coupling is the actual legal lever, not the advertorial contract itself. Integration of this lever into the publisher arrangement is the subject of Phase-02 contract design (Chapter 11.3).

On the current substantive state, the procurement standard is conceived as a methodological measure, not as the directly legally compelled implementation of the supply-chain duty under § 165 (2a) No. 4 TKG. The TKG supply-chain duty implements Art. 21 (2)(d) and (3) NIS-2 Directive and demands security measures in the relationships with direct vendors and service providers. Whether LLM vendors without a direct contractual relationship to the telco fall under the term "direct vendors or service providers" is, on the prevailing reading, to be denied — Implementing Regulation (EU) 2024/2690 of 17 October 2024 does not capture telco providers per Art. 1, and the ENISA Technical Implementation Guidance of 26 June 2025 likewise does not address telco. A more expansive reading where the telco actively deploys LLMs (own chatbots, shop integration) is tenable in secondary literature but is not finally settled as of April 2026. The procurement standard is therefore positioned primarily methodologically: it operationalises vendor due diligence independently of whether the regulatory supply-chain duty bites in the specific case, and it merges into the regulatory scope where there is a direct LLM contractual relationship. The detailed positioning of the two interpretive lines with the relevant EU-law and national references is found in Chapter 3.1. The negligibility clause under § 28 (3) BSIG leaves the telco core activity untouched; it is captured under § 28 (1) sentence 1 No. 3 BSIG without a threshold reservation. The clause is therefore not pertinent to the procurement-standard positioning. The interpretive openness of the more expansive reading remains the subject of legal validation (Chapter 24).

12.1.1 Two levels of responsibility

The 18 criteria fall across two levels that follow different responsibilities and different escalation mechanics. The separation follows from the publisher acceptance check: quality publishers accept enforcement claims only in areas that lie within the influence range of the individual assignment. Domain-policy decisions (robots.txt, paywall architecture, domain reputation) are strategic publisher decisions outside the individual-assignment negotiation. The separation is not a methodological softening but a precision of enforcement mechanics.

Level 01, publisher pre-check (before booking). Three criteria decide at domain-policy level and apply independently of the individual contribution: A 03 domain reputation, A 05 paywall status, A 06 bot policy. They are go/no-go — if a publisher does not meet them, the contribution is not booked. Re-validation quarterly. No invoicing consequence after booking, because these criteria are not part of the briefing frame.

Level 02, briefing compliance (before final invoice). 15 criteria sit within the publisher’s influence range for the specific contribution: A 01, A 02, A 04, A 07, A 08 plus B 01 to B 10. If the publisher complies with the briefing, the final invoice is billed at full classification per the price-factor matrix. Briefing-FAILs are typically not repairable after publication and lead to invoice reduction.

The guideline is operational: every criterion sits in the influence range of a clearly named partner. Anyone who sticks to the briefing should expect no disadvantages. Policy decisions that lie outside the individual-assignment influence range are decided before booking — through not booking — not after booking through invoice reduction. The verification workflow follows this separation in two stages (subsection 12.4).

12.2 The 8 A criteria, eligibility, binary

The 8 A criteria check properties previously extracted from the published placement. The extraction layer is the technical bridge between the placement and the binary criteria evaluation: URL path structure (for A 01 and A 07), DOM disclosure in the header area (A 02), meta and canonical tags from the page source (A 04), schema-markup validation (B 02, flanking), paywall and bot-policy fetch at domain level (A 05, A 06), outbound-link structure (A 08). Extraction runs automatically over HTTP fetch, DOM parser, schema validator and text analyser. It is a precondition both for the mandate verification in 12.4 and for the in-house reading in 12.5.1. The extracted descriptors are the input data for any criteria evaluation; without clean extraction, the eligibility check is not robust, irrespective of the application mode chosen.
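The extraction layer can be sketched with standard-library parsing alone. A minimal sketch covering only the URL-path and indexing descriptors (feeding A 01 and A 04), not the full fetch pipeline; descriptor names are illustrative:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class MetaExtractor(HTMLParser):
    """Pulls robots meta and canonical link from a fetched page."""
    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content", "")
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

def extract_descriptors(url, html):
    """Return the descriptors consumed by the criteria evaluation."""
    p = MetaExtractor()
    p.feed(html)
    path = urlparse(url).path.lower()
    return {
        "path_segments": [s for s in path.split("/") if s],
        "robots_meta": p.robots,
        "canonical": p.canonical,
    }

d = extract_descriptors(
    "https://example.de/ratgeber/5g-tarife/",
    '<head><meta name="robots" content="index,follow">'
    '<link rel="canonical" href="https://example.de/ratgeber/5g-tarife/"></head>',
)
# d["path_segments"] == ["ratgeber", "5g-tarife"]
```

A production pipeline would add the HTTP fetch, X-Robots-Tag header read, schema validation and paywall probe described above.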

The A class checks whether a placement is at all indexable, crawlable and editorially classified within the candidate set of the 6 retrieval engines. Each criterion is a binary filter; FAIL means disqualification, irrespective of price, publisher name and content quality. Three criteria (A 03, A 05, A 06) belong to the publisher pre-check before booking; the remaining 5 (A 01, A 02, A 04, A 07, A 08) to briefing compliance before the final invoice.

Criterion | Responsibility level | What is checked | Disqualifying when
A 01 · URL path | Briefing compliance | Article sits under an editorial path of the domain | Path segment is /sponsored/, /anzeige/, /advertorial/, /promotion/, /pr/, /partner/; subdomain offloading such as partner.x.de or brandzone.x.de
A 02 · DOM disclosure | Briefing compliance | No advertorial template in the article area. Textual advertising disclosure ("Anzeige", "Sponsored") is permitted and legally required. | Stand-alone DOM elements (badges, frames, CSS wrappers such as .advertorial) that classify content as advertising at template-structure level
A 03 · Domain reputation | Publisher pre-check | Domain is not a recognisable advertising aggregator | Domain name or about text contains terms such as "anzeigen", "presseportal", "prnews", "advertorial", "sponsored"
A 04 · Indexing status | Briefing compliance | Index, follow; self-referencing canonical; no X-Robots block | <meta name="robots" content="noindex">, X-Robots-Tag: noindex in the HTTP header, missing or foreign canonical
A 05 · Paywall status | Publisher pre-check | Full text fully accessible to crawlers | Partial content, metered paywall without Googlebot exception, login wall before the Article schema
A 06 · Bot policy | Publisher pre-check | robots.txt allows all relevant retrieval bots separately | Blanket User-agent: * with Disallow: /; individual disallow for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, OAI-SearchBot, CCBot or Bingbot
A 07 · URL persistence | Briefing compliance | Contractually fixed 12-month guarantee at an unchanged URL | No persistence clause; publisher’s advertorial archiving routine without exception clause
A 08 · Outbound links | Briefing compliance | Links to the principal marked with rel="nofollow sponsored" | Missing or incomplete rel attribution; dofollow links without sponsored marking

The DOM-disclosure refinement in A 02 separates between textual inline disclosure (permitted, legally required under § 5a (4) UWG, retrieval-neutral) and an advertorial template (disqualifying, because engine classification as advertising). This differentiation covers the findings from the Retrieval Study Claude Opus 4.6 (April 2026), in which variant V02 (textual disclosure under the headline, without DOM template) is identified as the target corridor; variant V05 (visually correct, but with /advertorial/ URL and missing schema) remains a briefing-verification case detected before the final invoice. The bot-policy shift in A 06 reflects the observation from Publisher Research Telco DACH that several DE publishers strategically use bot blocks; these domain-policy decisions are identified before booking, not negotiated per contribution. The A 07 reduction to 12 months reflects CMS migration intervals and redesign frequencies of DE trade media; per Indig 2026, the second half of the two-year window delivers disproportionately little additional citation lift at disproportionately high commitment risk.

The clean separation between legally mandated advertising disclosure (UWG § 5a (4), MStV § 22) and technical retrieval classification is the subject of Chapter 13. Compliance and eligibility are not mutually exclusive; they are operationalised separately.
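The binary A logic translates directly into code. A hedged sketch over two of the eight criteria; the descriptor dictionary mirrors the extraction output described in 12.2, the criterion labels are illustrative, and the remaining criteria need DOM, contract and domain-level inputs not modelled here:

```python
# Disqualifying path segments per A 01 (sketch; see the criteria table).
DISQUALIFYING_SEGMENTS = {"sponsored", "anzeige", "advertorial",
                          "promotion", "pr", "partner"}

def a_criteria_check(descriptors):
    """Binary eligibility pass: a single FAIL disqualifies the placement.
    Returns (eligible, fails). Covers A 01 and A 04 only."""
    fails = []
    if DISQUALIFYING_SEGMENTS & set(descriptors["path_segments"]):
        fails.append("A01")              # advertorial URL path
    robots = (descriptors.get("robots_meta") or "").lower()
    if "noindex" in robots:
        fails.append("A04")              # blocked from indexing
    if descriptors.get("canonical") is None:
        fails.append("A04-canonical")    # missing canonical
    return (not fails, fails)

ok, fails = a_criteria_check({
    "path_segments": ["sponsored", "tarif-check"],
    "robots_meta": "index,follow",
    "canonical": "https://example.de/sponsored/tarif-check/",
})
# ok is False, fails == ["A01"]
```

The single-FAIL semantics mirror the eligibility rule: no price, publisher name or content quality repairs a binary FAIL.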

12.3 The 10 B criteria, lift, gradual

The B class determines the citation lift once the A class is fully met. All 10 criteria belong to briefing compliance — they sit in the publisher’s influence range for the individual contribution and are verified before the final invoice. The criteria act additively; each additional fulfilled criterion raises the citation probability per engine, without any single criterion alone deciding eligibility. The empirical basis comes from three measurement strata: academic retrieval research (Aggarwal et al. KDD 2024, Indig 2026), industry studies on citation structure (Profound, Ahrefs 2025, Seer Interactive 2025) and the Yext Q4 2025 analysis on author entity disambiguation.

The 10 B criteria check features known from classical SEO hygiene; the demarcation from SEO lies in target metric and measurement mode (Chapter 1).

What the B criteria are not: classical on-page SEO. Several B criteria (byline, schema markup, substance length, front-loading) also appear in SEO audits, but under a different optimisation logic. The difference is structural, not gradual: classical on-page SEO optimises the order of a page in a search-result list (ranking at domain level); Compliance-GEO optimises the extractability of individual passages as citable chunk units and their retrieval probability per engine (chunk level). The same trade article with the same title tag and the same schema is judged equally in SEO, irrespective of paywall status, mandatory-information position and chunk granularity; for retrieval behaviour, precisely these three properties are decisive. The B criteria translate this retrieval mechanics into verifiable procurement requirements, not into ranking factors.

Criterion | What is checked | Empirical anchor
B 01 · Byline | Editorially responsible author who has performed final review and revision of the text. Pre-work by client or agency is permitted; what matters is editorial final responsibility and substance review by the bylined author. | Yext Q4 2025, author entity as selection factor
B 02 · Schema markup | Article or NewsArticle, with author, publisher, datePublished, dateModified | NB methodology; schema-subtype choice remains a publisher-internal editorial and compliance decision
B 03 · Substance | At least 800 words with information gain over the topic baseline | Aggarwal et al. KDD 2024, Position-Adjusted Word Count
B 04 · Citation hooks | At least three named statistics and one attributed direct quote | Ahrefs 750-prompts study 2025 (hook-density correlation with Citation Rate); not to be confused with the separately conducted Ahrefs Mentions-vs-Backlinks study 2025 (n ≈ 75,000 brands, brand recall) from Chapter 7.4 and 14.2
B 05 · Front-loading | Core statement within the first 30 per cent of the text | Indig 2026; RAG chunking favours entry segments
B 06 · Definitive language | Definitions in declarative form, no hedging, no modal softeners | NB editorial standard (Chapter 15)
B 07 · Entity consistency | Brand, product and person references named consistently, not varied | NB methodology
B 08 · Question headlines | At least one headline or sub-heading in question form | NB methodology, corresponds with query matching
B 09 · Listicle structure | At least one section structured as a list with clear item separators | NB methodology
B 10 · Update | Documented dateModified revision at least quarterly | NB methodology; corresponds with A 07

For B 01, editorial final responsibility is to be separated from the question of authorship. A narrow reading ("actual editor, no guest contribution") would conflict with Pressekodex No. 7 (separation of editorial and advertising), because an editor should not put their name under a text they did not write. The reading carried here mirrors actual publisher practice: agency or client pre-work passes through editorial review, revision and substance validation by the bylined author, who then bears final responsibility. Journalistic ethics are thereby preserved and retrieval-relevant author disambiguation remains intact as a selection function. For B 02, no standard prescription is set for schema sub-types; for paid content, Google expects schema truthfulness, and the choice between Article, NewsArticle or AdvertiserContentArticle remains a publisher-internal compliance decision with risk effect on the entire domain. The retrieval-relevant prescription remains the complete markup with author, publisher, datePublished, dateModified.
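The additive B reading, in contrast to the binary A class, is a simple fulfilment share. A minimal sketch; the mapping of this share onto a price factor stays with the matrix in Chapter 14.2 and is deliberately not reproduced here:

```python
# The ten gradual criteria, B01 through B10.
B_CRITERIA = {f"B{i:02d}" for i in range(1, 11)}

def b_fulfilment_share(fulfilled):
    """Additive B-class reading: each fulfilled criterion raises the
    lift, none alone decides eligibility. Returns a share in [0, 1]."""
    unknown = set(fulfilled) - B_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {sorted(unknown)}")
    return len(set(fulfilled)) / len(B_CRITERIA)

share = b_fulfilment_share(["B01", "B02", "B03", "B05"])
# share == 0.4
```

Eligibility (the A class) gates first; only then does this share become meaningful as a lift measure.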

12.4 The two-stage verification workflow

The workflow follows the responsibility separation. Stage 01 clarifies before booking whether the publisher meets the requirements at domain level — a FAIL means non-booking, no invoicing consequence. Stage 02 checks after publication, before payment, whether the publisher has complied with the briefing — a FAIL leads to remediation or invoice reduction per the price-factor matrix. After publication, adjustments to path or disclosure are barely enforceable vis-à-vis the publisher; the lever is the open invoice.

Stage 01, publisher pre-check · before booking

Step Subject Procedure Criterion
/ 01 Domain reputation Check domain name and about text for aggregator indicators; consult pool admission list A 03
/ 02 Paywall status at domain level Spot check with Googlebot user agent: full-text access without metered paywall and without login wall A 05
/ 03 robots.txt of the domain Fetch /robots.txt; check all user agents from A 06 against it — no Disallow for the relevant bots A 06
/ 04 Publisher-pool admission documentation Document the three pre-check results per publisher; re-validation quarterly organisational

Stage 01 is performed once per publisher and re-validated quarterly. It produces a publisher-pool list from which bookings are made per mandate. On FAIL in any of the 4 steps, the publisher is not admitted to the pool or is removed from it; retroactive invoicing consequences are excluded at this level.
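Step /03 of the pre-check needs nothing beyond a fetched /robots.txt body. A minimal sketch, assuming the three bot names stand in for the A 06 list and using only the standard library; the fetch itself is left out so the check stays reproducible:

```python
# Hypothetical sketch of the Stage 01 robots.txt check (step /03, A 06):
# given a fetched /robots.txt body, verify that none of the retrieval bots
# is disallowed for the article path. Any FAIL removes the publisher from
# the pool; there is no retroactive invoicing consequence at this level.
from urllib.robotparser import RobotFileParser

RETRIEVAL_BOTS = ["Googlebot", "OAI-SearchBot", "PerplexityBot"]  # per A 06

def precheck_robots(robots_txt: str, path: str = "/") -> dict:
    """Return pass (True) or fail (False) per bot for the given path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, path) for bot in RETRIEVAL_BOTS}

robots = "User-agent: PerplexityBot\nDisallow: /\n\nUser-agent: *\nDisallow:\n"
result = precheck_robots(robots, "/ratgeber/tarif-test")
# PerplexityBot is blocked at domain level -> publisher fails the A 06 gate
assert result["PerplexityBot"] is False and result["Googlebot"] is True
```

The same function re-runs unchanged in the quarterly re-validation; only the fetched robots.txt body varies.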

Stage 02, briefing verification · before final invoice

Step Subject Procedure Criterion
/ 01 URL path and DOM disclosure Open the article URL, verify path segment; check the header area for an advertorial template (textual disclosure permitted) A 01 · A 02
/ 02 Indexing status Page source: check <meta name="robots"> and <link rel="canonical">; canonical points to the page’s own URL; HTTP header via curl -I -A "Googlebot" [URL] and analogously with OAI-SearchBot and PerplexityBot; no X-Robots-Tag: noindex A 04
/ 03 Schema and byline Google Rich Results Test: Article or NewsArticle schema detected, author and publisher declared, datePublished and dateModified present; byline author identifiable as editorially responsible B 01 · B 02
/ 04 Word count, hooks, front-loading and formal criteria Word count at least 800; statistics and direct quotes counted manually; check first 30 per cent for core statement; spot-check definitive language, entity consistency, question headlines, listicle structure B 03 · B 04 · B 05 · B 06 · B 07 · B 08 · B 09
/ 05 Outbound links and persistence Inspect links to the principal in the page source; rel="nofollow sponsored" fully set; review contract clause: 12-month guarantee at an unchanged URL; dateModified update obligation by interval length A 07 · A 08 · B 10

The workflow is deliberately formulated without tool dependencies. Stage 01 requires only a browser and a command line; Stage 02 additionally requires the public Google Rich Results Test, reading competence and contract access. Annex C describes the extended tool stack that makes verification more efficient at mandate scale; the basic verification is unaffected by it. Responsibility clarification: Stage 01 leads on FAIL to non-booking and protects the publisher from retroactive invoice reduction; Stage 02 couples briefing compliance to invoice acknowledgement.
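Step /02 of Stage 02 can likewise be scripted against saved page source and response headers. A simplified sketch; the regexes assume double-quoted attributes in the order shown, and a real template may need a proper HTML parser:

```python
# Illustrative A 04 indexing check (Stage 02, step /02): meta robots must not
# carry noindex, the canonical must point to the page's own URL, and no
# X-Robots-Tag: noindex header may be present.
import re

def check_indexing(html: str, headers: dict, page_url: str) -> dict:
    meta = re.search(r'<meta\s+name="robots"\s+content="([^"]*)"', html, re.I)
    canon = re.search(r'<link\s+rel="canonical"\s+href="([^"]*)"', html, re.I)
    x_robots = headers.get("X-Robots-Tag", "")
    return {
        "meta_noindex": bool(meta and "noindex" in meta.group(1).lower()),
        "canonical_self": bool(canon and
                               canon.group(1).rstrip("/") == page_url.rstrip("/")),
        "header_noindex": "noindex" in x_robots.lower(),
    }

html = ('<meta name="robots" content="index,follow">'
        '<link rel="canonical" href="https://example.org/ratgeber/tarif">')
verdict = check_indexing(html, {"Content-Type": "text/html"},
                         "https://example.org/ratgeber/tarif")
assert verdict == {"meta_noindex": False, "canonical_self": True,
                   "header_noindex": False}
```

Any True in meta_noindex or header_noindex, or a False in canonical_self, is an A 04 briefing-FAIL before the final invoice.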

12.5 Operational rule and bridge to the price-factor matrix

A briefing-FAIL renders the placement worthless and is not repairable after booking. A pre-check-FAIL prevents the booking; no invoicing consequence arises at this level, because the policy lies outside the briefing frame. A missing B criterion reduces the lift, not the eligibility. From this asymmetry follows the prioritisation: Stage 01 runs before booking, Stage 02 before the final invoice; a premium spot in a leading medium with a /sponsored/ path is worthless; a trade publisher with an editorial path but an incomplete B class is fixable.

From this follows the three-class assignment of the final invoice described in Chapter 14.2, with two differentiated FAIL rows:

Class Precondition Price factor / consequence
Citation-Buy Full A class, at least 7 of 10 B criteria 1.0 × market price
Mixed-Buy Full A class, 4 to 6 B criteria 0.5 to 0.7 × market price
Mention-Buy Full A class, fewer than 4 B criteria 0.2 to 0.4 × market price
Briefing-FAIL FAIL on A 01, A 02, A 04, A 07 or A 08 0.0 ×, invoice reduction, publisher-controlled
Pre-check-FAIL FAIL on A 03, A 05 or A 06 Not booked, no invoice consequence, since the policy lies outside the briefing frame

The operational rule: FAIL in briefing compliance is grounds for invoice reduction; FAIL in the publisher pre-check prevents the booking. This coupling is the actual difference from media-agency logic: Northbridge does not invoice for purchased reach but for fulfilled criteria, and holds the publisher accountable only for what lies in the influence range of the individual assignment.
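The two FAIL rows and the three-class assignment can be captured in a single decision function. An illustrative encoding; the criteria IDs and corridors are taken from the table above, while the midpoints inside the Mixed- and Mention-Buy corridors are assumptions for the sketch (Chapter 14 calibrates the actual factor):

```python
# Illustrative three-class assignment with the two differentiated FAIL rows.
BRIEFING_A = {"A01", "A02", "A04", "A07", "A08"}  # FAIL -> invoice reduction
PRECHECK_A = {"A03", "A05", "A06"}                # FAIL -> not booked at all

def invoice_class(failed_a: set, b_count: int) -> tuple:
    if failed_a & PRECHECK_A:
        return ("Pre-check-FAIL", 0.0)   # no booking, no invoice consequence
    if failed_a & BRIEFING_A:
        return ("Briefing-FAIL", 0.0)    # invoice reduced to zero
    if b_count >= 7:
        return ("Citation-Buy", 1.0)
    if b_count >= 4:
        return ("Mixed-Buy", 0.6)        # corridor 0.5-0.7, midpoint assumed
    return ("Mention-Buy", 0.3)          # corridor 0.2-0.4, midpoint assumed

assert invoice_class(set(), 8) == ("Citation-Buy", 1.0)
assert invoice_class({"A02"}, 9) == ("Briefing-FAIL", 0.0)
assert invoice_class(set(), 3) == ("Mention-Buy", 0.3)
```

The ordering matters: a pre-check FAIL is tested first, because it prevents the booking and therefore never reaches the invoicing logic.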

A methodological side effect of the documentation density from Phase 02 is burden-of-proof preservation against the consideration presumption under § 5a (4) sentence 2 UWG. BGH I ZR 35/21 of 13 January 2022 (Influencer III) confirmed the presumption and the rebuttal burden of the advertiser: to rebut, it must be made plausible that no consideration was received for the statement. The documented procurement-standard chain — Phase 02 inventory procurement with transparent breakdown of bookings, the publisher arrangement and the price-coupling mechanics — is, on the current substantive state, a suitable form of burden-of-proof preservation for the telco principal vis-à-vis competition and consumer claimants; it does not replace a case-by-case appraisal in dispute, but makes it documentarily evidenceable.

The procurement standard addresses the criteria that are technically verifiable in the published placement. It does not cover all layers that are decided in a telco mandate between marketing order and compliance release. 6 operational layers — date logic, disclaimer structure, scopes of validity, source prioritisation, exception handling, country-specific grounding — act at the interfaces with TKG, EECC, DSA, GDPR and DORA and belong in the verification routine. Chapter 15 operationalises them in the Phase-03 editorial standard; Chapter 21 reflects them onto the two-role perspective.

12.5.1 In-house reading · the 18 criteria as an internal audit tool

The 18 criteria are formulated as the NB procurement standard for mandate work, but they also carry without an NB mandate as an internal audit tool for in-house teams with their own content offensive or their own advertorial procurement. The in-house application uses the same criteria structure in two modes.

A criteria as a pre-check gate before advertorial procurement. Before a publisher is booked, the planned placement runs through the 8 binary A criteria. An A-FAIL in the pre-check prevents the booking. The in-house team needs no NB relationship and no additional infrastructure for this — just the checklist itself (freely available in the procurement-standard document under Creative Commons BY-ND).

B criteria as quality control of running publications. Existing content-offensive articles or advertorials in the inventory are scanned against the 10 gradual B criteria. Missing B properties are documented as an optimisation backlog. The scan is manual or semi-automated self-assessment, without a negotiation component vis-à-vis publishers.

The in-house reading has three limits. First: the price-factor matrix from Chapter 14 acts as a contract lever only if price coupling is part of the publisher arrangement (Phase 02, Ch. 11.3); without contractual anchoring, the matrix remains a pure rating scale. Second: the audit chain from Chapter 12.6 presupposes documented log infrastructure, which is rarely present in the in-house standard setup. Third: the Phase-00 baseline measurement across 6 engines presupposes a measurement tool stack, which most in-house teams operate in partial form.

The in-house reading is thereby the entry-level variant that a team can operationalise within the first 90 days without external advice. The mandate reading is the full version that adds price coupling, audit chain and full measurement stack. Both use the same 18 criteria as core, but at different operational depth. The SEO demarcation from Chapter 1 applies in the in-house mode as well: the B criteria are not an SEO checklist, even when applied without an NB mandate.

Procurement standard · capstone diagram

18 criteria. One price coupling.

18
verifiable criteria
8 binary A criteria decide on eligibility. 10 gradual B criteria determine the lift.

The separation is operational, not semantic: a briefing-FAIL in the A class renders the placement worthless. A missing B property reduces the lever, not eligibility.

A class · eligibility Binary. Pass or fail. No intermediate stages. 3 pre-check · 5 briefing
A 01URL pathBriefing
A 02DOM disclosureBriefing
A 03Domain reputationPre-check
A 04Indexing statusBriefing
A 05Paywall statusPre-check
A 06Bot policyPre-check
A 07URL persistenceBriefing
A 08Outbound linksBriefing
B class · Lift Gradual. Each criterion raises the citation probability per engine. 10 briefing · additive
B 01BylineYext Q4 2025
B 02Schema markupNB methodology
B 03Substance ≥ 800 wAggarwal KDD 2024
B 04Citation hooksAhrefs 2025
B 05Front-loadingIndig 2026
B 06Definitive languageNB editorial
B 07Entity consistencyNB methodology
B 08Question headlinesNB methodology
B 09Listicle structureNB methodology
B 10UpdateNB methodology
Legal use · Phase 02 Documentation density is not a side effect. It is burden-of-proof preservation against the consideration presumption under § 5a (4) sentence 2 UWG (BGH Influencer III, I ZR 35/21).

12.6 Audit chain · verifiable release history

The documented procurement-standard chain mentioned in section 12.5 only becomes a robust evidentiary basis once its tamper resistance over time is secured. Prose documentation alone does not carry; it is retroactively adjustable. The audit chain supplements the documentation with three technical building blocks that together make a subsequent change of release history detectable.

Hash basis. Every release decision, every verification report from Stage 01 and Stage 02, and every Phase-04 report receives an SHA-256 hash. If the content changes after the fact, the newly computed hash deviates from the log entry. The manipulation becomes visible.

Append-only log. All hash entries sit in a log structure that allows only additions, no overwrites. Each entry carries timestamp, actor identifier, action and hash. The structure is machine-verifiable.

External anchoring. The log file is regularly committed to a Git commit history. The commit hashes sit outside the mandate infrastructure and form an external anchor against manipulation of the log file itself. Optionally, the chain is extended by a cryptographic signature per ECDSA or EdDSA, where regulatory requirements (DORA context) or mandate properties so suggest.

The minimal implementation with log-line format, Git commit schema and verification path is documented in Annex B.6. The audit chain is a component of the Phase-02 verification routine and is carried in the mandate as an operational infrastructure layer, not as a mandate extra.
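The three building blocks (hash basis, append-only log, external anchoring) can be sketched as a hash-chained log in which each entry commits to its predecessor, so a retroactive change invalidates every later entry. The line format is illustrative, not the binding Annex B.6 specification; Git and signature anchoring are omitted here:

```python
# Minimal sketch of the 12.6 audit chain: SHA-256 per entry, append-only,
# each entry chained to its predecessor's hash.
import hashlib, json

def append_entry(log: list, actor: str, action: str, content: bytes,
                 timestamp: str) -> dict:
    prev = log[-1]["entry_hash"] if log else "0" * 64
    body = {"ts": timestamp, "actor": actor, "action": action,
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "prev": prev}
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["entry_hash"] != recomputed:
            return False
        prev = e["entry_hash"]
    return True

log = []
append_entry(log, "editor-01", "stage02-release", b"verification report v1",
             "2026-04-01T09:00:00Z")
append_entry(log, "lead-02", "invoice-release", b"final invoice 890 EUR",
             "2026-04-02T10:30:00Z")
assert verify_chain(log)
log[0]["actor"] = "someone-else"   # retroactive manipulation...
assert not verify_chain(log)       # ...breaks every later entry
```

Committing the serialised log to a Git history then anchors the chain head outside the mandate infrastructure, as described above.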

The 6 disclosure variants V01–V06

Legal basis, retrieval effect, application matrix.


Advertising disclosure and retrieval suitability are two independent check axes that must be met simultaneously in every advertorial placement. The level on which legal compliance is established (visible banner, inline note, footer statement) is not the same level on which retrieval suitability is decided (URL path, schema markup, rel attributes). The chapter names 6 disclosure variants observed in market practice, V01 to V06, shows their classification on both axes, and marks the target corridor for mandate procurement.

13.1 Three-level logic of disclosure

A published advertorial placement can be assessed on three levels. Level A is legal disclosure — the visible banner, inline note, footer statement; it addresses the human reader and targets UWG § 5a (4), MStV § 22 and BGH case law (in particular I ZR 211/17 and I ZR 90/17). Level B is the technical structure — URL path, schema markup, indexing directives, rel attributes; it acts as a binary retrieval filter. Level C is the substantive content — word count, information gain, byline entity, front-loading; it is the dominant lift factor once Level B is met. The three levels are independent: a legally clean placement (Level A) can be technically (Level B) disqualified, and vice versa.

13.2 The 6 variants

Variant Description Legal compliance Retrieval effect
V01 "ANZEIGE" banner above the headline, large and contrasting. URL editorial (/ratgeber/). Article schema. Byline a real editor. Outbound rel="nofollow sponsored" Compliant (BGH-safe) Neutral to mildly negative
V02 Disclosure under the headline as a text note ("In cooperation with Brand X"). URL editorial. Article schema. Byline a real editor Compliant (BGH standard) Neutral
V03 Disclosure only in the footer. URL editorial. Schema. Byline Borderline to non-compliant (BGH precedent on subsequently inserted disclosure, documented warning-letter risk) Neutral
V04 Disclosure inline as the first sentence of the article. URL editorial. Schema. Byline a real editor Compliant Neutral to minimally negative
V05 Visually correct "ANZEIGE" disclosure, but URL /advertorial/, schema missing or AdvertiserContentArticle, byline "Editorial" or "[Brand] GmbH" Compliant (level A) Strongly negative to disqualifying (level B)
V06 Covert advertising without recognisable disclosure Non-compliant (UWG § 5a (4)) Indistinguishable for the LLM

The classifications follow BGH case law on UWG and MStV (as of April 2026); they are reproductions of documented substantive states, not independent legal interpretation. The TKG mandatory-information interlocking is orthogonal and is unfolded in 13.5; the systematic case-law foundation in 13.6.

13.3 Target corridor and exclusion

Three rules apply for mandate procurement. V02 and V04 form the target corridor — legally compliant, retrieval-neutral, compatible with the procurement-standard A classes A 01, A 02 and A 04; V01 is a permissible alternative for risk-averse publisher constellations, with a minimally negative retrieval tendency in Google-based engines (Chapter 8.3). V02 (disclosure under the headline as a text note) and V04 (disclosure inline as the first sentence) are fully compatible with A 02, because A 02 explicitly permits textual inline disclosure and only disqualifies the advertorial template (stand-alone DOM elements, badges, CSS wrappers); V02 and V04 thereby represent precisely the target corridor of textual inline disclosure that the Retrieval Study Claude Opus 4.6 (April 2026) identifies as legally compliant and retrieval-neutral. V05 is the typical fault case — visually correct disclosure with simultaneous Level-B disqualification; the verification workflow from Chapter 12.4 (Stage 02 briefing verification) identifies V05 before the final invoice, and under the three-class assignment from Chapter 14.2 (with the briefing-FAIL row differentiated in Chapter 14) the placement drops to Mention-Buy or Briefing-FAIL. V06 is categorically excluded — Northbridge does not work with covert advertising; the exclusion line is an extension of the Class 3 separation from Chapter 5 and belongs in the mandate frame condition, not in case-by-case examination.

13.4 Operational consequence

The 6 variants are the concrete publication forms in which the three-level logic from 13.1 appears in market practice. Mandate procurement does not address them as a menu of choices, but as a contractually agreed disclosure-duty combination with the publisher (Chapter 11.3), whose fulfilment is checked in Phase 03 (Chapter 15) before invoice release. In the DE telco sector, the TKG mandatory information of the advertising provider is added to the advertising disclosure of the chosen V01 to V05 variant — as an orthogonal duty layer with its own visibility requirements (section 13.5).

13.5 TKG mandatory information in the telco sector, orthogonal to V01–V06

The 6 disclosure variants V01 to V06 govern advertising disclosure by the publisher and follow UWG § 5a (4), MStV § 22 and DDG § 6 (1) No. 1. The Telekommunikationsgesetz contains no separate advertising-disclosure rule; its mandatory information addresses the advertising telco provider independently of the chosen disclosure variant. From this follow 5 categories of mandatory information that must appear in the main display of every DE telco advertorial independently of V01 to V05 (V06 is in any case categorically excluded).

First, the minimum-term display per § 55 (1) in conjunction with § 56 (1) TKG: the minimum contract term belongs in the main display, not in a footnote area. Second, the 12-month alternative per § 56 (1) sentence 2 TKG: the telco provider must offer a contract with a term of no more than 12 months — from this follows the operational question whether the advertised 24-month variant may be presented as the only term. Third, the bundle-component transparency per § 66 TKG in conjunction with § 55 (1) TKG: for bundle products, the price of individual components must be disclosed where these are also offered individually. Fourth, price transparency for premium-rate numbers per §§ 109 ff. TKG. Fifth, the compensation and provider-switch clause per § 55 (1) TKG.

A methodological consideration arises in addition: § 57 (4) TKG governs reduction and extraordinary termination rights for persistent or frequent significant deviations of actual from agreed performance. On the current substantive state, the norm is a contract-level duty, not an advertising-information duty; its inclusion as a sixth mandatory-information category is not compelling. For advertising with prominent term or speed promises, mention of the price-change and reduction mechanism is consistent with the transparency logic of §§ 54 ff. TKG and recommended on methodological caution; a definitive classification as a binding advertising mandatory-information item is left to legal validation (Chapter 24).

Advertising disclosure itself (marking the commercial purpose in the publisher inventory) is not TKG-driven but UWG § 5a (4), MStV § 22 and DDG § 6 (1) No. 1. The /sponsored/ path decision and the A 02 DOM-label check of the procurement standard (Chapter 12) follow this norm triad, not the TKG. The operational consequence of the visibility requirement on the TKG mandatory information — in particular the collision with the front-loading lever B 05 — is unfolded in Chapter 14, section 14.5, as a telco-specific price-factor modifier. Open interpretive questions on this duty layer (e.g. advertising in the sense of the TKG for purely editorial-style tariff descriptions; attribution logic in LLM citation without TKG mandatory information) are the subject of legal validation (Chapter 24).

13.6 Case-law connection, 6 leading BGH decisions for Compliance-GEO

The classifications in 13.2 and the mandatory-information categories in 13.5 rest on a supreme-court line consolidated between 2020 and 2025. 6 leading decisions carry the doctrinal foundation; their rights-leveraging application to the price-factor matrix and attribution architecture is unfolded in Chapter 14 (14.5 main-display duty, 14.6 strand differentiation, 14.7 norm hierarchy UWG/MStV/DDG).

Case reference and date Core statement Connection
BGH I ZR 96/19 · 25 June 2020 (LTE speed) Completeness duty for telco advertising with speed claims; uniform subject matter under § 5 / § 5a UWG Strand-A transferability, unfolded in 14.6
BGH I ZR 98/23 · 27 June 2024 (climate-neutral) Completeness duty for ambiguous advertising claims Foundation for tariff representations in generative answer windows (13.5)
BGH I ZR 164/23 · 11 July 2024 (nicotine-containing liquids) Information duties under § 5a (1) / § 5b (4) UWG on the basis of Art. 7 (5) UCP Directive Demarcation of advertising-claim vs. pre-contract-conclusion duty (13.5)
BGH I ZR 112/23 · 23 October 2024 (online marketplace liability) Platform liability from knowledge; perpetrator liability on systematic toleration Strand-B Bc disturber-liability analogy, unfolded in 14.6
BGH I ZR 53/24 · 23 January 2025 Continuation of the § 5a UWG line in the version in force since the 2022 UWG amendment Specifies the systematics of information completeness (13.5)
BGH I ZR 183/24 · 9 October 2025 (Netto/price reduction) Main-display duty, 30-day lowest price unambiguous; § 11 (1) PAngV on the basis of CJEU C-330/23 (Aldi Süd) Analogous application to § 56 (1) sentence 2 TKG, unfolded in 14.5

The convergent corridor stands: completeness duty for tariff advertising is consolidated; platform liability from knowledge is, per BGH I ZR 112/23, transferable to model vendors; the main-display duty has been substantially sharpened by BGH I ZR 183/24. A definitive case law on generative answer windows does not exist as of April 2026; the existing BGH lines are, on the prevailing reading, transferable to the Compliance-GEO context. The concrete transfer to LLM attribution constellations is the subject of legal validation (Chapter 24).

The 6 leading decisions are not undisputed. A methodological review of counter-lines (Northbridge-internal case-law map of April 2026) identifies three lines with partial robustness: the full-harmonisation line of UCP Directive 2005/29/EC (CJEU C-540/08, C-261/07 and C-299/07, C-304/08), the doctrinal differentiation between platform liability and own generation in liability attribution (BGH I ZR 112/23 addresses the platform operator, not advertised third-party companies), and the PAngV special-regime reading of the main-display duty. These counter-lines do not refute the leading line but sharpen the substantive state. For the transfer to generative answer windows, the consequence is: the prevailing line is workable, but a CJEU referral on the full-harmonisation question or an OLG divergence decision could shift the substantive state after Q3/Q4 2026.

13.7 Legal interlocking at a glance

The TKG mandatory information unfolded in 13.5 and the BGH line unfolded in 13.6 do not stand in isolation. 5 regulatory frameworks affect a DE telco advertorial simultaneously, each with its own scope, addressee and legal consequence. The synopsis that follows lays the layers side by side; it serves orientation for the legal validation (Chapter 24) and makes visible why a Compliance-GEO mandate in the telco sector serves 4 different addressee circles in parallel.

Regulation Scope Addressee Legal consequence Retrieval relevance
TKG §§ 54–57 (TKG 2021, April 2026 version) Telecommunications services; advertising with tariff, term and performance claims Advertising telco provider (main-display duty) Cease-and-desist claim; fine under § 228 TKG; warning letter from competitors and consumer associations High. Mandatory information must sit in the main display and collides with the front-loading lever B 05; unfolded in 14.5
UWG § 5a (4) B2C commercial communication; all advertising forms including advertorial Advertising business; the carriage may pass to the publisher (BGH I ZR 125/20, Influencerin II) Cease-and-desist claim; warning letter; costs depending on dispute value Indirect. Determines Level A of V01 to V06; Class 3 covert advertising (V06) is excluded
MStV § 22 Broadcasting and journalistic-editorial telemedia Media providers (publishers) Media-supervisory injunction; fine by the Landesmedienanstalten Indirect. Separation duty in the publisher inventory; flanks UWG § 5a (4) in editorial environments
DDG § 6 (1) No. 1 (DE implementation of the DSA) Online platforms and online intermediary services with advertising function Digital-service providers (platform operators) Administrative sanctions; injunction; fine (BNetzA as DSA coordinator for DE) Indirect. Real-time advertising disclosure at platform level; acts via Level-B discipline
NIS-2 (NIS-2-UmsuCG, as of April 2026) Network and information security at essential and important entities; telco operators are categorically captured as essential entities Telco providers as essential entities; management (personal responsibility) Fines up to EUR 10 million or 2 per cent of global annual turnover; management liability; reporting duties Low. Only indirect, via the durability of infrastructure and network-security claims in telco advertising; compliance frame, not disclosure duty

The 5 layers do not run redundantly. TKG §§ 54–57 address the advertiser with positive mandatory information; UWG § 5a (4) addresses the same advertiser with a disclosure duty; MStV § 22 addresses the publisher; DDG § 6 addresses the platform operator; NIS-2 frames the advertising infrastructure claim from the compliance side. The operational consequences for the price-factor matrix and the attribution architecture are unfolded in Chapter 14 (14.5 main-display duty, 14.6 strand differentiation, 14.7 norm hierarchy); the open interpretive questions are addressed in Chapter 24.

The price-factor matrix

Disclosure classification coupled to price factor.


The price-factor matrix is the final calibration level between the publisher list price and the actually invoiced final invoice. It couples two levers introduced separately in earlier chapters: criteria fulfilment at placement level (A and B classes from Chapter 12) and the mandate-specific model-blended factor at engine-mix level (Chapter 8). The matrix is not a discount tactic but the operational translation of retrieval mechanics into price language, and the actual commercial difference from classical media-agency logic.

Price-factor matrix · value cascade

Three classes, three factors · from list price to final invoice

INPUT · Publisher list price · 100 % gross booking price
THREE CLASS PATHS
CITATION-BUY · factor 1.0× · full value, citation and mention delivered
MENTION-BUY · factor 0.2–0.4× · brand recall without citation (Ahrefs 2025)
BRIEFING-FAIL · factor 0.0× · disqualification, no invoicing
OUTPUT · Final invoice · Citation-Buy 100 % · Mention-Buy 20–40 % · Briefing-FAIL 0 %
THREE-CLASS PATH · COUPLES BOOKING TO DEMONSTRATED EFFECT

14.1 Two calibration levels

The first calibration level is the criteria factor. It checks whether the published placement fully meets the 8 A criteria (eligibility, binary) and how many of the 10 B criteria (lift, gradual) are achieved. The second level is the model-blended factor. It weights the 6 retrieval engines per mandate-specific prioritisation and integrates the working hypotheses on engine-specific advertorial handling (Chapter 8.3). Both levels act multiplicatively on the list price and calibrate different fault sources — placement quality and reach geometry of the mandate.

14.2 Three-class assignment of the final invoice

The combination of A-class fulfilment and B-class count yields a three-class assignment of the final invoice.

Class Fulfilment profile Criteria factor Empirical anchor
Citation-Buy All 8 A + at least 7 of 10 B 1.0 × NB Retrieval Study (April 2026), Aggarwal et al. KDD 2024
Mixed-Buy All 8 A + 4 to 6 B 0.5 × to 0.7 × NB methodology, Ahrefs 2025 (n ≈ 75,000 brands)
Mention-Buy All 8 A + fewer than 4 B 0.2 × to 0.4 × Ahrefs 2025, Seer Interactive 2025
Briefing-FAIL FAIL on A 01, A 02, A 04, A 07 or A 08 0.0 ×, invoice reduction, publisher-controlled Procurement standard (briefing compliance)
Pre-check-FAIL FAIL on A 03, A 05 or A 06 Not booked, no invoice consequence, since the policy lies outside the briefing frame Procurement standard (publisher pre-check)

Mention-Buys carry residual value in the citation layer; Ahrefs documents that brand mentions without a link in LLM answers produce brand-recall effects. The residual value is invoiced at the mention price, not at the citation price; the separation prevents the common market mistake of procuring a mention service at the citation price and assuming a citation.

14.3 Formula and worked example

The final invoice for a citation placement follows a single multiplicative formula:

final_price = list_price × kriterien_faktor × modell_blended_faktor

Worked example (Citation-Buy, total market coverage). Publisher list price €1,000. All A + 7 of 10 B → criteria factor 1.0×. Principal without engine prioritisation → model-blended factor 0.89× (evenly distributed, as of April 2026). Negotiated price = 1,000 × 1.0 × 0.89 = €890. Principal sensitivity across disclosure class and engine mix remains systematically anchored in the three-class assignment from 14.2 and the model-blended factor from Chapter 8; a Mention-Buy with a Google priority drops on the maths to well under a quarter of the list price.
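The formula and the worked example translate directly into code. A one-line sketch using the document's own variable names; rounding to whole euros is an assumption:

```python
# The multiplicative final-invoice formula from 14.3.
def final_price(list_price: float, kriterien_faktor: float,
                modell_blended_faktor: float) -> int:
    return round(list_price * kriterien_faktor * modell_blended_faktor)

# Citation-Buy, even engine mix (model-blended factor 0.89, as of April 2026)
assert final_price(1000, 1.0, 0.89) == 890
# Mention-Buy at the lower corridor bound with the same engine mix:
# well under a quarter of the list price, as stated in the text
assert final_price(1000, 0.2, 0.89) == 178
```

Both factors act multiplicatively and independently, which is precisely the separation of fault sources described in 14.1.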

14.4 Operational consequence

The matrix is the enforcement form of the procurement standard vis-à-vis the publisher. It is agreed in Phase 02 as contractually binding final-invoice logic (Chapter 11.3) and is run through in Phase 03 for every published placement before invoice release. A publisher who does not deliver the criteria catalogue does not deliver a citation but a mention, and the fee follows actual performance, not the booking intent. Chapter 15 operationalises the editorial routine that ensures the B class is achieved on the principal’s side at all; Chapter 16 describes the controlled validation by which the price factors from section 14.2 and the model-blended hypotheses from Chapter 8.3 are empirically refined. In the DE telco sector, three sector-specific levels apply additionally: the front-loading vs. mandatory-information modifier (14.5), the attribution architecture for LLM recommendations (14.6), and the norm hierarchy between UWG, MStV and DDG (14.7).

Price-factor matrix · capstone diagram

From criteria and engine mix comes a price.

final_price = list_price × kriterien_faktor × modell_blended_faktor
Worked example €1,000 × 1.0 × 0.89 = €890 Citation-Buy · total market coverage
Class Factor Fulfilment profile Empirical anchor
Citation-Buy 1.0 ×
all A · at least 7 of 10 B
NB Retrieval · Aggarwal KDD 2024
Mixed-Buy 0.5–0.7 ×
all A · 4–6 B
NB methodology · Ahrefs 2025
Mention-Buy 0.2–0.4 ×
all A · fewer than 4 B
Ahrefs 2025 · Seer Interactive 2025
Briefing-FAIL 0.0 ×
FAIL on A 01, A 02, A 04, A 07 or A 08 · invoice reduction, publisher-controlled
Pre-check-FAIL not booked
FAIL on A 03, A 05 or A 06 · no invoice consequence, policy outside the briefing frame
DE telco sector layer In a telco mandate, an advertorial without mandatory information in the front-loaded zone disqualifies to 0.0 ×, irrespective of the A/B class profile. See §14.5 (TKG modifier).
What the analysis shows

The matrix translates retrieval mechanics into price language. Three delivery classes, three factor corridors: Citation-Buy 1.0× (full value), Mixed-Buy 0.5–0.7×, Mention-Buy 0.2–0.4× (brand recall without citation, Ahrefs 2025), Briefing-FAIL 0× (disqualification). The Mention-Buy residual value is not estimated but supported by the mentions correlation study with about 75,000 brands; the 0.89× in the worked example is the model-blended factor for an even engine mix, not a class factor. In a telco mandate, a TKG modifier applies in addition: a mandatory-information failure disqualifies to 0×, irrespective of the class profile.

How to use it

Procurement runs negotiations with the matrix as anchor: every booking is checked against the class assignment before the final invoice. A placement briefed as Citation-Buy that shows only mention effect in the measurement run is invoiced in the Mention-Buy corridor of 0.2–0.4×. A V05 disqualification from Chapter 13 drops into Briefing-FAIL and triggers 0× invoicing. The attribution architecture separates booking, delivery and effect as three independent measurement points.

Sector transfer

The three-class architecture applies across sectors; the Mention-Buy residual-value corridor is sector-invariantly supported via the Ahrefs study. The class-assignment thresholds shift sector-specifically: telco carries TKG mandatory information as a citation precondition, Financial Services the advisory-duty markers, insurance the VVG information duties, commerce the UWG/DSA disclosure markers.

Value for you

A procurement tool directly usable in vendor negotiations. Protection against silent value erosion through unrecognised Mention-Buys remunerated as Citation-Buys. An audit-capable documentation trail that checks economic substance and booking class of each placement against each other.

14.5 TKG-specific telco modifier, front-loading vs. mandatory-information visibility

In the DE telco sector, the front-loading lever B 05 from Chapter 12 — tariff core in the first 30 per cent of the page — meets the TKG mandatory-information placement from § 55 (1) in conjunction with § 56 (1) TKG. Minimum term, 12-month alternative, bundle-component transparency and compensation clause (Chapter 13, section 13.5) may not be less visible than the advertised tariff statement; a footnote solution or link offloading misses the visibility requirement and renders the advertorial unlawful, irrespective of the chosen advertising-disclosure variant V01 to V05. The operational compromise lever is the combined hero-plus-box construction: hero claim with tariff core front-loaded in the first 30 per cent, with a mandatory-information box of clear visual presence on the same style level immediately below.

The sharpening of the visibility requirement rests, on the current substantive state and per the prevailing reading, on a methodologically transferable analogous application of the main-display duty from BGH I ZR 183/24 of 9 October 2025 (Netto/price reduction) to § 56 (1) sentence 2 TKG (12-month alternative). Definitive analogy case law does not exist as of April 2026; the definitive classification is left to legal validation (Chapter 24). From this follows the telco modifier for 14.2: a DE telco advertorial without mandatory information in the front-loaded zone, or in an immediately adjacent box of equal visual presence, is disqualified at the legal level (Level-A disqualification from Chapters 2 and 13.5) and drops to the 0.0× factor, irrespective of the A/B class profile. The disqualification is not repairable, because a subsequent insertion of a mandatory-information box is practically not enforceable vis-à-vis the publisher; in the telco mandate, the verification workflow from Chapter 12.4 is therefore extended by a sector-specific visibility check (Phase 03, Chapter 15).

14.6 Attribution architecture for LLM tariff recommendations, Strand A and Strand B

The attribution question for an LLM tariff recommendation without complete TKG mandatory information is, on the current substantive state, to be differentiated along two strands. The differentiation carries the question of whether the price-factor matrix and the telco visibility-check extension from 14.5 reach through to the LLM answer.

Strand A, active LLM use by the telco principal. Where the telco actively deploys an LLM system as a marketing channel (own chatbot, shop AI, LLM-supported advertising texts), the LLM output is, on the prevailing reading, a commercial act of the telco within the meaning of § 2 (1) No. 2 UWG. The AI has no legal personality of its own; use by the telco is attributed to it (CMS commentary March 2024, IT-Recht-Kanzlei June 2024, in line with UWG doctrine on agent attribution under § 8 (2) UWG). BGH I ZR 96/19 of 25 June 2020 (LTE speed) is directly transferable to Strand-A cases; the mandatory-information visibility from 13.5 and the hero-plus-box construction from 14.5 are principal-side requirements on the prompt and answer frame of the deployed LLM system. Strand A follows the price-factor matrix from 14.2 without modification; breaches disqualify per 14.5.

Strand B, autonomous third-party LLM recommendation without contractual relationship. Where an LLM, autonomously and without an order relationship with the telco, recommends a tariff (retrieval answer in Claude, GPT, Gemini, Perplexity without embedding in telco-owned channels), attribution to the telco is not given; absent active deployment, the autonomous LLM answer is not the telco’s commercial act within the meaning of the classical UWG advertising definition. Three sub-lines are to be distinguished: (Ba) primary liability with the model vendor as operating entity, telco not addressee; (Bb) on a systematic favouritism strategy of the telco, an indirect attribution is not excluded, but is not as of April 2026 settled at supreme-court level; (Bc) disturber-liability analogy per BGH I ZR 112/23 of 23 October 2024 (online marketplace liability) — the model vendor is subject to a blocking and correction duty from knowledge of a clear infringement, the telco is subject to a notification duty when noticing a faulty recommendation. The § 7 DDG privilege (§ 7 DDG in conjunction with Art. 4 to 8 DSA) does not, on the prevailing reading, bite when the model vendor actively generates answers (DFN Rechtsstelle, May 2024; CJEU Kinderhochstühle line on the active role).

The Union-law basis for Strand B is the consolidated CJEU line on the active vs. passive role of host providers — CJEU C-236/08 to C-238/08 Google France, C-324/09 L’Oréal/eBay, C-682/18 and C-683/18 YouTube/Cyando. Liability privilege thereunder bites only on a purely technical-passive role; advertising, optimisation or systematic toleration break the privilege. The Glawischnig-Piesczek line (CJEU C-18/18) supports the admissibility of word- and meaning-equivalent filter orders also against host providers and is substantively load-bearing for the question of whether model vendors can be obliged to systematic correction on known faulty citation.

A methodological review of the load-bearing capacity of the three sub-lines (Northbridge-internal attribution map of April 2026) calibrates the substantive state more precisely. The Ba line holds, on the current substantive state, with partial robustness; 5 independent supports converge:

  • BGH doctrine on agent liability under § 8 (2) UWG requires a commission or cooperation relationship (BGH "Google Ads" I ZR 28/25 of 11 March 2026 expressly maintains the commissioning precondition, in distinction from BGH "Liability for Affiliates" I ZR 27/22 of 26 January 2023)
  • the CJEU line on the active role addresses the model vendor, not the advertised third-party company
  • consumer-protection warning-letter and litigation practice knows no attribution of autonomous LLM outputs to third-party beneficiaries
  • the DSA/DDG regime locates responsibility with the provider, not with the passively mentioned party
  • the AI Act regime, with its provider/deployer dichotomy under Art. 3 No. 3 and No. 4 of Regulation (EU) 2024/1689, structurally relieves the passively mentioned party, because the deployer preconditions require a use under one's own responsibility, which is not met without active embedding

The Bb line wobbles in the pure no-contract constellation: the § 8 (2) UWG analogy requires, per BGH doctrine, integration into the operating organisation, a possible determining influence and an accruing benefit to the business activity, elements that are not met in the pure no-contract constellation. In a borderline case of a documentable, targeted favouritism strategy with direct contact or coordinated cooperation, it can gain traction, but then approaches Strand A.
The Bc line is to be distinguished in two directions: Bc against the model vendor is workable — the CJEU line on the active role, the rationale from CJEU “Russmedia” C-492/23 of 2 December 2025 (host privilege does not relieve from sectoral protective duties) and the AI Act provider duties support an independent unfair-competition responsibility of the LLM vendor for incomplete mandatory information in outputs. Bc against the telco, the mirrored notification duty for noticed faulty recommendations, is doctrinally constructible upon qualified knowledge but remains without precedent and is to be marked as doctrinal new ground.

Operational consequence of the strand differentiation. The price-factor matrix bites in Strand A without modification; Compliance-GEO measures sit fully in the telco’s responsibility. In Strand B the matrix does not reach through directly; the Compliance-GEO methodology acts there as structural optimisation (Class 1 from Chapter 5) that raises the LLM citation probability without establishing a direct attribution chain to the telco’s advertising responsibility. The definitive attribution classification for Strand Bb (systematic favouritism strategy) is the subject of legal validation (Chapter 24).

14.7 Norm hierarchy: § 5a (4) UWG / § 22 MStV / § 6 DDG

The advertising-disclosure duty follows three parallel norms — § 5a (4) UWG (concealment of the commercial purpose), § 22 (1) MStV (media-law separation requirement), § 6 (1) No. 1 DDG (disclosure requirement for commercial communication in digital services). BGH case law has, on the current substantive state, structured the hierarchy clearly. BGH I ZR 125/20 of 9 September 2021 (Influencerin II) determines: § 6 (1) No. 1 TMG (now § 6 (1) No. 1 DDG) takes precedence over § 5a UWG as a special provision; § 58 (1) sentence 1 RStV (now § 22 (1) sentence 1 MStV) likewise takes precedence over § 5a UWG as a special provision; § 5a (4) UWG is the more general provision and is therefore subordinate. The national norm hierarchy operates within the frame of the UCP Directive 2005/29/EC, which per CJEU case law has full-harmonisation effect (CJEU C-540/08 Mediaprint, CJEU C-261/07 and C-299/07 VTB-VAB and Galatea, CJEU C-304/08 Plus Warenhandelsgesellschaft). National tightening beyond the harmonised protection level is structurally limited; § 5a (4) UWG as catch-all norm moves within the union-law-set frame.

The operational consequence for the price-factor matrix: For a DE telco advertorial in a digital service (broadcast-free online medium, portal, editorial web offering), § 6 (1) No. 1 DDG bites primarily for the advertising disclosure; for broadcast-like content (video advertorials, streaming platforms in the MStV scope), § 22 (1) MStV bites primarily; § 5a (4) UWG remains the catch-all norm for facts outside the DDG and MStV scopes. The classifications of disclosure variants V01 to V06 from Chapter 13.2 follow this hierarchy; the terms used there — "compliant", "borderline" and "non-compliant" — rest on the lex specialis rule per Influencerin II. The clarification is relevant for the classification systematics in 14.2 (disqualification level), because a norm-competition error in the legal argument can lead to misallocation of the duty source; the disqualification itself is unaffected.

The Phase-03 editorial standard: 6 operational layers

Front-loading, definitive language, citation hooks, entities, schema, release.

Northbridge
As of May 2026
Publication

B-class fulfilment from Chapter 12 does not arise automatically with the publisher commission but is editorially worked out in Phase 03. 6 operational layers act here at the interfaces with TKG, EECC, DSA, GDPR and DORA — at points where a legally clean placement can still become editorially retrieval-effective or rejection-prone. The chapter describes the 6 layers, the flanking levers front-loading (B 05) and definitive language (B 06), and the release procedure in which they take effect.

15.1 The 6 operational layers

Layer | Editorial check question | Regulatory anchor level
Date logic | Is the as-of date of the statement clearly visible, and does it match the dateModified entry? Is the update frequency documented? | UWG § 5 (misleading through outdated information), TKG mandatory information
Disclaimer structure | Are risk, scope and validity notices placed where they must be regulatorily effective, and not in a footnote below the chunk end? | MStV § 22, DSA Art. 26 ff., sector-specific obligations
Scopes of validity | Is the scope of the statement (geographic, temporal, product) explicitly bounded, or does the wording suggest universal validity it does not have? | UWG § 5, GDPR territoriality rules, BEREC roaming rules
Source prioritisation | Is it recognisable which source supports which sub-statement? Are primary and secondary sources marked separately? | DSA Art. 14 (media-privilege demarcation), scientific citation hygiene
Exception handling | Are exception cases named for which the statement does not apply, or is the majority case implicitly presented as the full case? | Consumer-protection case law on material omissions
Country-specific grounding | Is the statement tied to the DE jurisdiction, DE price disclosure, DE provider landscape, or is an EU or global state assumed that does not carry in the DE context? | TKG, TTDSG, BNetzA orders, NIS-2 transposition acts

The 6 layers are run as a checklist before each publication in the Phase-03 routine; they are not alternative but cumulative. An article that is clean at the level of A and B criteria of the procurement standard can nonetheless be rejected at one of the 6 layers — typically by the principal’s compliance side, rarely by the publisher’s editorial team.
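
The cumulative character of the checklist can be sketched as a simple release gate. Layer names follow the table above; the data shape and function name are illustrative:

```python
# Sketch of the cumulative Phase-03 layer check from 15.1: the layers
# are not alternative but cumulative, so a single failing layer blocks
# release. All identifiers are hypothetical.

LAYERS = [
    "date logic",
    "disclaimer structure",
    "scopes of validity",
    "source prioritisation",
    "exception handling",
    "country-specific grounding",
]

def release_check(results: dict) -> list:
    """Return the list of failing layers; an empty list means releasable."""
    missing = [layer for layer in LAYERS if layer not in results]
    if missing:
        raise ValueError(f"unchecked layers: {missing}")
    return [layer for layer in LAYERS if not results[layer]]
```

The deliberate design choice in the sketch: an unchecked layer is an error, not a pass, mirroring the rule that the checklist is run in full before each publication.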

15.1.1 Telco-specific anchoring of layers 4, 5 and 6

The layers source prioritisation (Layer 4), exception handling (Layer 5) and country-specific grounding (Layer 6) carry sector-structural substance in the DE telco sector that is concretely operationalised in the Phase-03 routine. 6 schema properties form the engineering mirror layer to editorial work (technical implementation in Chapter 22.1).

Brand disambiguation (Layer 4 source prioritisation). DE telco portfolios typically run several brands in parallel — network brand and tariff brand. Generative models systematically confuse this allocation. Machine-readable separation via parentOrganization and subOrganization plus BrandDetails with owner is the schema substance prescribed to editorial in Phase 03; it is supplemented in the FAQ schema by explicit text passages that the model can quote on follow-up. The author-entity disambiguation from the Yext Search Experience Benchmark Q4 2025 (17.2 million citations) confirms across sectors that entity separation is a top-5 selection factor; the telco brand architecture is the DE-specific application.

Listicle and category-page geometry (Layer 5 exception handling). Engine-specific content-type preferences are documented in industry research: ChatGPT favours listicles (around 52 per cent listicle share in the high-citation bucket), Google AI Mode favours category and product pages (Wells, Peec.ai 27 February 2026, over 1 million harmonised citations). In DE telco Phase 03 this has an asymmetric consequence: tariff listicle structures serve ChatGPT, availability category pages serve Google AI Mode. Editorial Layer 5 turns this asymmetry into a mandatory question per publication, not an optimisation recommendation.

Address-precise availability and EECC core parameters (Layer 6 country-specific grounding). Three schema properties carry the DE telco specifics of Layer 6. areaServed and serviceLocation make regional availability of fixed-line and fibre offers machine-readable; address-precise availability APIs become readable into retrieval via the schema. priceValidUntil and speedLimit capture the EECC core parameters (monthly price, speed) as dynamically updated terms that sit in a coherent retrievable block, structurally hard to separate, supporting the TKG mandatory-information context from Chapter 13.5 in Phase 03. isRelatedTo with discount property makes bundle benefits retrievable as data structure rather than as marketing prose; and offers.priceSpecification.billingIncrement places device-financing monthly instalments into tariff-context schemas, underpinning the final-invoice logic from Chapter 14 in editorial preparation.

The 6 schema properties are implemented in Chapter 22 (engineering substance) as a technical work field; their editorial application in Phase 03 is the subject of layers 4, 5 and 6 of this chapter. Aggarwal et al. KDD 2024 (Position-Adjusted Word Count) anchors front-loading (B 05) as retrieval mechanics that has its structural mirror in this schema layer: the first 30 per cent of the page are retrieval-empirically privileged; the schema properties ensure that the cited statement remains embedded in the right entity context.
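
Several of the schema properties named above can be sketched as a JSON-LD fragment, here built as a Python dict. All concrete values are invented placeholders, and validation against the schema.org vocabulary remains the engineering work of Chapter 22:

```python
import json

# Illustrative JSON-LD sketch combining a selection of the schema
# properties named in 15.1.1. All values are invented placeholders.
tariff_markup = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "name": "Example fibre tariff",            # placeholder tariff name
    "areaServed": "DE-BY",                     # Layer 6: regional availability
    "priceValidUntil": "2026-09-30",           # EECC core parameter as dated term
    "priceSpecification": {
        "@type": "UnitPriceSpecification",
        "price": 39.99,
        "priceCurrency": "EUR",
        "billingIncrement": 1,                 # monthly-instalment context
    },
    "offeredBy": {
        "@type": "Organization",
        "name": "Example tariff brand",        # Layer 4: brand disambiguation
        "parentOrganization": {
            "@type": "Organization",
            "name": "Example network brand",
        },
    },
}

# Serialised form as it would be embedded in a script tag on the page.
jsonld = json.dumps(tariff_markup)
```

The fragment shows the structural point of the chapter: price, term and brand allocation sit in one coherent retrievable block rather than scattered across prose.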

15.2 Flanking editorial levers: B 05 and B 06

The 6 layers are flanked by two B criteria of the procurement standard that bite directly on the editorial side. Front-loading (B 05) requires the core statement to sit in the first 30 per cent of the text — empirically anchored in the retrieval chunking geometry of generative systems (Indig 2026, 44.2 per cent of ChatGPT citations from the first 30 per cent of the page). Definitive language (B 06) requires declarative form rather than modal-verb softening; expressions such as "possibly", "tendentially", "could" are used only where uncertainty is objectively warranted. The two levers act against different retrieval failures: B 05 against chunking losses, B 06 against model preference for definitive sources. Neither lever replaces the 6 layers from 15.1; together they form the editorial baseline for Phase-03 publications.
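
The B 06 lever lends itself to a simple lint pass. The softener list below is illustrative, seeded from the examples in the text; a production check would carry a curated, language-specific list:

```python
import re

# Sketch of a B06 lint: flag modal-verb softeners that the criterion
# treats as citation-weakening. The word list is an illustrative
# assumption, not the NB standard list.
SOFTENERS = ["possibly", "tendentially", "could", "might", "perhaps"]

def b06_flags(sentence: str) -> list:
    """Return the softeners found in a sentence (case-insensitive, whole words)."""
    found = []
    for word in SOFTENERS:
        if re.search(rf"\b{word}\b", sentence, flags=re.IGNORECASE):
            found.append(word)
    return found
```

A flagged sentence is not automatically rewritten: per B 06, softeners remain legitimate where uncertainty is objectively warranted, so the lint output is an editorial prompt, not a hard failure.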

15.3 Release procedure in Phase-03 daily practice

Every publication runs through a three-point release procedure. First: editorial first release per the 6 layers from 15.1 plus B 05 and B 06, by the NB editorial team on behalf of the principal. Second: compliance release by the principal’s compliance officer, focusing on the three layers that bite regulatorily in the specific publication context (for tariff statements typically date logic, disclaimer structure and scopes of validity; for network-test references typically source prioritisation and exception handling). Third: publisher release, confirming technical fulfilment of the procurement standard’s A and B criteria. Only after the third point is the publication scheduled. The procedure is anchored in the Phase-02 contract arrangement (Chapter 11.3) as a delivery sequence, not as an option.

15.4 Operational consequence

The Phase-03 editorial standard is the interface where marketing order, retrieval mechanics and compliance release meet. It is also the point at which Compliance-GEO is operationally demarcated from classical content services — through the 6 layers as standard checks, not case-by-case clarification. Chapter 16 describes the hypothesis validation by which editorial rules are refined against the measurement infrastructure from Chapter 7; Chapter 21 reflects the 6 layers onto the two-role perspective between marketing and compliance, which becomes concretely effective in release practice.

15.5 Two structured information models per principal domain

The 6 editorial layers from 15.1 presuppose that the content to be edited already exists. The upstream question of which statements may enter editorial at all and which may not is just as decisive in Phase-03 practice as the editorial layers themselves. Two structured information models are maintained per principal domain.

Positive model · mandatory-information-compliant, citable statements. In the DE telco sector these are, for example, complete TKG-compliant tariff descriptions with total monthly amount, minimum term, notice period and device binding. The positive model is the statement library from which editorial building blocks are formed. It is not static but is updated quarterly against TKG amendments, price-list changes and BNetzA-order interpretations.

Negative model · sector-regulatorily incompatible statements. In the DE telco sector these are, for example, monthly prices without minimum-term disclosure (breach of § 5a (4) UWG), "today only" wording without temporal concretion (breach of § 6 PAngV), comparative statements without reference markers (breach of § 6 UWG), or blanket network-quality claims without reference to BNetzA measurement methodology. The negative model is not the absence of the positive model; it is its own structured statement list that acts as an exclusion criterion in editorial release.

Both models are preconditions for the 6 editorial layers from 15.1, not their result. An article can meet front-loading, schema markup and definitive language perfectly and still fall through the negative model if a single sector-regulatorily incompatible statement is included. The models are initialised in the Phase-00 kick-off per mandate and enter the Phase-03 editorial daily practice as structured reference documents.
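
The exclusion character of the negative model can be sketched as a screening pass. The list entries and the naive substring matching are illustrative; a production screen would operate on normalised statements rather than raw text:

```python
# Sketch of the 15.5 negative-model screen: a draft passes only if no
# statement matches an entry of the negative list. Entries are
# illustrative, drawn from the examples in the text.

NEGATIVE_MODEL = [
    "today only",      # temporal urgency without concretion
    "best network",    # blanket network-quality claim without methodology reference
]

def passes_negative_model(draft: str) -> bool:
    """False as soon as a single negative-model entry appears in the draft."""
    text = draft.lower()
    return not any(entry in text for entry in NEGATIVE_MODEL)
```

The sketch mirrors the rule stated above: a single incompatible statement fails the draft, regardless of how well it scores on front-loading, schema or definitive language.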

Hypothesis validation through controlled test design

Advertorial control pairings and recalibration loop.


Several working hypotheses of the study are not deductively provable but only refinable via controlled measurement: the engine-specific price factors from Chapter 8.3, the three-class assignment thresholds from Chapter 14.2, the effect of the 6 editorial layers from Chapter 15.1 and the B-criterion composition from Chapter 12.3. The chapter compactly describes how validation is methodologically laid out; the technical execution (sample sizes, statistical tests, measurement apparatus) is gathered in Annex B.

16.1 What is validated

4 hypothesis classes are tracked separately. Price-factor hypotheses (Ch. 8.3): the engine-specific values from 1.0× to 0.7× are derived from documented retrieval architecture plus model self-reporting and are recalibrated per mandate against measured citation frequency. Criteria-effect hypotheses (Ch. 12.3): the 10 B criteria are additively weighted, but the actual effect per criterion is not symmetrical — front-loading (B 05) has empirically stronger evidence than question headlines (B 08). Layer-effect hypotheses (Ch. 15.1): the 6 operational layers act regulatorily, but their retrieval-side effect differs — date logic and country-specific grounding have higher measurement evidence than source prioritisation in direct citation comparison. Validation-metric hypotheses (Annex B.4.6): the 4 metrics decision threshold, correlation predicted vs. actual, false-negatives rate and GBM loss carry their own reference-value hypotheses, refined per mandate. Reference correlation above 0.6 is taken as workable, false-negatives rate below 15 per cent as acceptable. The refinement of these reference values per mandate is an integral part of hypothesis validation and distinguishes an NB measurement architecture from aggregated visibility scores without a validation layer.
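
The two reference values of the validation-metric hypotheses (correlation above 0.6, false-negatives rate below 15 per cent) can be sketched as a plain-Python check. Names and inputs are illustrative; the actual apparatus is documented in Annex B:

```python
# Sketch of the two reference-value checks from 16.1: correlation of
# predicted vs. actual citation frequency above 0.6, false-negatives
# rate below 15 per cent. Identifiers are hypothetical.

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def metrics_workable(predicted: list, actual: list,
                     false_negatives: int, positives: int) -> bool:
    """True if both reference-value hypotheses hold on this measurement wave."""
    corr_ok = pearson(predicted, actual) > 0.6
    fn_ok = (false_negatives / positives) < 0.15
    return corr_ok and fn_ok
```

Per 16.1, the thresholds 0.6 and 0.15 are themselves working hypotheses that are refined per mandate, so in practice they would be parameters, not constants.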

16.2 Design logic

The test design follows the principle of controlled variation: one variable per measurement wave is changed, all others are held constant. The measurement infrastructure from Chapter 7 (Share of Model Voice, prompt clusters, cadence) provides the baseline; the intervention is applied to a defined article cohort; the re-measurement after at least 4 measurement waves with seasonality correction shows the effect. The typical mandate cycle allows two to three controlled tests per quarter; the cumulative results are consolidated semi-annually into a recalibration of the working hypotheses from Chapter 8.3, 12.3, 14.2 and 15.1. Validation is therefore not a one-off act but a feedback loop, methodologically consistent with Phase 04 from Chapter 11.

16.3 External validation layer

The NB-internal measurement architecture carries operational validation in the mandate. Lifting mandate-bound findings to study-grade statements across multiple mandates is methodologically a second layer; it raises statistical-methodological questions that an academic advisory unit is structurally better placed to carry than a consultancy. A follow-up programme is sketched for this second layer.

4 methodological fields are laid out in the programme proposal:

  • Power analysis for sample scaling beyond the 1,800-sample architecture from Annex B, with sector- and engine-stratified effect-size estimation
  • Stratification check across sectors and engines, including reliability analysis of citation classification against independent re-classifiers
  • Mixed-effects architecture (or alternatively Generalized Estimating Equations) for cross-mandate aggregation, cleanly carrying the mandate-bound hierarchy of samples
  • Pre-registration discipline as co-validation of the registration routine already implemented in Annex B, with an external statistics reviewer per wave

Status May 2026: the programme is laid out as a cooperation outline with an academic advisory unit; it is not an ongoing research relationship, not a completed validation run, not an anticipated study-result layer. The methodological fields named here are the open topics for an initial conversation and are detailed in the NB Research Cooperation Outline, Topic A.

16.4 Limits and reference to Annex B

Three limits are to be named. First, the model landscape is unstable: a model release between two measurement waves can invalidate a validation run (the Semrush 13-week study 2025 documents corresponding shifts). Second, isolating a single variable in a live retrieval system is achievable only approximately; the residual variance remains part of the measurement statement. Third, validation is mandate-bound: the refined values apply initially to the specific principal and are lifted to study-grade statements only on convergent findings across multiple mandates. The external validation layer sketched in 16.3 addresses the third limit methodologically without resolving it operationally at the present time. Sample sizes, statistical test procedures and the prompt-corpus build logic are documented in Annex B.

Sovereign AI in the European telco sector

Sovereignty spectrum, operator initiatives, compliance link.


The term Sovereign AI denotes the capacity of a country or region to operate AI infrastructure, models and data under its own jurisdictional control. In the European telco sector, the term moved between 2024 and Q1 2025 from a political demand to an operational investment category. The McKinsey report AI infrastructure: A new growth avenue for telco operators of 28 February 2025 documents 4 European operator examples in which Sovereign AI is no longer a statement of intent but an ongoing project.

17.1 4 European operationalisations (as of Q1 2025)

Operator (country) | Core initiative
Telenor (Norway) | Sovereign-AI platform for the Nordic region, cooperation with NVIDIA
Swisscom (Switzerland) | Swiss AI Platform for data storage and processing within Switzerland
Telefónica Tech (Spain) | 10 AI specialist centres, over 400 AI professionals, genAI platform for virtual-assistant development
BT (United Kingdom) | Managed network services with Fortinet for public sector and enterprise customers, integrated security architecture

The 4 examples differ in their structural shape. Telenor is clearly infrastructure-side and positions itself as a regional vendor for the Nordics. Swisscom holds a special position through Swiss data-export rules and addresses a national jurisdiction requirement. Telefónica Tech is organised as a standalone group company and has a portfolio logic that goes beyond Sovereign AI. BT addresses the public sector as a customer, with a security primacy.

17.2 What the examples are not

This study makes no statement about which of the 4 models is transferable to the German market, which German network operator should build a comparable position, or how the group structure of a German provider should be organised. Such statements would breach the self-limitation of this study (no operator assessments, see foreword). The 4 examples are reference architectures against which sector reality can be anchored, not templates projected onto German Tier-1 operators.

17.3 Connection point with Compliance-GEO

Sovereign AI and Compliance-GEO meet at a point that is rarely named precisely in the public debate: the question of which jurisdiction a generative answer is produced in and which retrieval engines are reachable in it at all. Where a German operator builds or uses Sovereign-AI structures, the question "which of the 6 retrieval engines (Chapter 9) are available in this jurisdiction, under what conditions, with what data-flow paths" is operationally immediate. Country-specific grounding as one of the 6 operational layers from Chapter 15 (Phase-03 editorial standard) is the editorial form of this question. The measurement logic from Chapter 7, in particular the citation-quality dimension with its completeness axis, is the operational form in which jurisdiction-specific visibility shifts become measurable. Sovereign AI frames both regulatorily without replacing them.

For Compliance-GEO mandate procurement, Sovereign AI has four immediately operational consequences. First, jurisdiction determines which of the 6 retrieval engines from Chapter 9 are admissible in the procurement standard; for public-sector or state-participating telco customers, the US hosting status of an engine can become an exclusion criterion, which directly touches the mandate frame condition (Chapter 11.3). Second, the briefing workflow from Chapter 15 (Phases 01 and 02) must document the data-flow path that an advertorial traverses in the retrieval process; this documentation links with briefing verification from Chapter 12.4. Third, Sovereign AI frames the load-bearing capacity of retrieval evidence from Chapter 7: a citation measurement whose payload runs through non-EU jurisdictions is less robust before German supervisory or judicial bodies than a measurement under EU sovereign conditions, which feeds into the price-factor matrix from Chapter 14 as an evidence-risk dimension. Fourth, the Sovereign AI question reaches into tariff-rendering accuracy: generative answers transport tariff facts without an explicit verification mechanism, with observed outliers in term/price assignments in shopping-assistant answers (small-sample observation, documented in the Q1 2026 mandate audit). Tariff-rendering correctness is thereby an independent measurement variable of the completeness axis from Chapter 7.4 and a potential risk under the supply-chain service-provider question of § 165 (2a) No. 4 TKG. Sovereign AI is therefore not infrastructure decoration but a frame in which procurement rule, briefing discipline, evidence quality and tariff-rendering supervision receive operational limits.
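
The jurisdictional admissibility filter over the retrieval engines can be sketched as follows. Engine names and jurisdiction tags are invented assumptions for illustration, not statements about actual vendor hosting:

```python
# Sketch of the 17.3 jurisdiction filter: for sovereignty-sensitive
# principals, engines are screened by hosting jurisdiction before they
# enter the procurement standard. All entries are hypothetical.

ENGINES = {
    "engine_a": "US",   # illustrative hosting tag, not a vendor statement
    "engine_b": "EU",
    "engine_c": "EU",
}

def admissible_engines(allowed_jurisdictions: set) -> list:
    """Engines whose hosting jurisdiction sits in the allowed set, sorted."""
    return sorted(e for e, j in ENGINES.items() if j in allowed_jurisdictions)
```

In a mandate, the allowed set would be derived from the principal's sovereignty requirements (Chapter 11.3), and the resulting engine list would scope both the procurement standard and the measurement cadence from Chapter 7.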

4 strategic paths in the telco AI market

Consumer connectivity, B2B, AI service operator, AI integrator.


The study so far treats Compliance-GEO as a methodology of consumer visibility in generative answers. In parallel, the German Tier-1 operators have, since 2024, been building AI infrastructure that carries an independent B2B business model and neither competes with nor derives from Compliance-GEO. The McKinsey report of February 2025 describes this parallel market along 4 strategic paths. This chapter recounts the 4 paths as sector context, not as a strategy recommendation.

18.1 The 4 paths in compact form

Path | Core mechanic | Addressable market size (McKinsey estimate, as of Q1 2025)
Fibre connectivity for new data centres | Fibre access for new hyperscaler and colocation sites | Global opportunity USD 30 to 50 billion
Intelligent network services | Software-defined networks with AI workload routing, egress-cost management | Global egress-cost market USD 70 to 80 billion annually (Gartner-cited figure)
Space-and-power monetisation | Letting unused data-centre and exchange capacity to hyperscalers and GPUaaS providers | Strongly regionally variable; no global market-size figure in the source report
GPUaaS (GPU-as-a-Service) | Provision of high-performance GPU clusters for AI inferencing and training | Addressable telco market share USD 35 to 70 billion by 2030, RoIC range 6 to 14 per cent

The market sizes are estimates from the McKinsey demand model and, when cited within the sector, are tagged with an as-of date (Q1 2025). The growth assumptions rest on 22 per cent annual growth in data-centre demand to 2030 and on a forecast tripling of global power demand to 170 gigawatts.

18.2 Why these paths sit in this study

A study on Compliance-GEO in the German telco sector need not fully illuminate the context of the telco AI market. But it cannot present itself as if Compliance-GEO were the central AI building block of a network-operator group. It is not. The 4 McKinsey paths add up to an addressable market that exceeds the sum of all communications budgets in the Tier-1 segment by orders of magnitude. The two-role perspective (Chapter 21) becomes operationally workable only where the orderer from the marketing line and the release partner from CISO, compliance or CDO functions jointly understand the order of magnitude in which Compliance-GEO sits in the group architecture: as an independent consumer-visibility layer in parallel to a markedly larger infrastructure strategy.

18.3 Categorical separation, operational rule

The 4 paths describe telco AI infrastructure as a B2B business model of the operators. Compliance-GEO describes consumer visibility in generative answers. Both are present in the same group, but they operate in different budget lines, with different release stakeholders, and within different contract frames. The operational rule that follows: a Compliance-GEO mandate is not financed from an AI infrastructure budget and is not released through AI infrastructure governance. The dividing line is not academic; it decides whether the mandate finds the right internal sponsor. Chapter 21 operationalises this separation at stakeholder level.

Operational consequence for the Compliance-GEO mandate. The 4-path categorisation has three immediate consequences for Compliance-GEO mandate procurement. First, the principal’s path classification determines the procurement-standard context: a consumer-connectivity mandate operates with different visibility goals than a B2B AI-service-operator mandate. Second, the principal’s path is not a fixed quantity; Tier-1 operators move across the 4 paths, and the mandate configuration must reflect this movement. Third, paths 3 and 4 (AI service operator, AI integrator) are still sparsely occupied in the DE telco reality; mandates in these paths have small comparison pools, which methodologically complicates baseline measurement and necessitates mandate-specific calibration of the working hypotheses from Chapter 8.3.

Code of Conduct as industry standard

6 vendor frameworks as a reference layer.

Northbridge
As of May 2026
Publication

The ethical layer of the three-layer compliance architecture from Chapter 3 methodologically carries the Northbridge Class 3 dividing line from Chapter 5. This dividing line does not stand in isolation. 6 AI vendors have published Code-of-Conduct and policy frameworks between 2018 and 2026 that, in sum, form an industry-standard layer — voluntary, not legally binding, but established in sector perception as a reference canon. This chapter describes the state of the layer and its connection point with Compliance-GEO (generative engine optimisation) in the German telco context.

19.1 Why CoC frameworks are read as an industry standard

The 6 frameworks are standard-effective in three respects. First, OpenAI, Anthropic, Google and Microsoft jointly cover the dominant share of globally addressed generative inference capacity — they are the model or integration layers of the 6 retrieval engines from Chapter 9. Second, all 6 frameworks are publicly published, versioned and developed across repeated iterations; this structural feature distinguishes them from internal policies. Third, the frameworks are already used as a comparative basis in sector reference literature (GSMA Intelligence, McKinsey); SK Telecom is named as a model example in the GSMA report as of Q4 2024.

19.2 6 vendor frameworks at a glance

Vendor · Framework(s) · As of · Convergence axis · Divergence axis
SK Telecom · AI Code of Conduct ("T.H.E. AI"), AI Charter, AI Governance · Published March 2024 · Three pillars: telco-based, humanity-based, ethics-based · Sector-specific (telco), group-internal governance system
Anthropic · Responsible Scaling Policy (RSP) v3.0; Usage Policy · RSP February 2026; Usage Policy ongoing · Two-part: capability thresholds plus user prohibitions · Focus on catastrophic risks; Responsible Scaling Officer as internal role
OpenAI · Model Spec; Usage Policies · Model Spec 2025-12-18; Usage Policies ongoing · Two-part: model-behaviour specification plus user prohibitions · Chain-of-command logic; publicly versioned at model-spec.openai.com
Google · AI Principles (revised); Frontier Safety Framework · Principles revised February 2025; Framework ongoing · Three core tenets: Bold Innovation, Responsible Development, Collaborative Progress · Frontier risks with own methodology; DeepMind-led
Microsoft · Responsible AI Standard v2 · Published June 2022; later NIST AI RMF integration · 6 principles: Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, Accountability · Single document; stronger engineering-lifecycle integration
Meta · Responsible AI Framework (5 pillars); Responsible Use Guide for Llama · 5 pillars published 2021; Llama guides ongoing · 5 pillars: Privacy, Fairness, Accountability, Transparency, Safety & Robustness · Open-source model context; Llama Guard as technical implementation tool

19.3 What the frameworks share

4 elements run through all 6 frameworks. First, a fairness or non-discrimination anchor, mostly with reference to protected attributes. Second, a transparency anchor, which addresses either model documentation (model cards, model spec) or user disclosure. Third, a safety anchor, encompassing either misuse prevention (user prohibitions) or technical robustness (red-teaming, safeguards). Fourth, an accountability anchor, encompassing either roles (RSO at Anthropic, governance committees at Microsoft and Google) or documentation duties. These 4 axes form the industry-standard core. In regulatory reception, they are picked up in G7 codes, OECD principles, and increasingly in the EU AI Act.

19.4 What the frameworks do not do

Three limits are systematic. First, all 6 frameworks are voluntary; they do not replace regulatory duties and do not render the three-layer compliance architecture from Chapter 3 superfluous. Second, the frameworks differ in their enforcement mechanics — some bind external developers and end users via usage policies (OpenAI, Anthropic, Google), others primarily internal teams (Microsoft Responsible AI Standard, Meta 5 pillars). Third, the frameworks address content manipulation by third parties — Strategic Text Sequences, prompt injection or native-ad camouflage in the sense of Chapter 5 Class 3 — only indirectly; the user prohibitions cover misuse by the vendor’s customer, not the editorial responsibility on publisher and agency side.

19.5 Connection point with Compliance-GEO

For Compliance-GEO in the German telco context, the 6 frameworks act in two directions. First, they frame the ethical layer of the compliance architecture from Chapter 3 and the Class 3 dividing line from Chapter 5 as a sector-consistent position; the Northbridge dividing line is therefore not a methodological special construction but a continuation of a publicly documented industry standard. Second, they give the verification check in the procurement standard from Chapter 12 a reference basis against which external service providers can be measured: conformity with the user prohibitions of OpenAI, Anthropic and Google can be formulated in Chapter 13 (disclosure variants) as one of the check axes. On this reading, the frameworks are not a legal substitute but a reference layer in parallel to the regulatory and contractual level.

Application boundaries: where Compliance-GEO methodologically holds, and where not

4 preconditions, 6 method boundaries, three mirror sectors.


Compliance-GEO is not a universal marketing model. The discipline has a clear field of application, regulated consumer markets in the EU with measurable and steerable retrieval geometry, and edges at which it does not hold methodologically. The chapter names both sides: the preconditions under which the method takes operational hold, and the boundaries at which it falls away, either categorically or for methodological-economic reasons. The scope definition is not sales qualification but part of the discipline's methodological self-demarcation.

20.1 Preconditions for method application

4 conditions must be met jointly for Compliance-GEO to hold methodologically.

Precondition · What is checked
Regulated consumer field · Telco, Financial Services, Insurance, Commerce & Subscription with EU consumer regulation (TKG, EECC, MiFID II, IDD, VVG, DSA, GDPR); the regulation is methodological foundation, not methodological hurdle
Prompt-cluster substance · 200–400 purchase-decisive queries per product group are identifiable; without measurable queries, the measurement basis from Chapter 7 is missing
Release structure · Marketing order and compliance release are organisationally decoupled; the two-role perspective from Chapter 21 presupposes this structure
Publisher procurement budget · The three-class assignment of the price-factor matrix (Chapter 14) needs real procurement headroom; pure in-house content constellations without publisher investment cover only the Level-C substance, not the Level-B procurement mechanics

20.2 Method boundaries

6 edges mark where Compliance-GEO does not methodologically hold. First, pure B2B enterprise sales without a consumer layer — there the retrieval mechanics work differently, and the measurement infrastructure from Chapter 7 is not trivially transferable. Second, unregulated verticals (fashion, consumer tech, leisure) — there the regulatory discipline that structures Compliance-GEO is missing. Third, acute crisis communication on network or service outages — its own discipline, not GEO. Fourth, Class 3 manipulation requests — categorically excluded (Chapter 5). Fifth, legal advice on individual cases — falls outside the methodological frame. Sixth, methodological interventions at third-party companies outside one’s own mandate frame — for instance competitor profiles with personal data.

20.3 Multi-sector triangulation of the preconditions

The 4 preconditions from 20.1 are methodologically developed in the DE telco architecture; their load-bearing capacity for Financial Services, Energy and Commerce can be triangulated via three NB sector dossiers (as of April 2026). The triangulation evidences the methodological load-bearing capacity of the precondition logic beyond telco; the three sectors remain methodology mirrors in the sense of self-limitation 2, not their own subjects of investigation.

Regulated consumer field. The FinServ dossier shows MiFID II, IDD, PRIIPs and DORA plus BaFin supervisory practice as a regulatory density at least comparable to TKG / EECC; the Energy dossier shows EnWG, EEG, GEG, the Green Claims Directive (Directive (EU) 2024/825, transposition deadline 27 March 2026) and REMIT plus BNetzA supervisory practice as regulatorily more ambitious through the transformation component; the Commerce dossier shows DSA, DMA, the Omnibus Directive and the EU AI Act as a platform-structural layer. In all three sectors, regulatory density is methodological foundation, not methodological hurdle; the telco finding is mirrored methodologically.

Prompt-cluster substance. Measurable prompt clusters with 200–500 queries per product group are documented in all three mirror sectors: 4 purchase situations in FinServ (call-money/mortgage comparison, suitability questions, vendor reputation, process questions), 5 in Energy (commodity tariff, regional availability, suitability/transition, subsidy questions, green claim), 5 in Commerce (comparison, suitability, reputation, availability, cancellation/lifecycle). The measurement basis from Chapter 7 is load-bearing in all three sectors.

Release structure. The marketing-compliance decoupling of the two-role perspective (Chapter 21) carries structurally in all three mirror sectors: FinServ via MaRisk compliance functions and BaFin supervisory practice; Energy via EnWG consumer-protection structures and Green Claims compliance; Commerce via DSA risk assessment, trust-and-safety functions and Omnibus-conform cancellation governance. The organisational precondition of the method is given across sectors.

Publisher procurement budget. Established comparison-portal markets with publisher procurement structures exist in all three sectors: in Financial Services with leading finance comparison platforms, consumer-test publications and business trade media; in Energy with established tariff comparators, independent test magazines and sustainability publications; in Commerce with the known price and reputation platforms and consumer-protection bodies. The three-class assignment of the price-factor matrix (Chapter 14) thereby carries the mandate mechanics also beyond telco.

The triangulation is methodological, not sector-independent: the three mirror sectors are not addressed here as study subject but as a validation layer of the precondition logic. The DE telco study remains the substantive subject; the sector triangulation evidences the methodological load-bearing capacity of the application preconditions beyond telco, without touching self-limitations 2 (no sector beyond telco) and 3 (no market beyond DE).

Relation to the four-vertical discipline concept. The canonical four-vertical anchor of the Compliance-GEO discipline carries Telecommunications, Financial Services, Insurance and Commerce. The triangulation of the precondition logic uses the three mirror sectors for which NB sector dossiers as of April 2026 are available: Financial Services, Energy and Commerce. Insurance is methodologically captured in the discipline anchor and carried as the fourth vertical in the three-layer clauses of this study; a dedicated NB sector dossier for Insurance is in preparation. The dual view is not an inconsistency but consistency maintenance: the discipline anchor remains stable, while the triangulation mirror follows dossier availability.

20.4 Operational consequence

Examination of the 4 preconditions is scope definition, not a hurdle: if they do not fit, Compliance-GEO as a methodology is not pertinent; that is an exclusion made at the right time, not an obstacle in the process. The two-role perspective from Chapter 21 operationalises the release structure; Chapter 22 describes the technical work through which the method takes hold after a positive applicability check. For the three mirror sectors from 20.3: the methodological load-bearing capacity is triangulated; a study-level application to these sectors remains the subject of independent follow-up studies.

The two-role perspective

Orderer and internal release partner as shared grammar.


A Compliance-GEO mandate is not carried by one but by two roles that hold only jointly: the marketing order and the compliance release. The two roles have different goals, different risk preferences and different languages; the operational difference of NB from classical marketing consultancy is that compliance is not treated as a hurdle but as a second principal layer. The chapter describes the two roles, the typical friction points at the 6 layers from Chapter 15.1, and the operational steering of the two-role dynamic in the mandate.

21.1 Two principals, one mandate language

The marketing role orders visibility: Share of Model Voice, citation density in tariff queries, referral quality on transactional product pages. It optimises for lead volume, conversion and marketing-budget efficiency. The compliance role checks the mandate for regulatory compliance, reputational risk and audit triggers: BNetzA injunction interpretations, TKG mandatory-information fulfilment, DSA transparency duties, GDPR territoriality, DORA collisions on MVNO finance reference. Both roles are legitimate principals of the mandate; neither can override the other without damaging the mandate basis. Compliance-GEO therefore works in a mandate language that is readable for both roles — that is the function of the governance artefacts (procurement standard, price-factor matrix, editorial standard) operationalised in chapters 12, 14 and 15.

21.2 Friction points at the 6 layers

The 6 operational layers from Chapter 15.1 are the points at which the two roles regularly hold different priorities.

Layer · Marketing perspective · Compliance perspective
Date logic · Currency as a citation lever · Currency as protection against misleading practices (UWG § 5)
Disclaimer structure · Disclaimer as a retrieval-disrupting element (front-loading conflict) · Disclaimer as liability protection (MStV § 22, DSA Art. 26 ff.)
Scopes of validity · Broad statement as a conversion lever · Narrow statement as warranty protection
Source prioritisation · Own source as a brand signal · Primary source as an evidentiary duty
Exception handling · Majority case as the entry point · Exception naming as a completeness duty
Country-specific grounding · Global or EU statement as an efficiency lever · DE specificity as legal certainty (TKG, TTDSG, BNetzA)

The table is not a list of opponents but a map of negotiation points. In mandate practice, these points are walked through individually in the Phase-03 release rounds (Chapter 15.3); the result is typically a wording that carries both perspectives without replacing either of them.

Audit chain as a precondition for the compliance role. For the compliance role, the audit chain from Chapter 12.6 is not technical hygiene but a check precondition. BNetzA injunction interpretations, TKG mandatory-information fulfilment and DORA collisions demand, on subsequent examination, a non-manipulable release history. Without hash basis, append-only log and external Git anchoring, prose documentation is attackable as evidence. The audit chain is therefore the point at which marketing order and compliance release operationally converge: the marketing role delivers the content, the compliance role checks it, and the audit chain makes both retrospectively evidenceable. This is the technical form of the mandate language from section 21.1.
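The audit-chain mechanics named here (hash basis, append-only log, external anchoring) can be sketched minimally. This is an illustrative structure, not NB's implementation; entry fields and function names are hypothetical:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash of the previous chain head plus the canonicalised payload."""
    body = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

def append(log: list, payload: dict) -> None:
    """Append-only: each new entry binds to the hash of its predecessor."""
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(log: list) -> bool:
    """Recompute the chain; any retroactive edit breaks every later hash."""
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, {"event": "release", "asset": "tariff-page", "approver": "compliance"})
append(log, {"event": "publish", "asset": "tariff-page"})
assert verify(log)

log[0]["payload"]["approver"] = "marketing"  # retroactive tampering
assert not verify(log)
```

The current head hash is the value that would be anchored externally (for instance in a Git commit), so that the log holder cannot silently rewrite the release history after the fact.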

21.3 Release rhythm in the DE telco sector

Role allocation in the DE telco sector follows the operator typology from Chapter 11.4 (CISO, Head of Digital Commerce, Head of Regulatory Affairs, Compliance Officer, Product Manager Consumer and Business, Procurement, Legal). The release rhythm is asymmetric: marketing decides on the mandate subject and budget frame; compliance decides on mandate boundaries and case-by-case release. Where a single case escalates, it does not run in the marketing budget frame but in the compliance escalation path (Chapter 11.5). This asymmetry protects both sides: marketing against a compliance veto that paralyses the mandate, and compliance against a marketing order pushed through under time pressure.

21.4 Operational consequence

The two-role perspective is not an organisational recommendation but a mandate precondition. If one of the two roles is not organisationally anchored at the principal's house, the second control loop is missing: the loop that methodologically distinguishes Compliance-GEO from SEO services. In a regulated consumer market, its absence raises the operational risk to mandate level. Chapter 22 describes the engineering substance that takes hold in the marketing perspective; Chapter 23 describes why DE specificity in the compliance perspective is not negotiable.

Operational mandate steering between the two roles. The two-role dynamic surfaces in mandate practice at three places. First, in the Phase-00 initial conversation: both roles must be engaged as principals; a pure marketing commission without a compliance release partner leads to subsequent correction loops. Second, in the Phase-02 publisher arrangement: the negotiating positions of the two roles must be reflected in the same contract structure, so that price coupling (the marketing lever) and disclosure duties (the compliance lever) do not compete. Third, in the Phase-04 feedback loop: the audit chain from 12.6 is the shared data source on which both roles draw, without either holding data sovereignty over the other.

Engineering substance

5 technical fields as operational mandate precondition.


Compliance-GEO is not a content-consultancy model with technical flanking but an engineering-dense discipline with an editorial interface. The chapter names the technical work fields walked through in every mandate, and demarcates them against classical SEO technique. The substance is a precondition for fulfilling the procurement standard (Chapter 12), the measurement logic (Chapter 7) and the price-factor matrix (Chapter 14): without engineering, no measurement; without measurement, no mandate feedback loop.

22.1 5 technical work fields

Engineering work in the mandate falls into 5 fields that are addressed in parallel and iteratively.

Work field · Content · Anchor in operational part
Schema and entity engineering · Structured data (Article, Product, Offer, Organization, Person, FAQPage); entity disambiguation between network brand, tariff brand, holding, author entity. Telco-specific schema properties as sector-structural engineering substance: parentOrganization/subOrganization and BrandDetails with owner for brand disambiguation; areaServed/serviceLocation for address-precise fixed-line and fibre availability; isRelatedTo with discount property for bundle structures; priceValidUntil/speedLimit for EECC core-parameter retrievability; offers.priceSpecification.billingIncrement for device-financing tariff coupling. Editorial implementation of this schema layer in Ch. 15.1.1 · A-criteria A 04/A 08, B-criterion B 02 (Ch. 12)
Bot and crawler configuration · robots.txt control per retrieval bot (GPTBot, ClaudeBot, PerplexityBot, Bingbot, Googlebot, Google-Extended); log analysis of bot visits; stealth detection. Empirical anchoring of the three-class logic in Cloudflare reports (4 August 2025, Perplexity stealth crawling; 29 January 2026 and 30 January 2026 on bot-compliance dynamics) and HumanSecurity (12 January 2026, Copilot Actions as a user-fetch category) · A-criteria A 06/A 10 (Ch. 12), bot-class matrix (Ch. 10)
Retrieval audit · URL-path hygiene, paywall and login-wall analysis, indexing directives (noindex, X-Robots-Tag), canonical consistency, outbound-link rel attribution · A-criteria A 01, A 03, A 04, A 05, A 08 (Ch. 12)
Measurement apparatus · Prompt-cluster battery, re-run automation across 6 engines, seasonality correction, answer-consistency tracking · Ch. 7 measurement logic, Ch. 16 validation
Freshness and update infrastructure · dateModified pipeline; content update cadence per publisher and per principal asset; audit log for revision evidence · A-criterion A 07 (URL persistence) and B 10 (update), Ch. 12
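The schema layer named in the first work field can be sketched as JSON-LD. The sketch below uses only standard schema.org vocabulary (Product, Brand, Offer, priceValidUntil, areaServed); the telco-specific sector extensions named in the table are not reproduced here, and all names, prices and dates are invented placeholders:

```python
import json

# Minimal JSON-LD sketch of a tariff offer. Everything below is an
# invented placeholder, not a real operator's data.
tariff = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Fibre 100",          # hypothetical tariff name
    "brand": {
        "@type": "Brand",
        "name": "ExampleNet",             # tariff brand vs network brand
    },
    "offers": {
        "@type": "Offer",
        "price": "29.99",
        "priceCurrency": "EUR",
        "priceValidUntil": "2026-12-31",  # price-validity signal
        "areaServed": {                   # availability grounding
            "@type": "Country",
            "name": "DE",
        },
    },
}

print(json.dumps(tariff, indent=2))
```

Serialising through `json.dumps` keeps the markup embeddable as a `<script type="application/ld+json">` block; the entity-disambiguation properties from the table (parentOrganization, subOrganization) would attach to an Organization node in the same way.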

The 5 fields are surveyed in Phase 00 (retrieval-audit baseline), configured in Phases 01–02 (publisher procurement and principal asset roll-out), editorially closed in Phase 03 (Chapter 15) and monitored in Phase 04 (Chapter 11.1). The iteration between engineering and editorial is part of the feedback loop, not project ornament.
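The measurement-apparatus work field (Citation Rate per engine, Persistence across weekly re-runs) can be sketched minimally. Engine names, domains and the observed citation sets below are invented fixtures, not mandate data:

```python
# Sketch of two re-run metrics named in the work-field table:
# citation rate across all (week, engine) answers, and persistence of a
# domain across weekly runs. All data are invented fixtures.

# runs[week][engine] -> set of domains cited in the generated answer
runs = [
    {"engine_a": {"aggregator.example", "provider.example"},
     "engine_b": {"aggregator.example"}},
    {"engine_a": {"aggregator.example"},
     "engine_b": {"testmag.example"}},
]

def citation_rate(runs, domain):
    """Share of (week, engine) answers that cite `domain`."""
    cells = [cited for week in runs for cited in week.values()]
    return sum(domain in c for c in cells) / len(cells)

def persistence(runs, domain):
    """Share of weeks in which `domain` is cited by at least one engine."""
    weeks_hit = sum(any(domain in c for c in week.values()) for week in runs)
    return weeks_hit / len(runs)

print(citation_rate(runs, "aggregator.example"))  # 0.75
print(persistence(runs, "provider.example"))      # 0.5
```

Seasonality correction and answer-consistency tracking would layer on top of exactly this run structure; the sketch only shows the base counting logic.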

22.2 Demarcation from classical SEO technique

Three technical fields overlap with classical SEO technique but are prioritised differently under Compliance-GEO. Schema markup is optimised in SEO primarily for rich-snippet visibility; in Compliance-GEO it is optimised as a retrieval-classifier signal that decides A-criterion fulfilment. Crawler steering is aimed in SEO primarily at search-engine crawlers; in Compliance-GEO it is steered differentially for the three retrieval-bot classes (training, retrieval, user fetch) from Chapter 10. Content updating is treated in SEO as a ranking lever; in Compliance-GEO it is a freshness boundary condition of the measurement logic (Chapter 7) with regulatory date-logic consequences (Chapter 15.1). The tools overlap; the metrics and check criteria differ.
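Differential crawler steering can be illustrated as a robots.txt fragment using the user-agent tokens named in the work-field table. The paths and the allow/disallow assignment per token are illustrative policy choices, not a recommendation:

```
# Illustrative robots.txt: differential steering per retrieval-bot class.
# Paths and class assignments are policy examples, not a recommendation.

# Training-class crawlers: excluded from archived tariff pages
User-agent: GPTBot
Disallow: /tariff-archive/

User-agent: ClaudeBot
Disallow: /tariff-archive/

User-agent: Google-Extended
Disallow: /tariff-archive/

# Retrieval / search-class crawlers: current tariff pages stay reachable
User-agent: PerplexityBot
Allow: /

User-agent: Bingbot
Allow: /

User-agent: Googlebot
Allow: /
```

Note that robots.txt expresses a policy, not an enforcement mechanism; the log analysis and stealth detection named in the table are what verify whether a given bot actually honours it.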

22.3 Operational consequence

The engineering substance is the place where Compliance-GEO is demarcated equally from classical marketing consultancy and from classical SEO. A mandate without engineering capacity, on the principal's or on the consultancy side, cannot fulfil the procurement-standard A-criteria, cannot measure, and consequently cannot invoice (Chapter 14). The fit criterion "publisher procurement budget" from Chapter 20.1 therefore has a technical mirror on the engineering side: if either is missing, the mandate basis is incomplete. Chapter 23 finally describes why DE market expertise, as the third basic condition of the NB methodology, is not optional.

Engineering substance across the mandate phases. The engineering fields of this chapter are not addressed in parallel to the mandate but take effect in concrete phases. In Phase 00 (precondition check) bot-policy interpretation and retrieval-architecture understanding are checked; without this foundation, the baseline measurement is methodologically not load-bearing. In Phase 01 (publisher identification) the extraction layer from 12.2 takes hold for domain assessment. In Phase 02 (publisher arrangement) the technical specification of placement requirements determines the contract text. In Phase 03 (editorial standard) the schema-markup and front-loading work is performed within the 6 layers from 15.1. In Phase 04 (reporting) the entire audit-chain stack from 12.6 runs as technical infrastructure. Engineering substance is therefore not flanking expertise but the operational precondition of every mandate phase. Without the engineering density, the mandate mechanics break at every individual phase.

DE market expertise as a methodology precondition

Linguistic, legal, market anchoring in the German context.


The study’s self-limitation to Germany (self-limitation 3 of the 4 methodological self-limitations from the study concept) is not a geography decision but a methodology precondition. Three properties of the DE market make DE specificity an integral component of mandate work, not a country surcharge.

23.1 Three DE specifics

First: aggregator dominance in tariff queries. The portal geometry described in Chapter 6 — DE aggregators and comparators plus complementary trade publishers — is not present in this form in any other EU market. Operational mandate planning works with this geometry as a starting point; an EU-wide aggregation would level the DE finding.

Second: regulatory density at the interface with TKG, TTDSG, BNetzA injunctions and NIS2 transposition acts. The legal anchors from Chapter 2, 13 and 15 are formulated DE-specifically; EECC transposition, BEREC guidelines and DSA bite differently nationally and are concretised in DE through BNetzA guidelines and BGH case law (I ZR 211/17, I ZR 90/17). A DACH or EU unification of the legal anchors would lose the check sharpness.

Third: consumer-portal landscape with high trust weighting. Stiftung Warentest, Finanztip and Verbraucherzentralen are cited above average in generative answers to consumer questions. The Phase-01 source selection (Chapter 11) addresses these portals as a trust-anchor layer that is structurally differently composed in other EU markets.

23.2 Operational consequence

The three specifics explain why AT and CH mandates, in the current methodology maturity, are not treated as a DACH extension but as independent follow-up studies. They also explain why the fit criteria from Chapter 20.1 check the prompt-cluster substance on a DE basis — without this basis, the measurement infrastructure from Chapter 7 is rudderless. Chapter 24 (conclusion) summarises the methodological position of the study; this closing section marks that the position does not operationally hold without DE market expertise as the third leg, alongside engineering substance (Chapter 22) and the two-role perspective (Chapter 21).

Methodological position and study conclusion

What the study methodologically carries, what remains open, what follows next.


The study has treated Compliance-GEO as a methodology in regulated consumer markets, with the telecommunications sector in Germany as the primary subject of investigation. The 23 preceding chapters have demarcated the discipline definitionally (Part I), illuminated the German market structurally (Part II), unfolded the operational methodology in its 6 engines, 5 phases, 18 procurement criteria and 6 disclosure variants (Part III), framed the Sovereign-AI and industry-standard context (Part IV), and traced mandate practice to its three load-bearing legs (Part V). This conclusion summarises the methodological position in 5 core positions, marks the 4 self-limitations carrying that position, and determines the study's reach as a position determination, not an implementation manual.

24.1 5 methodological core positions

The 5 core positions condense the study as a methodological synthesis, not as a repetition of the 8 core statements from the executive teaser. They answer what follows from the sum of the 24 chapters, not what the chapters carry individually. Where statement and position touch the same substance, the position carries the synthesis accent with operational chapter location; the statement remains a descriptive substance map.

First, the standalone nature of Compliance-GEO is shown in the coupling of three levels into a single methodology. Regulatory substance (TKG, UWG, MStV, Chapter 3), three-dimensional measurement logic (Citation Rate, Persistence, Quality as Share of Model Voice on a weekly cadence, Chapter 7) and price architecture (criteria factor times model-blended factor from Chapter 14) bite as a multiplicatively coupled methodology that is structurally not reproducible as an SEO extension. The sector-consistent anchoring in the GSMA Intelligence 4-pillar framework (Chapter 4) carries the position out of the proprietary into industry-validated space.

Second, two parallel three-level structures carry the methodology, and their non-compensability is the operational point. The three disqualification levels (legal, technical, substantive, Chapter 2) and the three compliance layers (regulatory, contractual, ethical, Chapter 3) sit parallel to each other; an exclusion at one level is not compensated by fulfilment of others. The two-stage verification workflow in Chapter 12 operationalises this parallelism as a binary A-class check before booking (three criteria) and before the final invoice (5 criteria); the three-class assignment from Chapter 14.2 translates non-compensability into multiplicative — not additive — price logic.
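The multiplicative, non-compensatory coupling described here can be sketched structurally. The factor values and criterion lists below are invented; only the structure follows the text: binary A-criteria gate the invoice, and a single failure disqualifies rather than being offset by other scores:

```python
# Sketch of the multiplicative price coupling described above.
# Factor values are invented; only the structure is sourced: a failed
# binary A-criterion disqualifies and is not compensated additively.

def criteria_factor(a_criteria_passed: list, b_score: float) -> float:
    """Binary A-criteria gate the invoice; B-criteria scale it (0..1)."""
    if not all(a_criteria_passed):
        return 0.0  # disqualification: non-compensable
    return b_score

def final_invoice(base_fee, a_criteria_passed, b_score, model_blended_factor):
    """Multiplicative coupling: criteria factor times model-blended factor."""
    return base_fee * criteria_factor(a_criteria_passed, b_score) * model_blended_factor

# All A-criteria met: the invoice scales with B-score and blended factor.
print(final_invoice(10_000, [True, True, True], 0.75, 1.25))   # 9375.0

# One A-criterion failed: the invoice collapses regardless of B-score.
print(final_invoice(10_000, [True, False, True], 0.75, 1.25))  # 0.0
```

Under an additive scheme, the failed criterion would merely lower the total; the multiplicative form is what makes the disqualification levels structurally non-compensable.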

Third, price coupling is the legal lever, and the audit chain makes it tamper-resistant over time. After publication, corrections to URL path, DOM disclosure, schema markup or byline are practically not enforceable vis-à-vis the publisher; coupling the final invoice to evidence of fulfilled criteria is the only reliable enforcement mechanic — Phase 02 (Chapter 11.3) and the price-factor matrix (Chapter 14) carry it. The documented procurement-standard chain preserves the burden of proof against the consideration presumption under § 5a (4) sentence 2 UWG (BGH I ZR 35/21, Influencer III); the audit chain from Chapter 12.6 with hash basis, append-only log and external Git anchoring supplements prose documentation with evidenceable tamper resistance — not as an NB extra, but as Phase-02 infrastructure.

Fourth, the Class 3 dividing line is categorical and operationally distributed across three chapters. Strategic text sequences, prompt injection and covert advertising fall out of the Northbridge mandate scope not because of their intensity but because of their action category (Chapter 5). The categorical nature continues in the procurement standard as a binary A-criterion with disqualification consequence (Chapter 12) and in the two-role perspective as a mandate boundary line where the compliance role decides on case-by-case rejection (Chapter 21). Because the boundary is categorical and not on an intensity scale, it is not renegotiated; this structurally relieves the class-classification discussion.

Fifth, Compliance-GEO operates parallel to telco AI infrastructure strategy, and the three-leg architecture from Part V is the operational foundation. The 4 McKinsey paths (fibre connectivity, intelligent network services, space-and-power, GPUaaS, Chapter 18) and the Sovereign-AI constructions in 4 European operators (Chapter 17) sit in the same group architecture as Compliance-GEO but operate in different budget lines, with different release stakeholders and in different contract frames. The categorical separation decides whether a mandate finds the right sponsor in the group; the three-leg architecture (two-role perspective Chapter 21, engineering substance Chapter 22, DE market expertise Chapter 23) forms the operational foundation that distinguishes Compliance-GEO from AI infrastructure consulting not by substance demarcation but by mandate architecture.

24.2Four self-limitations as a methodology frame

The study is framed by 4 self-limitations that are not omissions but methodological preconditions. No operator evaluations, because such an evaluation would overlay the sector-structure description with judgements about individual actors. No sector statements outside telecommunications, because Financial Services, Insurance and Commerce & Subscription serve only as methodology mirrors, not as context analogies. No market statements outside Germany, because Austria and Switzerland must be treated not as a DACH surcharge but as independent follow-up studies; the three DE specifics from Chapter 23 (aggregator dominance, regulatory density, consumer-portal landscape) lay the methodological groundwork. No primary research on third-party companies, because the methodology is developed against structurally accessible sector constellations, not against externally collected market data.

24.3Scope and study maintenance

The procurement-standard enforcement mechanic is sharpened vis-à-vis publishers: coupling the final invoice to briefing compliance, not to publisher-policy decisions such as domain reputation, paywall architecture or bot policy, corresponds in civil-law terms to the usual definition of owed performance. Domain-policy decisions are to be disclosed before booking and, on non-fulfilment, lead to non-booking, not to retroactive invoice reduction. On the current state of the law, this separation strengthens the principal's burden-of-proof position against the consideration presumption under § 5a (4) sentence 2 UWG (BGH I ZR 35/21 of 13 January 2022, Influencer III). The deepening of this argument sits in Chapter 12.5 (burden-of-proof preservation) and is operationalised in Annex A in the responsibility-level allocation of the criteria-register mirror.

The study is a methodological position determination, not an implementation manual. It describes what Compliance-GEO mandate work in telecommunications orients itself by; it does not deliver mandate protocols, contract templates or final-invoice forms. The operational substance from Chapters 9 to 16 is further sharpened in ongoing mandate work and differentiated in follow-up publications. The regulatory anchors are under active maintenance; the re-review cadence is documented in subject-matter Chapters 3 and 13.

The position unfolded here is not final. Compliance-GEO as a discipline will be carried forward in the coming quarters by regulatory development (EU AI Act in the implementing-act phase, TKG follow-up amendments, security-catalogue final version) and by the empirical differentiation of the retrieval engines. The study serves as a methodological basis on which mandate practice continues, and on which the next sectoral position determination — Financial Services, Insurance and Commerce & Subscription — finds its own cut.

24.4Follow-up research

The methodology unfolded here is focused on a single sector and a single market. Its empirical load-bearing capacity will be extended in a cross-sector retrieval-citation observation programme that addresses the regulated consumer sectors named in Chapter 4 in parallel. Three cornerstones frame the programme.

Scope. Several thousand prompts per measurement wave across the four sectors telecommunications, Financial Services, Insurance and Commerce & Subscription, and across the retrieval engines named in Chapter 9. Collection runs independently of mandates and is aggregated anonymously across publisher classes and vendor groups, continuing the measurement architecture described in Annex B.

Methodological hardening. The statistical layer of the measurement design (sample stratification, inference architecture and classifier reliability) is hardened in cooperation with a university statistics unit, the Statistics Cluster of the Technical University of Applied Sciences Rosenheim (SCSC). A formal research cooperation is a stated project goal.

Publication route. The target publication is an applied-research paper at an information-retrieval venue, flanked by a public benchmark report with anonymised, class-level aggregated findings. A method-documenting second publication is being prepared in parallel. The separation between published measurement findings and mandate-bound correction grammar from Chapter 24.2 and 24.3 remains consistently preserved in the follow-up programme.

Sources register and criteria-register mirror

Sources, case law, tools, sector mirror and the criteria register.

Northbridge
As of May 2026
Publication

This annex carries the study's verified external sources in 6 sub-categories: academic foundation sources (Chapters 1, 2, 5, 7), industry-research sources (Chapters 7, 12, 14, 15), sector research houses (Chapters 4, 17, 18), AI-vendor CoC frameworks (Chapter 19), engine-vendor documentation (Chapters 9, 10, Annex C) and legal foundations (Chapters 2, 3, 5, 12, 13, 14, 21, 23, 24).

The date discipline follows source type: as-of date for industry research; publication date for foundation and sector-research sources; sighting date for engine-vendor documentation (as of April 2026 unless otherwise marked); date of entry into force for legal foundations. Northbridge-internal methodology and primary-collection artefacts are carried in the chapter-specific source-anchor blocks, not in this register.


A.1Academic Foundation

01 Aggarwal et al. (KDD 2024). GEO-Bench; Position-Adjusted Word Count; strategic text sequences. Definitional basis of the Compliance-GEO concept and empirical anchoring of front-loading effectiveness.

02 Wu et al. (2025). Generative Engine Utility (GEU). Methodological refinement of the GEO/GEU terminology; basis for the AEO demarcation in Chapter 1.

A.2Industry-Research GEO-Tools

03 Indig (2026). n = 18,012 ChatGPT citations; 44.2 per cent of citations from the first 30 per cent of the text (front-loading quantification).

04 Semrush 13-week AI citation study (as of October 2025). Over 100 million citations; Reddit drift in ChatGPT from 60 to 10 per cent; model-landscape volatility.

05 Scrunch industry research (as of 2025/2026). 3.5 million citation events; engine-specific citation half-lives.

06 Profound industry research (as of 2025/2026). Citation drift up to 60 per cent per month; markdown-vs-HTML experiment (Q1 2026).

06a rankscale.ai. AI visibility tracking on a 5-engine cut (ChatGPT, Claude, Google AI Mode, Google AI Overview, Perplexity); measurement logic from detection and position; visibility-score formula not disclosed; prompt inventory mandate-configurable.

06b Peec.ai. AI visibility tracking on transactional prompts; three-engine cut (ChatGPT, Perplexity, Google AI Overviews); measurement logic from detection and position, analogous to 06a; daily tracking on a configurable prompt inventory.

07 Yext Search Experience Benchmark (as of Q4 2025). 17.2 million citations; author-entity disambiguation as a citation-quality lever.

08 Ahrefs Mentions vs Backlinks (as of 2025). n ≈ 75,000 brands; mentions as a brand-recall mechanic without a link.

08a Ahrefs 750-prompt study (as of 2025). Hook-density correlation with citation rate; empirical anchor for criterion B 04 in Chapter 12.3.

09 Seer Interactive (as of 2025). AI Overviews citations 2023 to 2025; freshness lever.

10 Similarweb GenAI Landscape (as of 2025). ChatGPT user distribution DE; 7 per cent conversion reference.

11 Sistrix Prompt Research DACH (as of 2025). 62 million DACH questions; cluster-logic basis.

11a Sistrix Promptindex (sistrix.de Handbook AI, as of 22 October 2025 and 2 January 2026). AI brand-coverage tracking on a template inventory of around 10 million prompts; binary measurement logic (brand appeared yes/no) on the three-engine cut (ChatGPT, Google AI Mode, Google AI Overview); third-party triangulation anchor for the platform mechanics in Chapter 6.2.

12 Reuters (October 2025). Perplexity CEO revenue statement; market-maturity anchor for Chapter 8.

A.3Sector research houses

13 Jarich, Hatt, Borole, GSMA Intelligence (January 2025, as of Q4 2024). 4-pillar framework; 65 per cent AI-strategy metric; 49 per cent cybersecurity as top barrier; 88 per cent phishing/smishing as the foremost threat.

14 Shrivastava et al., McKinsey & Company (28 February 2025, as of Q1 2025). 4 strategic paths in the telco AI market; USD 30 to 50 billion fibre opportunity; USD 35 to 70 billion GPUaaS by 2030.

A.4AI-vendor CoC frameworks

Category anchored in Chapter 19; full anchoring in the final review after chapter-19 consolidation.

15 SK Telecom AI Code of Conduct.

16 Anthropic Responsible Scaling Policy and Usage Policy.

17 OpenAI Usage Policies.

18 Google AI Principles.

19 Microsoft Responsible AI Standard.

20 Meta Responsible AI Framework.

A.5Engine-vendor documentation

21 OpenAI. help.openai.com; platform.openai.com/docs/bots. As of April 2026.

22 Anthropic. support.anthropic.com/en/articles/8896518; anthropic.com/supported-countries; news "Claude Europe" of 14 May 2024. As of April 2026.

23 Perplexity. docs.perplexity.ai/docs/resources/perplexity-crawlers, as of April 2026; perplexity.ai/hub/blog/agents-or-bots-making-sense-of-ai-on-the-open-web, 4 August 2025.

24 Microsoft. microsoft.com/en-us/microsoft-365/blog, 4 November 2025 (Paul Lorimer); support.microsoft.com/copilot-actions-in-edge; learn.microsoft.com/microsoft-copilot-studio.

25 Google. developers.google.com/search/docs/crawling-indexing/google-common-crawlers, update 28 April 2025; gemini.google/release-notes/; business.google.com/en-all/think/ai-excellence/ai-mode-marketing-europe/, October 2025.

26 Cloudflare reports. blog.cloudflare.com, 4 August 2025, updated 29 January 2026; UK Google AI crawler policy, 30 January 2026.

27 HumanSecurity. humansecurity.com/ai-agent/copilot-actions/, 12 January 2026.

28 Trade press. Search Engine Journal, 28 January 2026; Search Engine Roundtable, 9 December 2025; Search Engine Land, 26 March 2025; Deutsche Telekom press (telekom.com, Comet partnership).

A.6Legal foundations

29 UWG (version as in force April 2026). § 2 (1) No. 2; § 5 (1); § 5a (1), (2), (4), (4) sentence 2 (consideration presumption); § 5b (4); § 8 (2).

30 MStV. § 22 (1) sentence 1.

31 DDG 2024. § 6 (1) No. 1; § 7.

32 TKG 2021 (version as in force April 2026). §§ 54 to 57, § 66, § 109 ff., §§ 165 to 168, § 228; amendments by Art. 25 NIS2UmsuCG (in force 6 December 2025) and Art. 6 of the act of 11 March 2026 (BGBl. 2026 I No. 66).

33 BSIG 2025. § 28 (1) sentence 1 No. 3; § 28 (2) No. 2; § 28 (3) (negligibility clause); § 28 (5) No. 1 and sentence 4; §§ 30, 33; § 39 (4); § 65.

34 NIS-2 regime. Directive (EU) 2022/2555; NIS2UmsuCG (BGBl. I 2025 No. 301, in force 6 December 2025); Implementing Regulation (EU) 2024/2690 of 17 October 2024; ENISA Technical Implementation Guidance of 26 June 2025; NIS Cooperation Group EU ICT Supply Chain Security Toolbox of 30 January 2026; BNetzA security-catalogue consultation under § 167 TKG (draft October 2025, final version expected Q2/Q3 2026).

35 KRITIS regime. KRITIS-Dachgesetz (BGBl. 2026 I No. 66, partly in force 17 March 2026); CER Directive (EU) 2022/2557.

36 DORA. Regulation (EU) 2022/2554 (applicable since 17 January 2025).

37 EU AI Act, Regulation (EU) 2024/1689. Art. 3 No. 3 (provider); Art. 3 No. 4 (deployer); Art. 50 (transparency duties).

38 UCP Directive 2005/29/EC. Art. 7 (5); full-harmonisation basis of UWG.

39 PAngV. § 11 (1).

40 Austrian NISG 2026. BGBl. I No. 94/2025, promulgated 23 December 2025, in force 1 October 2026; reference comparative jurisdiction (cf. Ch. 3.1).

41 Commission proposal CSA2. Draft of 20 January 2026; political agreement expected early 2027; open monitoring point.

42 BGH case-law corpus. BGH I ZR 211/17 and I ZR 90/17 (influencer disclosure); BGH I ZR 96/19 of 25 June 2020 (LTE speed); BGH I ZR 125/20 of 9 September 2021 (Influencerin II, norm hierarchy § 5a UWG / § 22 MStV / § 6 DDG); BGH I ZR 35/21 of 13 January 2022 (Influencer III, consideration presumption § 5a (4) sentence 2 UWG); BGH I ZR 27/22 of 26 January 2023 (liability for affiliates, narrow agent precondition under § 8 (2) UWG); BGH I ZR 176/19 of 26 October 2023 (Zigarettenausgabeautomat III, advertising concept); BGH I ZR 98/23 of 27 June 2024 (climate-neutral); BGH I ZR 164/23 of 11 July 2024 (nicotine-containing liquids); BGH I ZR 112/23 of 23 October 2024 (online marketplace liability); BGH I ZR 53/24 of 23 January 2025 (continuation of § 5a/§ 5b doctrine); BGH I ZR 183/24 of 9 October 2025 (Netto/price reduction, press release 184/2025); BGH I ZR 28/25 of 11 March 2026 (Google Ads, central support of the Ba line in Ch. 14.6).

43 CJEU case-law corpus. CJEU C-236/08 to C-238/08 Google France; CJEU C-324/09 L’Oréal/eBay; CJEU C-540/08 Mediaprint; CJEU C-261/07 VTB-VAB and C-299/07 Galatea; CJEU C-304/08 Plus Warenhandelsgesellschaft (UCP full-harmonisation triad); CJEU C-18/18 Glawischnig-Piesczek; CJEU C-682/18 and C-683/18 YouTube and Cyando; CJEU C-330/23 of 26 September 2024 (Aldi Süd); CJEU C-492/23 Russmedia of 2 December 2025.

44 Administrative-court referral. VG Berlin 32 K 222/24 of 10 July 2025 (referral order to the CJEU, pending as of April 2026; open monitoring point (cf. Ch. 2)).

45 Secondary literature, legal commentators.

  • § 165 (2c) TKG (management liability): Noerr (26 September 2024); Morrison Foerster (20 February 2025); Meyer-Köring (18 July 2025); Opexa Advisory (June 2025); Kritisschutz (24 August 2025); Proliance (17 December 2025); Pöppel Rechtsanwälte (14 January 2026).
  • KRITIS-Dachgesetz: Deloitte Legal (9 March 2026); GÖRG (9 March 2026).
  • DORA demarcation telco-payment: Paytechlaw (12 August 2025); Digital Chiefs (11 February 2026); secjur (March 2026).
  • § 28 (3) BSIG: Hessel/Schneider, RDi 2026, 25; TeleTrusT statement (July 2025); BSI FAQ on NIS-2 (as of April 2026); Schönherr NISG 2026 overview (January 2026); DORDA NISG 2026 (January 2026).
  • § 165 (2a) No. 4 TKG: Lexology Part 3 Supply Chain Security (4 December 2025); Schjødt, Navigating the NIS 2 Directive; Turing Law, NIS2 and contracting (15 December 2025); BSI #nis2know secure supply chain; ISO/IEC 27001:2022 A.5.19 to A.5.22 and A.8.30; Bird & Bird, Morrison Foerster, DLA Piper, BTL Rechtsanwälte (December 2025 to January 2026).
  • Attribution of autonomous LLM recommendation (Position-7 and Position-14 research): CMS, commentary on AI-supported advertising (6 March 2024); IT-Recht-Kanzlei, AI compliance AI washing (25 June 2024); DFN Research Office, liability for unlawful third-party conduct on the internet (May 2024); Bongers-Gehlert, WRP 2025, 407; Kuhlmann, Legal Tribune Online (18 August 2025); Wettbewerbszentrale, guidelines on disclosure of AI-generated content, version 1.1 (4 February 2026).

A.7Criteria-register mirror (procurement standard)

Compact cross-reference form of the 18 criteria from Chapter 12 with the two responsibility dimensions. The methodological substance sits in Chapter 12.2 (A class) and Chapter 12.3 (B class) and in Chapter 12.1.1 (two-level responsibility separation) and in Chapter 12.4 (two-stage verification workflow). The mirror here serves as a source-anchor annex, not as an independent norm.

Criterion | Responsibility level | Sphere of influence
A 01 · URL path | Briefing compliance | Publisher per-article
A 02 · DOM disclosure (textual inline disclosure permissible; advertorial template disqualifying) | Briefing compliance | Publisher per-article
A 03 · Domain reputation | Publisher pre-check | Publisher policy
A 04 · Index status | Briefing compliance | Publisher per-article
A 05 · Paywall status | Publisher pre-check | Publisher policy
A 06 · Bot policy (robots.txt) | Publisher pre-check | Publisher policy
A 07 · URL persistence (12 months) | Briefing compliance | Publisher per-article
A 08 · Outbound links (rel="nofollow sponsored") | Briefing compliance | Publisher per-article
B 01 · Byline (editorially responsible author; preparatory work by clients permissible) | Briefing compliance | Joint editorial
B 02 · Schema markup (Article or NewsArticle; schema subtype publisher-internal) | Briefing compliance | Publisher per-article
B 03 · Substance (≥ 800 words) | Briefing compliance | Joint editorial
B 04 · Citation hooks (statistics, direct quotes) | Briefing compliance | Joint editorial
B 05 · Front-loading | Briefing compliance | Joint editorial
B 06 · Definitive language | Briefing compliance | Joint editorial
B 07 · Entity consistency | Briefing compliance | Joint editorial
B 08 · Question headlines | Briefing compliance | Joint editorial
B 09 · Listicle structure | Briefing compliance | Joint editorial
B 10 · Update (dateModified revision quarterly) | Briefing compliance | Joint editorial

The distribution gives three criteria in the publisher pre-check (A 03, A 05, A 06 — publisher policy at domain level, re-validated quarterly before booking) and 15 in briefing compliance (A 01, A 02, A 04, A 07, A 08 plus B 01 to B 10 — per article, verified before the final invoice). The sphere-of-influence column separates purely publisher-controlled criteria from those carried jointly editorially (B 01 with admissible preparatory work; B 03 to B 09 as the Phase-03 editorial layer from Chapter 15). The operational rule of the price-factor matrix from Chapter 14.2 differentiates Briefing-FAIL (invoice reduction) from Pre-check-FAIL (non-booking) per this responsibility-level separation.

Retrieval-validation design

Measurement architecture, 4-week cycle, check procedures.


This annex carries two thrusts. The first three sections (B.1 worked example, B.2 sensitivity table, B.3 limits of the illustration) illustrate how the price-factor matrix unfolded in Chapter 14 leads to a final price in a typical placement; the mechanic is hypothetical, not an empirical substitute. Sections B.4 to B.7 carry methodological-technical deepening of the signal levels in Chapters 7.6, 7.7, 8.3.1 and 12.6: B.4 specifies the 6 measurement-discipline building blocks; B.5 carries the engine-aggregation formula with a worked example for the 0.89×; B.6 documents the audit chain as a minimal implementation; B.7 specifies the metrics, triggers and roles of the 7-day dashboard. The 7 sections are complementary: B.1 to B.3 make the price mechanic tangible; B.4 to B.7 make the measurement, aggregation, audit and dashboard discipline operationally traceable. Annex C carries the tool stack as an independent layer.

B.1Worked example

Take a typical placement in DE telco aggregator space: list price EUR 1,200, A-criteria profile all 8 fulfilled, B-criteria profile 6 of 10. From this follows a Mixed-Buy classification per the three-class systematics from Chapter 14.2, a criteria factor of 0.6× and a model-blended factor of 0.82× for a ChatGPT-Perplexity principal (Chapter 8.3; the model-blended factor is composed from two engine weights). The calculation reads EUR 1,200 × 0.6 × 0.82 ≈ EUR 590. The final price sits around 51 per cent below the list price. The two factor levels, the criteria evaluation of the placement itself (criteria factor) and the model-specific weighting of the mandate focus (model-blended factor), multiply because they address methodologically independent dimensions.
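The multiplication can be traced in a few lines; a minimal sketch in which the factor values are the illustrative assumptions from the text, not mandate-measured quantities:

```python
# Minimal sketch of the two-level price multiplication from B.1.
# Factor values are the illustrative assumptions from the text.

def final_price(list_price: float, criteria_factor: float,
                model_blended_factor: float) -> int:
    """The two factor levels multiply because they address
    methodologically independent dimensions."""
    return round(list_price * criteria_factor * model_blended_factor)

# Base scenario: Mixed-Buy (0.6x) for a ChatGPT-Perplexity principal (0.82x)
print(final_price(1200, 0.60, 0.82))  # 590
```

The sensitivity rows in B.2 follow from the same call by varying one of the two factors while holding the other fixed.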

B.2Sensitivity table, three parameter variations

Scenario | Variation | Criteria factor | Model-blended factor | Final price | Delta to A
A (base) | 8 of 8 A-criteria, 6 of 10 B-criteria, ChatGPT-Perplexity mandate | 0.60× | 0.82× | EUR 590 | baseline
B+ | Model-blended factor +10 per cent (engine weighting shifted) | 0.60× | 0.902× | EUR 649 | +10 per cent
B– | Model-blended factor −10 per cent | 0.60× | 0.738× | EUR 531 | −10 per cent
C+ | B-criteria profile 8 of 10 (approaches Full-Buy) | 0.75× | 0.82× | EUR 738 | +25 per cent
C– | B-criteria profile 4 of 10 (approaches Mention-Buy) | 0.45× | 0.82× | EUR 443 | −25 per cent

The table shows that the final-price range between the weakest (C–) and strongest (C+) B-criteria profile sits at EUR 443 to EUR 738; the model variation, by contrast, sits only at EUR 531 to EUR 649. The B-criteria evaluation has the higher leverage, because it determines the price class of the placement via the three-class assignment from Chapter 14.2; the model variation is secondary and reflects only the engine weighting within a fixed mandate focus. The criteria-factor values for C+ and C– are illustrative assumptions for the range within and between classes; the precise per-class derivation sits in Chapter 14.

B.3Limits of the illustration

The illustration has three limits. First, the factor values are typical but not mandate-measured; every real mandate carries its own criteria matrix and its own model weight from the Phase-01 mandate intake per Chapter 15. Second, the run-through does not replace empirical validation of the price factors themselves; the factor derivation from Chapter 8.3 and 14 remains the foundation; Annex B only illustrates the multiplication logic. Third, the sensitivity reactions hold under the assumption that the criteria factor and model factor are multiplicatively independent; in concrete mandate constellations, coupling effects between A-criteria profile and model weight may arise that dampen or amplify linearity, and that are documented separately in the mandate folder.

B.4The 6 measurement-discipline building blocks · specification

The 6 measurement-discipline building blocks named in Chapter 7.6 carry the methodological substance on which the three-dimensional measurement logic operationally runs. They are described here in function and effect; the precise threshold values, cache TTLs and re-calibration steps are mandate-specifically configurable and are set in the respective mandate setup.

B.4.1 Seed, prompt and cache protocols. Each measurement wave documents the engine parameters used, the exact prompt formulation and the cache configuration in an append-only protocol file. Reports without protocol ID are not delivered. Without a protocol, no drift statement is methodologically load-bearing, because every observed deviation may also be a protocol deviation.

B.4.2 Cluster integrity and noise threshold. A prompt cluster must carry a minimum number of distinct prompt variants of the same intent category; otherwise it is not statistically robust and is excluded from reporting, not carried with a warning. A noise-rate threshold marks waves with contradictory citation patterns as unstable.

B.4.3 Baseline cohort before mandate kick-off. Before each mandate, a baseline measurement runs over several weeks at weekly cadence; it covers the same prompt clusters and engines as the later Phase-04 measurement. Without a baseline, no attribution of an intervention is possible.

B.4.4 Threshold flags with follow-up action. Per measurement metric, a threshold flag is defined. Citation-share deviations, raised volatility, publisher losses across consecutive waves and engine drift against the multi-wave mean each trigger a concrete, predefined follow-up action — from editorial review to procurement-standard reassessment. Without flag triggering, no ad-hoc adjustment occurs; flags are the sole adjustment trigger.

B.4.5 Case-based history per cluster. Every active prompt cluster carries a measurement history over several weeks documenting citation rate, persistence share, quality sample and flag events. A single wave shows a state; a history shows a curve. The persistence metric from Chapter 7.3 and the cluster share from the 7-day dashboard (B.7) run on the same measurement history.

B.4.6 Validation metrics. Per prediction from the price-factor matrix (Chapter 14.2) or from the hypothesis validation (Chapter 16.1), four metrics are carried: decision threshold, Pearson correlation between prediction and measurement, false-negatives rate and, for learning-based models, GBM loss. They are the layer that positions a measurement report as a robust forecast rather than a snapshot.

B.5Engine-aggregation formula and fall-back mechanic

The aggregation discipline named in Chapter 8.3.1 is specified here with the formula, a worked example for the 0.89× and the fall-back mechanic, across five sub-sections.

B.5.1 · Three-source minimum configuration and independence criteria

The minimum configuration for a load-bearing model-blended factor consists of three inference sources that meet the following independence criteria:

  • Vendor separation. The three sources are offered by at least three distinct legal operators.
  • Model-weight separation. The three sources do not share a common model-weight basis. A GPT variant and a GPT-derived application do not count as two independent sources.
  • Training-corpus separation. The training corpora of the three sources do not overlap fully. Full corpus overlap produces systematically correlated answers and breaks the aggregation logic.

The 6 engines from Chapter 9 (ChatGPT, Copilot, Perplexity, Claude, Gemini, Google AI Overviews) meet these criteria structurally. Claude and ChatGPT are different vendors with different models; Perplexity uses GPT derivatives but is its own vendor with its own retrieval layer; Gemini and Google AI Overviews are both Google products with shared model-weight basis and are therefore counted together as one source if the independence clause is applied strictly. In the current study version they are nonetheless carried separately, because the retrieval architectures are sufficiently distinct (Gemini as conversational engine, AI Overviews as search-snippet integration).

B.5.2 · Weight-vector calculation · the aggregation formula

The model-blended factor arises as a scalar product of two vectors:

Principal weighting vector w = (w₁, w₂, …, w₆) with Σwᵢ = 1

Each entry wᵢ indicates the share of mandate reach carried by engine i. The values are calibrated in the Phase-00 initial conversation, either from explicit principal engine prioritisation or from a baseline measurement across the mandate market.

Engine value vector e = (e₁, e₂, …, e₆)

Each entry eᵢ gives the working-hypothesis value for engine i, as tabulated in section 8.3. These values are derived from documented retrieval architecture plus model self-reporting and are recalibrated quarterly (Ch. 8.4).

The model-blended factor is the scalar product:

model_blended_factor = w · e = Σᵢ (wᵢ × eᵢ)

Mathematically this is a weighted average of the engine values, weighted by the principal weighting vector. Structurally it is the translation of mandate-specific engine prioritisation into a single multiplicative price factor.

B.5.3 · Measurement time window and fall-back mechanic

The single-engine values are not static quantities. They are recalibrated quarterly against retrieval architecture and model self-reporting (Ch. 8.4), secured against drift through weekly sample verification (Ch. 7.3), and re-measured outside the quarterly rhythm on known model updates.

Engine outages are rare but not excluded: API timeouts, rate limits, response delays beyond the measurement time window, or error messages without usable retrieval output. In each of these cases, the engine value for the affected wave is replaced by a defined neutral partial value, not removed from aggregation. Otherwise the weight-vector structure breaks between waves and comparability over time is destroyed. The fall-back deployment is marked in the protocol so that affected waves remain identifiable.
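A minimal sketch of the substitution, assuming the neutral partial value is the mean of the remaining engines (the text leaves the concrete neutral value to the mandate setup):

```python
# Outages are marked as None in the raw wave data; the vector keeps
# its six-entry structure so waves stay comparable over time.
def fill_outages(engine_values):
    """Replace outage markers with a neutral partial value and
    report the affected indices for the wave protocol."""
    present = [v for v in engine_values if v is not None]
    neutral = sum(present) / len(present)  # assumed neutral value
    filled = [neutral if v is None else v for v in engine_values]
    flagged = [i for i, v in enumerate(engine_values) if v is None]
    return filled, flagged

wave = [1.0, 1.0, None, 0.9, 0.8, 0.7]  # one engine timed out this wave
values, flagged = fill_outages(wave)
# flagged -> [2]; the wave is marked in the protocol, not discarded
```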

B.5.4 · Worked example · How does 0.89× arise?

The equally distributed scenario from section 8.3 (mandate with full market coverage) arises from the following vectors:

Principal weighting vector (equally distributed across 6 engines): w = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6) ≈ (0.167, 0.167, 0.167, 0.167, 0.167, 0.167)

Engine value vector (as of April 2026, NB Retrieval Study, values from table Ch. 8.3): e = (1.0, 1.0, 0.95, 0.9, 0.8, 0.7) (Order: ChatGPT, Copilot, Perplexity, Claude, Gemini, Google AI Overviews)

Scalar product:

model_blended_factor = (1/6)(1.0 + 1.0 + 0.95 + 0.9 + 0.8 + 0.7)
                     = (1/6)(5.35)
                     = 0.8917
                     ≈ 0.89×

Analogously for the ChatGPT-Perplexity-prioritised scenario with w = (0.4, 0.075, 0.3, 0.075, 0.075, 0.075):

model_blended_factor = 0.4 × 1.0 + 0.075 × 1.0 + 0.3 × 0.95 + 0.075 × 0.9 + 0.075 × 0.8 + 0.075 × 0.7
                     = 0.4 + 0.075 + 0.285 + 0.0675 + 0.06 + 0.0525
                     = 0.94×

The worked example shows: the scenario values are not guessed working hypotheses but the deterministic result of an explicit scalar-product calculation from two cleanly defined vectors. The working hypotheses are the single-engine values, not the blended factor. This clarification methodologically resolves the GEO-Lead objection to the 0.89× as a "plausibilised rather than evidenced number": the blended factor is evidenced (the calculation is transparent); the single-engine values are the working hypotheses (and are empirically validated in Chapter 16).
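The two calculations can be re-run as a short sketch; the engine values are the working-hypothesis values quoted above (as of April 2026), not empirical constants:

```python
def model_blended_factor(w, e):
    """Scalar product of principal weighting vector and engine value vector."""
    assert abs(sum(w) - 1.0) < 1e-9, "principal weights must sum to 1"
    return sum(wi * ei for wi, ei in zip(w, e))

# Engine order: ChatGPT, Copilot, Perplexity, Claude, Gemini, Google AI Overviews
e = [1.0, 1.0, 0.95, 0.9, 0.8, 0.7]

uniform = [1 / 6] * 6                             # full market coverage
focused = [0.4, 0.075, 0.3, 0.075, 0.075, 0.075]  # ChatGPT-Perplexity mandate

print(round(model_blended_factor(uniform, e), 2))  # 0.89
print(round(model_blended_factor(focused, e), 2))  # 0.94
```

The assertion on the weight sum encodes the Σwᵢ = 1 constraint from B.5.2; a weight vector that does not sum to one silently rescales the factor and invalidates the comparison across mandates.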

B.6Audit chain · minimal implementation

The three audit-chain building blocks named in Chapter 12.6 — hash basis, append-only log and external Git anchoring — together produce robust tamper resistance with standard cryptographic tools, without key management and without certificate infrastructure.

Hash basis. Per release decision, verification report and Phase-04 report, a SHA-256 hash is computed over the canonicalised document content. On later verification, the document is canonicalised and hashed again; any deviation marks a manipulation.

Append-only log. The log file carries, per entry, timestamp, actor, action type, document ID and hash, plus the hash of the preceding line as a chain. The file is filesystem-protected against overwrite and deletion; any subsequent insertion or deletion visibly breaks the chain. Carried per mandate, rotated monthly; rotated files are immutable.

External Git anchoring. The log file is committed once daily or after every Phase-02 release into a Git repository outside the mandate infrastructure. The commit hash thereby sits outside direct control of the mandate system. In a later evidence situation, the log content can be checked against the commit history; deviation means subsequent change after commit. The measurement-protocol files run with into the same Git anchoring, so that measurement protocol and release history receive the same tamper resistance.

Verification path. Three steps: re-hash of the document in question and comparison with the log entry; hash-chain check across the log lines; reconciliation of the log file with the commit history of the external Git server. Breaks at any of the three steps mark the manipulation point.
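The hash basis, the chained log and the line-level verification can be sketched in a few lines; a minimal illustration in which the field names and the sorted-key JSON canonicalisation are assumptions, and the timestamp field and the Git-anchoring step are omitted for brevity:

```python
import hashlib
import json

def sha256_hex(text):
    # Hash basis: SHA-256 over the canonicalised content.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def append_entry(log, actor, action, doc_id, content):
    """Append one chained line: each line carries the hash of the
    preceding line, so insertion or deletion breaks the chain."""
    prev = log[-1]["line_hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "doc_id": doc_id,
             "doc_hash": sha256_hex(content), "prev": prev}
    entry["line_hash"] = sha256_hex(json.dumps(entry, sort_keys=True))
    log.append(entry)

def verify_chain(log):
    """Re-hash every line and walk the chain; the first failing
    line marks the manipulation point."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "line_hash"}
        if entry["prev"] != prev:
            return False
        if sha256_hex(json.dumps(body, sort_keys=True)) != entry["line_hash"]:
            return False
        prev = entry["line_hash"]
    return True

log = []
append_entry(log, "NB", "release", "phase02-007", "release decision text")
append_entry(log, "NB", "report", "phase04-012", "verification report text")
```

Committing the serialised log into an external Git repository, as described above, adds the third verification step without changing this structure.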

For mandates with DORA context or BSIG/KRITIS relevance, the audit chain can optionally be extended by a cryptographic signature per log entry (ECDSA P-256 or EdDSA Ed25519). This extension is not part of the standard; the minimal implementation suffices for the majority of DE telco mandates.

B.7The 7-day dashboard · metrics, triggers, roles

The 7-day dashboard reading named in Chapter 7.7 carries five core metrics per active prompt cluster and engine:

  • citation share week-on-week, as percentage-point change against the prior wave;
  • persistence rate top-20, the share of top-20 URLs cited in at least three of the last four waves;
  • publisher-loss flag, a boolean indicator for publisher domains absent across two consecutive waves;
  • volatility index, the standard deviation of the citation rate across four waves;
  • engine-drift indicator, the shift of an engine's citation pattern against the four-wave mean.

The dashboard additionally displays an aggregated citation rate across all engines as a pure overview value; the three measurement dimensions from Chapter 7 remain engine-separated as the actual decision basis (aggregation prohibition per Chapter 7.5).

Trigger logic. The threshold flags from B.4.4 act on the weekly cadence: citation-share deviations above ten per cent against baseline or prior wave trigger an editorial review; sustained volatility leads to densified measurement cadence; the publisher-loss flag triggers a procurement-standard reassessment; engine drift is documented as an engine event in the case-based history. Every trigger has an immediate action; every immediate action an escalation step. Without trigger, no ad-hoc adjustment.
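The trigger logic can be sketched as a simple rule evaluation; a hypothetical illustration in which the volatility threshold and the wave values are invented, and only the ten-per-cent deviation rule is taken from the text:

```python
from statistics import pstdev

def evaluate_wave(citation_shares, baseline, publisher_lost_two_waves):
    """Map this wave's metrics onto predefined follow-up actions;
    without a trigger, no ad-hoc adjustment."""
    actions = []
    current = citation_shares[-1]
    # Citation-share deviation above ten per cent against baseline
    if abs(current - baseline) / baseline > 0.10:
        actions.append("editorial review")
    # Volatility index: standard deviation across the last four waves
    if pstdev(citation_shares[-4:]) > 0.05:  # assumed threshold
        actions.append("densified measurement cadence")
    if publisher_lost_two_waves:
        actions.append("procurement-standard reassessment")
    return actions

# Invented four-wave series with a drop in the latest wave
flags = evaluate_wave([0.30, 0.28, 0.31, 0.24], baseline=0.30,
                      publisher_lost_two_waves=True)
```

Each returned action string stands for a predefined immediate action with its escalation step; the list being empty is the normal case and means no adjustment.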

Role assignment. Three roles use the same data foundation with different aggregation depth and alert prioritisation: the in-house SEO/GEO lead carries the daily-to-weekly review and immediate-action decisions; the CMO or marketing officer reviews the aggregated cluster trends weekly and decides on escalations; the NB consultant integrates findings into the Phase-04 feedback loop on quarterly and semi-annual cadence.

Connection to the mandate feedback loop. The weekly dashboard findings flow into the quarterly recalibration from Chapter 11.5. Each wave produces a weekly snapshot; after twelve weeks, three months of aggregated evidence are available, feeding the quarterly hypothesis validation from Chapter 16. The dashboard is the upstream data layer of the feedback loop, not an independent second feedback loop.


Operational tool-stack in the mandate

Measurement, audit and reporting tools along the mandate phases.


The operational tool stack is ordered by measurement and intervention layers. The overview that follows is neither exhaustive nor an endorsement of specific vendors; it shows the function layers in which a mandate builds its measurement infrastructure.

Layer · Function · Example vendors
Visibility tracking · Citation collection across the 6 retrieval engines, Share of Model Voice · Peec, Profound, Rankscale
Accuracy layer · Detection of factual errors in LLM answers (tariff details, mandatory information) · Scrunch
DE search index and AI Overview · DE keyword tail, capture of Google AI Overview triggers · Sistrix
Backlink and global tracking · Hub-and-spoke substance, Brand Radar, international AI Overview visibility · Ahrefs
Technical audit · Schema validation, robots.txt, llms.txt across URL corpora · Screaming Frog SEO Spider
Indexing acceleration · Instant indexing on the Bing side, URL inspection on Google · Google Search Console, IndexNow
Principal reporting · Dashboard with citation trends, sentiment, SoMV · Reporting platform with connectors to visibility and accuracy tools
Manual model QA · Pro accounts of the 4 central models for claim extraction and draft review · Claude Pro, ChatGPT Plus, Perplexity Pro, Gemini Advanced

The layers are complementary, not substitutable; a visibility tracker does not replace an accuracy layer, and no automated tool replaces manual model QA. The tool-mix decision follows the measurement strategy from Chapter 7, the procurement standard from Chapter 12, and the validation architecture from Annex B; the mandate-specific selection sits in the supermatrix.


Colophon Northbridge Systems · Compliance-GEO in regulated consumer markets
DE telco study, May 2026
