Northbridge is the European specialist consultancy for Generative Engine Optimization in regulated consumer categories: Telco, Financial Services, Energy, Commerce.
Alphabet reports in its Q1 2025 filing with the US Securities and Exchange Commission that Google AI Overviews reach 1.5 billion users per month. In the same period, AI Overviews officially expanded into Germany, Austria and Switzerland, announced on the Google Blog on 25 March 2025, for signed-in users aged 18 and over, in German and English. The answer path through which purchase-relevant research now runs in regulated European consumer markets is no longer a pilot project but infrastructure. The shift is asymmetric: those cited in the answer gain disproportionately. Those who are not lose category traffic before their own SEO report shows it.
We work with CMOs and CDOs at tier-1 providers in Telco, Financial Services, Energy and Commerce — companies whose markets are dominated by aggregator platforms, affiliate paths and marketplace intermediaries, and whose communication is simultaneously regulated under EECC, MiFID/IDD/DORA, EU Taxonomy/Green Claims or DSA/DMA. Six weeks, fixed scope, sign-off without procurement escalation.
Generative answer systems draw on parametric knowledge versus live retrieval to varying degrees, depending on model and prompt type. Source selection between models and over time is highly volatile. An aggregated AI-visibility number hides more than it shows.
Generative answer systems split into two structurally different answer paths. Parametric knowledge is drawn from training data without live retrieval, dominated by Wikipedia, Wikidata and a handful of licensed publishers. Retrieval-augmented answers, by contrast, emerge when the model decomposes the query into sub-queries, searches the web in real time and quotes passages from the top sources. Yext measures, in a Q4 2025 analysis of 17.2 million citations, sector-dependent divergences between models by a factor of two to four. Both paths have their own levers. Anyone who optimises only one of them loses half the answer volume.
Within the retrieval path, the six major engines are further apart than their shared label suggests. Semrush documents, in a 13-week study covering more than 100 million AI citations, strong volatility in source selection: Reddit citations on ChatGPT fell from around sixty per cent to around ten per cent of prompt answers between August and mid-September 2025. Each engine serves a different buyer persona: Copilot grows in Microsoft-driven enterprise workflows; Perplexity has established itself with finance and compliance professionals because its source logic is forensically traceable; ChatGPT remains the broad default; AI Overviews dominates the Google-using majority.
Sources: Aggarwal et al. KDD 2024 · Liu et al. Stanford TACL 2024 · Indig 1.2M ChatGPT answers study 2026
Users referred from ChatGPT to transactional pages convert at around seven per cent, against around five per cent from Google referrals. GenAI traffic is smaller, but qualitatively far higher-value. And it is asymmetrically distributed in favour of those cited as a source.
Similarweb reports in its "2025 Generative AI Landscape" of 2 December 2025 that GenAI platforms delivered more than 1.1 billion referral visits to external websites in June 2025, an increase of 357 per cent year-on-year. The decisive finding is not in the volume but in the quality of the traffic that generative answer systems forward to transactional pages:
This is not top-of-funnel traffic in the classical sense; it is decision-proximate traffic with filtering by the model already complete.
For a provider in a regulated consumer market the consequence is concrete. The difference between "cited inside the answer window" and "absent from the answer window" is not a difference in reach but a difference in buyer proximity. A client who wins the citation receives pre-qualified traffic that runs through their own conversion pipeline faster. A client who loses the citation receives nothing — not less, nothing. The asymmetry is structural, and it will not weaken with further GenAI platform growth, it will sharpen.
The figure is an order-of-magnitude anchor, not a Northbridge measurement. Our sector benchmark for regulated EU consumer verticals is currently being built; it is calibrated with the first engagements and updated quarterly thereafter. Access to the live measurement is part of every engagement.
Which mobile tariff fits my usage in DACH, and why does the recommendation end at the comparison-portal snippet, not at the network operator?
Northbridge's Telco team works with marketing leaders at integrated network operators, MVNOs and bundle providers across the EU on one path: entity disambiguation between network brand and tariff brand, machine-readable bundle narratives under EECC and EU Roaming III, dated condition entities for snippet selection.
Market observation: tariff-comparison prompts in generative answers are structurally dominated by national comparison portals across DACH markets — a descriptive observation, not a measured ranking.
To the vertical
Which call-money or brokerage offer makes sense in Germany in 2026, and where does the line run between marketing asset and supervisory risk when the model gives the answer?
Northbridge's Financial Services team works at the interface between visibility and supervisory law — at direct insurers, direct banks, neobrokers, robo-advisors and BNPL providers across the EU. MiFID, IDD and PRIIPs compliance, audit trail for internal review, sparring with regulator-trained counterparts.
Market observation: advisory-adjacent ETF and insurance prompts are dominated, in every measured EU market, by the national comparison and advisory platforms of that market — because providers themselves, under advertising rules, cannot write in the language the model prefers as a snippet.
To the vertical
Which electricity or gas provider is currently competitive in my region, and why do green-tariff products not appear in the generative answer?
Northbridge's Energy team addresses the structural asymmetry between configurator logic and retrieval logic — at integrated utilities, green-tariff providers, heat-pump and PV intermediaries. Including green-claims substantiation under EU Taxonomy and the Green Claims Directive.
Market observation: electricity prices are grid-area-specific and sit behind configurator submits, structurally invisible to LLM retrieval. Comparison portals hold the same prices in static HTML and are inevitably cited. The same pattern applies in every measured EU market with a liberalised energy sector.
To the vertical
Which subscription is worth it, and how do I cancel it correctly — and what happens to your category traffic when marketplace listings win the citations?
Northbridge's Commerce team works with European D2C brands and subscription providers in video, audio, gaming and digital services. The focus: the brand-vs-category query shift and the citation mechanism behind cancellation prompts.
Market observation: brand queries find your site. Advisory-adjacent category prompts find advisory content on marketplaces — and the effect is gradual, because brand CAC remains stable while category acquisition collapses.
To the vertical
Which mobile tariff fits my usage in DACH, and why does the answer end at the comparison portal, not at the network operator?
Monday-morning dashboard: category traffic on tariff-comparison queries falls. Brand searches hold. CAC drifts upward, slowly. For marketing leaders at network operators, mobile resellers, MVNOs and integrated broadband providers across the EU — with focus on Germany, Austria and Switzerland — who watch the national comparison portals and affiliate aggregators of their market dominate category answers. Among the most-cited domains in this prompt class are the two or three dominant national comparison-portal brands of each market: in Germany Verivox and Check24, in the United Kingdom Uswitch and MoneySuperMarket, in France Selectra and LeLynx, in Italy Facile.it and SOSTariffe, in Spain Rastreator and Kelisto.
A provider seeking to appear as a cited source in an LLM answer for "best mobile tariff under £25" wins not through the landing page, but through whether the model treats network brand and tariff brand as separate, canonical entities. Search ranking is a precondition, not a guarantee — the overlap between AI Overview citations and organic top rankings is shifting structurally, as documented in the Method section. In the telco-tariff cluster, citations in DACH market observation move structurally to the two or three dominant national comparison-portal brands of each market; direct providers appear as a table row in someone else's hierarchy. Northbridge's Telco team works on this shift — per market, in the local language, with a methodology that is the same in every EU market.
- 01 Your share of model voice in category queries falls before the SEO report shows it, because tariff-comparison prompts are won by entity structures the model treats as canonical, not by SEO-optimised pages. Brand equity protects you in the short term, but the feedback loop in the model answer is shorter than the one in classical SEO visibility.
- 02 Conversion rates in advisory-heavy switching segments fall, because porting and switching dynamics under EECC and EU Roaming III must be machine-readable inside bundle narratives, otherwise the model cites competitor content as the grounding source. Models do not implicitly recognise "roaming-free" — they need explicit date and validity entities.
- 03 Hardware-supported margin products are cited in answers from affiliate platforms, not from you, because bundle storytelling (tariff + device + content add-on) collapses in LLM answers as soon as the product entities are not explicitly linked. What is a story in the pitch deck becomes three loose nouns in the answer window.
- 04 Retention risk arises before contract signature, because models conflate network quality and tariff brand without active disambiguation. Whoever reads a flawed network rating about a mobile brand inside the answer window does not cancel after three months — they never sign up.
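What "explicit date and validity entities" looks like in practice can be sketched with schema.org markup. The snippet below is a minimal, hypothetical illustration — the tariff name, price and dates are invented, and schema.org's Offer vocabulary (validFrom, validThrough) is one plausible way to make condition currency machine-readable:

```python
import json
from datetime import date

def tariff_offer_jsonld(name, price, valid_from, valid_through):
    """Build a schema.org Offer with explicit validity-date entities,
    so current conditions are machine-readable rather than implied
    by surrounding prose. All values here are illustrative."""
    offer = {
        "@context": "https://schema.org",
        "@type": "Offer",
        "name": name,
        "price": f"{price:.2f}",
        "priceCurrency": "EUR",
        "validFrom": valid_from.isoformat(),
        "validThrough": valid_through.isoformat(),  # explicit end date, not "until revoked"
    }
    return json.dumps(offer, indent=2)

print(tariff_offer_jsonld("Example 20 GB Tariff", 14.99, date(2026, 1, 1), date(2026, 6, 30)))
```

The point is not the markup itself but the dated entity: a model that retrieves this payload can distinguish a current condition from a stale one without parsing prose.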
Which call-money or brokerage offer makes sense in Germany in 2026, and what does the model say about your product, unprompted?
Every product recommendation in a generative answer is either a marketing asset or a supervisory risk. Both arise automatically, whether you work on them or not. For CMOs, heads of acquisition and their compliance counterparts at direct insurers, neobanks, neobrokers, robo-advisors and BNPL providers across the EU — with focus on Germany, Austria and Switzerland.
ESMA and BaFin have made clear in their AI communications since 2024 that automated recommendation systems in investment and insurance distribution fall under the same supervisory lines as human advice — regardless of whether the system is operated by the provider itself or whether an LLM cites it unsolicited. BrightEdge documents in a sixteen-month study across nine sectors that the overlap between AI Overview citations and organic rankings rose from 32.3 per cent in May 2024 to 54.5 per cent in September 2025 — the two layers grow together without becoming congruent. Anyone running a direct bank, a neobroker or a direct insurer in the EU now has a new compliance object: the answer a generative model gives, unprompted, about their own product. Northbridge's Financial Services team works at this interface — visibility on equal footing with the entity gravity of generic advisory platforms, without crossing MiFID, IDD or PRIIPs lines, with audit trail for internal review.
- 01 Your product entities disappear into the wrong cluster answers, because the entity hierarchy across current accounts, call money, brokerage, credit, ETF savings plan, motor, household contents, occupational disability and retirement carries its own prompt patterns and its own disclosure obligations. A prompt for "best brokerage" is not the same as "best broker for beginners".
- 02 Direct providers appear as a row in someone else's table, not as an answer in their own right, with direct effect on CAC and conversion rate. In every measured EU market it is the national comparison and advisory platforms that sit anchored in model knowledge as quasi-canon, and the model answer adopts their hierarchy.
- 03 Reputational and supervisory risk on claims-handling and advisory-adjacent prompts, because risk warnings, target market definitions and PRIIPs-relevant content must be machine-readably structured — otherwise every LLM reference to your content produces a supervisory grey zone that no one has defended in advance of a claim. A risk warning that sits only in the PDF footer is not reliably linked by the model to the product entity.
- 04 Direct visibility loss on purchase-decisive answers, because rate currency is not model-readable. Interest rates, fees, promotions — models hallucinate faster here than in any other vertical. Ahrefs documents in an analysis of the top 1,000 ChatGPT-cited pages (October 2025) that 60.5 per cent of dated citations stem from the past two years; Seer Interactive reports in June 2025 that around 85 per cent of AI Overview citations come from the period 2023 to 2025. Freshness is a measurable selection criterion — without explicit date entities and a hallucination-robust source structure, LLMs reference outdated states.
WpHG advertising rules and MiFID II conduct-of-business rules forbid providers certain phrasings: no "best", no direct suitability claim, mandatory risk warnings. These prohibitions protect the consumer; they do not protect the provider against LLM answers that make precisely those phrasings about them. Northbridge does not move this asymmetric line through evasion tactics, but through two levers: first, by structuring your own content so that the model has a compliant citable source at all, instead of falling back on comparator language; second, by editorial embedding of your product and rate data inside third-party sources that are themselves permitted to report what you cannot claim about yourself.
Which electricity or gas provider is currently competitive in my region, and why do green-tariff products not appear in the generative answer?
Green-tariff products do not appear in generative tariff recommendations even though they rank well in classical comparison rankings. This is not an SEO problem. For CMOs and heads of digital at integrated utilities, green-tariff providers, heat-pump and PV intermediaries, charging-infrastructure operators and direct-sales brands across the EU — with focus on Germany, Austria and Switzerland.
It is a retrieval-path problem. Electricity prices in Germany are grid-area-specific; a utility's actual tariff exists only after a postal code is entered into a tariff configurator — and precisely those configurator pages are structurally invisible to an LLM's web-retrieval path: they sit behind a form submit, not in an indexable passage. The consequence: re-ranking finds no citable price entity on the utility domain and falls back on the only source class that holds postcode-resolved prices in static HTML — comparison portals. The same mechanism shows up in every measured EU market with a liberalised energy sector, with different national platforms each time. Northbridge's Energy team works on this structural asymmetry between configurator logic and retrieval logic. That is the mechanism. What sits above it — the Green Claims Directive, dynamic tariffs, subsidy currency — are variations of the same theme.
- 01 High-margin green-tariff products lose visibility to generic standard tariffs, because models without explicit entity disambiguation merge variants — green electricity, dynamic tariff, fixed price, heat tariff, EV-charging tariff are treated as a single cluster. The margin premium of differentiation disappears in the answer window.
- 02 Reputational risk on hallucinated or incompletely cited green claims, because sustainability statements under the EU Taxonomy and the Green Claims Directive must be structured in a form the model picks up as a trustworthy source, not as marketing text.
- 03 Advisory-adjacent enquiries on dynamic tariffs, PV and wallbox solutions go to the competitor, because subsidy and regulatory currency varies per market and quarter, and models without freshness signals reference outdated states. The discontinuity to the previous year's subsidy landscape is particularly high in almost every measured EU market — but the precise dynamics differ per country.
Which subscription is worth it, and how do I cancel it correctly — and which sources give the model the rationale?
Brand queries still hold. Category queries you have already lost. Subscription retention follows — through the same citation mechanism that has made cancellation prompts answerable inside the answer window. For heads of growth at European D2C brands and for CMOs at subscription providers in video, audio, gaming and digital services — with focus on Germany, Austria and Switzerland.
In answer commerce, two different decision architectures matter, sharing the same citation mechanism. In physical D2C commerce: does the model treat your brand as a product entity or as a table row in marketplace listings? In subscription: which answer does the model give to "should I cancel", and which sources supply the reasoning? Semrush reports in its trigger-rate analysis across more than ten million keywords that the AI Overviews trigger rate, after strong volatility, settled at around 16 per cent of all search queries in 2025. Separately, Semrush documents in a thirteen-week citation study across 230,000 prompts and more than 100 million citations how strongly source landscapes can shift between individual models and within a few weeks. Neither question is answered through classical conversion optimisation. Both are answered by whether your content, and your competitors' content, sits inside the sources the model considers citation-worthy.
- 01 New-customer acquisition from category discovery collapses while brand CAC stays stable — the gradual effect that becomes visible only when brand search stops compensating. Models answer brand prompts from your site, but advisory-adjacent category prompts from advisory content inside marketplaces.
- 02 Margin moves to the platform, not to the manufacturer, because models reference product reviews and advisory content from marketplaces — even when you sell D2C. Without mirrored citation-worthiness on your own domain, a blind spot opens in the retrieval window that grows wider every quarter.
- 03 Active cancellation movements are triggered inside the answer window, not in the CRM — and they are measurable there, if you measure model-specifically. "Which streaming subscription has the best catalogue-to-price ratio right now" is not a neutral research prompt but the opening signal of a cancellation sequence. Whoever is absent from this answer does not lose the subscriber tomorrow, but in the model's third follow-up question — the one the CRM never sees.
- 04 Models bypass your subscription entities in favour of better-structured competitor data, because schema depth determines whether LLM answers correctly absorb subscription terms — subscription policy, cancellation window, trial conditions, geo-restrictions. What is enough for Rich Results is not enough for LLM answer confidence.
Northbridge serves Telco, Financial Services, Energy and Commerce with dedicated sector teams. We accept engagements from adjacent sectors — Travel & Hospitality, Mobility, Digital Health, PropTech, EdTech — when three structural features come together: a regulatory framework that constrains communication; at least one national platform intermediary between provider and end customer in the market in question; and a consumer purchase decision that is increasingly pre-empted in generative answers rather than ending in a search. If your sector shares these three features, write to us — we review every enquiry individually and respond within two business days.
Six weeks. Fixed scope. A result before a classical procurement process even begins.
If the test framework misses the abort criteria jointly defined at the start, the engagement ends after six weeks without a follow-on contract and without additional charge. This is not a guarantee of success — it is a guarantee that you will not burn budget on a project that structurally does not work. The contract scope sits within the threshold that most European tier-1 companies can sign off without procurement escalation; we name a concrete range during the initial consultation, by vertical and market density. After the test, the transition into the engagement model is open — continuous operation, pause, or close, with documented handover artefacts.
Entity audit across the relevant prompt clusters of your category, in one market and one language. Mapping of your product entity hierarchy against the knowledge-graph representation in the six measured models. In parallel, the agent reachability check against the candidate domains — a lightweight precursor of what is fully built out as Phase 00 in the follow-on engagement.
Baseline measurement across the six models for this market. Comparison with the sector benchmark from our database. First identification of the gaps in the retrieval window and the competitors occupying them.
Interpretation with your marketing and GEO team. Derivation of the three most urgent interventions. Result after ten days: you know where you stand before any measure begins.
GEO impact does not come from recommendation lists. It comes when one party holds the path from source analysis to booked, edited and measured contribution in a single hand.
There is no point in a Northbridge engagement at which we say: "Here is the recommendation, the media agency takes over." It is precisely at that handover that GEO loses the impact promised in the strategy: the editorial depth a model needs to cite collapses inside the booking process of separated trades. Selection, procurement, editorial steering and reporting of the target pages therefore belong to one engagement. Inventory access is neither an add-on nor a referral; it is the deliverable. And the ongoing measurement of whether the AI bots can reach and parse the candidate domains at all stands as a reachability check before source selection — not behind reporting. Reverse the order and you measure effects whose causes can no longer be attributed.
Phase 02 is the operationally most complex phase of the engagement and the only one in which Northbridge both runs the commercial negotiation and guarantees the suitability of the source as a citation carrier. Three disciplines carry that guarantee: a separation between technical eligibility and editorial selection, a verification workflow before every final invoice, and a price matrix that couples our compensation to evidenced criteria fulfilment.
Eligibility is not selection
Crawler access is the binary precondition — without it no indexation, without indexation no citation. But crawler access alone produces no citation. What decides whether a contribution from the candidate set enters the generated answer is editorial classification (URL path, DOM label, domain reputation), mention context, and content form. Media agencies check the eligibility layer and buy reach. A GEO procurement checks both layers and buys citation.
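The eligibility layer can be sketched mechanically. The snippet below is a minimal illustration, not our production tooling: it checks a hypothetical robots.txt against a handful of real answer-engine crawler user-agents, using Python's standard-library robots parser. The robots.txt content and URL are invented:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt as a publisher might serve it:
# GPTBot blocked site-wide, everything else governed by the '*' group.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /checkout/
"""

ANSWER_BOTS = ["OAI-SearchBot", "GPTBot", "ClaudeBot", "PerplexityBot"]

def eligibility_report(robots_txt, url):
    """Layer one of the two-layer check: can each answer-engine crawler
    fetch the URL at all? Editorial selection (layer two) only starts
    for bots that pass this gate."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in ANSWER_BOTS}

report = eligibility_report(ROBOTS_TXT, "https://example.com/guides/tariffs")
# GPTBot is blocked site-wide; the other bots fall through to the '*' group.
```

A pass here says nothing about citation — it only means the source can enter the candidate set at all, which is exactly the distinction between eligibility and selection.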
Verification before payment
After publication, corrections to the URL path or to the DOM label are practically unenforceable against the publisher — the only working lever is the open invoice. Before every final invoice, an eight-step workflow runs: URL path and DOM label, indexation status, HTTP headers per bot class, robots.txt, schema markup, word count/hooks/front-loading, outbound link marking, contractual persistence. If a step fails, it is corrected or the invoice is reduced. Not to be confused with the citation-effect report from Phase 04, which measures impact after the booking is in place.
Price coupled to criteria fulfilment
Market prices for advertorials key on reach, not on citation suitability. Anyone buying citation couples price to criteria fulfilment — otherwise they pay mention price for mention value and believe they bought citation.
The guarantee from Phase 02 breaks down operationally into eighteen criteria and an eight-step verification workflow. Eight binary exclusion criteria (Class A) decide eligibility as a citation carrier — a single A-FAIL renders the placement worthless. Ten gradual quality criteria (Class B) determine the lift. The workflow runs before every final invoice; whatever fails is corrected or the invoice is reduced.
Two examples from the criteria set: outbound links marked rel="nofollow sponsored" · dateModified updated at least every six months. An A-FAIL renders the placement worthless and cannot be repaired after booking. No discount compensates an A-FAIL — the source is not registered by the model as a citation candidate. A missing B-criterion reduces lift, not eligibility. Both layers are verified before the final invoice, not after the report.
Phase 03 is the only phase in which Northbridge does not select, does not procure and does not measure, but edits. And it is the only one whose rules are reproducibly derived from peer-reviewed research and large-scale citation studies. That is why it sits in detail here, not in a grid tile.
Front-loading
44.2 % of all verified citations come from the first 30 % of a page — the distribution is a ski jump, not a plateau. At paragraph level the same study qualifies: 53 % of citations come from the middle of a paragraph, 24.5 % from the first sentence, 22.5 % from the last. ChatGPT does not read paragraphs lazily — it searches for the sentences with the highest information gain.
Indig 2026 · n = 18,012 verified citations from 1.2M ChatGPT answers
Definitive language
Opening sentences in the form X is a Y that Z appear in 36.2 % of citation-winning passages, in only 20.2 % of non-cited comparison passages. Hedging loses systematically. Whoever defines categories instead of relativising them wins the snippet.
Indig 2026 · comparison of cited and non-cited paragraphs
Citation hooks
Passages with explicitly named statistics and attributed direct quotes raise visibility in generative answers measurably against the same prose without hooks — across two independent evaluations with partly inverse priority:
One quote-worthy hook per 400 edited words minimum, disambiguated entities throughout, one definition in the first sentence of every section.
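The hook-density rule can be linted mechanically. The function below is a deliberately crude sketch of such a check — the hook detection (explicit percentages and curly-quoted passages) is a hypothetical heuristic of ours, not the editorial methodology itself:

```python
import re

def hook_density_ok(text, words_per_hook=400):
    """Lint for the editorial rule above: at least one quote-worthy hook
    per 400 edited words. Hooks are approximated here as explicit
    percentage statistics and curly-quoted direct quotes."""
    words = len(text.split())
    stats = len(re.findall(r"\d+(?:\.\d+)?\s*(?:%|per cent)", text))
    quotes = text.count("\u201c")  # opening curly quote as a crude quote marker
    hooks = stats + quotes
    required = max(1, words // words_per_hook)
    return hooks >= required

dense = "Citation share rose to 54.5 % in September 2025. " * 10   # ~90 words, 10 hooks
sparse = "The answer window rewards structure over storytelling. " * 60  # ~420 words, 0 hooks
```

A check like this belongs in the editorial pipeline, not in reporting: it flags a draft before publication, while the text can still be fixed.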
There is no single point in the engagement at which a document is handed to another agency. There is no client effort for publisher contracts, briefing coordination or source maintenance. What appears in the report traces back to the contribution Northbridge booked and edited, and to the source that triggered the effect.
A specialist consultancy does not measure everything — it measures the right thing reproducibly. For us that means prompt clusters, model-specific, consolidated in a vertical-specific benchmark database that is calibrated with the first engagements.
Prompt clusters per engagement come from category research, sales transcripts, support tickets and competitive prompts — not from classical SEO keyword tools, whose logic does not capture the generative context. Per sector and per measured EU market we collect 200 to 400 purchase-decisive queries, each against six models, in the local language of the market, on a weekly cadence with a four-week review cycle as standard. The focus sits on Germany, Austria and Switzerland — the three markets in which Google AI Overviews have been available for signed-in users aged 18 and over since the European rollout. The baseline logic accounts for seasonality and answer consistency over time. Attribution draws a clean line between organic SERP visibility and AI answer visibility — two different traffic forms, two different mechanisms, and increasingly separated budget logics. Engagements are run at market level and reported at market level; aggregation across multiple markets is possible but never the default.
Measurement runs at cluster level, not at keyword level, because generative answer systems quote a passage that has survived a five-stage pipeline — query decomposition (fan-out), search ranking, chunk extraction, embedding similarity scoring and relevance re-ranking — before the generator selects an "answer-ready" span. A domain can rank first for the head term and still not appear in the answer if it does not cover the derived sub-queries or if its core passages are buried in the middle of the page. Ahrefs measures, in March 2026 across 863,000 SERPs and around four million AI Overview URLs, that only 37.9 per cent of cited URLs appear in the first ten organic blocks — a drop from 76 per cent in the predecessor measurement of July 2025, which Ahrefs itself attributes to better fan-out capture and parsing methodology. BrightEdge arrives, in a sixteen-month parallel measurement across nine sectors from the inverse perspective, at a convergent finding: the overlap between AI Overview citations and organic top-10 rankings rose from 32.3 to 54.5 per cent between May 2024 and September 2025. Both findings show the same picture: the AI answer layer increasingly draws from the organic index without merging with it. Liu et al. show in the Stanford TACL study 2024 that models systematically under-weight information in the middle of long contexts; a parallel evaluation by Indig across 18,012 verified ChatGPT citations confirms this from the publisher side — 44.2 per cent of citations come from the first 30 per cent of a page, and within cited paragraphs 53 per cent are distributed in the middle, 24.5 per cent in the first sentence and 22.5 per cent in the last. Anyone measuring at keyword level or page level is measuring the wrong field.
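What cluster-level measurement means operationally can be sketched in a few lines. The record shape and domain names below are invented for illustration; the metric — share of answers per model and prompt cluster that cite a given domain — is the share-of-model-voice logic described above:

```python
from collections import defaultdict

# Hypothetical measurement records: one row per (model, cluster, prompt run),
# with the set of domains cited in that answer.
records = [
    {"model": "chatgpt", "cluster": "tariff-comparison", "cited": {"check24.de", "verivox.de"}},
    {"model": "chatgpt", "cluster": "tariff-comparison", "cited": {"verivox.de"}},
    {"model": "ai-overviews", "cluster": "tariff-comparison", "cited": {"check24.de", "operator.example"}},
    {"model": "ai-overviews", "cluster": "cancellation", "cited": {"operator.example"}},
]

def share_of_model_voice(records, domain):
    """Share of answers per (model, cluster) that cite the domain —
    a cluster-level metric, not a keyword-level rank."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["model"], r["cluster"])
        totals[key] += 1
        if domain in r["cited"]:
            hits[key] += 1
    return {key: hits[key] / totals[key] for key in totals}
```

Aggregating at this level is what makes the asymmetry visible: a domain can hold a perfect organic rank on the head term and still show a zero in every cell of this table.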
That we measure weekly rather than monthly has an empirically grounded reason: Semrush evaluates, in a thirteen-week study across 230,000 prompts and more than 100 million citations, that ChatGPT cited Reddit in around 60 per cent of answers in early August 2025 — and in only around 10 per cent in mid-September 2025. Wikipedia falls in the same window from around 55 per cent to under 20 per cent. Shifts of this magnitude happen between two monthly reports and would be structurally invisible on a monthly cadence.
Bot hygiene via robots.txt is a static check at a point in time — necessary precondition, not ongoing measurement. What closes citation reporting causally is the telemetry beforehand: which bot retrieves which URL at which frequency, with what response code, with what access to the schema payload. Critically, modern answer engines operate several crawler classes per vendor and not all observe the same rules — anyone who blocks one of these bots wholesale may exclude precisely the answer engine whose citations they aim to measure.
| Bot | Function | robots.txt handling |
| --- | --- | --- |
| OAI-SearchBot | Search indexing | respects |
| GPTBot | Training | respects |
| ChatGPT-User | User-triggered fetch | "may not apply" |
| ClaudeBot | Training | respects |
| Claude-SearchBot | Search quality | respects |
| Claude-User | User-triggered fetch | respects |
| PerplexityBot | Indexing | respects |
| Perplexity-User | User-triggered fetch | "generally ignores" |
| Google-Extended | Training / grounding | separate control tag; opt-out has no impact on classical search indexing |
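The robots.txt side of the bot classes listed above can be checked statically with the standard-library parser; the robots.txt body and the checked URL here are illustrative assumptions, and the check says nothing about bots that ignore the file.

```python
from urllib.robotparser import RobotFileParser

BOT_CLASSES = [
    "OAI-SearchBot", "GPTBot", "ChatGPT-User",
    "ClaudeBot", "Claude-SearchBot", "Claude-User",
    "PerplexityBot", "Perplexity-User", "Google-Extended",
]

# Example policy: block training crawls, allow everything else (assumption).
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def bot_access(robots_txt, url="/tariffs/"):
    """Static point-in-time check: which declared bot classes may fetch
    `url` according to robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, url) for bot in BOT_CLASSES}

access = bot_access(ROBOTS_TXT)
```

This is exactly the "necessary precondition" layer: it tells you what the file declares, not what actually happened on the wire — that is the job of the agent telemetry.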
We are introducing CDN-side agent telemetry across Cloudflare, Fastly and AWS CloudFront as a standard artefact in the engagement — currently in rollout, becoming the onboarding default from the next engagements. The logic is straightforward: citation reporting measures what the model outputs; agent telemetry measures what the model could see in the first place. Without the second measurement, the first is correlation without causation. A deliberate change to crawler access can then be read back, through citation frequency in the affected cluster, as a controlled counter-test — and it is precisely this loop that turns telemetry into a methodological lever rather than just another dashboard.
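A minimal sketch of the aggregation step behind such agent telemetry. The log format here is a simplified combined-log stand-in; real Cloudflare, Fastly and CloudFront exports each have their own field layout, so the regex and sample lines are assumptions.

```python
import re
from collections import defaultdict

# Simplified log lines (toy data); real CDN exports differ per product.
LOG = [
    '203.0.113.7 "GET /tariffs/5g HTTP/1.1" 200 "OAI-SearchBot/1.0"',
    '203.0.113.7 "GET /tariffs/5g HTTP/1.1" 200 "OAI-SearchBot/1.0"',
    '203.0.113.9 "GET /blog/history HTTP/1.1" 404 "PerplexityBot/1.0"',
]

LINE = re.compile(r'"GET (\S+) HTTP/[\d.]+" (\d{3}) "([^/"]+)')

def telemetry(log_lines):
    """Count fetches per (bot class, URL, status code): what each answer
    engine could actually see, before any citation shows up downstream."""
    counts = defaultdict(int)
    for line in log_lines:
        m = LINE.search(line)
        if m:
            url, status, bot = m.group(1), int(m.group(2)), m.group(3)
            counts[(bot, url, status)] += 1
    return dict(counts)

t = telemetry(LOG)
```

A 404 under a search-indexing bot in this table is a causal explanation for a missing citation that no citation report on its own could supply.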
Benchmark database
Every client learns from the sector. All sector measurements we collect at our clients flow, anonymised, into the benchmark database — share of model voice, mention frequency per prompt cluster, answer consistency across models, brand surface in zero-click answers. Three protective layers are contractually anchored: first, an aggregation threshold below which no sector data point is delivered as long as fewer than three clients contribute to the sector; second, strict sector separation without cross-sector joins; third, the exclusion of any client CRM, revenue or prompt-transcript ingestion — only what is publicly observable from model answers enters the database. The database is currently being built; it gains depth with every engagement. Every new client benefits from the calibration contributed, anonymously, by the others — and in turn calibrates the database for the next. The structural advantage over classical industry reports lies in the data foundation: real cluster measurement at model level instead of survey or proxy.
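The first protective layer, the aggregation threshold, can be sketched as a simple suppression rule; sector names, client IDs and values are toy data, not real benchmark figures.

```python
def sector_benchmark(measurements, min_clients=3):
    """Release a sector average only when at least `min_clients`
    contribute; below the threshold the data point is suppressed."""
    by_sector = {}
    for sector, client, value in measurements:
        by_sector.setdefault(sector, {})[client] = value
    out = {}
    for sector, clients in by_sector.items():
        if len(clients) >= min_clients:
            out[sector] = round(sum(clients.values()) / len(clients), 3)
        else:
            out[sector] = None  # suppressed: fewer than min_clients contributors
    return out

data = [
    ("telco", "c1", 0.31), ("telco", "c2", 0.27), ("telco", "c3", 0.35),
    ("energy", "e1", 0.12), ("energy", "e2", 0.18),
]
bench = sector_benchmark(data)
```

With only two contributors, the energy sector returns no data point at all rather than a thinly disguised individual value — that is what the threshold exists for.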
When we are not the right partner.
Serious consulting names the cases where it does not help. Four scenarios in which we will advise you against working with us — not out of politeness, but because the engagement would not work structurally.
- 01 · If you expect measurable ROAS in the next three months. GEO works at category level, not at campaign level. The feedback loop is faster than classical SEO, but structural visibility is not a performance campaign. Concretely: if your Q3 bonus depends on GEO measures producing CRM-attributed conversions in the same quarter, you are in the wrong place.
- 02 · If you want to advertise a single product without category ambition. We build category presence, not product push. Concretely: for isolated product launches, new-customer promotions or campaign sprints, performance marketing is the appropriate tool, not GEO.
- 03 · If your organisation is structurally not ready to adapt its content architecture. GEO touches schema depth, entity hierarchy and information architecture. Without sign-off for that layer — typically a joint sign-off across marketing, technology and compliance — the test framework cannot pull the structural levers.
- 04 · If you are looking for an agency that reports monthly and optimises quarterly. GEO is a structural project, not a retainer. Our operating model is a four-week review, not a reporting routine. Concretely: you will not receive a 40-page PDF of vanity metrics every month — you will receive a brief finding on structural movement every four weeks.
Four sector teams. No central account function.
Northbridge is a specialist consultancy with four dedicated sector teams — Telco & Connectivity, Financial Services, Energy & Utilities, Commerce & Subscription. Each team is led by sector practitioners, not by a central account function. When you work with us, you speak with people who know your regulation, your aggregator opponents and your KPIs before the first meeting begins.
Engagements are currently run in German, English and French; further EU languages via partner sector specialists on request. Measurement and research are conducted per market in the local language.
We follow the research on retrieval-augmented generation, LLM grounding and citation patterns continuously, before we make recommendations. The starting point of GEO mechanics is work such as Aggarwal et al. (KDD 2024) and Liu et al. (TACL 2024) on positional attention patterns in LLM answers; the field has moved substantially since then — particularly on grounding evaluation, citation faithfulness and the actual answer behaviour of the answer engines. We read publications and pre-prints from Stanford NLP, Google DeepMind, Anthropic, OpenAI and AllenAI on an ongoing basis. We are not researchers but practitioners — practitioners who know what the research currently shows before they recommend anything.
GEO is the tip that gets measured. The base is a platform and infrastructure discipline with which we digitalise and govern business and customer processes.
Northbridge develops, implements and operates software and IT infrastructure for the digitalisation and governance of business and customer processes. The focus areas are digital platform and infrastructure solutions as well as process design, system integration, information security and data protection. The GEO impact in our engagements rests on precisely this engineering substance: deterministic processes, integrity-secured data flows, session-bound handovers and auditable states are not GEO vocabulary — they are the properties without which a measurement discipline would not be reproducible.
Enterprise-grade before the first meeting takes place.
Data processing under EU standard · GDPR · EU data centre · no US sub-processors for client data · exit clause as contract standard · audit log for regulation-relevant interventions · ISO/IEC 27001-aligned operations, certification in preparation
What do you guarantee, and what not?
GEO is not a guarantee of a specific position in a specific AI answer. Models are non-deterministic, their weights change without notice, their retrieval paths are not publicly documented. What we guarantee is a structurally better starting position — measurable across defined prompt clusters over time, not across single answers in a single moment. Anyone who promises you a fixed position in ChatGPT for a single prompt is not working seriously.
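What 'measurable across defined prompt clusters over time' means in practice can be sketched as a mention-rate trend over repeated samples; the brand name and sample answers are toy data, not a client measurement.

```python
def cluster_trend(weekly_samples, brand):
    """Mention rate of a brand per weekly sample of one prompt cluster --
    the cluster-level trend we measure, never a single-answer position."""
    return [round(sum(brand in answer for answer in week) / len(week), 2)
            for week in weekly_samples]

# Toy data: each inner list holds the sampled answer texts of one week.
weeks = [
    ["ExampleBrand offers X", "no mention here", "ExampleBrand again", "other"],
    ["ExampleBrand", "ExampleBrand and rivals", "nothing", "ExampleBrand"],
]
trend = cluster_trend(weeks, "ExampleBrand")
```

Because individual answers are non-deterministic, a single sample proves nothing either way; the repeated-sample rate per cluster is the smallest unit on which a structural claim can honestly rest.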
How does the benchmark database relate to my competitive data?
We measure publicly observable prompt answers, not customer data. Only aggregated, anonymised metrics at sector level enter the benchmark database. The usage clause is part of every engagement contract and openly disclosed. You know before signing what enters and what does not.
Are you a pan-European consultancy, and what does that mean if my market is Germany?
We are a European specialist consultancy with national execution. Every engagement runs in the language of the market in question, with competitive and aggregator mapping per market, with reporting per market. What is pan-European is the methodology: the same measurement clusters, the same model logic, the same benchmark database — applied to each national reality. If your market is Germany, your competitors are German, your aggregators are German, your reporting is German. If you operate in several EU markets simultaneously, we build a separate cluster per market — aggregation is possible but never the default.
What happens if the test framework misses the targets?
We define joint, measurable abort criteria at the start. If they are missed after six weeks, the test ends — no follow-on contract, no additional charge. Risk-sharing belongs to seriousness, not to the sales pitch.
How do you ensure compliance under MiFID II, IDD, DORA, the Green Claims Directive or EECC?
Every sector team works with subject-matter and regulatory sparring partners. Content, schema and prompt interventions touched by regulation are reviewed before go-live and documented in the audit log. The documentation is audit-ready and designed for internal compliance or supervisory review.
Where is client data stored and processed?
EU data centre. No data processing outside the EU. No US sub-processors for client data. Data processing under EU standard, ISO/IEC 27001-aligned operations, certification in preparation. For Financial Services and Energy engagements we provide detailed data-flow documentation on request before contract signature.
What does an exit look like if we end the collaboration?
Our contracts include exit clauses with defined handover artefacts: documentation, tooling access, baselines, dashboards, prompt cluster definitions and a structured knowledge transfer to your team. Lock-in is not our business model.
How do you ensure the AI bots can actually reach the booked target pages?
Every engagement begins with an Agent Reachability Check as Phase 00 — before source selection, not after reporting. The reason for the order is causal, not administrative: citation reporting in Phase 04 measures what the model outputs; the agent telemetry beforehand measures what the model could see in the first place. Without the second measurement the first is correlation without causation. The bot classes per vendor, the differing robots.txt rules and the standard implementation via Cloudflare, Fastly or AWS CloudFront (or server-log export, where no CDN access exists) are detailed in the Method section.
Which tools do you work with?
We combine commercial GEO and SEO telemetry with our own measurement infrastructure per model. Tools are instruments, not method. We replace them when something better appears. You are not sold a tool, you are given a process.
Peec AI (LLM visibility tracking across ChatGPT, Claude, Gemini, Perplexity and AI Overviews; share-of-voice and citation sources across broadly auto-generated query sets) and Rankscale (LLM position in user-defined prompt sets across the same engines) — we run both in parallel because the data models are not interconvertible: Peec AI answers "how large is my share of the discourse?", Rankscale "do I hit this conversion-relevant question?". Sistrix Plus (DE SEO index and AI Overview tracking with the deepest German keyword dataset; AI Overviews as a SERP feature, while Peec AI sees them as part of the LLM answer — same surface, two measurement methods, both required). Scrunch AI (accuracy layer, flags factual errors in LLM answers — mandatory on tariff and product details, because visibility tools would treat a wrong statement as a success). Screaming Frog (Schema.org validation, llms.txt and robots.txt checks — the only tool in the stack that inspects the site from within). Google Search Console with IndexNow (instant indexation at Bing and therefore downstream ChatGPT-Search, Copilot, Perplexity). Plus Claude Pro, ChatGPT Plus and Perplexity Pro as manual QA and draft engine per model — one sample per provider, because aggregate tracking catches neither tonality drift nor new citation patterns.
Ahrefs (backlinks for hub-and-spoke structures, international SEO and AI Overview visibility), Surfer SEO or equivalent (brief review for topical authority and entity coverage before publication — the only pre-production layer). For EU-wide rollouts, Profound replaces Peec AI as the enterprise GEO tracking layer — not in addition, but as a scaling class for more than three languages or markets in parallel. Brandwatch or Talkwalker (enterprise social listening) join when the engagement methodologically carries social or PR levers — they cover the public discourse, Reddit only partially, closed support communities not at all.
Sistrix and the keyword-based brand monitoring layer feed on search-backed prompts — People-Also-Ask, keyword databases, what is measurable as search volume on Google. Peec AI generates its query universe automatically and broadly, Rankscale takes what we feed in. None of the three generates, on its own, the prompt universe that corresponds to true conversational long-tail queries, persona-specific follow-ups and hypothetical scenarios — for that field the tool layer is structurally insufficient, each in its own way. Community signals from Reddit, forums and support communities — for ChatGPT and Perplexity one of the most heavily weighted source groups — we measure outside enterprise engagements manually via sales transcripts, support tickets and category research. Brandwatch and Talkwalker, on relevant engagements, cover the public discourse; the gap on closed communities and limited Reddit access remains partly even in enterprise setups. Our baseline therefore works with sales transcripts, support tickets, category research and competitive prompts as primary sources. The underlying mechanic is detailed in the Method section. The tools validate the clusters; they do not generate them.
A reply within two business days from the responsible sector team.
No nine-field discovery-call form. Write to the central address and name your vertical (Telco, Financial Services, Energy or Commerce) and your role. You receive, within two business days, a named reply from the responsible sector lead, a brief framing of your case and a proposal for a 30-minute initial consultation — no automated confirmation email in between, no sales sequence.
kontakt@northbridgesystems.de
For engagements from Travel & Hospitality, Mobility, Digital Health, PropTech and EdTech please use the same address with a brief sector note. Correspondence in German, English or French.
The numbers on this page are evidenced. Here is the evidence.
We require, of the contributions we edit in Phase 03, that every number carries a source and every source carries a number. This page follows the same rule. Below are the primary sources of all studies and data points referenced in the text — chronological, with publisher, date and direct link. Anyone who wishes to dispute a finding can verify it at source; anyone seeking a deeper entry can read the methodology of the study directly.
- Alphabet Inc., Form 8-K · Exhibit 99.1 · Q1 2025 Earnings Release. 1.5 billion users per month for Google AI Overviews. SEC filing, 24 April 2025. sec.gov/Archives/edgar/data/1652044/…/googexhibit991q12025.htm
- Google Blog · AI Overviews Europe expansion. Rollout of AI Overviews to Germany, Austria, Switzerland and further EU markets; for signed-in users aged 18 and over, in German and English. Google Blog, 25 March 2025. blog.google/feed/were-bringing-the-helpfulness-of-ai-overviews-to-more-countries-in-europe/
- Similarweb · 2025 Generative AI Landscape: From Platforms to Pathways. 1.1 billion referral visits in June 2025 (+357 % YoY); GenAI referrals to transactional pages convert at ~7 %, against ~5 % from Google. Similarweb press release, 2 December 2025. ir.similarweb.com/news-events/press-releases/detail/138
- Ahrefs · Update: 38% of AI Overview Citations Pull From The Top 10. 863,000 SERPs, ~4 million AI Overview URLs; 37.9 % of cited URLs appear in the first ten organic blocks (organic-only), against 76 % in the July 2025 predecessor measurement. Ahrefs Blog, March 2026. ahrefs.com/blog/ai-overview-citations-top-10/
- Ahrefs · 76% of AI Overview Citations Pull From the Top 10. Predecessor study: 1.9 million citations from 1 million AI Overviews. Ahrefs Blog, 21 July 2025. ahrefs.com/blog/search-rankings-ai-citations/
- BrightEdge · AI Search Visits Surging 2025 · AIO Citation Overlap Report. 16-month measurement across 9 sectors: overlap between AI Overview citations and organic top-10 rankings rose from 32.3 % (May 2024) to 54.5 % (September 2025). BrightEdge, September 2025. videos.brightedge.com/assets/blog/ai-overview-citations/AIOvervieOverlap.pdf
- Aggarwal, Murahari, Rajpurohit, Kalyan, Narasimhan, Deshpande · GEO: Generative Engine Optimization. Peer-reviewed (ACM KDD '24, Barcelona); GEO-Bench benchmark across 10,000 queries; best methods improve over baseline by 41 % (Position-Adjusted Word Count) and 28 % (Subjective Impression) on the benchmark, 22 % and 37 % respectively in the Perplexity.ai in-the-wild evaluation. Proceedings of KDD '24, August 2024. arxiv.org/pdf/2311.09735
- Liu, Lin, Hewitt, Paranjape, Bevilacqua, Petroni, Liang · Lost in the Middle: How Language Models Use Long Contexts. Peer-reviewed (TACL 2024); U-shaped position curve in long context windows, higher weighting of passages at start and end against the middle. Transactions of the Association for Computational Linguistics, 2024. aclanthology.org/2024.tacl-1.9.pdf
- Kevin Indig · The Science of How AI Pays Attention. Analysis of 1.2 million ChatGPT answers, isolation of 18,012 verified citations; 44.2 % of citations from the first 30 % of an article ("ski jump"); at paragraph level 53 % from the middle, 24.5 % from the first sentence, 22.5 % from the last; definitive language in 36.2 % of cited vs. 20.2 % of non-cited passages. Growth Memo, 16 February 2026. growth-memo.com/p/the-science-of-how-ai-pays-attention
- Ahrefs · Top Brand Visibility Factors in ChatGPT, AI Mode, and AI Overviews (75k Brands Studied). Branded web mentions correlate with AI Overview visibility at 0.664, against 0.218 for backlinks; YouTube mentions ~0.737. Ahrefs Blog, 12 December 2025. ahrefs.com/blog/ai-brand-visibility-correlations/
- Yext Research · AI Citation Behavior Across Models: Evidence from 17.2 Million Citations. Q4 2025, four models (Claude, Gemini, Perplexity, OpenAI), seven sectors; Claude cites user-generated content 2–4× more often than competitors, SearchGPT cites official hotel websites at 38.1 % against 16.7–22.4 % for others. Yext Research, Q4 2025. yext.com/research/ai-citation-behavior-across-models
- Semrush · The Most-Cited Domains in AI: A 3-Month Study. 230,000 prompts across 13 weeks, more than 100 million AI citations; ChatGPT cited Reddit in around 60 % of prompt answers in early August 2025, around 10 % in mid-September; Wikipedia from around 55 % to under 20 % in the same period — massive volatility, non-uniform across engines. Semrush Blog, 10 November 2025. semrush.com/blog/most-cited-domains-ai/
- Ahrefs · 67% of ChatGPT's Top 1,000 Citations Are Off-Limits to Marketers. 60.5 % of dated top-1000 citations in ChatGPT come from the past two years. Ahrefs Blog, 28 October 2025. ahrefs.com/blog/chatgpts-most-cited-pages/
- Seer Interactive · AI Brand Visibility and Content Recency. Around 85 % of AI Overview citations come from the period 2023–2025, of which around 44 % from 2025 and around 30 % from 2024. Seer Interactive, 25 June 2025. seerinteractive.com/insights/study-ai-brand-visibility-and-content-recency
- Google Search Central · AI features and your website. Vendor documentation on AI Overviews and AI Mode: indexability and snippet eligibility as preconditions; query fan-out as documented part of source selection. Google for Developers, continuously maintained (as of December 2025). developers.google.com/search/docs/appearance/ai-features
- OpenAI · Overview of OpenAI Crawlers. Separate bot classes: OAI-SearchBot (search indexing), GPTBot (training), ChatGPT-User (user-triggered fetch, for which robots.txt "may not apply"). OpenAI Developer Docs. developers.openai.com/api/docs/bots
- Perplexity · Crawlers Documentation. PerplexityBot (indexing) vs. Perplexity-User (user-triggered fetch, which according to Perplexity "generally ignores" robots.txt). Perplexity Docs. docs.perplexity.ai/docs/resources/perplexity-crawlers
- Anthropic · Does Anthropic crawl data from the web. ClaudeBot (training), Claude-SearchBot (search quality), Claude-User (user-triggered fetch); all respect robots.txt according to Anthropic, but do not bypass CAPTCHAs. Anthropic Help Center. support.claude.com/en/articles/8896518
- Google Search Central · Google-Extended. Separate crawler tag for training and grounding in Google systems; no impact on search indexing or ranking. Google for Developers. developers.google.com/search/docs/crawling-indexing/google-common-crawlers
Northbridge-internal measurement procedures and methodological appendices are part of the respective engagement contract and are not published on this page. For a complete, annotated reference library on generative answer engines (GEO mechanics, source selection, citation logic and publisher-side levers) please ask separately.
Legal Notice
Provider
North Bridge Systems GmbH
Marmorwerkstraße 52
83088 Kiefersfelden
Germany
Authorised Managing Director
Tim Heidfeld
Contact
Email: impressum@northbridgesystems.de
Telephone: 08033/ [extension to follow]
Commercial Register
Commercial register: Local court Traunstein
Registration number: HRB 35088
Business Orientation
North Bridge Systems serves exclusively commercial clients and legal entities. There is no consumer business within the meaning of § 13 BGB. We are neither willing nor obliged to participate in dispute resolution proceedings before a consumer arbitration body (§ 36 VSBG).
EU Dispute Resolution
The European Commission provides a platform for online dispute resolution (ODR): https://ec.europa.eu/consumers/odr. Given our exclusively commercial business activity, this platform is not applicable to our engagements.
Liability for Content
As a service provider, we are responsible for our own content on these pages under general law in accordance with § 7 (1) DDG. Under §§ 8 to 10 DDG, however, we are not obliged as a service provider to monitor transmitted or stored third-party information or to investigate circumstances indicating illegal activity. Obligations to remove or block the use of information under general law remain unaffected. Liability in this respect is, however, possible only from the point in time of knowledge of a concrete legal violation. Upon becoming aware of corresponding legal violations, we will remove the content concerned immediately.
Liability for Links
Our offer contains links to external third-party websites whose content we have no influence over. We can therefore accept no responsibility for this third-party content. The respective provider or operator of the pages is always responsible for the content of the linked pages. The linked pages were checked for possible legal violations at the time of linking. Illegal content was not recognisable at the time of linking.
Copyright
The content and works on these pages created by the site operators are subject to German copyright law. Reproduction, processing, distribution and any kind of use outside the limits of copyright law require the written consent of the respective author or creator. Downloads and copies of this site are permitted only for private, non-commercial use.
Privacy Policy
Information on the processing of personal data when visiting this website, in accordance with the General Data Protection Regulation (GDPR) and the German Federal Data Protection Act (BDSG).
1 · Data controller
Responsible for data processing on this website within the meaning of Art. 4 No. 7 GDPR:
North Bridge Systems GmbH
Marmorwerkstraße 52
83088 Kiefersfelden
Germany
Represented by Managing Director Tim Heidfeld.
Email for data protection enquiries: datenschutz@northbridgesystems.de
2 · Data Protection Officer
North Bridge Systems is not obliged to appoint a Data Protection Officer under § 38 BDSG, as the relevant conditions (in particular the threshold of 20 persons permanently engaged in the automated processing of personal data) are not met. Please direct data protection enquiries to the email address above.
3 · Provision of the website and creation of log files
Each time this website is accessed, the hosting provider's system automatically collects data and information from the computer system of the accessing device. The following data is processed:
- IP address of the accessing device
- Date and time of access
- URL accessed and HTTP status code
- Volume of data transferred
- Referrer URL (previously visited page)
- Browser, operating system and language setting (User-Agent)
Purpose: ensuring the technical provision of the website, guaranteeing system security and stability, error analysis.
Legal basis: Art. 6 (1) (f) GDPR (legitimate interest in the secure and stable operation of the website).
Storage period: log files are automatically deleted after 30 days at the latest, unless security-relevant incidents in individual cases require longer retention.
4 · Hosting
This website is hosted by united-domains AG, Gautinger Straße 10, 82319 Starnberg, Germany. The servers are located within the European Union. A data processing agreement under Art. 28 GDPR is in place with the hosting provider, ensuring that the processing of website visitors' data takes place only on instruction and in compliance with the GDPR.
5 · Fonts (local hosting)
This website uses the fonts "Inter Tight" and "JetBrains Mono". Both fonts are loaded exclusively from our own server. There is no connection to external font services (in particular no connection to Google Fonts or comparable third-party providers). No personal data is therefore transferred to third parties when fonts are loaded.
6 · Contact by email
If you contact us by email (in particular via the addresses listed on this website: kontakt@, impressum@ and datenschutz@northbridgesystems.de), the personal data you transmit will be processed, namely:
- Sender's name and email address
- Content of the message and any further data voluntarily provided (e.g. telephone number, company, role)
- Date and time of transmission
Purpose: handling the enquiry and, where applicable, initiating or carrying out a business relationship.
Legal basis: Art. 6 (1) (b) GDPR (performance of pre-contractual measures or contract performance) and Art. 6 (1) (f) GDPR (legitimate interest in answering enquiries).
Storage period: for enquiries without a subsequent business relationship, data is deleted after the conclusion of correspondence at the latest. Where an engagement is initiated or concluded, the commercial and tax retention periods apply (6 years under § 257 HGB, 10 years under § 147 AO).
7 · Email processing via Microsoft 365
For the processing of incoming and outgoing emails, North Bridge Systems uses the Microsoft 365 service (Exchange Online). The provider is Microsoft Ireland Operations Limited, One Microsoft Place, South County Business Park, Leopardstown, Dublin 18, Ireland. A data processing agreement (Microsoft Products and Services Data Protection Addendum, DPA) under Art. 28 GDPR is in place with Microsoft.
Third-country transfer: a transfer of personal data to the United States (Microsoft Corporation) cannot be entirely ruled out in the context of providing the service. Microsoft Corporation is certified under the EU-US Data Privacy Framework (DPF); an adequacy decision of the EU Commission of 10 July 2023 is in place. EU Standard Contractual Clauses (SCC) under Implementing Decision (EU) 2021/914 are additionally part of the DPA.
You can avoid the Microsoft route, and any associated data transfer, at any time by contacting us via another channel (telephone, post).
8 · Cookies and tracking
This website does not set cookies and uses no tracking, analytics or audience-measurement tools (in particular no Google Analytics, no Matomo, no Plausible, no Meta Pixel, no Hotjar or comparable services). No profiling takes place.
9 · Recipients of personal data
Personal data is, as a matter of principle, not passed on to third parties. Recipients in the context of processing are:
- united-domains AG (website hosting, based in Germany)
- Microsoft Ireland Operations Limited (email processing, based in Ireland; possible transfer to Microsoft Corporation, USA, see point 7)
- Advisors bound by professional confidentiality (in particular lawyers, tax advisors, auditors) within the legally prescribed or contractually agreed scope
- Public authorities within the scope of statutory disclosure and cooperation obligations
10 · Rights of data subjects
If your personal data is processed, you are a data subject within the meaning of the GDPR and have the following rights vis-à-vis us:
- Right of access (Art. 15 GDPR) regarding the data stored about you
- Right to rectification (Art. 16 GDPR) of inaccurate data, or completion of incomplete data
- Right to erasure (Art. 17 GDPR), insofar as no statutory retention obligations preclude it
- Right to restriction of processing (Art. 18 GDPR)
- Right to data portability (Art. 20 GDPR)
- Right to object to processing (Art. 21 GDPR), in particular to processing based on legitimate interests
- Right to withdraw consent given (Art. 7 (3) GDPR) — currently not applicable on this website, as no consent is collected
Please direct enquiries about these rights informally to: datenschutz@northbridgesystems.de. We will process your enquiry within the statutory period of one month (Art. 12 (3) GDPR).
11 · Right to lodge a complaint with a supervisory authority
Without prejudice to any other administrative or judicial remedy, you have the right to lodge a complaint with a data protection supervisory authority (Art. 77 GDPR). The competent authority for North Bridge Systems is:
Bayerisches Landesamt für Datenschutzaufsicht (BayLDA)
Promenade 18
91522 Ansbach, Germany
Telephone: +49 981 180093-0
Email: poststelle@lda.bayern.de
Website: www.lda.bayern.de
12 · Obligation to provide data
The provision of your personal data is voluntary. There is no obligation to provide it. However, without the provision of certain data (in particular name, email, request), it is not possible to respond to your contact.
13 · Automated decision-making and profiling
No automated decision-making in individual cases, including profiling within the meaning of Art. 22 GDPR, takes place.
14 · Changes to this Privacy Policy
We reserve the right to amend this Privacy Policy so that it always meets current legal requirements, or to reflect changes to our services — for example when introducing new services. The version then in force will apply on your next visit.