<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.3.2">Jekyll</generator><link href="https://tiktok-audit.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://tiktok-audit.com/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-04-17T21:37:18+00:00</updated><id>https://tiktok-audit.com/feed.xml</id><title type="html">auditing TikTok</title><subtitle>our journey into TikTok&apos;s recommender systems
</subtitle><entry><title type="html"></title><link href="https://tiktok-audit.com/blog/2026/2025-07-31-GenAI-Algorithmic-Virality/" rel="alternate" type="text/html" title="" /><published>2026-04-17T21:37:18+00:00</published><updated>2026-04-17T21:37:18+00:00</updated><id>https://tiktok-audit.com/blog/2026/2025-07-31-GenAI-Algorithmic-Virality</id><content type="html" xml:base="https://tiktok-audit.com/blog/2026/2025-07-31-GenAI-Algorithmic-Virality/"><![CDATA[<p><em>This report was published by <a href="https://aiforensics.org/work/gen-ai-slop">AI Forensics</a>.</em></p>

<p>AI-generated content is flooding TikTok’s search results — and the platform is barely labeling it. Our comparative study across TikTok and Instagram in Spain, Germany, and Poland found that one in four top search results on TikTok contains synthetic AI imagery, with labeling failures widespread on both platforms.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>TikTok’s search results are dominated by AI content.</strong> 25% of TikTok’s top search results contain synthetic AI imagery, compared to significantly lower rates on Instagram.</p>

<p><strong>Agentic AI Accounts drive the problem.</strong> Over 80% of AI-generated content on TikTok originates from what we term Agentic AI Accounts (AAAs) — accounts that use generative AI tools to mass-produce content. On Instagram this figure is 15%, suggesting TikTok’s algorithm particularly favors this type of automated production.</p>

<p><strong>Labeling failures are widespread.</strong> Only half of TikTok’s AI-generated content receives any labeling at all. On Instagram the figure is 23%, meaning the problem is severe on both platforms. Labels that do exist often lack visibility — particularly on Instagram’s desktop interface.</p>

<p><strong>Most AI content is photorealistic.</strong> Over 80% of AI-generated content is photorealistic in style, maximizing its potential to deceive audiences who may not recognize it as synthetic.</p>

<h2 id="methodology">Methodology</h2>

<p>Researchers manually annotated 30 top search results across 13 politically and culturally significant hashtags (e.g., #trump, #zelensky, #pope, #health, #history) on both TikTok and Instagram, in three European countries, comparing content type, AI generation status, and label presence.</p>

<h2 id="context">Context</h2>

<p>This study introduced the concept of “Agentic AI Accounts” — a phenomenon we subsequently investigated in more detail in our <a href="https://tiktok-audit.com/blog/2025/Agentic-AI-Accounts/">December 2025 report</a>. The DSA requires platforms to label AI-generated content; this research provides systematic evidence that TikTok falls substantially short of this obligation in practice.</p>]]></content><author><name></name></author></entry><entry><title type="html">♻️ (AIF) Prompt, Upload, Repeat: Agentic AI Accounts on TikTok</title><link href="https://tiktok-audit.com/blog/2025/Agentic-AI-Accounts/" rel="alternate" type="text/html" title="♻️ (AIF) Prompt, Upload, Repeat: Agentic AI Accounts on TikTok" /><published>2025-12-03T10:00:00+00:00</published><updated>2025-12-03T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/Agentic-AI-Accounts</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/Agentic-AI-Accounts/"><![CDATA[<p><em>This report was first published by <a href="https://aiforensics.org/work/agentic-ai-accounts">AI Forensics</a>.</em></p>

<p>We uncovered 354 Agentic AI Accounts (AAAs) on TikTok that together accumulated more than 4.5 billion views through over 43,000 posts created almost exclusively with generative AI tools. These accounts operate semi-autonomously — using AI to generate content at scale, upload it systematically, and exploit TikTok’s recommendation algorithm to achieve massive reach with harmful and misleading content.</p>

<h2 id="what-are-agentic-ai-accounts">What Are Agentic AI Accounts?</h2>

<p>An Agentic AI Account is a TikTok account where the content creation process is largely automated using generative AI: AI tools generate images, video, voiceover, or text, which are then uploaded — typically at high volume and with minimal human editing. We introduced this concept in our <a href="https://tiktok-audit.com/blog/2025/GenAI-Algorithmic-Virality/">July 2025 report on AI content virality</a>; this study provides the first systematic characterization of the phenomenon.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>Scale.</strong> 354 AAAs generated over 43,000 posts with 4.5 billion combined views. More than 65% of these accounts were established in early 2025, indicating rapid growth of this phenomenon.</p>

<p><strong>TikTok labels almost none of it.</strong> TikTok’s AI-generated content labels appeared on less than 1.38% of this content. The DSA requires platforms to label synthetic content; TikTok’s near-total failure to do so for AAA content represents a significant compliance gap.</p>

<p><strong>Harmful content is prevalent.</strong> Nearly one-third of the 354 accounts — and half of the top 10 most-viewed — contained content that sexualized female bodies, including minors. False news, anti-immigrant narratives, and explicit content were also common across the dataset.</p>

<p><strong>Creators don’t label either.</strong> Only 10% of AAA creators consistently applied their own AI-generated content labels. 55% of AI content in the study was unlabeled by any mechanism.</p>

<h2 id="full-report">Full Report</h2>

<p>The full report is available at <a href="https://aiforensics.org/work/agentic-ai-accounts">AI Forensics</a>.</p>]]></content><author><name>Natalia Stanusch, Martin Degeling, Raziye Buse Çetin, Marcus Bösch, Salvatore Romano.</name></author><category term="research" /><category term="analysis," /><category term="AI," /><category term="ads" /><summary type="html"><![CDATA[354 accounts posting AI-generated content amassed 4.5 billion views with content TikTok barely labeled]]></summary></entry><entry><title type="html">🪖 (AIF) From FYP to WW3</title><link href="https://tiktok-audit.com/blog/2025/From-FYP-to-WW3/" rel="alternate" type="text/html" title="🪖 (AIF) From FYP to WW3" /><published>2025-10-18T10:00:00+00:00</published><updated>2025-10-18T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/From-FYP-to-WW3</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/From-FYP-to-WW3/"><![CDATA[<p><em>This report was published by <a href="https://aiforensics.org/work/fyp-to-ww3">AI Forensics</a> (October 17, 2025).</em></p>

<p>During the 2025 NATO Summit in The Hague, TikTok’s For You Page served up a dramatically different picture of the event than its search function — one dominated by military imagery, weapons, and war speculation. Our investigation compared personalized recommendation feeds against search results across 12 Dutch user accounts during the Summit period.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>FYPs prioritized conflict content.</strong> Military and weapons content represented 40% of For You Page videos during the NATO Summit period — a striking share for what was officially a diplomatic event. War-related speculation made up a further 19% of FYP recommendations.</p>

<p><strong>“World War III” was a FYP phenomenon.</strong> The term “World War III” appeared in approximately 1 out of 25 FYP videos.</p>

<p><strong>Search told a different story.</strong> While For You Pages showed military spectacle and war speculation, search results for the same period surfaced content about NATO perspectives, ongoing conflicts, and news coverage. The two interfaces presented fundamentally different versions of the event.</p>

<p><strong>Coverage evolved over time.</strong> As the Summit progressed, FYP coverage shifted from factual reporting toward participatory and humorous content formats.</p>

<h2 id="background">Background</h2>

<p>This research follows our <a href="https://tiktok-audit.com/blog/2024/For-You-Feed/">For You Feed analysis</a> and <a href="https://tiktok-audit.com/blog/2024/Search-Suggestions/">Search Suggestions study</a>, which together mapped how TikTok’s two main discovery surfaces behave differently and can shape exposure to political content. The NATO Summit offered a concrete real-world event to test these dynamics.</p>

<p>The finding that FYPs systematically emphasized military and war-speculation content during a major international diplomatic event raises important questions about how TikTok’s recommendation algorithm weights engagement signals — and what the downstream effects are on young audiences’ perceptions of geopolitical events.</p>]]></content><author><name>Miazia Schueler, Natalie Kerby, Martin Degeling, Giovanni Astante, Salvatore Romano.</name></author><category term="research" /><category term="analysis," /><category term="FYP," /><category term="elections" /><summary type="html"><![CDATA[How TikTok's For You Page amplified war speculation and military content during the 2025 NATO Summit]]></summary></entry><entry><title type="html">🪟 (ISD) Towards Transparent Recommender Systems: Lessons from TikTok Research Ahead of the 2025 German Federal Election</title><link href="https://tiktok-audit.com/blog/2025/Towards-Transparent-Recommender-Systems/" rel="alternate" type="text/html" title="🪟 (ISD) Towards Transparent Recommender Systems: Lessons from TikTok Research Ahead of the 2025 German Federal Election" /><published>2025-07-14T10:00:00+00:00</published><updated>2025-07-14T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/Towards-Transparent-Recommender-Systems</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/Towards-Transparent-Recommender-Systems/"><![CDATA[<p><em>This dispatch was published by <a href="https://www.isdglobal.org/digital-dispatch/towards-transparent-recommender-systems-lessons-from-tiktok-research-ahead-of-the-2025-german-federal-election/">ISD Global</a> (July 14, 2025).</em></p>

<p>Drawing on ISD’s research into TikTok’s For You Page during Germany’s 2025 federal election campaign, this dispatch examines why researchers still cannot meaningfully assess political bias in recommender systems — and what structural changes are needed to make that possible.</p>

<p>ISD’s election research found that far-right AfD fan pages were disproportionately represented among the first political videos shown to test accounts, and that multiple studies point toward right-leaning content receiving greater algorithmic amplification even when users engage equally with diverse political content. Yet despite EU Digital Services Act requirements, the transparency measures TikTok has implemented remain insufficient for independent verification.</p>

<h2 id="four-barriers-to-meaningful-research">Four Barriers to Meaningful Research</h2>

<p><strong>Restricted Virtual Compute Environment (VCE) access.</strong> Civil society organizations cannot access TikTok’s VCE on equal terms with academic researchers, creating a two-tier system that limits independent oversight.</p>

<p><strong>No ability to test algorithmic variables.</strong> Researchers cannot isolate or test individual factors that may drive recommendation patterns, making it impossible to determine why certain content gets amplified.</p>

<p><strong>Opaque political content classification.</strong> TikTok’s internal criteria for what counts as political content are not disclosed, meaning researchers cannot assess whether labeling and classification are applied consistently or correctly.</p>

<p><strong>Limited access to non-public platform data.</strong> Key data needed to evaluate recommender system behavior remains inaccessible to outside researchers.</p>

<h2 id="recommendations">Recommendations</h2>

<p>The authors call on TikTok and EU regulators to grant civil society organizations research API access comparable to academic researchers, implement dynamic testing capabilities that allow algorithmic variables to be examined, and disclose internal content categorization systems so external researchers can conduct evidence-based assessments.</p>

<h2 id="full-report">Full Report</h2>

<p>The full dispatch is available at <a href="https://www.isdglobal.org/digital-dispatch/towards-transparent-recommender-systems-lessons-from-tiktok-research-ahead-of-the-2025-german-federal-election/">isdglobal.org</a>.</p>]]></content><author><name>Marisa Wengeler, Anna Katzy-Reinshagen, Solveig Barth, Martin Degeling</name></author><category term="research" /><category term="analysis," /><category term="elections," /><category term="transparency," /><category term="recommender-systems," /><category term="Germany," /><category term="research-access" /><summary type="html"><![CDATA[ISD dispatch examining why current transparency measures fall short for assessing political bias in TikTok's recommender systems, and what access researchers need to enable evidence-based policy]]></summary></entry><entry><title type="html">❓ (AIF) TikTok’s Research API: Problems without Explanations</title><link href="https://tiktok-audit.com/blog/2025/TikTok-Research-API-Problems/" rel="alternate" type="text/html" title="❓ (AIF) TikTok’s Research API: Problems without Explanations" /><published>2025-06-12T10:00:00+00:00</published><updated>2025-06-12T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/TikTok-Research-API-Problems</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/TikTok-Research-API-Problems/"><![CDATA[<p><em>This report was published by <a href="https://aiforensics.org/work/tk-api">AI Forensics</a>. A summary was published on <a href="https://www.techpolicy.press/unpacking-tiktoks-data-access-illusion/">TechPolicy.Press</a>.</em></p>

<p>The Digital Services Act mandates that very large online platforms provide researchers access to data. TikTok’s Research API is the primary mechanism for this. Our investigation reveals that the API systematically fails to return metadata for a significant share of videos — including high-profile official content and advertisements — without any explanation.</p>

<p>An interactive dashboard exploring our findings is available at <a href="https://playground.tiktok-audit.com/api-na/">playground.tiktok-audit.com/api-na/</a>.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>One in eight videos inaccessible.</strong> Testing 260,000 TikTok URLs via data donation methodology over 64 days, we found that approximately 12.5% of videos cannot have their metadata retrieved through the Research API, despite being publicly visible on the platform.</p>

<p><strong>Official TikTok content excluded.</strong> Videos published by TikTok’s own corporate account — including CEO statements with over 30 million views — are not accessible via the API. This makes it impossible for researchers to study official platform communications.</p>

<p><strong>Major creators missing.</strong> Content from prominent accounts including Taylor Swift, Brooke Monk, and major news outlets cannot be retrieved, creating systematic blind spots in research datasets.</p>

<p><strong>Thousands of ads hidden.</strong> Advertisements that are publicly visible in the ad library are not accessible through the Research API, undermining cross-referencing between commercial content and organic content studies.</p>

<p><strong>Unexplained account exclusions.</strong> Approximately 1% of creators appear to be randomly excluded from API access, with no explanation provided by TikTok.</p>

<h2 id="research-methodology">Research Methodology</h2>

<p>The investigation used two complementary approaches: (1) systematic testing of 260,000 TikTok URLs from data donations over 64 days, comparing API responses against direct web access; and (2) real-time monitoring of 100 daily videos from German For You Pages, verified against web scraping results.</p>
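The comparison at the heart of approach (1) reduces to a simple classification: for each URL, record whether the Research API returned metadata and whether the video was publicly reachable on the web, then count the videos that are visible to the public but invisible to the API. The sketch below is illustrative only — `CheckResult`, `classify`, and `gap_share` are hypothetical names, and the actual API calls and web fetches are abstracted into booleans:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CheckResult:
    url: str
    api_has_metadata: bool  # Research API returned metadata for the video
    web_visible: bool       # video loads when fetched like a regular browser

def classify(r: CheckResult) -> str:
    """Bucket one video by where it is visible."""
    if r.api_has_metadata and r.web_visible:
        return "both"
    if r.web_visible:
        return "api_gap"    # public on the web, missing from the API
    if r.api_has_metadata:
        return "api_only"   # unusual, but worth flagging separately
    return "gone"           # deleted or private; not an API failure

def gap_share(results) -> float:
    """Share of publicly visible videos the API fails to cover."""
    counts = Counter(classify(r) for r in results)
    public = counts["both"] + counts["api_gap"]
    return counts["api_gap"] / public if public else 0.0
```

A `gap_share` of 0.125 over the tested URLs would correspond to the "one in eight" finding above.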

<h2 id="implications-for-research">Implications for Research</h2>

<p>The unreliability of the Research API is not merely a technical inconvenience — it directly compromises research validity. Studies using data donation methodologies rely on complete datasets to draw conclusions about algorithmic behavior. Systematic gaps in API coverage mean that research findings may be skewed by the very content that is most prominent or commercially significant.</p>

<p>Previous work on <a href="https://tiktok-audit.com/blog/2023/the-TikTok-research-API-falls-woefully-short/">TikTok’s data access landscape</a> identified structural issues with the API when it launched. This report documents that those problems persist and have new dimensions. We call on TikTok to provide transparency about the criteria for API exclusions and to remediate the gaps as a matter of DSA compliance.</p>]]></content><author><name>Carlos Entrena-Serrano, Martin Degeling, Salvatore Romano, Raziye Buse Çetin</name></author><category term="research" /><category term="methodology," /><category term="analysis," /><category term="API" /><summary type="html"><![CDATA[TikTok's Research API fails to provide metadata for one in eight videos, undermining independent research]]></summary></entry><entry><title type="html">🗳️ (AIF) TikTok’s Polish Elections Labels: Only Sometimes, And Only For Some</title><link href="https://tiktok-audit.com/blog/2025/TikTok-Polish-Elections/" rel="alternate" type="text/html" title="🗳️ (AIF) TikTok’s Polish Elections Labels: Only Sometimes, And Only For Some" /><published>2025-06-09T10:00:00+00:00</published><updated>2025-06-09T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/TikTok-Polish-Elections</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/TikTok-Polish-Elections/"><![CDATA[<p><em>This report was published by <a href="https://aiforensics.org/work/tiktok-polish-elections">AI Forensics</a>.</em></p>

<p>During Poland’s 2025 Presidential Election, TikTok was required under the Digital Services Act (DSA) and European Commission guidelines to label election-related content with information notices. Our investigation found that TikTok applied these labels sporadically and inconsistently, raising serious concerns about the platform’s compliance with transparency obligations.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>Diaspora exclusion.</strong> Election information labels were geographically restricted, meaning the worldwide Polish diaspora of more than 20 million people — a significant portion of the Polish-speaking audience — was excluded from seeing the labels. This undermines the very purpose of election integrity measures.</p>

<p><strong>Fraud allegations spread unlabeled.</strong> We identified 23 videos spreading claims of a rigged election that accumulated over 4.5 million views. Approximately 80% of these videos lacked the required election labels, allowing disinformation to circulate without any contextual warning.</p>

<p><strong>AI-generated content went undisclosed.</strong> Four videos containing AI-generated imagery failed to display mandatory AI-disclosure labels, in direct violation of TikTok’s own policies and DSA requirements.</p>

<p><strong>Voter suppression content unaddressed.</strong> Content discouraging voter participation was present on the platform without moderation action.</p>

<h2 id="full-report">Full Report</h2>

<p>The full report, including methodology and data, is available at <a href="https://aiforensics.org/work/tiktok-polish-elections">aiforensics.org</a>.</p>]]></content><author><name>AI Forensics</name></author><category term="research" /><category term="analysis," /><category term="elections," /><category term="labels" /><summary type="html"><![CDATA[TikTok's systematic failures in applying election information labels during Poland's 2025 Presidential Election]]></summary></entry><entry><title type="html">🤡 (ISD) Crushing Comments: Gendered Harassment During the 2024 EU Parliament Elections on TikTok</title><link href="https://tiktok-audit.com/blog/2025/Crushing-Comments/" rel="alternate" type="text/html" title="🤡 (ISD) Crushing Comments: Gendered Harassment During the 2024 EU Parliament Elections on TikTok" /><published>2025-03-24T10:00:00+00:00</published><updated>2025-03-24T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/Crushing-Comments</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/Crushing-Comments/"><![CDATA[<p><em>This report was published by <a href="https://www.isdglobal.org/publication/crushing-comments-gendered-harassment-during-the-2024-eu-parliament-elections-on-tiktok/">ISD Global</a> (March 24, 2025). The research was funded by the German Federal Foreign Office.</em></p>

<p>During the 2024 European Parliament elections, TikTok became a significant campaign platform for candidates across Europe. This study analyzed comments on videos posted by French, German, and Hungarian parliamentary candidates to assess whether female candidates faced disproportionate online harassment — and what forms it took. For this analysis 326,826 comments were collected from 1,448 videos published by 102 candidates. Two annotators per language (French, German and Hungarian) analysed a randomised sample of 3,000 comments each. This resulted in a dataset of 9,000 comments across 873 videos by 74 candidates.</p>
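The sampling step described above — a fixed-size randomized sample per language drawn from the full comment pool — can be sketched as follows. This is a minimal illustration, not the study's actual code; `stratified_sample` and its parameters are hypothetical:

```python
import random

def stratified_sample(comments, per_language=3000, seed=42):
    """Draw a fixed-size random sample of comments for each language.

    `comments` is an iterable of (language, text) pairs; a language
    with fewer than `per_language` comments is taken in full. A fixed
    seed keeps the draw reproducible across annotation rounds.
    """
    rng = random.Random(seed)
    by_lang = {}
    for lang, text in comments:
        by_lang.setdefault(lang, []).append(text)
    return {lang: rng.sample(pool, min(per_language, len(pool)))
            for lang, pool in by_lang.items()}
```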

<h2 id="key-findings">Key Findings</h2>

<p><strong>Female candidates faced substantially higher harassment.</strong> Comments on videos by women candidates contained significantly more hateful, defamatory, derogatory, and discriminatory content compared to those targeting male candidates.</p>

<p><strong>Harassment targeted appearance, age, and competence.</strong> Common tactics included objectifying remarks about candidates’ physical appearance, dismissals based on age, and gendered double standards applied to assessments of political competence.</p>

<p><strong>Deliberate misgendering as a harassment tool.</strong> A notable pattern emerged around the deliberate misgendering of women perceived as gender nonconforming, deployed as a targeted form of online abuse.</p>

<p><strong>Democratic participation at stake.</strong> The study concludes that this disproportionate harassment creates real barriers to women’s political participation — contributing to self-censorship and limiting their engagement in democratic discourse at a critical moment.</p>

<h2 id="full-report">Full Report</h2>

<p>The full report is available at <a href="https://www.isdglobal.org/publication/crushing-comments-gendered-harassment-during-the-2024-eu-parliament-elections-on-tiktok/">isdglobal.org</a>.</p>]]></content><author><name>Paula-Charlotte Y. Matlach, Allison Castillo Small, Charlotte Drath, Martin Degeling</name></author><category term="research" /><category term="analysis," /><category term="elections," /><category term="harassment," /><category term="gendered," /><category term="EU," /><category term="comments" /><summary type="html"><![CDATA[Analysis of TikTok comments on videos from French, German, and Hungarian parliamentary candidates during the 2024 EU elections, finding female candidates faced substantially higher levels of harassment]]></summary></entry><entry><title type="html">🪧 (ISD) Wahlkampf im Feed: How TikTok Handles Party-Political Content Before the 2025 German Federal Election</title><link href="https://tiktok-audit.com/blog/2025/Wahlkampf-im-Feed/" rel="alternate" type="text/html" title="🪧 (ISD) Wahlkampf im Feed: How TikTok Handles Party-Political Content Before the 2025 German Federal Election" /><published>2025-02-22T10:00:00+00:00</published><updated>2025-02-22T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/Wahlkampf-im-Feed</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/Wahlkampf-im-Feed/"><![CDATA[<p><em>This report was published by <a href="https://isdgermany.org/wahlkampf-im-feed-wie-tiktok-mit-parteipolitischen-inhalten-im-vorfeld-der-bundestagswahl-2025-umgeht/">ISD Germany</a> (February 22, 2025).</em></p>

<p>Ahead of Germany’s federal election on February 23, 2025, ISD Germany investigated how TikTok handled party-political content in its recommendation feeds. The study focuses on two core questions: how consistently TikTok applied election-information labels to political content, and whether algorithmic amplification patterns favored any particular party.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>Widespread labeling failures.</strong> TikTok only classified 59% of content from official party and candidate accounts as political — meaning more than 40% of clearly partisan content received no election-information label. Fan page content fared even worse, with only 47% labeled. Overall, 45% of videos from official party accounts or fan pages carried no notice linking to election-related information.</p>

<p><strong>AfD disproportionately represented in early political content.</strong> Across test accounts, AfD content accounted for 49% of the first five political videos shown — a striking overrepresentation relative to the party’s actual support. This pattern held across accounts with differing political orientations.</p>

<p><strong>Political content remains a small share of the overall feed.</strong> No test account saw more than 30% of its overall feed classified as political. TikTok remains predominantly an entertainment platform, but the political content that does appear is not distributed neutrally.</p>

<h2 id="full-report">Full Report</h2>

<p>The full report (in German) is available at <a href="https://isdgermany.org/wahlkampf-im-feed-wie-tiktok-mit-parteipolitischen-inhalten-im-vorfeld-der-bundestagswahl-2025-umgeht/">isdgermany.org</a>.</p>]]></content><author><name>Anna Katzy-Reinshagen, Martin Degeling, Solveig Barth, Mauritius Dorn</name></author><category term="research" /><category term="analysis," /><category term="elections," /><category term="political-content," /><category term="Germany," /><category term="Bundestagswahl," /><category term="labels" /><summary type="html"><![CDATA[Analysis of TikTok's algorithmic distribution and labeling of party-political content ahead of Germany's February 2025 federal election, finding widespread labeling failures and disproportionate AfD amplification]]></summary></entry><entry><title type="html">⛓️‍💥 TikTok’s Ad Library: Persistent Issues and New Challenges*</title><link href="https://tiktok-audit.com/blog/2025/TikTok-Ad-Lib-Perstisting-Issues/" rel="alternate" type="text/html" title="⛓️‍💥 TikTok’s Ad Library: Persistent Issues and New Challenges*" /><published>2025-02-07T10:00:00+00:00</published><updated>2025-02-07T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2025/TikTok-Ad-Lib-Perstisting-Issues</id><content type="html" xml:base="https://tiktok-audit.com/blog/2025/TikTok-Ad-Lib-Perstisting-Issues/"><![CDATA[<ul>
  <li>Organizational Note: The project at SNV/interface that formed the basis of this blog ended in 2024. This blog is now maintained by Martin Degeling and is no longer affiliated with SNV/interface.*</li>
</ul>

<p>In the course of my work with <a href="https://aiforensics.org/">AI Forensics</a>, I am still analysing advertisements on TikTok, primarily using the ad library mandated by the Digital Services Act (DSA), which TikTok operates at <a href="https://library.tiktok.com/">https://library.tiktok.com/</a>.</p>

<p>Last year, we had already outlined several critical points regarding the <a href="https://tiktok-audit.com/blog/category/ad-library/">ad library</a>, which mostly remain unchanged:</p>

<ul>
  <li>The ad library does not <a href="https://tiktok-audit.com/blog/2024/Tik-Tok-oclock/">provide any contextual information about the content of the advertisements</a>, such as video descriptions, links, or audio transcriptions.</li>
  <li>Additionally, for ads that contain photos instead of videos, not even the photo is displayed.</li>
</ul>

<p>Not much has changed: the ad library has not seen any noticeable update since its launch in summer 2024. But over the last weeks I have noticed additional problems that hinder the effective use of the ad library for research and analysis purposes:</p>

<h3 id="interface-issues">Interface Issues</h3>

<p>The ad library’s interface presents significant challenges for research and investigation. Consider the following scenario: I wanted to determine if any ads related to Alice Weidel had been posted since January 1, 2025, which would violate TikTok’s policies.</p>

<p><strong>Problem 1: Time Selection</strong></p>

<p>The time selection field is always set to October 2022, requiring 27 clicks to select the January 2025 period.</p>

<p><strong>Problem 2: Search Modes</strong></p>

<p>The search function has two modes:</p>

<ul>
  <li>
    <p><strong>Exact Search</strong>: Using the search term “alice weidel” in quotation marks yields 0 results:
<a href="https://library.tiktok.com/ads?region=DE&amp;start_time=1735686000000&amp;end_time=1738710000000&amp;adv_name=%22alice%20weidel%22&amp;adv_biz_ids=&amp;query_type=1&amp;sort_type=last_shown_date,desc">https://library.tiktok.com/ads?region=DE&amp;start_time=1735686000000&amp;end_time=1738710000000&amp;adv_name=%22alice%20weidel%22&amp;adv_biz_ids=&amp;query_type=1&amp;sort_type=last_shown_date,desc</a></p>
  </li>
  <li>
    <p><strong>Inexact Search</strong>: Without quotation marks, the search yields 747 results:
<a href="https://library.tiktok.com/ads?region=DE&amp;start_time=1735686000000&amp;end_time=1738710000000&amp;adv_name=alice%20weidel&amp;adv_biz_ids=&amp;query_type=1&amp;sort_type=last_shown_date,desc">https://library.tiktok.com/ads?region=DE&amp;start_time=1735686000000&amp;end_time=1738710000000&amp;adv_name=alice%20weidel&amp;adv_biz_ids=&amp;query_type=1&amp;sort_type=last_shown_date,desc</a></p>
  </li>
</ul>
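For repeated queries it is easier to construct these URLs programmatically than to fight the interface. The sketch below builds a search URL from a date range and a query term, assuming only what is visible in the URLs above (`start_time`/`end_time` as Unix timestamps in milliseconds, quotation marks around `adv_name` to trigger exact matching); the semantics of the remaining parameters are undocumented:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode, quote

AD_LIBRARY = "https://library.tiktok.com/ads"

def ad_library_url(query: str, start: datetime, end: datetime,
                   region: str = "DE", exact: bool = False) -> str:
    """Build a TikTok Ad Library search URL for a time window.

    The library encodes the window as Unix timestamps in milliseconds;
    wrapping the query in quotation marks switches to exact-match mode.
    """
    to_ms = lambda dt: int(dt.timestamp() * 1000)
    params = {
        "region": region,
        "start_time": to_ms(start),
        "end_time": to_ms(end),
        "adv_name": f'"{query}"' if exact else query,
        "adv_biz_ids": "",
        "query_type": "1",
        "sort_type": "last_shown_date,desc",
    }
    # quote_via=quote reproduces the %20/%22 escaping seen in the
    # library's own URLs (urlencode would otherwise emit '+').
    return AD_LIBRARY + "?" + urlencode(params, quote_via=quote)

# Inexact search over January 2025 (UTC):
url = ad_library_url("alice weidel",
                     start=datetime(2025, 1, 1, tzinfo=timezone.utc),
                     end=datetime(2025, 2, 1, tzinfo=timezone.utc))
```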

<p>This requires extensive patience and scrolling through endless duplicates of videos from Plarium Global LTD to find relevant information. For example, finding an ad from the user “alice_weidel_deutschland” requires considerable effort: <a href="https://library.tiktok.com/ads/detail/?ad_id=1820418463412306">https://library.tiktok.com/ads/detail/?ad_id=1820418463412306</a>.</p>

<h3 id="missing-data">Missing Data</h3>

<p>Besides the aforementioned issues with missing description data and slideshows, I have observed additional problems:</p>

<ul>
  <li>Some ads do not have an “Advertiser” listed, making it impossible to view other videos from the same advertiser using the “See all ads” feature (e.g., <a href="https://library.tiktok.com/ads/detail/?ad_id=1822749626174498">https://library.tiktok.com/ads/detail/?ad_id=1822749626174498</a>).</li>
  <li>Other videos have an “Advertiser” listed, but clicking on “See all ads” still results in an empty list (e.g., <a href="https://library.tiktok.com/ads/detail/?ad_id=1820795546159121">https://library.tiktok.com/ads/detail/?ad_id=1820795546159121</a>).</li>
  <li>Loading Issues: An ad that displayed a video last week now shows nothing (e.g., <a href="https://library.tiktok.com/ads/detail/?ad_id=1820418463412306">https://library.tiktok.com/ads/detail/?ad_id=1820418463412306</a>).</li>
</ul>

<p>These issues highlight the ongoing challenges with TikTok’s ad library, which impede effective research and analysis. Addressing these problems is crucial for ensuring transparency and compliance with regulatory requirements.</p>]]></content><author><name>Martin Degeling</name></author><category term="ad-library" /><category term="ads," /><category term="problems" /><summary type="html"><![CDATA[The problems I have when trying to do research on the ad library]]></summary></entry><entry><title type="html">📣 (ISD) Systemic Risk for Elections? TikTok’s Algorithmic Amplification of Political Content</title><link href="https://tiktok-audit.com/blog/2024/Systemisches-Risiko-Wahlen/" rel="alternate" type="text/html" title="📣 (ISD) Systemic Risk for Elections? TikTok’s Algorithmic Amplification of Political Content" /><published>2024-11-19T10:00:00+00:00</published><updated>2024-11-19T10:00:00+00:00</updated><id>https://tiktok-audit.com/blog/2024/Systemisches-Risiko-Wahlen</id><content type="html" xml:base="https://tiktok-audit.com/blog/2024/Systemisches-Risiko-Wahlen/"><![CDATA[<p><em>This report was published by <a href="https://isdgermany.org/systemisches-risiko-fuer-wahlen/">ISD Germany</a> (November 19, 2024).</em></p>

<p>Ahead of Brandenburg’s state election on September 22, 2024, ISD Germany examined how TikTok’s recommendation algorithm distributes party-political content — and whether it creates asymmetric exposure patterns depending on the political orientation of a user’s account. The research is part of the AHEAD.tech project, funded by the Mercator Foundation, and develops methodology for assessing systemic election-related risks under the Digital Services Act framework.</p>

<h2 id="key-findings">Key Findings</h2>

<p><strong>AfD content dominated across accounts.</strong> AfD-related videos appeared most frequently — 75 videos across 9 of 10 test accounts — far outpacing SPD content (35 videos) and other parties. This asymmetry held even when accounts were not oriented toward right-leaning content.</p>

<p><strong>Political orientation shapes content diversity.</strong> Right-leaning accounts were served a diet of predominantly AfD content, while center and left-leaning accounts encountered a broader mix of party representations. An account’s political orientation thus shapes the diversity of the party-political content it is shown.</p>

<p><strong>Most TikTok content remains non-political.</strong> Only 9.6% of total content consumed across test accounts qualified as explicitly political. TikTok remains primarily an entertainment platform, but the political slice is not neutral.</p>

<p><strong>AfD content was varied and emotionally charged.</strong> Compared to other parties, AfD content incorporated more diverse formats, thematic breadth, and emotionally engaging material — factors that likely contribute to its algorithmic spread.</p>

<h2 id="full-report">Full Report</h2>

<p>The full report (in German) is available at <a href="https://isdgermany.org/systemisches-risiko-fuer-wahlen/">isdgermany.org</a>.</p>]]></content><author><name>Anna Katzy-Reinshagen, Solveig Barth, Martin Degeling</name></author><category term="research" /><category term="analysis," /><category term="elections," /><category term="political-content," /><category term="Germany," /><category term="algorithmic-amplification" /><summary type="html"><![CDATA[Study of asymmetric amplification of political content on TikTok surrounding Brandenburg's 2024 state election, examining how algorithmic recommendations differ by account political orientation]]></summary></entry></feed>