turchyn.dev

Social Networks Could Kill the Bots. But They Didn't.

Bots have flooded social networks, but the companies that run them do nothing to destroy them.

Bots vs Users

February 22, 2026 · 10 min read

Chapter 1: The Dead Internet Theory Wasn’t Supposed to Come True

Today it’s hard to find a social network that isn’t overrun with bots. They’ve filled everything: comments, posts, ads, and so on. Constantly simulating lives that don’t exist. Like a swarm of bees following their queen, they attack everything their creators point them at. Not a single bot was created for a good purpose. Some manipulate public opinion, while others harvest data from people who don’t even suspect they’re victims. Enormous resources are spent on keeping them running, but the results exceed all expectations. With bots, you can achieve outcomes that were previously simply impossible.

Want to make someone famous? Bam! Millions of bots will drag a mediocre person into thought leadership. Did something bad? No problem! Spread thousands of different versions of the event and the truth will dissolve in a sea of lies.

And on and on and on. An endless stream of lies and manipulation.

But this isn’t the only problem. Bots wouldn’t be so effective if people could intuitively filter them out. With a little scrutiny, almost anyone can identify a bot - people simply don’t pay attention. Why? Some might say it’s human laziness, and there’s a grain of truth in that, but I’m convinced the root of the problem lies in how people have consumed information for thousands of years. What was written on paper, in a book, in a newspaper, in a magazine, on television, on the radio - all of it was perceived as truthful. People aren’t used to questioning what they read or see. They simply accept it as truth.

“This message - it looks so natural, as if my friend or relative wrote it. It can’t be a planted person or an AI. A machine isn’t capable of this!” It is capable! And they do it.

The most vulnerable turned out to be people who grew up before the Internet or in its early stages. They lack the instinct to filter information - they simply accept everything as truth. The younger generation isn’t immune to this either, because a new player has emerged - AI. But that’s not what this is about. It’s about the fact that bots exist, they manipulate people and entire countries, harvest data, and dissolve real crimes in a flood of disinformation.

Bots may not be that numerous in absolute terms, but they generate orders of magnitude more content than humans. And this content has an enormous impact on the people who consume it. It shapes their opinions, their views, their beliefs.

Every day, when you visit social networks, you literally plunge into an ocean of fake information. And the companies that run these social networks know this. They create the appearance of fighting them, but it’s only an appearance, because they already have all the technical capabilities to kill them all. Just think about it: they can detect copyright violations in seconds, recognize faces, identify explicit content, advertise specific products relevant only to you - but they can’t detect bots? Bots that even a person with zero experience can identify? It’s a show for the public, and the finale is already known.

Chapter 2: The Machine Knows It’s You

Every time you visit any web page, a lot is already known about you. Without registration or login, you’re already unique in your own way. It’s not just your IP address, but much more. Your browser, its version, operating system, screen resolution, installed fonts, language, time zone, geolocation, and so on. A website developer can assign you a unique identifier based on this data and record every action you performed on their site. Next time, you’re no longer anonymous to them - you’re a specific person interested in a particular type of information who visits their site at specific hours. And this is without registration or login.
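The fingerprinting described above can be sketched in a few lines. Here's a toy illustration (the attribute names are my own invention, not any real tracker's schema): hash the passively collected attributes into a stable identifier that survives across visits, no login required.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine passively collected browser attributes into a stable ID.
    Attribute names here are illustrative, not a real tracker's schema."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080",
    "timezone": "Europe/Kyiv",
    "language": "en-US",
    "fonts": "Arial,Calibri,Segoe UI",
}
print(fingerprint(visitor))  # same visitor -> same ID on the next visit
```

The same attributes produce the same hash every time, so the site recognizes you on your next visit; change even one attribute and the ID changes - which is exactly why real fingerprinting combines many signals at once.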

If you’re registered, that’s even more information about you. Your name, email, phone number, date of birth, gender, interests, and so on. You might say that all of this could be fake! And that’s true, but even if you create a fake account, it will still be unique and trackable. It doesn’t matter whether the account is fake or not. Its behavioral patterns will reveal more about you than you think.

Now imagine what social networks are capable of. They know so much about you, and most importantly - they know how a real person behaves. A real person, when scrolling through a feed, lingers on certain posts, gets distracted, types at a certain speed, makes mistakes, and so on. Our imperfection makes us human, and therefore visible to algorithms. Bots can’t imitate human behavior - they can only copy it. But add just 1-2 new parameters, and they’ll reveal themselves like snowdrops in spring.
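The idea that our imperfection gives us away can be made concrete. Below is a deliberately naive sketch - the features, weights, and thresholds are all invented for illustration, not a real detector: score an account by how unevenly it lingers on posts and how often it makes typos.

```python
from statistics import mean, stdev

def humanness_score(dwell_times: list[float], typo_rate: float) -> float:
    """Toy score in [0, 1]: humans linger unevenly and make typos.
    The 0.05 typo-rate cap and the 0.7/0.3 weights are arbitrary."""
    # Coefficient of variation: bots scroll with machine-like regularity.
    variability = stdev(dwell_times) / (mean(dwell_times) or 1.0)
    return min(variability, 1.0) * 0.7 + min(typo_rate / 0.05, 1.0) * 0.3

# A human gets distracted (dwell times all over the place) and mistypes;
# a bot paces through the feed like a metronome.
human = humanness_score([0.4, 7.2, 1.1, 15.0, 2.3], typo_rate=0.04)
bot = humanness_score([1.0, 1.0, 1.1, 1.0, 1.0], typo_rate=0.0)
print(f"human: {human:.2f}, bot: {bot:.2f}")
```

Two features are trivial to fake once known - but a platform watches hundreds of them, and that is the "add 1-2 new parameters" point: each new signal is another dimension the bot farm must imitate convincingly.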

Each of us has a unique routine on social networks, but bots have one too. They activate at certain times and circle relentlessly around specific topics. Even if you’re the most dedicated anarchist, you still won’t push the same ideas from morning to night. It’s simply how our minds work - we get tired, we get bored, we get distracted by other things. Bots don’t get bored. They’re not human, and social network owners know this.
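That relentless circling around one topic can even be quantified. One standard measure is Shannon entropy over an account's topic mix - a minimal sketch (the topic labels are hypothetical; a real system would classify post text first):

```python
from collections import Counter
from math import log2

def topic_entropy(post_topics: list[str]) -> float:
    """Shannon entropy of an account's topic mix, in bits.
    Higher = more varied interests; 0 = a single obsession."""
    counts = Counter(post_topics)
    total = len(post_topics)
    return sum(c / total * log2(total / c) for c in counts.values())

human_topics = ["cats", "politics", "cooking", "work", "politics", "music"]
bot_topics = ["politics"] * 6

print(round(topic_entropy(human_topics), 2))  # 2.25
print(topic_entropy(bot_topics))              # 0.0
```

A human feed mixes hobbies, work, and news; an account that posts about one topic from morning to night sits at zero entropy, exactly the flatline the author describes.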

Chapter 3: The Week When Bots Could Have Died

OK, you might say. So they know about bots, but maybe they don’t have the tools to block them. Maybe they’re afraid, because if they start blocking bots, millions of innocent people will get blocked too. That’s not the case either. Nobody is saying to block accounts immediately - you can start with softer measures: requiring phone number and email verification, limiting the number of actions a new account can take, capping its posts and comments, and so on.
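Those graduated restrictions are straightforward to express in code. A minimal sketch, with entirely made-up thresholds - the point is the escalating-trust logic, not the specific numbers:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    created_at: datetime
    phone_verified: bool
    posts_today: int

def daily_post_limit(acct: Account, now: datetime) -> int:
    """Illustrative policy: trust grows with verification and account age.
    The thresholds (3/10/50/200) are invented, not any platform's rules."""
    age = now - acct.created_at
    if not acct.phone_verified:
        return 3
    if age < timedelta(days=7):
        return 10
    if age < timedelta(days=30):
        return 50
    return 200

def may_post(acct: Account, now: datetime) -> bool:
    return acct.posts_today < daily_post_limit(acct, now)
```

Nobody gets banned outright: a fresh unverified account just can't flood the feed, while a months-old verified account barely notices the limit - which is why "millions of innocent people would get blocked" doesn't hold up as an excuse.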

It’s logical to object that this could harm real people, and that bot operators will just buy thousands of phone numbers and register thousands of email accounts. The platforms did introduce such checks, declared they’d done everything they could, and pointed out that the cost of operating bot farms had increased - but that’s not the whole story. The checks simply created a new niche for services that specialize in bypassing them. Bot operators can buy thousands of phone numbers and email accounts, but they can’t create thousands of accounts with unique behavioral patterns. They can’t create thousands of accounts that behave like real people.

This is where the strength of social networks should come into play - they can detect bots by their behavior, because they have all the data they need, and most importantly - the expertise, technology, and resources. Let me remind you once more - they created LLMs that can generate text at a human level, but they can’t detect a bot that generates text at a human level?

And we’re only discussing basic methods that banks, SaaS companies, and other organizations already use to detect fraudsters. Even social networks themselves use them - but only to eliminate the most obvious and toxic bots, the ones that hurt their business model. If a bot merely hurts people’s mental health, that’s not a big deal - here, have a cookie.

Chapter 4: The Numbers Nobody Wanted to Kill

When modern social networks like MySpace (2003), Facebook (2004), and Twitter (2006) first appeared, people talked about them in terms of user counts. On their home pages, they displayed the number of registered users. In their announcements, they talked about nothing but reaching such-and-such number of users. That was their main metric - the one that demonstrated their success. But raw user count isn’t a measure of success when selling ads is the foundation of your business.

So they introduced the term “active users.” It became important for them to show that users spend so much time on the platform, liking, commenting, sharing content, and so on. The graphs went up, and with them, their ad revenue.

And you know who else is very active on social media? Bots. They don’t just create content - they also force real people to interact with them. Everyone wins. Bots get what they want - the attention of real people, and social networks get what they want - active users. Well, except for real people, who get what they don’t want - toxic bots and fake information.

But that’s not so important, because real people aren’t their target audience. Their target audience is advertisers who pay for access to active users. And if bots help them achieve that goal, then why kill them? They might be toxic, but they bring in money. They might be fake, but they bring in money. They might be harmful, but they bring in money. And that’s the main reason why social networks don’t kill bots - they simply don’t want to lose money.

Chapter 5: The Moderation Theater

Again, you might object and say they do something after all. They delete some bots, they block some accounts, they introduce new rules, and so on. That’s true. They even provide metrics now showing how many real active users they have and roughly how many bots. Usually they say that 10-15% of their users are bots. The numbers don’t seem large, but wait - so you know these are bots, and you can block them, and you’re not doing it?

I wouldn’t be surprised if they already know every single bot on their platforms right now but aren’t trying to block them. It’s simply moderation theater, where they proudly report to the US Senate or the European Commission about their successes in fighting bots. But they do it at just the right scale - enough so their graphs don’t drop and their ad revenue doesn’t shrink.

The companies that run social networks are commercial organizations with one main goal - to make money. They’re not charities, and they do exactly as much to fight bots as regulatory bodies require of them. Don’t kid yourselves - if regulators removed all restrictions tomorrow, they wouldn’t destroy a single bot. Maybe they’d shut down a few, but only if it affects their revenue. It’s simply an endless game of cat and mouse, where social networks pretend to fight bots, and bots continue to exist and evolve.

Chapter 6: The Choice

The bot problem exists not because it’s impossible to solve, but because it has a price - and everyone has to pay it. For users, it means stricter restrictions: limits on the number of posts, comments, likes, and reposts for new accounts, mandatory phone number and email verification, and so on. For social networks, it means losing part of their ad revenue and spending additional resources to detect ever-new iterations of bots. Advertisers may be unhappy that their ads will be shown to fewer people. Even regulatory bodies will have to pay more attention to ensuring social networks keep their promises about fighting bots.

It’s a choice that every participant in this process has to make - or not make. And as we know, a choice not made is still a choice. If we don’t make a choice, social networks will make it for us, and it will be a choice in favor of bots.

We’ve given them free rein for too long, and the price has been our mental health, our privacy, our security, and even our democracy. We have every means to pressure social networks. Some have taken the simple path - deleted their accounts and stopped using social media altogether. It’s a radical step, but it can be effective if enough people do it. Others may choose a more moderate approach - demand greater transparency from social networks regarding their algorithms and moderation policies, support legislative initiatives aimed at fighting bots, or simply be more cautious about what they consume on social media and how they interact with content.