Corporations do not like it though, because they want to control the spread of information. They would prefer you re-enter the page, be targeted with more ads, or be spoonfed by their on-site news algorithm.
However, RSS only works if someone knows about your page. YouTube is quite good with RSS though: every channel has an RSS feed, so you can be notified about new videos while channels remain discoverable.
Also, a shameless plug for my own database of RSS feeds:
https://github.com/rumca-js/awesome-database-feeds
It works pretty well. But importantly, it's so cheap that I have never really seen it on my bill. An earlier prototype used OpenAI embeddings. I loaded $5 of API credits, and after a year the credits expired.
https://aws.amazon.com/blogs/machine-learning/use-language-e...
https://github.com/aws-samples/rss-aggregator-using-cohere-e...
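To make the idea concrete, here is a rough sketch of what such an embeddings-based feed filter could look like. This is not the prototype described above; the feed URL, interest phrases, model name and similarity threshold are all made up, and it assumes the feedparser and openai Python packages.

```python
# Score incoming feed items against a few "interest" phrases using
# embeddings, then keep only the closest matches.
# Everything here (URLs, phrases, model, threshold) is illustrative.
import feedparser
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

interests = ["self-hosted RSS readers", "web feed discovery", "small personal blogs"]
interest_vecs = embed(interests)

feed = feedparser.parse("https://example.com/feed.xml")
titles = [entry.title for entry in feed.entries]
if titles:
    entry_vecs = embed(titles)
    # cosine similarity between every entry title and every interest phrase
    sims = (entry_vecs @ interest_vecs.T) / (
        np.linalg.norm(entry_vecs, axis=1, keepdims=True)
        * np.linalg.norm(interest_vecs, axis=1)
    )
    for title, score in zip(titles, sims.max(axis=1)):
        if score > 0.4:  # arbitrary cut-off
            print(f"{score:.2f}  {title}")
```

Embedding short titles like this costs fractions of a cent, which is why it barely registers on a bill.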
I also wonder how much of it is dead traffic. Dead as in people who add a ton of stuff to their RSS readers but don't actually read it.
RSS is a bit of a black box when it comes to this but maybe that's a good thing.
More importantly, when you see traffic from me, your post is definitely being read on your website itself (or at least opened to read). There's no background stuff going on in my reader, pulling in feeds that never get read.
My way of reading feeds is more Tinder-style: the latest post from one random feed at a time, then another, and so on. I feel icky even thinking about a firehose of content coming my way.
Also, RSS readers are generally automated. I know I've had them around for years pulling in articles that I never read. Just as a podcast "listen" is often just an automated download, RSS traffic does not necessarily mean anyone actually read the article, whereas search traffic is generally high intent and at least results in eyeballs on the site, if not actual readers.
+ With a strong enough social network you probably don't have to care about SEO as much
You can title your post about bad customer service practices in a unique way without a second thought [0] and your more traditionally titled posts can still make the first page of a Google search with a reasonable query [1].
+ Depending on your niche, your target audience is likely already tapped in well enough not to have to rely on search engines for content catering to their interests.
I feel like search engine practices trend along the curve shown in that meme with the "fool" on one end, the "normie" in the middle, and the "Jedi" on the other end who does the same thing as the fool. Except in this case the "Jedis" only search for what's not already present in their feeds (which don't have to be only RSS feeds), and the fools can eventually cultivate their own feeds for their interests and reserve search engines for mundane purposes: essentially a pop-culture almanac, phone book, and portal to Wikipedia.
[0]: https://shkspr.mobi/blog/2026/03/bored-of-eating-your-own-do...
[1]: https://shkspr.mobi/blog/2026/04/does-mythos-mean-you-need-t... — I Googled "mythos and open source". Interestingly, a forum discussion about this post came before it: https://itsfoss.community/t/does-mythos-mean-you-need-to-shu...
Meanwhile, the long-time users who subscribed via RSS are still showing up like they always have. If this is the case, it’s a bit of a sad reality for content creators.
Also, the embedded Mastodon feed on my site uses RSS.
I think many platforms did support RSS, or an API, but at some point it was dropped. It is not that hard to provide RSS; it is not like it has to be implemented from scratch. It was simply dropped by the platforms, and there must be reasons for dropping it. One may argue it is not worth it, but even dropping functionality requires a decision from management, and management always thinks about money. Is RSS or the API monetized properly? It's free? No revenue? Then drop it, because data from algorithms serving users content can easily be monetized. Just follow the money, the oldest truth.
This is what led to algorithm-based filtering. Hacker News uses a simplistic algorithm, but it is definitely using one, and it works well enough. It's why I come here. We all collectively vote things up, and what remains is nominally interesting enough to skim from the front page. With a bit of editorializing.
Social networks tried to game the algorithms for ad revenue, which is why they are a lot less popular these days. Sites like Medium, Substack, Tumblr, etc. took over from simple blogs and immediately started raising walled gardens around them to become discovery platforms, add recommendations, and so on.
But at least they support RSS. A lot of websites still do. If you run any kind of website publishing regular news or article content and you don't provide a feed, you are being an idiot. It's easy, doesn't really cost anything, and you might actually get people using your feed once in a while. Your site might already have one without you realizing it. Most newspapers have feeds. They are everywhere. The main issue isn't finding them but sifting through them. It always was.
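To show how low the bar is, here is a minimal sketch of generating a static RSS 2.0 file from a list of posts, using only the Python standard library. The blog name, URLs and post data are placeholders, and most static site generators and CMSes already do this for you.

```python
# Build a minimal, valid RSS 2.0 feed from a list of posts (stdlib only).
# Site name, URLs and posts are placeholders.
from datetime import datetime, timezone
from email.utils import format_datetime
from xml.sax.saxutils import escape

posts = [
    {"title": "Hello world", "url": "https://example.com/hello",
     "date": datetime(2025, 1, 2, tzinfo=timezone.utc)},
]

items = "\n".join(
    "    <item>\n"
    f"      <title>{escape(p['title'])}</title>\n"
    f"      <link>{escape(p['url'])}</link>\n"
    f"      <guid>{escape(p['url'])}</guid>\n"
    f"      <pubDate>{format_datetime(p['date'])}</pubDate>\n"
    "    </item>"
    for p in posts
)

feed = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<rss version="2.0">\n'
    "  <channel>\n"
    "    <title>Example Blog</title>\n"
    "    <link>https://example.com/</link>\n"
    "    <description>Posts from an example blog</description>\n"
    f"{items}\n"
    "  </channel>\n"
    "</rss>\n"
)

with open("feed.xml", "w", encoding="utf-8") as f:
    f.write(feed)
```

Serve the resulting file statically and the "easy, costs nothing" part holds.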
With agent-based approaches, you control the algorithm. That wasn't possible in the past. LLMs can summarize, aggregate, categorize, group, filter, etc.
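As a rough sketch of what "controlling the algorithm" with an LLM might look like in practice (the prompt, model name and categories below are invented, and it assumes the feedparser and openai Python packages):

```python
# Let an LLM tag and filter feed entries according to your own rules.
# Purely illustrative: prompt, model and categories are invented.
import json
import feedparser
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

feed = feedparser.parse("https://example.com/feed.xml")
entries = [{"title": e.title, "summary": e.get("summary", "")} for e in feed.entries]

prompt = (
    'Return a JSON object {"entries": [...]} where each element has the fields '
    '"title", "category" (one of: programming, politics, other) and '
    '"keep" (true only if the entry is about programming).\n\n'
    + json.dumps(entries, ensure_ascii=False)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": prompt}],
)

for item in json.loads(resp.choices[0].message.content).get("entries", []):
    if item.get("keep"):
        print(f"[{item.get('category')}] {item.get('title')}")
```

Swap the prompt and categories and you have changed the algorithm, which is the point.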
Same pattern as, say, a subreddit's subscriber count compared to the number of nicks currently in an associated IRC channel, though IRC lurkers soften that distinction.
The data doesn't purport to cover any more than the one website. It's not like there are any generalisations about other websites derived from the data. It's just "this is where my hits come from".
Every major RSS reader supports folders. Your problem is that you engage with RSS as if it were a social media feed, with its single monolithic reverse-chronological feed.
Just don't do that. Stick all the high-volume news feeds in a folder; then you can skim the headlines and hit "mark all as read" once you're done, or whenever you don't want to look at the news anymore.
Stick the low-volume things you care about in their own folder, and those will remain unread, in their own ordering, for you to read at your leisure.
Even for sites that don't offer granular feeds, every major feed reader offers filtering options, and many of them offer fairly complex regex filtering.
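For reference, the title/summary filtering readers offer boils down to something like this (the feed URL and patterns are made up; it assumes the feedparser package):

```python
# The kind of regex filtering a feed reader applies client-side.
# Feed URL and patterns are made up.
import re
import feedparser

# drop anything matching these patterns, keep the rest
block_patterns = [re.compile(p, re.I) for p in [r"\bsponsored\b", r"week(ly)? roundup"]]

feed = feedparser.parse("https://example.com/feed.xml")
for entry in feed.entries:
    text = f"{entry.title} {entry.get('summary', '')}"
    if not any(p.search(text) for p in block_patterns):
        print(entry.title)
```

Allow-lists work the same way, just inverted.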
> This is what led to algorithm-based filtering.
Feed aggregators (and most social media) exist because of discoverability, finding new stuff from new people you hadn't heard about before.
> With agent-based approaches, you control the algorithm. That wasn't possible in the past. LLMs can summarize, aggregate, categorize, group, filter, etc.
You'd be spending tens of dollars of compute on something that every major RSS client was already doing back in 2006 with the equivalent of less than a penny's worth of present-day compute.
I think the open web needs to come back, but in a fair way for everyone, giving readers control over their feeds while also sending traffic and comments back to the original sources. Not quite sure how to do that yet.
They try to address that:
> I added RSS and Newsletter tracking. These data are very lossy. If someone is subscribed to my RSS feed and opens a post and their client downloads a lazy-loaded image at the end of the post, I get a hit.
Though some seem to go out of their way to make RSS impossible.
I'm dumbfounded by the number of times I see comments of the form "if the author is reading this ..." on a third-party comment site, under a link posted by somebody else, on a forum the author is likely never going to read, followed by an actually useful comment that the commenter could have _ensured_ the author reads by, you know, just contacting them.
Forum comments are just a recipe for instant spam, and have been for the last 10+ years. If you want to make them useful, they currently need to be actively policed (not to mention that in several countries you can now be held responsible for the content others post). As an author, all that hassle is only worth it if you're trying to build an audience around your blog.
Yeah yeah, I know, data-point of 1.
I recently read Susam's blog post where they said that "most of the traffic to my personal website still comes from web feeds" - I wondered if that was true for my site.
I've been writing this blog for a while. I've never much bothered with "aggressive" SEO - I have a fairly semantic layout, all my reviews have metadata, and stuff like that - but I'm not cramming in keywords, using AMP, or whatever other chickens Google requires to be sacrificed for a higher ranking. Nevertheless, I do OK.
Last year, I added a bit of local-only, lightweight statistics-gathering to my blog. I can see which sites people click on to reach mine. Google is right up the top, DuckDuckGo is surprisingly high, Bing is lucky to crack the top 20 on any day. Similarly, I can see how much traffic I get from the Fediverse and BlueSky (Twitter has all but vanished).
A few weeks ago I added RSS and Newsletter tracking. These data are very lossy. If someone is subscribed to my RSS feed and opens a post and their client downloads a lazy-loaded image at the end of the post, I get a hit. For email it's broadly the same. If an email is opened and the tracker image is loaded, I get a hit (although Gmail does obfuscate that somewhat).
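For anyone curious about the mechanism, the general idea is roughly the following. This is a hypothetical sketch, not the author's actual setup: the route, query parameters and log format are made up, and it assumes the Flask package.

```python
# A tiny endpoint that serves a 1x1 GIF and logs which post and source
# requested it. Hypothetical sketch; route, parameters and log format
# are invented, not the author's real setup.
from flask import Flask, Response, request

app = Flask(__name__)

# smallest valid transparent GIF
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/hit.gif")
def hit():
    post = request.args.get("post", "unknown")
    source = request.args.get("src", "unknown")  # e.g. "rss" or "email"
    with open("hits.log", "a") as log:
        log.write(f"{post}\t{source}\n")
    return Response(PIXEL, mimetype="image/gif",
                    headers={"Cache-Control": "no-store"})
```

The post or email then embeds something like `<img loading="lazy" src="https://example.com/hit.gif?post=some-slug&src=rss" alt="">` near the end, so a hit is only logged when a client actually fetches that far, which is exactly why the numbers are lossy.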
I'm not looking for super-accurate numbers (although I do block as many AI crawlers and bots as possible). I'm not creepily following people around the web nor am I trying to sell them anything. I just want a rough idea of where people find me.
Here are my blog's views for the last 28 days.

Some months I get a surge of hits from link aggregators like HN or Reddit. Sometimes I'm linked to from a popular site or cited in academic work. But most of the time I bumble along getting hits from here, there, and everywhere. Nevertheless, it's lovely to see so many people choosing to subscribe (for free!) and astonishing that they provide more traffic than a major search engine.
Obviously, these are two very different types of traffic. People who are searching for a specific thing and stumble upon my blog are different from those who decide to like and subscribe.
But, yeah, about 25% of my traffic comes from people who have chosen to subscribe.
I'm just delighted that so many people read my random thoughts.