Also, inserting hidden or misleading links is specifically a no-no for Google Search [0], which has this to say: "We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all."
So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.
[0]https://developers.google.com/search/docs/essentials/spam-po...
If you are automating it, I don't see why not. Kitboga, a YouTuber, kept scam callers in AI call-center loops, tying up their resources so they can't use them on unsuspecting victims. [0]
That's a guerrilla tactic, much like in warfare: when you steal resources from an enemy, you get stronger and they get weaker. It's pretty effective.
I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!
Every time I released an update, a new crack would appear. For the next six months I worked on improving the anti-copying code, until I stumbled across an article by a coder in the same boat as me.
He realised he was now playing a game with some other coders: he'd make the copy protection better, but the cracker would then have fun cracking it. It was a game of whack-a-mole.
I removed the copy protection, as he did, and got back to my primary role of serving good software to my customers.
I feel like trying to prevent AI bots, or any bots, from crawling a public web service is a similar game of whack-a-mole, but one where you may also end up damaging your service.
It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.
A toll-charging gateway for LLM scrapers: a modification to robots.txt to add price sheets in the comment field, like a menu.
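For illustration, such a robots.txt "menu" might look something like this (the paths, prices, and payment URL are made up, and no crawler currently honors such comments, so treat it as a sketch of the idea):

# Price sheet for LLM scrapers (illustrative only)
# /articles/  : 0.002 USDC per request
# /archive/   : 0.0005 USDC per request
# Settle up at https://example.com/toll - unpaid crawlers get a 402
User-agent: GPTBot
Disallow: /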
This was built for a hackathon by forking certbot. Cloudflare has an enterprise version of this, but this one would be self-hosted.
I think it has legs but I think I need to get pushed and goaded otherwise I tend to lose interest ...
It was for the USDC company btw so that's why there's a crypto angle - this might be a valid use case!
I'm open to crypto not all being hustles and scams
Tell me what you think?
The irony of machine-generated slop to fight machine-generated slop would be funny, if it weren't for the implications. How long before people start sharing ai-spam lists, both pro-ai and anti-ai?
Just like with email, at some point these share-lists will be adopted by the big corporates, and just like with email will make life hard for the small players.
Once a website appears on one of these lists, legitimately or otherwise, what reputational damage will it suffer in search indexes? There have already been examples of Google delisting websites or dropping them in search results.
Will there be a process to appeal these blacklists? Based on how things work with email, I doubt this will be a meaningful process. It's essentially an arms race, with the little folks getting crushed by juggernauts on all sides.
This project's selective protection of the major players reinforces that effect; from the README:
" Be sure to protect friendly bots and search engines from Miasma in your robots.txt!
User-agent: Googlebot
User-agent: Bingbot
User-agent: DuckDuckBot
User-agent: Slurp
User-agent: SomeOtherNiceBot
Disallow: /bots
Allow: / "
Why not have a Library of Babel-esque labyrinth visible to normal users on your website?
Like anti-surveillance clothing, something they have to sift through.
In the 2000s there was a company in Russia selling English courses. It spammed so much that people were really pissed off. To make a long story short, the company disappeared from public view when Golden Telecom joined the party of retaliatory "spam" calls and set computers to dial the company using Golden Telecom's modem pool.
So, yeah, you kinda can achieve something this way, but to be sure you should lease a modem pool for it.
It’s one of the best time investments I’ve ever made. They just don’t call me anymore.
I think they have two lists: the “do not call” list, and the “unprofitable to call” list. You want to be on the latter list.
Phone scammers have a very high personnel cost, hence why some resort to human trafficking.
If everyone picked up the phone and wasted a few seconds, it would be enough to make their whole enterprise worthless. But since most people who wouldn't fall for it hang up right away, they have the best ROI of any industry. They don't even pay for the call for the first few seconds.
Is this how low we've sunk - even below taking a single personal anecdote and generalizing it to everything - now we're taking zero experience and dismissing things based on vibes?
I've seen lots of LLM-slop-lovers doing the same thing. Maybe it's a pattern.
The content is for everyone. They can have it. Just don't also take it away from everybody else.
I wonder if you could've won by making the cracking boring. No new techniques, bare minimum changes to require compiling a new crack, and just enough to make it difficult to automate. I.e. turn the cracking into a job.
But in reality, there are other community-driven motivations to put out cracks.
Unfortunately social media and snowballing copyright maximalism has inflated egos to the point where more and more people think they need to control everything.
> The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped.
So we should all just do nothing and accept the inevitable?
Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?
Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?
Isn't it also, potentially, the case that the ai-scrapers are mostly looking for content based on user queries, rather than as training data?
If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?
Is this a solution in search of a problem?
Can't the LLMs just ignore or spoof their user agents anyway?
It's like if someone was trying to "trap" search crawlers back in the early 2000s.
Seems counterproductive
Depending on your goals, this may be a pro or a con. I, personally, would like to see a return of "small web" human-centric communities. If there were tools that include anti-scraping, anti-Google (and other large search crawlers) as well as a small web search index for humans to find these sites, this idea becomes a real possibility.
Many bots cycle through short DHCP leases on LTE wifi devices. One would have to accept blocking all cell phones which I have done for my personal hobby crap but most businesses will not do this. Another big swath of bots come from Amazon EC2 and GoogleCloud which I will also happily block on my hobby crap but most businesses will not.
Some bots are easier to block as they do not use real web clients and are missing some TCP/IP headers, making them trivial to spot. Some also do not spoof their user agent and are easy to block. Some will attempt to access URLs not visible to real humans, thus blocking themselves. Many bots cannot do HTTP/2, so they are also trivial to block. Pretty much anything not using headless Chrome is easy to block.
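A rough sketch of two of those checks in Nginx (the trap path and log file are assumptions, and rejecting everything below HTTP/2 will also catch some legitimate clients, so this is illustrative rather than a drop-in config):

# Inside a server block: refuse anything that did not negotiate HTTP/2,
# since many simple scrapers only speak HTTP/1.x.
if ($server_protocol != "HTTP/2.0") {
    return 403;
}

# A trap URL that is never linked anywhere visible; any client
# requesting it has identified itself as a bot.
location /definitely-not-linked/ {
    access_log /var/log/nginx/bot-trap.log;
    return 403;
}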
Seems a clever and fitting name to me. A poison pit would probably smell bad. And at the same time, the theory that this tool would actually cause “illness” (bad training data) in AI is not proven.
It's not all that productive, it's an act of desperation. If you can't stop the enemy, at least you can make their action more costly.
One positive outcome I could see is AI companies becoming more critical of their training data.
No, what you're basically describing is "I shared something but then I didn't like how it ended up being used". If you put stuff out in public for anyone to use, then find out it's used in a way you don't like, it's your right to stop sharing, but it's not "similar" to stealing beyond "I hate stealing"
I don't think that's the case. I'm not even arguing they aren't the worst people on the planet - might as well be. But all I see them doing is burning money all over the place.
Websites are an endless stream of cookies.
The analogy doesn’t hold.
I'm also going to download a car.
Depends on the trust level of your society. where the store resides.
The internet is a cesspool of vagrants, thieves, mentally unstable, people and software with no impulse control, pirates and that is just talking about corporations. It gets so much worse with individuals.
You are allowed to take one cookie. But you are allowed to view a public website multiple times if you so want.
From a practical perspective you also have to have a steady stream of features for the newer versions to be worth cracking. Otherwise why use v1.09 when v1.01 works fine? Moreover, putting less effort into improving the DRM is still playing the cat-and-mouse game, albeit with less time investment. If you're making minimal changes, the cracker also has to spend minimal time updating the crack.
I daresay rate-limiting will result in better outcomes than well-poisoning with hidden links that are against the policies of search engines.
Lots of potential for collateral damage, including your own websites' reputations and search visibility, with the well-poisoning approach.
If you want an AI bot to crawl your website while you pay for that bandwidth, then you won't use the tool.
https://www.libraryjournal.com/story/ai-bots-swarm-library-c...
I'm completely uncertain that the unsophisticated garbage I generated makes any difference, much less "poisons" the LLMs. A fellow can dream, can't he?
1. Simple, cheap, easy-to-detect bots will scrape the poison, and feed links to expensive-to-run browser-based bots that you can't detect in any other way.
2. Once you see a browser visit a bullshit link, you insta-ban it, as you can now see that it is a bot because it has been poisoned with the bullshit data.
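One way to automate step 2 is to ban any IP that touches the trap path. A minimal sketch using fail2ban, assuming an Nginx access log in the default combined format and /bots as the trap path (file names and ban time are arbitrary):

# /etc/fail2ban/filter.d/bot-trap.conf
[Definition]
# Any request for the hidden trap path marks the client as a scraper.
failregex = ^<HOST> .+ "(GET|POST) /bots

# /etc/fail2ban/jail.d/bot-trap.local
[bot-trap]
enabled  = true
port     = http,https
filter   = bot-trap
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400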
My personal preference is using iocaine for this purpose though, in order to protect the entire server as opposed to a single site.
You can use security challenges as a mechanism to identify false positives.
Sure, bots can get tons of proxies for cheap, but that doesn't mean you can't block them, similar to how SSH honeypots or the Spamhaus SBL work, albeit temporarily.
Nope. Copyright is a thing, licenses are a thing. Both are completely ignored by LLM companies, which was already proven in court, and for which they already had to pay billions in fines.
Just because something is publicly accessible, that does not mean everybody is entitled to abuse it for everything they see fit.
Everything is a Remix culture. We should promote remix culture rather than hamper it.
Everything is a Remix (Original Series) https://youtu.be/nJPERZDfyWc
Me and my 9 friends stand around the cookie-serving person blocking everyone else.
It's taking all the cookies over a period of time.
The analogy was good.
If I can poison them and their families, I will.
More centralized web ftw.
[1]: in quotes, because I dislike the term, because it’s immaterial whether or not an ugly block of concrete out in the sticks is housing LLM hardware - or good ol’ fashioned colo racks.
We need a crawler blacklist that can stream list deltas in real time to a centralized list, with local DBs able to pull the changes.
Verified domains could push suspected bot IPs, and this engine would run heuristics to see if there is a pattern across data sources and issue a temporary block with an exponential TTL.
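A minimal sketch of the exponential-TTL piece in Rust (the struct, cap, and doubling policy are all made up; a real service would also need the heuristics, persistence, and delta streaming described above):

use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Each repeat report of an IP doubles how long it stays blocked.
struct BlockList {
    entries: HashMap<IpAddr, (u32, Instant)>, // (strike count, blocked until)
    base_ttl: Duration,
}

impl BlockList {
    fn report(&mut self, ip: IpAddr) {
        let (strikes, _) = self.entries.get(&ip).copied().unwrap_or((0, Instant::now()));
        let strikes = strikes + 1;
        // TTL doubles with every strike: base, 2x base, 4x base, ... (capped)
        let ttl = self.base_ttl * 2u32.pow(strikes.saturating_sub(1).min(16));
        self.entries.insert(ip, (strikes, Instant::now() + ttl));
    }

    fn is_blocked(&self, ip: &IpAddr) -> bool {
        self.entries
            .get(ip)
            .map_or(false, |&(_, until)| Instant::now() < until)
    }
}

fn main() {
    let mut list = BlockList { entries: HashMap::new(), base_ttl: Duration::from_secs(3600) };
    let ip: IpAddr = "203.0.113.7".parse().unwrap();
    list.report(ip); // first strike: blocked for one hour
    assert!(list.is_blocked(&ip));
}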
There are many problems to solve here, but as with any OSS, it will evolve over time if there is enough interest in it.
Costs of running this system would be huge, though, and corporate sponsors may not work, but individual sponsors may be incentivized since it helps them reduce the bandwidth and compute costs related to bot traffic.
Like, what if you actually post something that gains traction, is it going to bankrupt you or something?
If your site exists to share information, then the information gets disseminated; whether via an LLM or some browser, it doesn't make a difference to me.
And search crawlers/results have been producing snippets that prevent users from clicking to the source for well over a decade.
Edit: it loaded. I don't see how the problem isn't simply solved by an off-the-shelf solution like Cloudflare. In the real world, you wouldn't open up a space/location if you couldn't handle the throughput. Why should online spaces/locations get special treatment?
If you keep getting harassed by people wearing black hoodies, would it be ethical to start taking countermeasures against all people who wear black hoodies?
My current problem is OpenAI, which scans massively, ignoring every limit, 426, 444, and whatever you throw at them, and botnets from East Asia, using one IP per scrape but thousands of IPs.
Good enough for me.
> More centralized web ftw.
This ain't got anything to do with "centralized web," this kind of epistemological vandalism can't be shunned enough.
It's not just some light bump in traffic. It's a headache that shouldn't need to be dealt with if they would respect robots.txt. Quite simple really.
This is no different than saying “robbers aren’t causing any problems, you just need to lock your doors, buy and set up sensors on every point of potential ingress, and pay a monthly cost for an alarm system. That’s on you.”
Secondly, denial-of-service implies intentionality and malice that I don't think is present from AI scrapers. They cause huge problems, but only as a negligent byproduct of other goals. I think that the tragedy of the commons framing is more accurate.
EDIT: my first point was arguably incorrect because some scrapers do use decentralized infrastructure and my second point was clearly incorrect because "denial-of-service" describes the effect, not the intention. I retract both points and apologize.
...the same courts that ruled that AI training is probably fair use? Fair use trumps whatever restrictions author puts on their "licenses". If you're an author and it turned out that your book was pirated by AI companies then fair enough, but "I put my words out into the world as a form of sharing" strongly implied that's not what was happening, eg. it was a blog on the open internet or something.
From a legal perspective, it's a pretty clear "no". The instructions in recipes aren't copyrightable. The moral question is more ambiguous, but it's still pretty weak. Most recipes are uncredited, and it's unclear why someone can force everyone to attribute the recipe to them when all they realistically did was tweak the dish a bit. In the example above, I doubt you invented cookies.
… browses memory and storage prices on NewEgg …
Hmm.
But the word digital is distracting us.
The word information is the important one. The question isn't where information goes. It's where information comes from.
Is new information post scarcity?
Can it ever be?
They don't have to hate the copyright.
Don't post anything online that you don't want to be brought up in court later.
Many scrapers already know not to follow these, as it's how sites used to "cheat" PageRank by serving keyword soups.
Why are you presenting the latter option as if it were mainstream? It's such a small percentage of use cases that it probably isn't even a rounding error.
People who want to disseminate information also want the credit.
I'd still like to know why you are presenting this false dichotomy. What reason do you have for presenting a use case that has fractions of a percentage as if it were a standard use case? What is your motivation behind this?
AI companies continually scrape the internet at an enormous scale, swallowing up all of its contents to use as training data for their next models. If you have a public website, they are already stealing your work.
Miasma is here to help you fight back! Spin up the server and point any malicious traffic towards it. Miasma will send poisoned training data from the poison fountain alongside multiple self-referential links. It's an endless buffet of slop for the slop machines.
Miasma is very fast and has a minimal memory footprint - you should not have to waste compute resources fending off the internet's leeches.
Install with cargo (recommended):
cargo install miasma
Or, download a pre-built binary from releases.
Start Miasma with default configuration:
miasma
View all available configuration options:
miasma --help
Let's walk through an example of setting up a server to trap scrapers with Miasma. We'll pick /bots as our server's path to direct scraper traffic. We'll be using Nginx as our server's reverse proxy, but the same result can be achieved with many different setups.
When we're done, scrapers will be trapped like so:
Within our site, we'll include a few hidden links leading to /bots.
<a href="/bots" style="display: none;" aria-hidden="true" tabindex="-1">
Amazing high quality data here!
</a>
The style="display: none;", aria-hidden="true", and tabindex="-1" attributes ensure links are totally invisible to human visitors and will be ignored by screen readers and keyboard navigation (tabindex="-1" removes an element from the tab order). They will only be visible to scrapers.
Since our hidden links point to /bots, we'll configure this path to proxy Miasma. Let's assume we're running Miasma on port 9855.
location ~ ^/bots($|/.*)$ {
proxy_pass http://localhost:9855;
}
This will match all variations of the /bots path -> /bots, /bots/, /bots/12345, etc.
Lastly, we'll start Miasma and specify /bots as the link prefix. This instructs Miasma to start links with /bots/, which ensures scrapers are properly routed through our Nginx proxy back to Miasma.
We'll also limit the number of max in-flight connections to 50. At 50 connections, we can expect 50-60 MB peak memory usage. Note that any requests exceeding this limit will immediately receive a 429 response rather than being added to a queue.
miasma --link-prefix '/bots' -p 9855 -c 50
Let's deploy and watch as multi-billion dollar companies greedily eat from our endless slop machine!
robots.txt

Be sure to protect friendly bots and search engines from Miasma in your robots.txt!
User-agent: Googlebot
User-agent: Bingbot
User-agent: DuckDuckBot
User-agent: Slurp
User-agent: SomeOtherNiceBot
Disallow: /bots
Allow: /
Miasma can be configured via its CLI options:
| Option | Default | Description |
|---|---|---|
| port | 9999 | The port the server should bind to. |
| host | localhost | The host address the server should bind to. |
| max-in-flight | 500 | Maximum number of allowable in-flight requests. Requests received when this limit is exceeded will receive a 429 response. Miasma's memory usage scales directly with the number of in-flight requests - set this to a lower value if memory usage is a concern. |
| link-prefix | / | Prefix for self-directing links. This should be the path where you host Miasma, e.g. /bots. |
| link-count | 5 | Number of self-directing links to include in each response page. |
| force-gzip | false | Always gzip responses regardless of the client's Accept-Encoding header. Forcing compression can help reduce egress costs. |
| poison-source | https://rnsaffn.com/poison2/ | Proxy source for poisoned training data. |
Contributions are welcome! Please open an issue for bug reports or feature requests. Primarily AI-generated contributions will be automatically rejected.
Maybe I don't understand the problem as well as I should, and I'm open to hearing what it is you think that I'm missing.
But from my perspective, this is a solution for a non-problem, which in my eyes is a problem itself.
This is psychological projection.
The use case you present is so small it can be ignored as an option, yet you present it as the only other option.
You don't know what that means.
In any case, people who want to disseminate information with credit can do so without standing up a blog (any place that allows posting of comments, such as Reddit, HN, etc).
In the context of this discussion, we're talking about site owners; people who put up a blog.
No. Reading something, learning from it, then writing something similar, is legal; and more importantly, it is moral. There is no violation here. Copyright holders already have plenty of power; they must not be given the power to restrict the output of your brain forever more for merely having read and learnt. Reading and learning is sacred. Just as importantly, it's the entire damn basis of our profession!
If you do not want people to read and learn from your content, do not put it on the web.
Fair use is part of "copyright and licensing laws".
In that case it's a terrible analogy because if you can't get people to agree on the cookies case, what hope do you have to extend it to the case you're trying to apply the analogy to? It's like saying "You wouldn't pirate a movie, why would you pirate a blog post", because most people would pirate movies.
When a crawler aggressively crawls your site, they're permanently depriving you the use of those resources for their intended purpose. Arguably, it looks a lot like conversion.
boo. took all the fun out of it ;)
What if the model then creates a virtual actor that is very close to the real actor?
my comment was about the very human need to be recognized for something created, made, or thought by a person. People are ok with writing blog posts, they're ok with writing software, and they're ok with giving it all away for free, but they want their name attached and their contribution recognized.
"Likeness" is a separate concept from copyrights
And I specifically addressed that aspect:
>The moral question is more ambiguous, but it's still pretty weak. Most recipes are uncredited, and it's unclear why someone can force everyone to attribute the recipe to them when all they realistically did was tweak the dish a bit. In the example above, I doubt you invented cookies.
The cookies analogy was terrible because recipes are rarely credited, but even ignoring the terrible analogy the "recognition" argument still fails. If you wrote a blog post on how to set up kubernetes (or whatever), then it's fair enough that you get recognized for that specific blog post. If my friend asked me how to set up kubernetes, it wouldn't be cool for me to copy paste your blog post and send it over.
However similar to copyright, the recognition you deserve quickly drops off once it moves beyond that specific work. If I absorbed the knowledge from your blog post, then wrote another guide on setting up kubernetes, perhaps updated for my use case, it's unreasonable to require that you be credited. It might be nice, and often times people do, but it's also unreasonable if you wrote an angry letter demanding that you be credited. You weren't the inventor of kubernetes, and you probably got your knowledge of kubernetes from elsewhere (eg. the docs the creators made), so why should everyone have to credit you in perpetuity?
if humans read my blog posts and then wrote things without credit, that would be fine. i like human eyeballs and i like them on my content. that's exactly the purpose of the blog post (_in this particular example_), to get human eyeballs on the content.
Or maybe you're just terrible at writing.
>if humans read my blog posts and then wrote things without credit, that would be fine.
I'm not sure how I (or anyone) was supposed to come away with this conclusion when you were writing stuff like:
"i'm ok with giving the recipe for free, i just want my name out there"
"the very human need to be recognized for something created"
"they want their name attached and their contribution recognized".
but, in the spirit of critical reading education, what i meant is: human attention good, machine ingestion bad.
But instead we've got people posting "honey pots" that an LLM will immediately detect and route around.