(Of course there are tons of other red flags the article doesn't look at, e.g. how does an employee's machine get access to production systems, and from there to customers connected via OAuth? And how does the attacker get to env vars from a Google Workspace account?)
And I thought it was bad when my son got compromised by a Roblox cheat, but they only grabbed his Gamepass cookies and bought 4 Minecraft licenses, which MS quickly refunded...
So sensitive doesn’t mean encrypted. It means the UI doesn’t show the dev what value’s stored there after they’ve updated it. Not sensitive means it’s still visible. And again, I presume this is only a UI thing, and both kinds are stored encrypted in the backend.
I don’t work for Vercel, but I’ve used them a bit. I’m sure there are valid reasons to dislike them, but this specific bit looks like a strawman.
It's almost like the denials were in fact false and Delve truly was just selling a sticker, not providing an actual service.
If I were a VC that had funded Delve for a considerable amount of time, I'd be embarrassed that we did not catch this. I'd probably rework my processes, publicly analyse how this alleged fraud got past me, and go above and beyond in disclosing my findings to rebuild trust. I'd most certainly not think that just cutting funding is sufficient given the situation, even more so if I'd encouraged other companies I fund to use their "services".

I'd maybe even reevaluate whether a circular setup, in which my portfolio companies are incentivised to rely on other companies I also fund, leads to the best options being chosen, and whether that isn't antithetical to competition and a forward-thinking environment. At the same time, I'd consider that such a setup may just hide unsuccessful companies, and potentially even alleged fraud, which can cause significant harm once it reaches the broader market...
[0] https://web.archive.org/web/20250918025724/https://trust.del...
[1] https://web.archive.org/web/20260217220817/https://www.conte...
Tools that sit in the middle (like Context.ai) end up becoming a pretty large attack surface without feeling like one.
It raises the question: why was there no 2FA? And why did they have such broad access to begin with?
If this is not the case, the only other option I can come up with is API credentials stored in Google Workspace. Possible, but odd.
My son even asked me just the other day why I don’t have Roblox on the Mac….yeah stuff like this is why.
Failed to verify your browser Code 11 Vercel Security Checkpoint, arn1::1776759703-rtDgRAtRyXvjD4IoU4RbqvkGmvQQCP7H
Gah.
Vercel April 2026 security incident
> Initially, we identified a limited subset of customers whose Vercel credentials were compromised. We reached out to that subset and recommended that they rotate their credentials immediately.

> At this time, we do not have reason to believe that your Vercel credentials or personal data have been compromised.
If I don't see asterisks, I'm not hitting save on the field with a secret in it. Maybe they were setting them programmatically? They should definitely still be looking to pass some kind of a secret flag, though. This is a weird problem for a company like Vercel to have.
We'll keep dangerous devices like the SuperBox in our homes, if it helps us get access to free movies and tv.
We'll use single-use plastics, even if we know they're bad for the environment, because they're just so damn easy.
We'll let AI run that thing for us, because it's just too easy.
A whole generation has grown up without knowing what it was like to infect your computer with AIDS trying to download an MP3, and it shows. That caution will come back, just at a terrible cost.
Feels like the employee pulled a LastPass Plex move.
The cheat contains an infostealer.
> March 2026. The attacker uses Context.ai's compromised infrastructure to pivot into a Vercel employee's Google Workspace account. This Vercel employee had signed up for Context.ai's "AI Office Suite" using their enterprise credentials and granted "Allow All" permissions. Let that sink in for a second. A Vercel engineer gave a third-party AI tool full access to their corporate Google account.
I swear this AI 'boom' is melting people's brains and zombifying them like Toxoplasma gondii[1] does to rodents, making them do risky things that ultimately get them eaten (or hacked...).
Yeah, I'm very confused. It's not possible to keep env vars encrypted from the program that needs them; even if they're encrypted at rest, they have to be decrypted before the program starts. Env vars are injected as plain text. This is just how it works, nothing to do with Vercel.
This situation could someday improve with fully homomorphic encryption (the server would operate on encrypted data without ever decrypting it), but that imposes very high overhead on the entire program. It's not realistic (yet).
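To see the "injected as plain text" part in action, here's a minimal sketch (DB_PASSWORD and its value are made up for the demo, not anything real):

```python
import os
import subprocess
import sys

# Hand a "secret" to a child process the way any platform does: as an
# env var. DB_PASSWORD is a throwaway example value for this sketch.
env = dict(os.environ, DB_PASSWORD="s3cret")

# The child receives the variable in plaintext and can simply print it.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # s3cret
```

However the value was stored beforehand, the running process sees the real thing.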
For reference, look at how Disney got hacked. One employee downloaded compromised software on a personal computer. One thing led to another and boom. IT in many companies is much more incompetent than you think. I have seen that first hand.
One for which the Context.ai employee needs to have their arse booted up and down the car park for.
So I believe the author has exposure to the issue and interest in understanding it, that’s more than AI alone has got.
The thing that concerns me is that even at a site like HN, where a lot of people are very familiar with LLMs, it seems to be passing.
I hate to think this will become the norm but it's not the first HN linked post that's gotten a lot of earnest engagement despite being AI generated (or partly AI generated).
I'm very comfortable with AI generated code, if the humans involved are doing due diligence, but I really dislike the idea of LLM generated prose taking over more and more of the front page.
More generically, our species' Achilles heel is our inability to factor in the long-term cost of negative externalities when evaluating processes that yield short-term positive results.
If you spin up an EC2 instance with an ftp server and check the "Encrypt my EBS volume" checkbox, all those files are 'encrypted at rest', but if your ftp password is 'admin/admin', your files will be exposed in plaintext quite quickly.
Vercel's backend is of course able to decrypt them too (or else it couldn't run your app for you), and so the attacker was able to view them, and presumably some other control on the backend made it so the sensitive ones can end up in your app, but can't be seen in whatever employee-only interface the attacker was viewing.
PoC or GTFO.
I think you'll find it's a bit harder to do than you expect.
It’s not a competitive platform like say WoW or overwatch; nobody is really there to win and there are zero stakes if you do or don’t.
You can blame individuals, but security is a property of the system.
Various certifications require this, I guess because they were written before hyperscalers, when the assumed attack vector was someone literally stealing a hard drive.
A running machine is not “at rest”: just as you can read files on your encrypted Mac's disk while it's running, the running program has decrypted access to the drive.
What's best practice to handle env vars? How do people handle them "securely" without it just being security theater? What tools and workflows are people using?
(And modern Linux is unusable without root access, thanks to Docker and other fast-and-loose approaches.)
Heck, not giving the person admin privileges would have sufficed to prevent this. Or better hiring, to weed out people who install Roblox cheats on work devices...
There is no excuse and no fine line here. Even setting aside their boasting about SOC 2 Type II, this would be embarrassing for an SME outside the tech sector.
However, I do feel now like my sensitive things are better off deployed on a VPS, where someone would need an SSH exploit to come at me.
For non-sensitive environment variables, they also show you the value in the dashboard so you can check and edit them later.
A value like 'NODE_ENV=production' vs. 'NODE_ENV=development' is probably something the user wants to see, so that's another argument for letting the backend decrypt and display those values, even ignoring the "running your app" part.
You're welcome to add an input that goes straight to '/dev/null' if you want, but it's not exactly a useful feature.
Oops - you said the opposite of what I read, my mistake.
Because I never do, unless I'm down in the depths of /var/lib/docker doing stuff I shouldn't.
Do you want to let any applicant be screened by the security team?
Notice how their tutorial says "run 'dotenvx run -- yourapp'". If you did 'dotenvx run -- env', all your secrets would be printed right there in plaintext, at runtime, since they're just encrypted at rest.
The equivalent in vercel would be encrypted in the database (the encrypted '.env' file), with a decryption key in the backend (the '.env.keys' file by default in dotenvx) used to show them in the frontend and decrypt them for running apps.
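To make that split concrete, here's a toy sketch of the shape; the XOR below is a deliberate stand-in for real crypto (dotenvx actually uses public-key encryption), purely to show where the ciphertext and the key live:

```python
import os

# Toy key standing in for dotenvx's .env.keys file. Not real crypto --
# XOR is used here only to illustrate "ciphertext stored, key elsewhere".
KEY = b"not-a-real-key"

def xor(data: bytes) -> bytes:
    # XOR with a repeating key; applying it twice restores the input.
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

# "At rest": only ciphertext is stored (think the encrypted .env file,
# or Vercel's database). API_TOKEN's value is made up for this sketch.
stored = {"API_TOKEN": xor(b"hunter2")}

# "At runtime": whoever holds the key decrypts and injects plaintext,
# so the running app (and anyone who owns the box) sees the real value.
os.environ["API_TOKEN"] = xor(stored["API_TOKEN"]).decode()
print(os.environ["API_TOKEN"])  # hunter2
```

Encrypted at rest, plaintext at runtime, same as the dotenvx tutorial shows.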
The point of encryption is oftentimes about which other software or hardware attacks are minimized or eliminated.
However, if someone gains access to a running system, there's really no way to both let an app run and keep everything encrypted. It's certainly possible to try, like the way KeePass encrypts items in memory, but if an attacker has root on a server, they just wait for the secret to be accessed, if they don't outright find the key that encrypted it.
This is to say, 99.9% of apps and these platforms aren't secure against this type of low-level intrusion.
Piping to /dev/null is of course pointless.
What you really want is the /dev/null as a Service Enterprise plan for $500/month with its High Availability devnull Cluster ;)
If that's about my hiring comment, it was meant a bit facetiously, though I will point out this line in their "compliance" report by "auditor" Delve:
> The organization carries out background and/or reference checks on all new employees and contractors prior to joining in accordance with relevant laws, regulations and ethics. Management utilizes a pre-hire checklist to ensure the hiring manager has assessed the qualification of candidates to confirm they can perform the necessary job requirements.
Maybe those pre-hire checklists should include a question like "Are you a massive idiot, who'd install a game on their work computer, then on top of that be the type of idiot who likes to cheat, then on top of that be the type of idiot to install cheats on your work computer?", maybe that'd prevent this in the future. Or again, just don't give everyone Admin privileges...
You can, theoretically, comb through a system memory dump and try to mine the credentials out of the credential server's heap, but that exploit is exponentially more difficult than a simple `cat /proc/1234/environ`.
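For anyone who hasn't poked at this: here's a minimal Linux-only sketch of the easy path (SECRET_TOKEN and its value are throwaway, set just for the demo):

```python
import os
import subprocess
import time

# On Linux, a process's startup environment sits in plaintext at
# /proc/<pid>/environ, readable by anyone allowed to inspect that pid.
# Spawn a long-running child with a throwaway "secret" in its env.
child = subprocess.Popen(
    ["sleep", "30"],
    env={**os.environ, "SECRET_TOKEN": "plainly-visible"},
)
try:
    time.sleep(0.5)  # give the child a moment to exec
    # Entries in the environ file are NUL-separated KEY=VALUE pairs.
    with open(f"/proc/{child.pid}/environ", "rb") as f:
        entries = f.read().split(b"\0")
finally:
    child.terminate()

print(b"SECRET_TOKEN=plainly-visible" in entries)  # True
```

No heap spelunking required; that's the whole "exploit".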