As Cursor Cloud Agents become a core part of how engineering teams ship software—offloading tasks from Slack, GitHub, Linear, and the IDE itself—they're quietly becoming a significant point of credential exposure. Each agent boots a fresh Ubuntu VM, clones your repo, and starts running. If it needs to talk to a database, hit an internal API, or install a private package, it needs secrets. The question is: how do those secrets get there safely?
At Infisical, we've been thinking about this a lot. The patterns we're seeing in MCP servers and CI/CD pipelines are showing up in Cloud Agents too—hardcoded credentials, .env files committed alongside environment configs, and secrets baked into snapshots that outlive their usefulness. Here's a better way.
When you kick off a Cursor Cloud Agent task - whether from the IDE, Slack, or a GitHub webhook - Cursor spins up an isolated Ubuntu VM for that specific task. It restores from a snapshot, clones your repo at the relevant branch, and then runs your environment lifecycle in order:
- install - runs once after the snapshot and is cached. Think npm install or pip sync.
- start - runs on every boot, before the agent begins working. The right place to fetch secrets.

This lifecycle is configured in .cursor/environment.json at the root of your project. Cursor also has a Secrets UI (Settings → Background Agents → Secrets) that injects key-value pairs as encrypted environment variables at runtime.
The problem: the Secrets UI only goes so far. It works for a handful of static values, but it doesn't handle rotation, audit trails, access isolation between team members, or any of the other things you'd expect from a real secrets management workflow. As soon as your agent needs to do anything meaningful (like connect to a database, hit an internal service, install a private registry package), you're going to want something more robust.
Cursor's cloud agent environment introduces a few specific problems that aren't always obvious:
- Snapshot persistence: if you run npm install with an .npmrc containing an auth token, and then snapshot the disk, that token is now frozen into the image.
- environment.json: this file is meant to be committed to your repo, so anything sensitive you put directly in the env field is a liability.

The cleanest approach: store only your Infisical machine identity credentials in the Cursor Secrets UI, then use those to pull everything else from Infisical at runtime. Your actual secrets never touch Cursor's storage.
First, create a machine identity in Infisical scoped specifically to your agent environment - give it access only to the secrets that agent needs, nothing more.
# Authenticate non-interactively using Universal Auth
export INFISICAL_TOKEN=$(infisical login \
  --method=universal-auth \
  --client-id= \
  --client-secret= \
  --silent \
  --plain)
Store INFISICAL_CLIENT_ID and INFISICAL_CLIENT_SECRET in the Cursor Secrets UI. These are the only values that live there - everything else comes from Infisical.
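Since everything downstream hinges on those two values being present, it's worth failing fast when they aren't. A minimal sketch (the variable names match the ones above; the function name and error message are my own suggestion, not part of any CLI):

```shell
#!/bin/bash
# Fail fast if the machine identity credentials from the Cursor Secrets UI
# were not injected into the environment.
check_identity_env() {
  local var
  for var in INFISICAL_CLIENT_ID INFISICAL_CLIENT_SECRET; do
    if [ -z "${!var:-}" ]; then
      echo "Missing $var - add it under Settings > Background Agents > Secrets" >&2
      return 1
    fi
  done
  echo "machine identity credentials present"
}

# Call this at the top of .cursor/start.sh, before `infisical login`:
# check_identity_env || exit 1
```

A check like this turns a confusing mid-task API failure into an immediate, legible error at boot.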
The simplest pattern. Use infisical run in your start script to inject secrets as environment variables into whatever process you're launching.
# .cursor/start.sh
export INFISICAL_TOKEN=$(infisical login \
  --method=universal-auth \
  --client-id=$INFISICAL_CLIENT_ID \
  --client-secret=$INFISICAL_CLIENT_SECRET \
  --silent --plain)

infisical run --env=production --projectId= -- node server.js
The corresponding environment.json:

{
  "snapshot": "snapshot-...",
  "install": "npm install",
  "start": "bash .cursor/start.sh"
}
The process launched by infisical run gets all your secrets injected as environment variables. Nothing is written to disk, nothing leaks into the snapshot, and the credentials are fetched fresh on every agent boot.
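To make those semantics concrete, here's a toy stand-in for what infisical run does (the real CLI fetches values from Infisical over HTTPS; DATABASE_URL and its value here are hypothetical): the secret exists only in the child process's environment, never on disk.

```shell
#!/bin/bash
# Toy model of `infisical run -- <cmd>`: resolve secrets, then run the
# target command with those values present only in the child's environment.
run_with_secrets() {
  # In the real flow this value comes from the Infisical API at boot.
  local db_url="postgres://db.internal:5432/app"   # hypothetical secret
  DATABASE_URL="$db_url" "$@"
}

run_with_secrets bash -c 'echo "child sees: $DATABASE_URL"'
```

Because the value lives only in process memory, a later disk snapshot has nothing to capture.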
Some tools expect secrets in a file rather than as environment variables - a .env file, a YAML config, a JSON blob. infisical export handles this.
# Write to .env format
infisical export --env=production --projectId= --output-file=.env

# Or to JSON
infisical export --format=json --env=production --projectId= --output-file=./config/secrets.json

# Or to YAML
infisical export --format=yaml --env=production --projectId= --output-file=./config/secrets.yaml
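Once the .env file exists, a later build or tooling step can load it back into the shell's environment. A small helper for dotenv-style files (my own sketch, not part of the Infisical CLI):

```shell
#!/bin/bash
# Export every KEY=VALUE line from a dotenv-style file into the current
# shell's environment, e.g. the .env written by `infisical export`.
load_dotenv() {
  set -a        # auto-export everything assigned while sourcing
  . "$1"
  set +a
}

# Usage: load_dotenv .env
```

This works for simple KEY=VALUE files; values containing spaces or shell metacharacters would need quoting in the file.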
A practical start.sh for a Node project that needs a private .npmrc written before dependencies install:
#!/bin/bash
export INFISICAL_TOKEN=$(infisical login \
  --method=universal-auth \
  --client-id=$INFISICAL_CLIENT_ID \
  --client-secret=$INFISICAL_CLIENT_SECRET \
  --silent --plain)

infisical export \
  --env=production \
  --projectId= \
  --path=/npm-config \
  --format=dotenv \
  --output-file=.npmrc

echo "Secrets ready"
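One extra hygiene step worth adding to that script (my suggestion, not from Cursor's docs): make sure the generated .npmrc can never be committed back to the repo, since it now holds a live registry token.

```shell
#!/bin/bash
# Append .npmrc to .gitignore unless it is already listed (idempotent).
grep -qxF '.npmrc' .gitignore 2>/dev/null || echo '.npmrc' >> .gitignore
```

Running it repeatedly never duplicates the entry, so it's safe on every boot.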
Here's what a complete, secure environment.json setup looks like for a typical web project:
{
  "snapshot": "snapshot-20250309-xxxxxxxx",
  "install": "bash .cursor/install.sh",
  "start": "bash .cursor/start.sh"
}
# .cursor/install.sh
export INFISICAL_TOKEN=$(infisical login \
  --method=universal-auth \
  --client-id=$INFISICAL_CLIENT_ID \
  --client-secret=$INFISICAL_CLIENT_SECRET \
  --silent --plain)

infisical export --env=production --projectId= --path=/npm-config --output-file=.npmrc

npm install
# .cursor/start.sh
export INFISICAL_TOKEN=$(infisical login \
  --method=universal-auth \
  --client-id=$INFISICAL_CLIENT_ID \
  --client-secret=$INFISICAL_CLIENT_SECRET \
  --silent --plain)

sudo service docker start

infisical run --env=production --projectId= -- node server.js
The Cursor Secrets UI holds only INFISICAL_CLIENT_ID and INFISICAL_CLIENT_SECRET. Every other secret is fetched fresh on every agent boot, rotatable at any time, and fully auditable.
One more thing worth doing: don't give your agent a machine identity with access to everything. Create a dedicated Infisical identity scoped specifically to the secrets that agent task actually needs.
- cursor-agent-dev → access to /dev secrets only
- cursor-agent-prod → access to /production secrets only, requires approval
- cursor-agent-ci → access to /ci secrets, read-only
This way, if an agent is ever compromised via prompt injection, which is a real risk with autonomous agents that auto-execute terminal commands, the blast radius is contained to what that identity could access, not your entire secret store.
Cursor Cloud Agents are powerful precisely because they can act autonomously, and that autonomy creates real credential exposure if you're not careful. Baking secrets into snapshots, storing long-lived tokens in the Secrets UI, or hardcoding values in environment.json are all patterns that will eventually cause problems.
The core principle is simple: store as little as possible in Cursor, and use those minimal credentials to fetch everything else from Infisical at runtime. Fresh secrets on every boot, full audit trail, and rotation that doesn't require touching your environment config.
Infisical is available as a fully managed cloud service or self-hosted. Get started in under 5 minutes and make the security foundation of your AI agent workflows as robust as the agents themselves.
The security risk is created whether you're careful or not. The best you can do is reduce the size of the fresh attack surface you're creating.
https://infisical.com/blog/secure-secrets-management-for-cur...
One adjacent risk worth noting: the URLs these agents visit during research. Even with proper secret management, if an agent browses a poisoned page during research, the injected instructions could override its behavior before secrets ever come into play.
Why is this problem (UGC instruction injection) still a thing, anyway? It feels like a problem that can be solved very simply in an agentic architecture that's willing to do multiple calls to different models per request.
How: filter fetched data through a non-instruction-following model (i.e. the sort of base text-prediction model you have before instruction-following fine-tuning) that has instead been hard-fine-tuned into a classifier, such that it just outputs whether the text in its context window contains "instructions directed toward the reader" or not.
(And if that non-instruction-following classifier model is in the same model-family / using the same LLM base model that will be used by the deliberative model to actually evaluate the text, then it will inherently apply all the same "deep recognition" techniques [i.e. unwrapping / unarmoring / translation / etc] the deliberative model uses; and so it will discover + point out "obfuscated" injected instructions to exactly the same degree that the deliberative model would be able to discover + obey them.)
Note that this is a strictly simpler problem than that of preventing jailbreaks. Jailbreaks try to inject "system-prompt instructions" among "user-prompt instructions" (where, from the model's perspective, there is no natural distinction between these, only whatever artificial distinctions the model's developers try to impose. Without explicit anti-jailbreak training, these are both just picked up as "instructions" to an LLM.) Whereas the goal here would just be to prevent any UGC-tainted document containing anything that could be recognized as "instructions I would try to follow" from ever being injected into the context window.
(Actually, a very simple way to do this is to just take the instruction-following model, experimentally derive a vector direction within it representing "I am interpreting some of the input as instructions to follow" [ala the vector directions for refusal et al], and then just chop off all the rest of the layers past that point and replace them with an output head emitting the cosine similarity between the input and that vector direction.)