Exploitation of Ivanti Endpoint Manager Mobile (EPMM) has been relentless since vulnerability disclosure. That’s not necessarily news. Major institutions - governments included - have already been compromised through this vector, and we’re tracking another exploitation wave as it develops.
On February 4th, 2026, a coordinated campaign started appearing across our telemetry, with a pattern that differed from previous mass exploitation. Rather than the smash-and-grab post-exploitation you’d expect - dropping traditional webshells, running recon and enumeration commands - this operator did something more deliberate: uploading a payload, confirming it landed, and leaving.
No commands were executed, the implant was simply left in place.
Key Takeaway: This campaign deployed a dormant in-memory Java class loader to /mifs/403.jsp - a somewhat less common webshell path. The implant can only be activated with a specific trigger parameter, and no follow-on exploitation has yet been observed. This is suggestive of initial access broker (IAB) tradecraft: gain a foothold, then sell or hand off access later.
Ivanti disclosed two critical vulnerabilities in EPMM - CVE-2026-1281 and CVE-2026-1340 - both covering authentication bypass and remote code execution, affecting different packages (aftstore and appstore respectively). The practical outcome is the same: unauthenticated access to application-level endpoints. Ivanti published patching guidance via their security advisory, and exploitation in the wild followed shortly after.
Most of the early activity was predictable - opportunistic scanning, mass exploitation, and commodity webshell drops.
Every exploit from this campaign dropped a webshell to the path:
/mifs/403.jsp
The path itself isn’t new - 403.jsp has been observed in prior compromises targeting Ivanti and MobileIron infrastructure, typically alongside other JSP-based webshells. What made this campaign worth writing about isn’t where the payload was placed, but what it did once it got there.
Rather than deploying a conventional webshell with interactive command execution, the operator delivered a Base64-encoded Java class file via HTTP parameters. Each payload decoded to valid Java bytecode (identifiable by the CAFEBABE header) functioning as a dormant in-memory class loader - not an immediately usable backdoor. The distinction matters, and we’ll get into why below.
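For triage, it’s straightforward to check whether a captured parameter value decodes to Java bytecode. A minimal Python sketch (a hypothetical helper for defenders, not the campaign’s tooling):

```python
import base64
import binascii

# Bytes beginning with the Java class-file magic 0xCAFEBABE always
# Base64-encode to a string starting "yv66vg" -- which is why that
# string works as a log-hunting indicator even without decoding.
JAVA_MAGIC = b"\xca\xfe\xba\xbe"

def is_java_bytecode(b64_value: str) -> bool:
    """Return True if the Base64 value decodes to a Java class file."""
    try:
        raw = base64.b64decode(b64_value, validate=True)
    except (binascii.Error, ValueError):
        return False
    return raw.startswith(JAVA_MAGIC)

# Any valid class file encodes to a Base64 string beginning "yv66vg":
sample = base64.b64encode(JAVA_MAGIC + b"\x00\x00\x00\x34").decode()
assert sample.startswith("yv66vg")
assert is_java_bytecode(sample)
```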
base.Info - The In-Memory Loader

The payload is a compiled Java class (base.Info, source file Info.java). It’s not a webshell in the traditional sense - it doesn’t provide command execution, file access, or any interactive capability on its own. It’s a stage loader: its only purpose is to receive, load, and execute a second Java class delivered via HTTP at a later time.
The class uses equals(Object) as its entry point, which is unusual but intentional - standard servlet handler methods like doGet or doPost are more likely to be flagged by security tooling. The parseObj method extracts HttpServletRequest and HttpServletResponse from the argument, with fallback logic to handle PageContext objects and various servlet wrapper/facade patterns. This makes the loader portable across different container environments.
Once invoked, the loader checks for an HTTP parameter named k0f53cf964d387. If present, it strips a two-character prefix from the value and Base64-decodes the remainder into raw bytes. These bytes are then loaded as a Java class in memory via a reflective call to ClassLoader#defineClass - nothing is written to disk. The resulting class is instantiated with a string argument containing basic host information, and its toString() output is returned in the HTTP response.
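The decode step can be reproduced in Python to recover a second stage from captured traffic. A sketch (the parameter name comes from the observed campaign; the two-character prefix value below is illustrative):

```python
import base64

TRIGGER_PARAM = "k0f53cf964d387"  # trigger parameter from the campaign

def recover_second_stage(param_value: str) -> bytes:
    """Reproduce the loader's byte recovery: drop the two-character
    prefix, then Base64-decode the remainder into raw class bytes.
    (On the target these bytes go to defineClass; here we just return
    them so an analyst can extract the stage from a packet capture.)"""
    stripped = param_value[2:]          # strip the two-character prefix
    return base64.b64decode(stripped)   # raw class bytes, never on disk

payload = b"\xca\xfe\xba\xbe" + b"\x00" * 4             # fake class bytes
wire_value = "Xy" + base64.b64encode(payload).decode()  # "Xy" prefix is illustrative
assert recover_second_stage(wire_value) == payload
```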
The response body is wrapped in fixed delimiters (3cd3d / e60537) and served as text/html, making it straightforward for automated tooling to parse.
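A defender replaying captured responses can pull the result out using the same fixed delimiters. A minimal parser sketch:

```python
import re

# Extract whatever sits between the loader's fixed response delimiters.
DELIM_RE = re.compile(r"3cd3d(.*?)e60537", re.DOTALL)

def extract_result(body: str):
    """Return the delimited payload from an HTTP response body, or None."""
    m = DELIM_RE.search(body)
    return m.group(1) if m else None

assert extract_result("<html>3cd3dHOSTINFOe60537</html>") == "HOSTINFO"
assert extract_result("<html>nothing here</html>") is None
```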
For Base64 handling, the loader supports both java.util.Base64 (Java 8+) and sun.misc.BASE64Decoder, ensuring compatibility across older and newer JVM versions.
Before invoking the second-stage class, the loader gathers a small set of environment details: the application working directory (user.dir), filesystem root paths (drive letters on Windows, / on Unix), the OS name, and the running username. This information is passed to the second-stage class as a constructor argument - likely to help the operator orient themselves on the target at a later date.
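For illustration, a rough Python analogue of that fingerprinting step (field order and separator are assumptions - the real class builds an equivalent string in Java):

```python
import os
import platform

def host_fingerprint() -> str:
    """Collect the same categories of host details the loader gathers."""
    root = "C:\\" if os.name == "nt" else "/"
    user = os.environ.get("USERNAME") or os.environ.get("USER") or "unknown"
    return ";".join([
        os.getcwd(),        # working directory (user.dir analogue)
        root,               # filesystem root
        platform.system(),  # OS name
        user,               # running username
    ])

print(host_fingerprint())
```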
This is the part that matters from a defensive standpoint. Across all observed telemetry, the loader was deployed and confirmed functional, but no follow-on requests were observed supplying a second-stage class to the trigger parameter. The access was established and then left dormant.
This is consistent with initial access broker (IAB) activity. The tooling is generic and container-agnostic - it’s built to work reliably across environments, not to perform any specific post-exploitation task. The operator deployed it at scale, verified it landed, and moved on. The likely purpose is to package confirmed, working access for handoff or sale to a separate party, who would activate it from different infrastructure at a later time.
That separation - one actor establishes access, another exploits it - is what makes this pattern difficult to detect in practice. There’s a gap between the initial compromise and eventual use where the telemetry trail simply goes quiet.
If you’re running Ivanti EPMM, the presence of any of this activity should be treated as evidence of compromise or attempted compromise - even without follow-on exploitation. Especially without follow-on exploitation. The absence of further activity doesn’t mean the access isn’t valuable; it may simply mean it hasn’t been activated yet.
Immediate actions: Patch Ivanti EPMM per vendor guidance immediately. Restart affected application servers to flush in-memory implants - this is critical, as the payload never touches disk. Then review access logs with the indicators below.
- /mifs/403.jsp
- yv66vg (the Base64 encoding of the CAFEBABE Java magic bytes)
- k0f53cf964d387 in request strings
- 3cd3d or e60537 in response bodies
- ERROR:// (the loader’s error format)

| Field | Value |
|---|---|
| Class Name | base.Info |
| Source File | Info.java |
| SHA-256 | 097b051c9c9138ada0d2a9fb4dfe463d358299d4bd0e81a1db2f69f32578747a |
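The string indicators listed above can be hunted with a short script. A hedged sketch - the log format here is illustrative, so point it at whatever access logs front your EPMM instance (the appliance itself, a reverse proxy, a WAF, etc.):

```python
# String indicators from the observed campaign.
INDICATORS = (
    "/mifs/403.jsp",    # webshell path
    "yv66vg",           # Base64 encoding of the CAFEBABE magic bytes
    "k0f53cf964d387",   # trigger parameter
    "3cd3d",            # response delimiter (open)
    "e60537",           # response delimiter (close)
    "ERROR://",         # loader error format
)

def suspicious_lines(log_lines):
    """Yield (line_number, matched_indicator, line) for each hit."""
    for n, line in enumerate(log_lines, start=1):
        for ioc in INDICATORS:
            if ioc in line:
                yield n, ioc, line
                break  # one report per line is enough

sample = [
    'GET /mifs/login HTTP/1.1" 200',
    'POST /mifs/403.jsp?k0f53cf964d387=XyAAAA HTTP/1.1" 200',
]
hits = list(suspicious_lines(sample))
assert hits == [(2, "/mifs/403.jsp", sample[1])]
```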

| IP Address | Organization | ASN | Country |
|---|---|---|---|
| 104.219.171.96 | Datacamp Limited | AS212238 | 🇺🇸 |
| 108.64.229.100 | AT&T Enterprises, LLC | AS7018 | 🇺🇸 |
| 115.167.65.16 | NTT America, Inc. | AS2914 | 🇺🇸 |
| 138.36.92.162 | HOSTINGFOREX S.A. | AS265645 | 🇺🇸 |
| 146.103.53.35 | Datacamp Limited | AS212238 | 🇺🇸 |
| 148.135.183.63 | Datacamp Limited | AS212238 | 🇺🇸 |
| 151.247.221.59 | Datacamp Limited | AS212238 | 🇺🇸 |
| 166.0.83.171 | UK Dedicated Servers Limited | AS42831 | 🇬🇧 |
| 172.59.92.152 | T-Mobile USA, Inc. | AS21928 | 🇺🇸 |
| 185.240.120.91 | Datacamp Limited | AS212238 | 🇺🇸 |
| 185.239.140.40 | Datacamp Limited | AS212238 | 🇪🇸 |
| 194.35.226.128 | LeaseWeb Netherlands B.V. | AS60781 | 🇳🇱 |
| 193.41.68.58 | LeaseWeb Netherlands B.V. | AS60781 | 🇳🇱 |
| 77.78.79.243 | SPCom s.r.o. | AS204383 | 🇨🇿 |
| 62.84.168.208 | Hydra Communications Ltd | AS25369 | 🇬🇧 |
| 45.66.95.235 | Hydra Communications Ltd | AS25369 | 🇬🇧 |
| 46.34.44.66 | Liberty Global Europe Holding B.V. | AS6830 | 🇳🇱 |
More IOCs available in Defused telemetry.
The base.Info class was submitted to VirusTotal, where it received a hit from Nextron Systems’ THOR APT Scanner under a generic JSP webshell characteristics rule. Traditional AV engines largely miss the artifact - unsurprising for a payload that never touches disk - but heuristic engines correctly identify the behavioural signatures consistent with in-memory class loaders.
See the VirusTotal entry here.
There’s a tendency in incident response to prioritise the loud compromises - the ransomware detonations, the mass data exfiltration, the lateral movement storms that light up every detection rule in the stack. This campaign is a useful reminder that the most dangerous intrusions are often the ones that don’t do anything. Yet.
The use of undocumented JSP paths, in-memory Java loaders, and dormant execution triggers represents a meaningful step forward in post-exploitation tradecraft targeting enterprise mobility infrastructure. When the operator’s goal is to create inventory rather than cause immediate damage, all the usual urgency signals go quiet - and that’s exactly the point.
If you’re running Ivanti EPMM, don’t wait for the second act. Patch, restart, and hunt. The loader is patient. You don’t have to be.
Parts of the technical analysis above have been AI-assisted.
https://hub.ivanti.com/s/article/Security-Advisory-Ivanti-En...
Ivanti doesn't explain how this happened or what mistake led to this exploit being created.
If you ask me... both these companies should be treated similarly to misbehaving banks: banned from acquiring new customers, an external overseer installed, and only when the products do not pose a threat to the general public any more, they can acquire new customers again.
Semi-related: with the recent much-touted cybersecurity improvements of AI models (as well as the general recent increase in tensions and conflicts worldwide), I wonder just how much the pace of attacks will increase, and whether it’ll prove to be a benefit or a disadvantage over time. Government-sponsored teams were already combing through every random weekend project and library that somehow ended up in node or became moderately popular, but soon any Tom, Dick, and Harry will be able to do it at scale for a few bucks. On the other hand, what’s being exploited tends to get patched in time - but this can take quite a while, especially when the target is some random side project on GitHub last updated 4 years ago.
My gut feeling is that there will be a lot more exploitation everywhere, and not much upside for the end consumer (who didn’t care about state level actors anyway). Probably a good idea to firewall aggressively and minimize the surface area that can be attacked in the first place. The era of running any random vscode extension and trust-me-bro chrome extension is likely at an end. I’m also looking forward to being pwned by wifi enabled will-never-be-updated smart appliances that seem to multiply by the year.
“We are aware” and “very limited” are likely (in our opinion, this is probably not fact, etc, etc) to be doing a significant amount of lifting.
For avoidance of doubt, the following versions of Ivanti EPMM are patched:
None
----
Ah, this company is a security joke as most software security companies are.
Ivanti is a US company. But if you have never heard of them, the dragon-resembling creature in the illustration (representing the dormant backdoor?) makes it look like the incident is somehow related to China.
1. https://labs.watchtowr.com/someone-knows-bash-far-too-well-a...
Anyway, the image is just the end result of plugging the title into nano banana. You ought to address your complaints to Google :)
Actual cybersecurity isn't something you can just buy off the shelf. It requires skill, and it requires making every single person in the org give a shit about it - which is already hard to achieve, and even more so when you've tried for years to pay them as little as you can get away with.
Isn't most off-the-shelf software effectively always supplied without any kind of warranty? What grounds would the lawsuit have?
It's fine to say "Look, this is bad, don't do it" and "A patch was issued for this, you are responsible", but when some set of circumstances arises that hasn't been thought about before and causes a problem, there's nothing that could have been done to stop it.
Note that the entire QA industry is explicitly geared to try to look at software being produced in a way that nobody else has thought to, in order to find out whether that software still behaves "correctly" - and <some colour of hat> hackers are an extension of that: people looking at software in a way that developers and QA did not think of, etc.
"Hey, it says we need to do mobile management and can't just let people manage their own phones. Looks like we'll buy Ivanti mobile manager." Same conversation I've seen play out with generally secure routers being replaced with Fortigates that have major vulnerabilities every week, because the checklist says you must be doing SSL interception.
Or just loads of other stuff that really only applies to large, Fortune 500-sized companies. My small startups certainly don’t have a network engineer on staff who has created a network topology graph and various policies pertaining to it, etc. The list goes on - I could name hundreds of absurd requirements these insurance companies want that don’t actually add any level of security to the organization, and absolutely do not apply to small-scale shops.
I can almost guarantee you that your ordinary feature developer working on a deadline is not thinking about that. They're thinking about how they can ship on time with the features that the salesguy has promised the client. Inverting that - and thinking about what "features" you're shipping that you haven't promised the client - costs a lot of money that isn't necessary for making the sale.
So when the reinsurance company mandates a checklist, they get a checklist, with all the boxes dutifully checked off. Any suitably diligent attacker will still be able to get in, but now there's a very strong incentive to not report data breaches and have your insurance premiums go up or government regulation come down. The ecosystem settles into an equilibrium of parasites (hackers, who have silently pwned a wide variety of computer systems and can use that to setup systems for their advantage) and blowhards (executives who claim their software has security guarantees that it doesn't really).
You pay them a million per year, and fire them when a breach happens.
Way cheaper than improving security.
Not very long ago actual security existed basically nowhere (except air-gapping, most of the time ;)). And today it still mostly doesn't because we can't properly isolate software and system resources (and we're very far away from routinely proving actual security). Mobile is much better by default, but limited in other ways.
Heck, I could be infected with something nasty and never know about it: the surface to surveil is far too large and constantly changing. Gave up configuring SELinux years ago because it was too time-consuming.
I'll admit that much has changed since then and I want to give it a go again, maybe with a simpler solution to start with (e.g. never grant full filesystem access and network for anything).
We must gain sufficiently powerful (and comfortable...) tools for this. The script in question should never have had the kind of access it did.
I would argue the opposite is true. Insurance doesn’t pay out if you don’t self-report in time. Big data breaches usually get discovered when the hacker tries to peddle off the data in a darknet marketplace so not reporting is gambling that this won’t happen.
I can assure you that insurers don’t work like that.
If underwriting were as sloppy as you think it is, insurance as a business model wouldn’t work.
Is it not possible to have secure software components that only work when assembled in secure ways? Why not?
Conversely, what security claims about a component can one rely upon, without verifying it oneself?
How would a non-professional verify claims of security professionals, who have a strong interest in people depending upon their work and not challenging its utility?
Note, that is not to say that cybersecurity insurance is fundamentally impossible - just that the current cost structure and risk mitigation structure is untenable and should not be pointed to as evidence of function.
I do not think we're at that stage of maturity. I think it would be hubris to imitate the practices of that stage of maturity, enshrining those practices in the eyes of insurance underwriters.