If we're looking beyond one decade, then:
- As with all other forms of utility, the future must be discounted relative to the present.
- A language that looks sterling today might fall behind tomorrow.
- Something something AI by 2035.
> Rust is not a "silver bullet" that can solve all security problems, but it sure helps out a lot and will cut out huge swaths of Linux kernel vulnerabilities as it gets used more widely in our codebase.
> That being said, we just assigned our first CVE for some Rust code in the kernel: https://lore.kernel.org/all/2025121614-CVE-2025-68260-558d@gregkh/ where the offending issue just causes a crash, not the ability to take advantage of the memory corruption, a much better thing overall.
> Note the other 159 kernel CVEs issued today for fixes in the C portion of the codebase, so as always, everyone should be upgrading to newer kernels to remain secure overall.
Rust Binder contains the following unsafe operation:
// SAFETY: A `NodeDeath` is never inserted into the death list
// of any node other than its owner, so it is either in this
// death list or in no death list.
unsafe { node_inner.death_list.remove(self) };
This operation is unsafe because when touching the prev/next pointers of
a list element, we have to ensure that no other thread is also touching
them in parallel. If the node is present in the list that `remove` is
called on, then that is fine because we have exclusive access to that
list. If the node is not in any list, then it's also ok. But if it's
present in a different list that may be accessed in parallel, then that
may be a data race on the prev/next pointers.
And unfortunately that is exactly what is happening here. In
Node::release, we:
1. Take the lock.
2. Move all items to a local list on the stack.
3. Drop the lock.
4. Iterate the local list on the stack.
Combined with threads using the unsafe remove method on the original
list, this leads to memory corruption of the prev/next pointers. This
leads to crashes like the one shown in the CVE report.
Kernels - and especially the Linux kernel - are high-performance systems that require lots of shared mutable state. Every driver is a glorified while loop waiting for an IRQ so it can copy a chunk of data from one shared mutable buffer to another shared mutable buffer. So there will need to be some level of unsafe in the code.
There's a fallacy that if 95% of the code is safe and 5% is unsafe, then that code is only 5% as likely to contain memory errors as a comparable C program. But, to reiterate what another commenter said, and something I've predicted for a long time, the tendency for the correctness of the "unsafe block" to depend on the "safe block" will always exist. People will loosen the API contract between the "safe" and "unsafe" sides until an error on the "safe" side kicks off an error on the "unsafe" side.
Oh no, what happened to "Rust will save us from legacy languages prone to memory corruption"?
Here is the fix:
pub(crate) fn release(&self) {
    let mut guard = self.owner.inner.lock();
    while let Some(work) = self.inner.access_mut(&mut guard).oneway_todo.pop_front() {
        drop(guard);
        work.into_arc().cancel();
        guard = self.owner.inner.lock();
    }
-   let death_list = core::mem::take(&mut self.inner.access_mut(&mut guard).death_list);
-   drop(guard);
-   for death in death_list {
+   while let Some(death) = self.inner.access_mut(&mut guard).death_list.pop_front() {
+       drop(guard);
        death.into_arc().set_dead();
+       guard = self.owner.inner.lock();
    }
}
And here is the unsafe block mentioned in the commit message with some more context [3]:
fn set_cleared(self: &DArc<Self>, abort: bool) -> bool {
    // <snip>
    // Remove death notification from node.
    if needs_removal {
        let mut owner_inner = self.node.owner.inner.lock();
        let node_inner = self.node.inner.access_mut(&mut owner_inner);
        // SAFETY: A `NodeDeath` is never inserted into the death list of any node other than
        // its owner, so it is either in this death list or in no death list.
        unsafe { node_inner.death_list.remove(self) };
    }
    needs_queueing
}
[0]: https://lore.kernel.org/linux-cve-announce/2025121614-CVE-20...
[1]: https://github.com/torvalds/linux/commit/3e0ae02ba831da2b707...
[2]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...
[3]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...
> Since it was in an unsafe block, the error for sure was way easier to find within the codebase than in C. Everything that's not unsafe can be ruled out as a reason for race conditions and the usual memory handling mistakes - that's already a huge win.
The benefit of Rust is that you can isolate the code that could cause a given memory error to an unsafe block*. But that doesn't necessarily mean the error shown is directly related to the unsafe block. As in C++, triggering undefined behavior can in theory cause the program to do anything, including failing spectacularly within seemingly unrelated safe code.
* Excluding cases where safe things are actually possibly unsafe (like some incorrectly marked FFI)
This is so obviously false that I suspect that's the reason you don't see any Rust gurus agreeing with you.
Drivers do lots of resource and memory management, far more than just spinning on IRQs.
I believe their point was that they needed to audit only the unsafe blocks to find the actual root cause of the bug once they had an idea of the problematic area.
The absence of UB (undefined behavior, where everything bad can happen) in a Rust-unsafe block can depend on Rust-not-unsafe code in the surrounding module. Thus, even a single block of unsafe can in theory require going through the whole module to figure out where it went wrong or to ensure correctness. And if access control was not properly used, possibly more than the module.
If you look at the mentioned patches, the fixes are to code outside the described unsafe block, in Rust-not-unsafe code. It is perfectly possible to introduce UB through changes to "safe" Rust, if those changes end up violating some assumptions in some Rust-unsafe block somewhere.
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
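As a minimal sketch (toy types, nothing to do with the kernel code), a safe-looking edit elsewhere in a module can break the invariant an unsafe block relies on:
struct Buf {
    data: Vec<u8>,
    len: usize, // module-wide invariant: len <= data.len()
}

impl Buf {
    fn first(&self) -> Option<u8> {
        if self.len == 0 {
            return None;
        }
        // SAFETY: sound only while the module invariant `len <= data.len()` holds.
        Some(unsafe { *self.data.get_unchecked(0) })
    }

    // A later, entirely "safe" edit can silently break that invariant:
    fn bump_len(&mut self) {
        self.len += 1; // nothing pushed into `data`; `first` can now read out of bounds
    }
}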
Another way to introduce UB in Rust-not-unsafe code is if no_std is used. In that case, a simple stack overflow can cause UB, with no Rust-unsafe required.
Surprisingly many Rust developers do not understand these points. It may take some of the shine off of Rust, so some Rust fans refrain from explaining it properly, which is not good.
That indicates that Greg Kroah-Hartman has a very poor understanding of Rust and the _unsafe_ keyword. The bug can, in fact, exhibit undefined behavior and memory corruption.
His lack of understanding is unfortunate, to put it very mildly.
"All bugs" is a strawman typically used only by detractors. The correct claim is: safe Rust eliminates certain classes of bugs. I'd wager the design of std eliminates more (e.g. the different string types), but that doesn't really apply to the kernel.
Classic Motte and Bailey. It is often said of Rust that "if it compiles, it runs". When that is obviously not the case, Rust evangelists claim nobody actually means that and that Rust just eliminates memory bugs. And when even that isn't true, they try to mischaracterize it as "all bugs" when, no, people are expecting it to eliminate all memory bugs, because that's what Rust people claim.
The real question is "does it provide this greater value for _less_ effort?"
The answer seems to be: "No."
The author is thinking about "the error" as some source code that's incorrect: "your error was not bringing gloves and a hat to the snowball fight". But you're thinking of "the error" as some diagnostic result that shows there was a problem: "my error is that I'm covered in freezing snow".
Does that help?
When debugging, we care about where the assumptions we had were violated. Not where we observe a bad effect of these violated assumptions.
I think you get here yourself when you say:
> triggering undefined behavior can in theory cause the program to do anything, including fail spectacularly within seemingly unrelated safe code
The bug isn't where it failed spectacularly. It's where the C++ code triggered undefined behavior.
Put another way: if the undefined behavior _didn't_ cause a crash / corrupted data, the bug _still_ exists. We just haven't observed any bad effects from it.
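A minimal sketch of that distinction, with deliberately broken toy code (`corrupt` and its caller are hypothetical):
fn corrupt(v: &mut Vec<u8>) {
    let p = v.as_mut_ptr();
    // The bug is here: when len == capacity this writes one past the end
    // of the allocation. Undefined behavior is triggered at this line.
    unsafe { *p.add(v.len()) = 0xFF };
}

fn main() {
    let mut v = Vec::with_capacity(4);
    v.extend_from_slice(&[1, 2, 3, 4]); // len == capacity == 4
    corrupt(&mut v); // UB happens here...
    // ...but any crash or garbage observed below, in safe code, is only an
    // effect. The bug still exists even if nothing visible goes wrong.
    println!("{:?}", v);
}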
Let's see -- this is the first CVE caused by a mistake made using unsafe Rust. It was revealed along with 159 new kernel CVEs found in C code.[0]
It may just be me, but it seems wildly myopic to draw conclusions about Rust, or even unsafe Rust, from one CVE. More CVEs will absolutely happen. But even true Rust haters have to recognize that the tide of CVEs in kernel C code runs at something like 19+ CVEs per day. What kind of case can you make that unsafe Rust is worse than that?
Or is this just a theoretical argument, "it is hypothetically possible to create a technically-spec-compliant Rust compiler that would compile this into dangerous machine code"? If so it should still be fixed of course, but if I'm patching my Linux kernel I'd rather know what the practical impact is.
The more useful question is, how many CVEs were prevented because unsafe {} blocks receive more caution and scrutiny?
Rust is written in Rust, and we still want to be able to e.g. call C code from Rust. (It used to be the case that extern blocks declaring C code were not themselves marked unsafe, but this was fixed recently.)
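A minimal sketch of that boundary, assuming a recent toolchain with the `unsafe extern` syntax and libc's strlen:
unsafe extern "C" {
    // The compiler cannot verify this signature matches the C side.
    fn strlen(s: *const core::ffi::c_char) -> usize;
}

fn c_string_len(s: &std::ffi::CStr) -> usize {
    // SAFETY: `CStr` guarantees a valid, NUL-terminated pointer, which is
    // exactly the (compiler-unverifiable) contract strlen requires.
    unsafe { strlen(s.as_ptr()) }
}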
Thankfully, it doesn't. There are very few situations which require unsafe code, though a kernel is going to run into a lot of those by virtue of what it does. But the vast majority of the time, you can write Rust programs without ever once reaching for unsafe.
For this to be a "classic motte and bailey" you will need to point us to instances where _the original poster_ suggested the "bailey" (which you characterize as "Rust eliminates all bugs").
It instead appears that you are attributing _other comments_ to the OP. This is not a fair argumentation technique, and could easily be turned against you to make any of your comments into a "classic motte and bailey".
That claim is overly broad, but it's a huge, huge part of it. There's no amount of computer science or verification that can prevent a human from writing the wrong software or specification (`let plus_a_b = a - b`, or "why did you give me an orange when I wanted an apple"). Unsafe Rust is markedly different from safe-by-default Rust. This is akin to claiming that C is buggy or broken because people write broken inline ASM. If C can't deal with broken inline ASM, then why bother with C?
The relevant question is whether it results in fewer and less severe CVEs than code written in C. So far the answer seems to be a resounding yes.
Or another way to put it: clearly this is bad, and unsafe blocks deserve significant scrutiny. But it's unclear how this would have been made better by the code being entirely unsafe, rather than a particular source of unsafety being incorrect.
The short of it is that for fundamental computer science reasons (essentially Rice's theorem), the ability to always reject unsafe programs comes at the cost of sometimes being unable to verify that an actually-safe program is safe. You can deal with this either by accepting this tradeoff as it is and accepting that some actually-safe programs will be impossible to write, or you can add an escape hatch that the compiler is unable to check but that allows you to write those unverifiable programs. Rust chose the latter approach.
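A minimal sketch of that escape hatch (a toy `sum_pairs`, not a real API): the programmer can prove the indices are in bounds, but the checker cannot, so `unsafe` asserts it:
fn sum_pairs(v: &[u32]) -> u32 {
    let n = v.len() / 2 * 2; // largest even n with n <= v.len()
    let mut total = 0;
    let mut i = 0;
    while i < n {
        // SAFETY: i is even and i < n, so i + 1 <= n - 1 < n <= v.len();
        // both indices are in bounds.
        total += unsafe { *v.get_unchecked(i) + *v.get_unchecked(i + 1) };
        i += 2;
    }
    total
}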
> Kinda sounds like a lock would make this safe?
There was a lock, but it looks like it didn't cover everything it needed to.
Which is either 1) not true as evidenced by this bug or 2) a tautology whereby Rust eliminates all bugs that it eliminates.
(The nuance being that sometimes there's a lot of unsafe Rust, because some domains - like kernel programming - necessitate it. But this is still a better state of affairs than having no code be correct by construction, which is the reality with C.)

The first CVE has been assigned to a piece of the Linux kernel's Rust code.
Greg Kroah-Hartman announced that the first CVE has been assigned to a piece of Rust code within the mainline Linux kernel.
This first CVE for Rust code in the Linux kernel pertains to the Android Binder rewrite in Rust. There is a race condition that can occur due to some noted unsafe Rust code. That code can lead to memory corruption of the previous/next pointers and in turn cause a crash.
This CVE for the possible system crash applies to Linux 6.18 and newer, since that's when the Rust Binder driver was introduced. At least it's just a possible system crash and not a more serious system compromise such as remote code execution.
More details on CVE-2025-68260 via the Linux CVE mailing list.
Multiple reasons:
1: Marketing and social media brigading, as Linus put it.
2: Pattern matching and enums, which are genuinely good.
3: Rust has some trade-offs that are closer to C++ than C, and those trade-offs have advantages and disadvantages.
4: Module system.
5: More modern macros.
6: Other advantages.
Rust also has a large heap of drawbacks.
As for why there is unsafe in the kernel? There are things, especially in a kernel, that cannot be expressed in safe Rust.
Still, having smaller sections of unsafe is a boon because you isolate these locations of elevated power, meaning they are auditable and obvious. Rust also excels at wrapping unsafe in safe abstractions that are impossible to misuse. A common comparison point is that in C your entire program is effectively unsafe, whereas in Rust it's a subset.
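The classic illustration is `split_at_mut` from the Rust book; this is a sketch, not the real std implementation:
use std::slice;

fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = v.len();
    let ptr = v.as_mut_ptr();
    assert!(mid <= len); // the safe wrapper enforces the invariant once, here
    // SAFETY: [0, mid) and [mid, len) are disjoint and both lie within the
    // allocation, as guaranteed by the assert above.
    unsafe {
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
Callers can never obtain two overlapping `&mut` slices, so the raw-pointer reasoning is done exactly once, inside the wrapper.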
> 1) not true as evidenced by this bug
The code used unsafe, putting us outside "safe" Rust.
So arguably both camps are correct: those who advocate Rust rewrites, and those who are against them.
If Rust doesn't live up to its lofty promises, then it changes the cost-benefit analysis. You might give up almost anything to eliminate all bugs, a lot to eliminate all memory bugs, but what would you give up to eliminate some bugs?
Then the safety comment can easily bias the reader into believing that the author has fully understood the problem and all edge cases.
/// Removes the provided item from this list and returns it.
///
/// This returns `None` if the item is not in the list. (Note that by the safety requirements,
/// this means that the item is not in any list.)
///
/// # Safety
///
/// `item` must not be in a different linked list (with the same id).
pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
I think it'd be tricky at best to make this particular API safe since doing so requires reasoning across arbitrary other List instances. At the very least I don't think locks would help here, since temporary exclusive access to a list won't stop you from adding the same element to multiple lists.
[0]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...
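A toy sketch (plain std types, nothing like the kernel's intrusive lists) of why per-list locking can't uphold a cross-list invariant:
use std::sync::Mutex;

static LIST_A: Mutex<Vec<u32>> = Mutex::new(Vec::new());
static LIST_B: Mutex<Vec<u32>> = Mutex::new(Vec::new());

fn main() {
    // Each push happens under its own lock, yet the cross-list invariant
    // "an item is in at most one list" is silently violated.
    LIST_A.lock().unwrap().push(7);
    LIST_B.lock().unwrap().push(7);
}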
On the other hand, Rust has some rules that are more difficult to uphold than C's, for instance regarding aliasing - especially in the Linux kernel, where it was decided that strict aliasing should be turned off when compiling the C code. When Rust-unsafe is used, the programmer has the responsibility of not violating those rules. That can be tricky, and many Rust developers either try to avoid using unsafe, or lean heavily on Miri, even though Miri can't handle everything.
Here's what `List::remove` says on its safety requirements [0]:
/// Removes the provided item from this list and returns it.
///
/// This returns `None` if the item is not in the list. (Note that by the safety requirements,
/// this means that the item is not in any list.)
///
/// # Safety
///
/// `item` must not be in a different linked list (with the same id).
pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
At least if I'm understanding things correctly, I don't think that invariant is something that locks can protect in general. I can't say I'm familiar enough with the code to say whether some other code organization would have eliminated the need for the unsafe block in this specific case.
[0]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...
(I think the underlying philosophical disagreement here is this: I think software is always going to have bugs, and that Rust can't - and doesn't promise - to perfectly eliminate them. Instead, what Rust does promise - and deliver on - is that the entire class of memory safety bugs can be eliminated by construction in safe Rust, and localized when present to errors in unsafe Rust. Insofar as that's the promise, Rust has delivered here.)
Ultimately every program depends on things beyond any compiler's ability to verify: for example, that calls to code not written in that language are correct, or, even more fundamentally, if you're writing some embedded program that has no interfaces to foreign code at all, that the silicon (both the parts that handle IO and the parts that do the computation) is correct.
The promise of Rust isn't that it can make this fundamentally non-compiler-verifiable (i.e. unsafe) dependency go away; it's that you can wrap the dependency in abstractions that make it safe for users of the dependency, provided the dependency is written correctly.
In most domains Rust doesn't necessitate writing new unsafe code: you rely on the existing unsafe code in your dependencies, which is shared, battle-tested, and reasonably scoped. This is all Rust, or any programming language, can promise. The demand that the dependency tree contain no unsafe isn't the same as the domain necessitating no unsafe; it's the impossible demand that writing the low-level abstractions every domain relies on shouldn't need unsafe.
But as the adjacent commenter notes: having unsafe is not inherently a problem. You need unsafe Rust to interact with C and C++, because they're not safe by construction. This is a good thing!
The cost-benefit argument for Rust has always been mediated by the fact that Rust will need to interact with (or include) unsafe code in some domains. Per above, that's an explicit goal of Rust: to provide sound abstractions over unsound primitives that can be used soundly by construction.
Safe Rust code is safe. You know where unsafe code is, because it's marked as unsafe. Yes, you will need some unsafe code in any notable project, but at least you know where it is. If you don't babysit your unsafe code, you get bad things. Someone didn't do the right thing here, and I'm sure there will be a post-mortem and lessons learned.
To be comparable, imagine in C you had to mark potentially UB code with ub{} to compile. Until you get that, Rust is still a clear leader.
That's roughly 100% of unsafe code because a lint in the compiler asks for it.
Strict aliasing in C roughly means that if you initialize memory as a particular type, you can only access it as that type or as one of a list of aliasable types like char. Rust has no such restriction and no concept of strict aliasing like this. In Rust, "type aliasing" is allowed, so long as you respect size, alignment, and representability rules.
Aliasing safety in Rust roughly means that you cannot have an exclusive reference to an object while any other reference to that object is active (reality is a little more involved than that, but not a lot). C has no such rule.
It's very unfortunate that such similar names were given to these different concepts.
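A minimal sketch of the first point - legal "type aliasing" in Rust that would be a strict-aliasing violation in C:
fn main() {
    let x: u32 = 0xAABB_CCDD;
    let p = &x as *const u32 as *const u16;
    // SAFETY: in bounds, u16's alignment (2) <= u32's (4), and every bit
    // pattern is a valid u16. Rust allows this; in C, reading a uint32_t
    // object through a uint16_t lvalue is UB under strict aliasing.
    let half = unsafe { *p };
    println!("{half:04x}"); // ccdd on little-endian targets
}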
Do you agree or disagree with
https://www.reddit.com/r/rust/comments/16i8lo2/how_unpleasan...
> It is worse than writing equivalent unsafe code in, say, C. The main pain point is that Rust has a rather complicated relationship between raw pointers and non-raw pointers (mainly references and box). Namely, how they invalidate each other. A good rule of thumb is to mix them as little as possible. But eventually the raw-pointer-heavy unsafe code will have to interface with reference-heavy safe code, and that's where the tricky stuff is.
> Unsafe Rust is generally much harder to get right than unsafe C or unsafe C++. There's no sugar coating it. Rust relies on many more invariants and guarantees that you have to account for than C or C++ do, so there's much more mental overhead.
matthieum, a Rust reddit mod and fan of Rust, disagrees with those posts. Whether he is biased or not is up to others to decide.
What?
Even the Rust standard library has had UB.
https://materialize.com/blog/rust-concurrency-bug-unbounded-...
And a bug in one crate can cause UB in another crate if that other crate is not designed well and correctly.
I have heard it, and I've stated it before: it's never stated with absolute confidence. As I said in another thread, if it were actually true, then Rust wouldn't need an integrated unit-testing framework.
It's referring to the experience that Rust learners have, especially when writing relatively simple code, that it tends to be hard to misuse libraries in a way that looks correct and compiles but actually fails at runtime. Rust cannot actually provide this guarantee; it's impossible in any language. However, there are a lot of common simple tasks (where there's not much complex internal logic that could be subtly incorrect) where the interfaces provided by the libraries they're depending on are designed to leverage the type system such that it's difficult to accidentally misuse them.
Take something like initializing an HTTP client: the interfaces make it impossible to obtain an improperly initialized client instance. This is an especially distinct feeling if you're used to dynamic languages, where you often have no assurances at all that you didn't typo a field name.
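A minimal sketch of that idea, with a hypothetical client type rather than any real crate's API:
struct Client {
    base_url: String,
}

struct ClientBuilder {
    base_url: Option<String>,
}

impl ClientBuilder {
    fn new() -> Self {
        ClientBuilder { base_url: None }
    }

    fn base_url(mut self, url: &str) -> Self {
        self.base_url = Some(url.to_string());
        self
    }

    // The only way to obtain a `Client` goes through this check, so an
    // improperly initialized client is unrepresentable downstream.
    fn build(self) -> Result<Client, &'static str> {
        match self.base_url {
            Some(base_url) => Ok(Client { base_url }),
            None => Err("base_url is required"),
        }
    }
}
Callers can still forget to set base_url, but the mistake surfaces as a Result at construction time, not as a runtime failure deep inside request code.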
I can't imagine anybody seriously making that claim as a property of the language.
In other words: unsafe Rust is harder, but only in an apples-and-oranges sense. If you compare it to the same diligence you'd need to exercise in writing safer C, it would be about the same.
Yes! Failure to uphold invariants of the underlying abstract model in an unsafe block breaks the surrounding code, including other crates! That's exactly consistent with what I said. There's nothing special about the stdlib. Like all software, it can have bugs.
What the proof states is that two independently correct blocks of unsafe code cannot, when used together, be incorrect. So the key value there is that you only have to reason about them in isolation, which is not true for C.
I sound like an apologist, but the Rust team stated that "memory safety is preserved as long as Rust's invariants are". Feels really clear; people keep missing this point for some reason, almost as if it's a gotcha that unsafe Rust behaves in the same memory-unsafe way as C/C++, when that's exactly the point.
Your verification surface is smaller and has a boundary.