Bzip2 crate switches from C to 100% Rust (trifectatech.org)
273 points by Bogdanp 14 hours ago | 117 comments
dralley 13 hours ago [-]
How realistic is it for the Trifecta Tech implementation to start displacing the "official" implementation used by linux distros, which hasn't seen an upstream release since 2019?

Fedora recently swapped out the original Adler zlib implementation for zlib-ng, so that sort of thing isn't impossible. You just need to provide a C ABI compatible with the original one.

kpcyrd 5 minutes ago [-]
I briefly looked at this and there's already a cargo-c configuration, which is good, but it's currently namespaced differently, so it won't get automatically detected by C programs as `libbz2`:

https://github.com/trifectatechfoundation/libbzip2-rs/blob/8...

I'm not familiar enough with the symbols of bzip2 to say anything about ABI compatibility.
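For illustration, ABI compatibility here means exporting the same unmangled symbols that bzlib.h declares. A minimal sketch of what that looks like on the Rust side, using the real BZ2_bzlibVersion entry point (the body and version string are made up):

    // Cargo.toml needs: [lib] crate-type = ["cdylib"]
    use std::ffi::c_char;

    #[no_mangle]
    pub extern "C" fn BZ2_bzlibVersion() -> *const c_char {
        // NUL-terminated and 'static, so the pointer stays valid for C callers.
        b"1.1.0-rs, illustrative\0".as_ptr() as *const c_char
    }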

I have a toy project to explore things like that, but it's difficult to set aside the amount of time needed to maintain an implementation of the GNU operating system. I would welcome pull requests though:

https://github.com/kpcyrd/platypos

wmf 12 hours ago [-]
Ubuntu is using Rust sudo so it's definitely possible.
egorfine 37 minutes ago [-]
It's not. At least not yet. It's planned for 25.10, but thankfully sudo will be packaged and available for a few versions after that as promised [1].

[1] https://discourse.ubuntu.com/t/adopting-sudo-rs-by-default-i...

masfuerte 12 hours ago [-]
They do provide a compatible C ABI. Someone "just" needs to do the work to make it happen.
tiffanyh 10 hours ago [-]
I think that is the goal of uutils.

https://uutils.github.io/

cocoa19 8 hours ago [-]
I hope some are improved too.

The performance boost in tools like ripgrep and tokei is insane compared to the tools they replace (grep and cloc respectively).

egorfine 34 minutes ago [-]
I absolutely hate it when people call their tools a "replacement" for something that is part of core standards, something that did just fine for decades.

ripgrep is an excellent tool. But it's not a grep replacement. And should not ever be.

dagw 42 seconds ago [-]
> And should not ever be.

Those "core standards" that you talk about didn't spring fully formed from the earth. They came about from competition and beating out and replacing the old "core standards" that lots of people argued very strongly for should not ever be replaced. When I was starting out my career I was told by very experienced people that should not learn to rely on the GNU tool features, since they're far from ubiquitous and probably won't be installed on most systems I'll be working on.

mprovost 22 minutes ago [-]
The GNU utils were a replacement for the BSD utils which were a replacement for the original AT&T utils. Every replacement added new functionality and improvements, and every time someone complained that they didn't stick closer to the thing they replaced. Looking specifically at grep, there used to be new versions like egrep and fgrep that added functionalities beyond standard grep's, but those were eventually pulled into "standard" grep (GNU or BSD). If we stuck with standards we'd all still be using the Bourne shell. The GNU utilities have been around long enough that they feel like the standard now, but I'm glad that we're coming into a new phase of innovation in command-line utilities. And this didn't start with Rust - the new generation of search utilities started with ack (Perl) and then ag (C).
burntsushi 11 minutes ago [-]
I didn't call ripgrep a replacement. Other people do. Because it does actually replace their usage of grep in some or all cases, depending on their usage patterns.

https://github.com/BurntSushi/ripgrep/blob/master/FAQ.md#can...

rlpb 12 hours ago [-]
> You just need to provide a C ABI compatible with the original one.

How does this interact with dynamic linking? Doesn't the current Rust toolchain mandate static linking?

alxhill 9 hours ago [-]
The commenters below are confusing two things - Rust binaries can be dynamically linked, but because Rust doesn’t have a stable ABI you can’t do this across compiler versions the way you would with C. So in practice, everything is statically linked.
pjmlp 2 hours ago [-]
A culture issue: in the C++ world of the Apple and Microsoft ecosystems, shipping binary C++ libraries is common business, even if it is compiler-version dependent.

This is why Apple made such a big point of having a better ABI approach on Swift, after their experience with C++ and Objective-C.

While on the Microsoft side, you will notice that all of Victor Ciura's talks at Rust conferences mention dealing with the ABI as one of the key points Microsoft is working through in the context of Rust adoption.

connicpu 9 hours ago [-]
Specifically, the Rust dependencies are statically linked. It's extremely easy to dynamically link anything that has a C ABI from Rust.
eru 9 hours ago [-]
Static linking also produces smaller binaries and lets you do link-time-optimisation.
emidln 4 hours ago [-]
Static linking doesn't produce smaller binaries. You are literally adding the symbols from a library into your executable rather than simply mentioning them and letting the dynamic linker figure out how to map those symbols at runtime.

The sum size of a dynamic binary plus the dynamic libraries may be larger than one static linked binary, but whether that holds for more static binaries (2, 3, or 100s) depends on the surface area your application uses of those libraries. It's relatively common to see certain large libraries only dynamically linked, with the build going to great lengths to build certain libraries as shared objects with the executables linking them using a location-relative RPATH (using the $ORIGIN feature) to avoid the extra binary size bloat over large sets of binaries.

IshKebab 4 hours ago [-]
Static linking does produce smaller binaries when you bundle dependencies. You're conflating two things - static vs dynamic linking, and bundled vs shared dependencies.

They are often conflated because you can't have shared dependencies with static linking, and bundling dynamically linked libraries is uncommon in FOSS Linux software. It's very common on Windows or with commercial software on Linux though.

guappa 3 hours ago [-]
You know how the page cache works? Static linking makes it not work. So 3000 processes won't share the same pages for the libc but will have to load it 3000 times.
mandarax8 3 hours ago [-]
You can still statically link all your own code but dynamically link libc/other system dependencies.
guappa 2 hours ago [-]
Not with rust…
tialaramex 29 minutes ago [-]
I wonder what happens in the minds of people who just flatly contradict reality. Are they expecting others to go "OK, I guess you must be correct and the universe is wrong"? Are they just trying to devalue the entire concept of truth?

[In case anybody is confused by your utterance, yes of course this works in Rust]

guappa 3 hours ago [-]
Static linking produces huge binaries, it lets you do LTO but the amount of optimisation you can actually do is limited by your RAM. Static linking also causes the entire archive to need constant rebuilds.
quotemstr 6 hours ago [-]
C++ binaries should be doing the same. Externally, speak C ABI. Internally, statically link Rust stdlib or C++ stdlib.
pjc50 2 hours ago [-]
Exporting a C API from a C++ project to consume in another C++ project is really painful. This is how you get COM.

(which actually slightly pre-dates C++, I think?)

pjmlp 2 hours ago [-]
OWL, MFC, Qt, VCL, FireMonkey, AppFramework, PowerPlant...

Plenty do not, especially on Apple and Microsoft platforms, because they have always favoured other approaches over bare-bones UNIX support in their dynamic linkers and C++ compilers.

bluGill 10 hours ago [-]
Rust cannot dynamically link to Rust. It can dynamically link to C and be dynamically linked by C - if you combine the two you can cheat, but it is still C that you are dealing with, not Rust, even if Rust is on both sides.
filmor 5 hours ago [-]
Rust can absolutely link to Rust libraries dynamically. There is no stable ABI, so it has to be the same compiler version, but it will still be dynamically linked.
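A minimal sketch of the mechanics, with a hypothetical library crate `mylib` and binary crate `myapp`: `crate-type = ["dylib"]` produces a Rust-ABI shared library, and `-C prefer-dynamic` tells rustc to link Rust dependencies (including std) dynamically; both sides must be built with the same compiler version:

    # mylib/Cargo.toml (hypothetical crate)
    [lib]
    crate-type = ["dylib"]   # Rust-ABI shared library; ABI tied to this exact rustc

    # Build both crates with the same toolchain:
    $ cargo build
    $ RUSTFLAGS="-C prefer-dynamic" cargo build -p myapp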
mjevans 9 hours ago [-]
It might help to think of it as two IPC 'servers' written in rust that happen to have the C ABI interfaces as their communication protocol.
sedatk 12 hours ago [-]
No. https://doc.rust-lang.org/reference/linkage.html#r-link.dyli...
arcticbull 12 hours ago [-]
Rust lets you generate dynamic C-linkage libraries.

Use crate-type=["cdylib"]

nicoburns 12 hours ago [-]
Dynamic linking works fine if you target the C ABI.
conradev 10 hours ago [-]
Rust importing Rust must be statically linked, yes. You can statically link Rust into a dynamic library that other libraries link to, though!
timeon 11 hours ago [-]
You can use dynamic linking in Rust with the C ABI. Which means going through the `unsafe` keyword - also known as 'trust me bro'. Static linking directly to Rust source means it is checked by the compiler, so there is no need for unsafe.
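For the consumer side, a minimal sketch of that `unsafe` boundary, assuming a system libbz2 is installed (BZ2_bzlibVersion is a real libbz2 symbol; everything else is illustrative):

    use std::ffi::{c_char, CStr};

    #[link(name = "bz2")]
    extern "C" {
        // The compiler must trust this hand-written signature...
        fn BZ2_bzlibVersion() -> *const c_char;
    }

    fn main() {
        // ...which is why every call through it is `unsafe`.
        let version = unsafe { CStr::from_ptr(BZ2_bzlibVersion()) };
        println!("libbz2 version: {}", version.to_string_lossy());
    }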
deknos 4 hours ago [-]
I'll wait until they get to the hard stuff like awk, sed and grep.
GuB-42 2 hours ago [-]
ripgrep is one of the best grep replacements you can find, maybe even the best, and also one of the most famous Rust projects.

I don't know of a sed equivalent, but I guess that would be easy to implement, as Rust has good regex support (see ripgrep) and 90%+ of sed usage is search-and-replace. The other commands don't look hard to implement, and because they are not used as much, optimizing them is less of a priority.
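For instance, the core of a sed-style `s/.../.../g` is a few lines with the `regex` crate (the same engine family ripgrep builds on); the pattern and input here are just illustrative:

    use regex::Regex;

    fn main() {
        // Roughly: sed -E 's/(\w+)@example\.com/\1@example.org/g'
        let re = Regex::new(r"(\w+)@example\.com").unwrap();
        let input = "alice@example.com, bob@example.com";
        let output = re.replace_all(input, "$1@example.org");
        println!("{}", output); // alice@example.org, bob@example.org
    }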

I don't know about awk; it is a full programming language, but I guess implementing it is far from an impossible task.

Now the real hard part is making a true, bug-for-bug compatible replacement of the GNU version of these tools, but while good to have, it is not strictly necessary. For example, Busybox is very popular, maybe even more so than GNU in terms of number of devices, and it has its own (likely simplified) version of grep, sed and awk.

scns 22 minutes ago [-]
There is sd, not a drop-in replacement though.

https://github.com/chmln/sd

egorfine 33 minutes ago [-]
What would be the point?
wiz21c 2 hours ago [-]
FTA:

> Why bother working on this algorithm from the 90s that sees very little use today?

What's in use nowadays? zstd?

Ah, saw this: https://quixdb.github.io/squash-benchmark/

rwaksmunski 11 hours ago [-]
I use this crate to process 100s of TB of Common Crawl data, I appreciate the speedups.
viraptor 11 hours ago [-]
What's the reason for using bz2 here? Wouldn't it be faster to do a one off conversion to zstd? It beats bzip2 in every metric at higher compression levels as far as I know.
rwaksmunski 10 hours ago [-]
Common Crawl delivers the data as bz2. Indeed I store intermediate data in zstd with ZFS.
declan_roberts 11 hours ago [-]
That assumes you're processing the data more than once.
anon-3988 9 hours ago [-]
Is this data available as torrents?
malux85 11 hours ago [-]
Yeah came here to say a 14% speed up in compression is pretty good!
aidenn0 6 hours ago [-]
bzip2 (particularly parallel implementations thereof) is already relatively competitive for compression. Decompression is where it lags behind, because LZ77-based algorithms can be incredibly fast at decompression.
koakuma-chan 10 hours ago [-]
It's blazingly fast
agumonkey 19 minutes ago [-]
rust aside, I really enjoy seeing all these different implementation benchmarks, very satisfying to read
Aissen 1 hour ago [-]
Does anyone know if it supports parallel decompression, lbzip2-style? (or just iterators doing pre-scanning for the block magic that allow doing parallel decompression on top).

Edit: it probably doesn't.
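For context, the pre-scan trick works because every bzip2 block begins with the 48-bit magic 0x314159265359 (pi in BCD), but blocks are bit-aligned rather than byte-aligned, so the scan has to run at the bit level. A rough sketch of the idea (candidates still need validation, since the pattern can occur by chance inside compressed data):

    const BLOCK_MAGIC: u64 = 0x3141_5926_5359; // 48-bit block header magic
    const MASK: u64 = (1 << 48) - 1;

    // Returns bit offsets of candidate block starts.
    fn find_block_starts(data: &[u8]) -> Vec<u64> {
        let mut window: u64 = 0;
        let mut starts = Vec::new();
        for (byte_idx, &byte) in data.iter().enumerate() {
            for bit in 0..8 {
                // Shift one bit at a time into a 48-bit rolling window.
                window = ((window << 1) | ((byte >> (7 - bit)) as u64 & 1)) & MASK;
                let bit_pos = byte_idx as u64 * 8 + bit as u64;
                if bit_pos >= 47 && window == BLOCK_MAGIC {
                    starts.push(bit_pos - 47); // bit offset where the magic begins
                }
            }
        }
        starts
    }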

firesteelrain 12 hours ago [-]
Anyone know if this will by default resolve the 11 outstanding CVEs?

Ironically there is one CVE reported in the bzip2 crate

[1] https://app.opencve.io/cve/?product=bzip2&vendor=bzip2_proje...

tialaramex 11 hours ago [-]
There's certainly a contrast between the "Oops a huge file causes a runtime failure" reported for that crate and a bunch of "Oops we have bounds misses" in C. I wonder how hard anybody worked on trying to exploit the bounds misses to get code execution. It may or may not be impossible to achieve that escalation.
Philpax 12 hours ago [-]
> The bzip2 crate before 0.4.4

They're releasing 0.6.0 today :>

HackerThemAll 3 hours ago [-]
"NOTE: this is unrelated to the https://crates.io/crates/bzip2-rs product."

Reading to the last sentence is hard.

jorams 2 hours ago [-]
But it does apply to the bzip2 crate, which is the topic of discussion. Its new pure-rust implementation is libbz2-rs-sys, not bzip2-rs. The last sentence is irrelevant.
debugnik 2 hours ago [-]
This article is about the bzip2 crate, not the bzip2-rs crate, despite the repo for the former having the name of the latter.
conorjh 2 hours ago [-]
[dead]
a-dub 10 hours ago [-]
i'd be curious if they're using the same llvm codegen backend (with the same optimizations) for the c and rust versions. if so, where are the speedups coming from?

(ie, is it some kind of rust auto-simd thing, did they use the opportunity to hand optimize other parts or is it making use of newer optimized libraries, or... other)

eru 9 hours ago [-]
Just speculating: Rust can hand over more hints to the code generator. Eg you don't have to worry about aliasing as much as with C pointers. See https://en.wikipedia.org/wiki/Aliasing_(computing)#Conflicts...
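A small illustration of the point: in the sketch below, `a` and `b` are guaranteed not to alias (a `&mut` reference is exclusive), so the compiler can mark them noalias for LLVM and vectorize without the runtime overlap checks the equivalent C would need unless you add `restrict`:

    fn add_into(a: &mut [i32], b: &[i32]) {
        // No aliasing possible between `a` and `b`, by construction.
        for (x, y) in a.iter_mut().zip(b) {
            *x += *y;
        }
    }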
MBCook 7 hours ago [-]
This makes a lot of sense to me, though I don’t know the official answer so I’m just sort of guessing along too.

Linked from the article is another on how they used c2rust to do the initial translation.

https://trifectatech.org/blog/translating-bzip2-with-c2rust/

For our purposes, it points out places where the code isn’t very optimal because the C code has no guarantees on the ranges of variables, etc.

It also points out a lot of people just use ‘int’ even when the number will never be very big.

But with the proper type the Rust compiler can decide to do something else if it will perform better.

So I suspect your idea that it allows unlocking better optimizations through more knowledge is probably the right answer.

Too 4 hours ago [-]
Ergonomics of using the right data structures and algorithms can also play a big role. In C, everything beyond a basic array is too much hassle.
littlestymaar 3 hours ago [-]
Yeah, that was Bryan Cantrill's realization when, for the sake of learning, he rewrote a part of dtrace in Rust and was shocked to see his naive reimplementation being significantly faster than his original code, and the answer boiled down to “I used a BTreeMap in Rust, because it's in std”.
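The ergonomics gap is small but real; an ordered map with sorted iteration is one import away (a trivial sketch):

    use std::collections::BTreeMap;

    fn main() {
        let mut hist: BTreeMap<&str, u32> = BTreeMap::new();
        for word in ["b", "a", "b"] {
            *hist.entry(word).or_insert(0) += 1;
        }
        // Iterates in sorted key order: a=1, b=2
        for (k, v) in &hist {
            println!("{k}={v}");
        }
    }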
WhereIsTheTruth 6 hours ago [-]
Any rewrite, in language X, Y, or Z, gives you the opportunity to speed things up; there is nothing inherent to Rust.
adgjlsfhk1 5 hours ago [-]
C is honestly a pretty bad language for writing modern high performance code. Between C99 and C23, there was a ~20 year gap where the language just didn't add features needed to idiomatically target lots of the new instructions added (without inline asm). Just getting good abstract machine instructions for clz/popcnt/clmul/pdep etc helps a lot for writing this kind of code.
zzo38computer 5 hours ago [-]
Popcount, clz, and ctz are provided as nonstandard functions in GCC (and clang might also support them in GNU mode, but I don't know for sure). PDEP and PEXT do not seem to be, but I think they should be (and PEXT is something that INTERCAL already had, anyway); PDEP and PEXT can be used with -mbmi2 on x86, but are not available for general use. The MOR and MXOR of MMIX are also something that I would want to be available as built-in functions.
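For comparison, Rust exposes these directly as integer methods (and the x86 PDEP/PEXT intrinsics live in std::arch behind the bmi2 target feature); a quick sketch:

    fn main() {
        let x: u32 = 0b1011_0000;
        // Each lowers to a single instruction (popcnt/lzcnt/tzcnt) where available.
        println!("popcount = {}", x.count_ones());     // 3
        println!("clz      = {}", x.leading_zeros());  // 24
        println!("ctz      = {}", x.trailing_zeros()); // 4
    }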
xvilka 7 hours ago [-]
I hope they or Prossimo will also look at and reimplement, in a similar fashion, the core Internet protocols - BGP, OSPF and RIP, other routing implementations, DNS servers, and so on.
dataking 4 hours ago [-]
https://www.memorysafety.org/initiative/ this page mentions TLS and DNS which goes some way towards your suggestion.
broken_broken_ 4 hours ago [-]
About not having perf on macOS: you can get quite far with dtrace for profiling. That’s what the original flame graph script in Perl mentions using and what the flame graph Rust reimplementation also uses. It does not have some metrics like cache misses or micro instructions retired but still it can be very useful.
zoobab 3 hours ago [-]
lbzip2 had much faster decompression speed, using all available CPU cores.

It's 2025, and most programs like Python are stuck at one CPU core.

guappa 3 hours ago [-]
Thanks for showing us you have no understanding of python's situation.
tephra 2 hours ago [-]
I like Rust and have an ambition to learn it as well (I've had a few false starts...). One of the issues I have is that almost every (slight exaggeration) library I come across is still at version 0.x.y. Take this library as an example: 0.1.0 was released in 2014 and it still hasn't had a 1.0.0 release. Is there an aversion to getting to 1.0.0 in the Rust community?
liambigelow 2 hours ago [-]
https://0ver.org/#notable-zerover-projects
liambigelow 2 hours ago [-]
Serious answer: For some, they do change semi-often and don't feel compelled to declare stability. In other cases, it's a stable + widely used 0.x package, and bumping it to 1.0 usually implies _some_ kind of breaking change. (I don't know if that _should_ be the case, but I know that if I see a dependency has bumped from 0.x to 1.0 I'm going to be cautious and wait to update it until I have more time).

In general: People usually aren't too concerned about it.

sramsay64 48 minutes ago [-]
This lists Zig as an entry, despite the Zig project having very clear plans[0] for a 1.0 release. That's not 0ver, it's just the beta stage of semver.

[0] https://github.com/ziglang/zig/milestone/2

anonnon 12 hours ago [-]
[flagged]
vlovich123 12 hours ago [-]
> After the uutils debacle

Which debacle?

anonnon 11 hours ago [-]
[flagged]
vlovich123 11 hours ago [-]
So what I’m getting is

1. The uutils project didn't also make all locale cases for sort faster, even though the majority of people will be using UTF-8, C or POSIX, where it is indeed faster

2. There’s a lot of debating about different test cases which is a never ending quibble with sorting routines (go look at some of the cutting edge sort algorithm development).

This complaint is hyperfocusing on 1 of the many utilities they claim they’re faster on and quibbling about what to me are important but ultimately minor critiques. I really don’t see the debacle.

As for the license, that's more your opinion. Rust as a language generally has dual-licensed its code as MIT and Apache 2.0, and most open source projects follow this tradition. I don't see the conspiracy that you do. And just so I'm clear, the corporation you're criticizing here as the amorphous evil entity funding this is Ubuntu, right?

j16sdiz 6 hours ago [-]
>1. The uutils project didn’t also make all locales cases for sort faster even though the majority of people will be using UTF-8, C or POSIX where it is indeed faster

locale != encoding.

Try sorting a phone book with tr_TR.UTF-8 vs en_US.UTF-8

vlovich123 6 hours ago [-]
I know. UTF-8, C and POSIX are locales (at least those are the locale strings)
0cf8612b2e1e 11 hours ago [-]
So what was I supposed to get from that 4chan-wannabe site? That the project is not currently as fast as GNU? Where is the lying?
anonnon 11 hours ago [-]
[flagged]
hoseja 5 hours ago [-]
[flagged]
jeffbee 12 hours ago [-]
You should of course verify these results in your scenario. However, I somewhat doubt that the person exists who cares greatly about performance and is still willing to consider bzip2. There isn't a point anywhere in the design space where bzip2 beats zstd. You can get smaller outputs from zstd in 1/20th the time for many common inputs, or you can spend the same amount of time and get a significantly smaller output, and zstd decompression is again 20-50x faster, depending on the input. So the speed of your bzip2 implementation hardly seems worth arguing over.
MBCook 7 hours ago [-]
Sure there is: someone provided you bzip2 files. Or required you give them files in that format.

Then you don’t have a choice.

And if you have to use it, 14% is a really nice speed up.

solarized 11 hours ago [-]
Did they use any LLM to transpile the C to Rust?
Twirrim 10 hours ago [-]
If you're going to use tools to transpile, don't use something that hallucinates. You want it to be precise.

https://github.com/immunant/c2rust reportedly works pretty well. Blog post from a few years ago of them transpiling quake3 to rust: https://immunant.com/blog/2020/01/quake3/. The rust produced ain't pretty, but you can then start cleaning it up and making it more "rusty"

dataking 9 hours ago [-]
They indeed used c2rust for the initial transpile according to https://trifectatech.org/blog/translating-bzip2-with-c2rust/
nightfly 11 hours ago [-]
A task that requires precision and is potentially hard to audit? Exactly where I'd use an LLM /s
CGamesPlay 10 hours ago [-]
Without commenting on whether an LLM is the right approach, I don't think this task is particularly hard to audit. There is almost assuredly a huge test suite for bzip2 archives; fuzzing file formats is very easy; and you can restrict / audit the use of unsafe by the translator.
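A sketch of what that looks like with cargo-fuzz; the two decompress functions are hypothetical stand-ins for bindings to the new Rust decoder and the C reference:

    #![no_main]
    use libfuzzer_sys::fuzz_target;

    // Hypothetical stand-ins for the two implementations under test.
    fn decompress_rust(_data: &[u8]) -> Result<Vec<u8>, ()> { todo!() }
    fn decompress_c(_data: &[u8]) -> Result<Vec<u8>, ()> { todo!() }

    fuzz_target!(|data: &[u8]| {
        // Differential check: agree on the output bytes, or both reject the input.
        assert_eq!(decompress_rust(data).ok(), decompress_c(data).ok());
    });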
MBCook 7 hours ago [-]
You’re right, there is a large existing test suite. It’s mentioned in an article linked from this one.

https://trifectatech.org/blog/translating-bzip2-with-c2rust/

I suspect attempting to debug it would be a nightmare though. Given the LLM could hallucinate anything anywhere you’d likely waste a ton of time.

I suspect it would be faster to just try and write a new implementation based on the spec and debug that against the test suite. You’d likely be closer.

In fact, since they used c2rust, they had a perfectly working version from the start. From there they just had to clean up the Rust code and make sure it didn’t break anything. Clearly the best of the three options.

dale_huevo 12 hours ago [-]
A lot of this "rewrite X in Rust" stuff feels like burning your own house down so you can rebuild and paint it a different color.

Counting CPU cycles as if it's an accomplishment seems irrelevant in a world where 50% of modern CPU resources are allocated toward UI eye candy.

cornstalks 11 hours ago [-]
> Counting CPU cycles as if it's an accomplishment seems irrelevant in a world where 50% of modern CPU resources are allocated toward UI eye candy.

That's the kind of attitude that leads to 50% of modern CPU resources being allocated toward UI eye candy.

0cf8612b2e1e 12 hours ago [-]
Every cycle saved is longer battery life. Someone paid the one time cost of porting it, and now we can enjoy better performance forever.
dale_huevo 12 hours ago [-]
They kicked off the article saying that no one uses bzip2 anymore. A million cycles saved for something no one uses (according to them) is still 0% battery life saved.

If modern CPUs are so power efficient and have so many spare cycles to allocate to e.g. eye candy no one asked for, then no one is counting and the comparison is irrelevant.

yuriks 12 hours ago [-]
It sounds like the main motivation for the conversion was to simplify builds and reduce the chance of security issues. Old parts of protocols that no one pays much attention to anymore do seem to be a common place where those pop up. The performance gain looks more like just a nice side effect of the rewrite; I imagine they were at most targeting performance parity.
spartanatreyu 11 hours ago [-]
Exactly, even if we can't remove "that one dependency" (https://xkcd.com/2347/), we can reinforce everything that uses it.
jimktrains2 12 hours ago [-]
Isn't bzip used quite a bit, especially for tar files?
Philpax 12 hours ago [-]
The Wikipedia data dumps [0] are multistream bz2. This makes them relatively easy to partially ingest, and I'm happy to be able to remove the C dependency from the Rust code I have that deals with said dumps.

[0]: https://meta.wikimedia.org/wiki/Data_dump_torrents#English_W...

jeffbee 12 hours ago [-]
If so, only by misguided users. Why would anyone choose bz2 in 2025?
0x457 11 hours ago [-]
To unpack an archive made from the time when bz2 was used?
ben-schaaf 11 hours ago [-]
Of course no one uses systems, tools and files created before 2025!
jeffbee 11 hours ago [-]
bzip2 hasn't been the best at anything in at least 20 years.
appreciatorBus 10 hours ago [-]
The same could be said of many things that, nonetheless, are still used by many, and will continue to be used by many for decades to come. A thing does not need to be best to justify someone wanting to make it a bit better.
MBCook 7 hours ago [-]
I use plain old zip files almost every day.

“Best” is measured along a lot more axes than just performance. And you don’t always get to choose what format you use. It may be dictated to you by some 3rd party you can’t influence.

Twirrim 10 hours ago [-]
So? If I need to consume a resource compressed using bz2, I'm not just going to sit around and wait for them to use zstd. I'm going to break out bz2. If I can use a modern rewrite that's faster, I'll take every advantage I can get.
12 hours ago [-]
tcfhgj 11 hours ago [-]
> Counting CPU cycles as if it's an accomplishment seems irrelevant in a world where 50% of modern CPU resources are allocated toward UI eye candy.

An attitude which leads to Electron apps replacing native ones, and I hate it. I am not buying better CPUs and more RAM just to have them wasted like this.

stevefan1999 8 hours ago [-]
You know it is just Wirth's law in action: "Software gets slower faster than hardware gets faster." [^1]

In fact it's Jevons paradox: when technological progress increases the efficiency with which a resource is used, the rate of consumption of that resource can rise due to increasing demand - essentially, efficiency improvements can lead to increased consumption rather than the intended conservation. [^2][^3]

[^1]: https://www.comp.nus.edu.sg/~damithch/quotes/quote27.htm

[^2]: https://www.greenchoices.org/news/blog-posts/the-jevons-para...

[^3]: https://quickonomics.com/terms/jevons-paradox/

hyperman1 1 hour ago [-]
I think it goes deeper. There is a certain level of slowness that causes pain to users. When that level is hit, market forces focus attention on software efficiency.

Hardware efficiency just gives more room for software to bloat. The pain level is a human factor and stays the same.

So it's time to adapt Wirth's law: software gets slower >exactly as much< as hardware gets faster.

Rucadi 12 hours ago [-]
I personally find the part about "Enabling cross-compilation" a lot more relevant; in my opinion that is important and a win.

The same goes for the exported symbols and being able to compile to wasm easily.

egorfine 31 minutes ago [-]
I fully agree with you on the first statement, and I am at a loss for words at the second...
Terr_ 12 hours ago [-]
It seems to me like binary file format parsing (and construction) is probably a good place for using languages that aren't as prone to buffer-overflows and the like. Especially if it's for a common format and the code might be used in all sorts of security-contexts.
wahern 7 hours ago [-]
Buffer overflows are more a library problem, not a language problem, though for newer ecosystems like Rust the distinction is kind of lost on people. But point being, if you rewrote bzip2 using an equivalent to std::Vec, you'd end up in the same place. Unfortunately, the norm among C developers, especially in the past, was to open code most buffer manipulation, so you wind up with 1000 manually written overflow checks, some of which are wrong or outright missing, as opposed to a single check in a shared implementation. Indeed, even that Rust code had an off-by-one (in "safe" code), it just wasn't considered a security issue because it would result in data corruption, not an overflow.

What Rust-the-language does offer is temporal safety (i.e. the borrow checker), and there's no easy way to get that in C.
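To make the "single check in a shared implementation" point concrete, a toy sketch (hypothetical helper): every slice access below funnels through the standard library's one bounds check, instead of an open-coded length test at each call site:

    fn read_u16_be(buf: &[u8], off: usize) -> Option<u16> {
        // get() is the shared, checked accessor; no hand-written
        // `off + 1 < buf.len()` logic to get subtly wrong per site.
        let hi = *buf.get(off)?;
        let lo = *buf.get(off.checked_add(1)?)?;
        Some(u16::from_be_bytes([hi, lo]))
    }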

SpaceNugget 3 hours ago [-]
Pretty incredible for such a short argument to be so inconsistent with itself. Complaining about counting CPU cycles and actually measuring performance because... modern software development is bad and doesn't care about performance?
viraptor 11 hours ago [-]
Those cycles translate directly to $ saved in a few places. Mostly in places far away from having any UI at all.
Scuds 6 hours ago [-]
you're just an end user, you don't have to maintain the suite.

In OSS, every hour of volunteer time is precious manna from heaven, flavored with unicorn tears. So any way to remove toil and introduce automation is gold.

Rust's strict compiler and an appropriate test suite guarantees a level of correctness far beyond C. There's less onus on the reviewer to ensure everything still works as expected when reviewing a pull request.

It's a win-win situation.

hoseja 5 hours ago [-]
It's like "adapting" Akallabêth so you can tell your own empowering story for modern audiences.
bitwize 9 hours ago [-]
It's a lot like X11 vs. Wayland. The current graphics developers, who trend younger, don't want to maintain the boomer-written C code in the X server. Too risky and time-consuming. So one of the goals of Wayland is to completely abolish X so it can be replaced with something more long-term maintainable. Turns out, current systems-level developers don't want to maintain boomer-written GNU code or any C code at all, really, for similar reasons. C is inherently problematic because even seasoned developers have trouble avoiding its footguns. So an unstated, but important, goal of Rust is to abolish all critical C code and replace it with Rust code. Ubuntu is on board with this.
Surac 2 hours ago [-]
So you're saying younger programmers don't have the required coding kung fu to cope with C code? I hope you are wrong. The prospect of having Rust-like things on everyday devices really frightens me. C is like a lingua franca for computers: nearly any hardware-adjacent person can READ it. I am one of these boomers, and I am not able to properly READ Rust code, because the syntax is so academic. The fact that more and more code is written in Rust lessens the number of people who can read programs.
anonnon 12 hours ago [-]
> Counting CPU cycles

And that's assuming they aren't lying about the counting: https://desuarchive.org/g/thread/104831348/#q104831479

dwattttt 1 hours ago [-]
Do you have any reason to think their numbers are wrong, or is your argument "someone else once lied, maybe they are too"?
DaSHacka 11 hours ago [-]
Rust devs continuing to use misleading benchmarks? I, for one, am absolutely shocked. Flabbergasted, even.
jxjnskkzxxhx 11 hours ago [-]
> lot of this "rewrite X in Rust" stuff feels like

Indeed. You know the react-angular-vue never-ending churn? It appears that the trend of people pushing stuff because it benefits their careers is coming to the low-level world.

I for one still find it mystifying that Linus Torvalds let these people into the kernel. Linus, who famously banned C++ from the kernel not because of C++ in itself, but to ban C++ programmer culture.